Overall rating 4 out of 5 (3 ratings). Last updated 30 May 2013, created 13 July 2011, viewed 6,389 times. Key ideas: could be used as a summary of ideas. The weblink takes you to the webpage for more info. Please provide a rating.
Lovely opening slide!!! Just one point about slide 7: the balloon will become negatively charged when rubbed on a jumper, unless I have it all wrong as a non-specialist?
Useful for a biologist! To the point, and the examples/diagrams are helpful!
I like it. Short and snappy. Useful diagrams.
TES Editorial © 2013 TSL Education Ltd. Registered in England (No 02017289) at 26 Red Lion Square, London, WC1R 4HQ.
<urn:uuid:d29f63d0-fc18-465a-938b-920a7b37c1ee>
CC-MAIN-2013-20
http://www.tes.co.uk/teaching-resource/Static-Electricity-PowerPoint-6098224/event/22/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.882018
196
2.828125
3
Artificial noses have orbited the Earth in spacecraft, inhaled in doctors’ offices, and sniffed in food-processing plants, all in an effort to surpass the sensitivity and specificity of the mammalian olfactory organ. Sure, dogs have been known to smell cancer, but can they tell you what kind it is, too? Hossam Haick, a professor at the Technion-Israel Institute of Technology, has developed a device that can do just that. By collecting a breath sample from patients, Haick’s electronic nose can determine whether a person has lung cancer, as opposed to breast, prostate, or head and neck tumors, and even whether it’s non-small cell or small cell lung cancer. Cancer patients emit a suite of volatile organic compounds in their breath that is different from the composition of healthy patients’ breath, and that differs from cancer to cancer. “If we develop an artificial nose which can detect very tiny amounts—at the range of parts per billion or parts per trillion—of these biomarkers of cancer, then we can provide a very simple and inexpensive way to detect cancer,” says Haick. “And most importantly, this is not invasive.”

Haick’s cancer sniffer is currently in clinical trials, but there are already some electronic noses on the market. Alpha MOS, a French company, markets electronic noses that are used in food and beverage quality control, in plastics and packaging manufacturing to detect contaminants, and in flavor and perfume development.

Pretty much all electronic noses are based on the same approach, “sort of a fingerprint pattern recognition, like in the human process of olfaction,” says Alpha MOS spokesperson Marion Bonnefille. The mammalian sense of smell uses a combinatorial code composed of responses from different olfactory receptors.
Rather than have an odor receptor specific to a particular odor, mammals interpret different scents by the pattern of receptors stimulated and the neural responses they excite. “In this way, 1,000 different receptors can recognize a million different odorants,” says Nate Lewis, a professor at Caltech and a pioneer in developing electronic noses. Lewis’s own technology works through an array of tiny sensors made of polymer film that act like sponges. Each one responds to an odor slightly differently, and the amount of swelling of the “sponge” in the presence of a vapor changes its electronic resistance. The pattern of resistance changes is distinct for each odor, giving the electronic nose the ability to distinguish between good wine and bad wine, toluene and benzene, and even between mirror images of the same molecule. Lewis has been able to detect compounds diluted down to the tens of parts per trillion.

But there is a limit to the seemingly endless uses for artificial noses. “What we are not good at . . . is [breaking] down a complex mixture into hundreds of different compounds,” says Lewis. Gas chromatography-mass spectrometry, and the human nose to some degree, can tell you the specific composition of a sample, whereas an e-nose can only tell you whether or not the sample matches a particular profile. “It would be very good to know what are the biomarkers found inside [a breath sample], but this would require further studies and further redevelopment of the device,” says Haick.

Perena Gouma of the State University of New York, Stony Brook, has made artificial noses that can detect particular components of a person’s breath. “We have arrays of sensors, each of which can target a specific biomarker . . . or class of chemicals,” she says. In such a way, her lab is developing gadgets to measure the likelihood of having a health condition.
Whereas the electronic nose uses the overall differences between healthy breath and diseased breath to distinguish between them, Gouma’s approach requires knowledge of the particular biomarkers in advance so that sensors can be developed specifically to detect them. (See “Vital Signs,” The Scientist, August 2011.) The inability to describe the composition of a sample aside, Tufts University chemist David Walt says that during his research on electronic noses “we pretty much couldn’t find a problem that we couldn’t solve.” That is, except for one very big challenge: for each problem an e-nose can solve, say, to distinguish spoiled milk from fresh or safe packaging from contaminated packaging, “every one of those different problems is a training and then validation that can eat up a huge amount of money,” Walt says. Additionally, recalibrating each device might be a massive undertaking if it requires, for example, bringing in numerous patients with a certain disease. “That’s not trivial.” Walt gave up on researching electronic noses several years ago because he could not find investors to move the technology into the real world, presumably because of these challenges. But he is somewhat optimistic for those who remain in the field—if they can figure out how to make it easier to train the devices to recognize signals of particular odors. “Lots of opportunities are there,” he says.
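The fingerprint-style pattern recognition described above can be sketched in a few lines: store a reference profile of resistance changes for each known odor, then match a new reading to the closest profile. This is a minimal illustration only; the sensor counts, odor names, and numbers below are hypothetical, and real e-noses use far larger arrays and more sophisticated classifiers.

```python
import math

# Hypothetical reference "fingerprints": fractional resistance changes
# across a four-sensor polymer array for two known vapors.
PROFILES = {
    "toluene": [0.12, 0.40, 0.05, 0.22],
    "benzene": [0.30, 0.10, 0.28, 0.07],
}

def classify(reading):
    """Return the stored odor whose profile is closest (Euclidean distance)."""
    def dist(profile):
        return math.sqrt(sum((r - p) ** 2 for r, p in zip(reading, profile)))
    return min(PROFILES, key=lambda name: dist(PROFILES[name]))

# A noisy measurement of an unknown vapor.
sample = [0.11, 0.38, 0.06, 0.20]
print(classify(sample))  # → toluene
```

This also makes Lewis's limitation concrete: the matcher can say which stored profile a sample resembles, but it cannot decompose the sample into its constituent compounds.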
<urn:uuid:fed110c9-e0a9-407e-9136-d053086881d2>
CC-MAIN-2013-20
http://www.the-scientist.com/?articles.view/articleNo/32540/title/Get-a-Whiff-of-This/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94713
1,131
3.546875
4
On December 3, 2012, the planets Mercury, Venus, and Saturn will align with the Giza Pyramids in Egypt. This will be the first planetary/pyramid alignment in 2,737 years! Now, the three Giza pyramids are also in perfect alignment with the three stars of Orion’s belt. In 1983, Robert Bauval proposed this Orion correlation theory and published this idea in Discussions in Egyptology in 1989. The Giza pyramids were built in the 3rd millennium B.C. The alignment is very curious. Could the Egyptians have built the Giza pyramids that way on purpose?
<urn:uuid:a7918977-7bb4-4095-b13c-a2ebdcabb080>
CC-MAIN-2013-20
http://astronomybythecosmos.com/2012/08/22/pyramids-planets-alignment/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945161
128
2.84375
3
Meditation is one of the Five Principles of Yoga. It is the practice by which there is constant observation of the mind. It requires you to focus your mind at one point and make your mind still in order to perceive the 'self'. Through the practice of Meditation, you will achieve a greater sense of purpose and strength of will. It also helps you achieve a clearer mind, improve your concentration, and discover the wisdom and tranquility within you. Know the basics of Meditation and learn the different Meditation Exercises and Techniques in the following sections: In this section, know what Meditation is, get familiar with its essential aspects, and learn what makes it different from other forms of relaxation or related practice. Meditation is an important tool in achieving mental clarity and health. Learn how Meditation works and know the various health benefits brought about by Meditating. There are many Meditation Poses that you can practice. Learn how to perform the Full Lotus Posture, Half Lotus Pose, Burmese Pose, and the Egyptian Pose. A set of guidelines was formulated to help people understand the different aspects of Meditation. In this section, take a look at the main Principles of Meditation. Meditation can contribute to the psychological and physiological well-being of an individual. It can also help you have a positive outlook on life. Learn the various health benefits of Meditation. Meditating can be done in various ways depending on your goals. This section covers the basic Types of Meditation. Practice what suits your needs. Tratak, or steady gazing, is an excellent concentration exercise. Learn how to do the Tratak technique and know the different diagrams and symbols used in that practice. Meditation involves a lot of exercises, poses, and techniques. Learn the different Meditation Exercises and practice the one which suits you best. Mantra is a profound and practical method of self-awakening, opening and self-transcendence. 
Know more about Mantra and learn the different Mantra Types. If you want more information on Meditation, please visit ABC-of-Meditation.com. This website has a wide variety of topics related to Meditation such as techniques and postures, Meditation courses, and Meditation products.
<urn:uuid:62359f46-7f4b-4227-8add-5e7ba3898d3b>
CC-MAIN-2013-20
http://www.abc-of-yoga.com/meditation/home.asp
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930743
454
2.78125
3
"Transcendent legal/moral standard over human life creates a critically important human equality before the law." "The grounding of all moral obligation in God's law had a deep impact on the understanding of human law." Shalom - the dream of God for a redeemed world, for an end to our division, hostility, fear, drivenness and misery. Shalom happens when humans stop killing each other, and therefore life's dignity is honored at its fundamental level. Shalom means: delight, obedience to God (the precondition of shalom), the healing of broken bodies and spirits, enough to eat and drink, an inclusive community, the rebuilding of the human community.

Matthew 4 - Jesus did 2 new things:
1. Turned the eschatological future into an inaugurated eschatological present
2. Embodied the kingdom of justice, peace, and healing, in which human beings at last treat others, and are treated, as God originally desired

Jesus' inclusive ministry in a religious culture in which:
- Women were devalued
- Leaders subjugated human well-being to legal observance
- Sinners were treated as beyond the pale of God's care
- Children were devalued
- The sick were often cast out of the community
- The occupying Romans were hated
- There were tensions between Jews and Samaritans
- A woman on her own faced desperate financial challenges
- Social-economic divisions were acute

"The paradox of the incarnation is that when divinity stooped low and took on humanity, humanity revealed its lowliness and yet was elevated through God's mercy." Jesus died for "the world" - everyone, people in all states, conditions, nations and orientations toward God and neighbor. Everyone should matter to us because everyone matters to God. Christ rose in a body, the victory of God over evil, and the resurrection marks the triumph of life. Acts depicts a rapidly growing church with a more inclusive and hospitable community ethos. 
Paul offers an expansive theological effort to defend the transformation of relationships (Gal 3:28): all divisive human distinctions are transfigured and overcome through Jesus Christ. Momentum toward a radically inclusive and egalitarian community: multi-ethnic, multi-racial, gender-inclusive, class-inclusive. Congregations that believed that in their own experience of transformed human relations lay the beginnings of the redemption of the world.

"Only because God became human is it possible to know and not despise real human beings... this is not because of the real human being's inherent value, but because God has loved and taken on the real human being. The reason for God's love for human beings does not reside in them..." - D. Bonhoeffer

"A secular, rootless human dignity ethic may be the best that our culture thinks it can manage. But Christians know not only that we can do better but that we must do better and that the resources for doing better are embedded in our tradition." We must claim our own rich theological heritage.
<urn:uuid:9837a3f4-181e-42eb-8605-b0d52f5395f2>
CC-MAIN-2013-20
http://flashpointfiles.blogspot.com/2009/07/theological-roots-ofhuman-dignity-dr.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946433
622
2.796875
3
In recent years, federal, state, and local governments have spent increasing amounts of taxpayer money on Missouri’s public schools. Analysis of Missouri spending and test data, however, finds no relationship between increases in per-pupil expenditures and increases in student achievement. While many well-intentioned reform efforts have been unsuccessful — such as decreasing class size and adopting a uniform set of curriculum standards — a few reforms have been effective. A better education reform strategy, according to education experts Rick Hess, director of education policy studies at the American Enterprise Institute, and Eric Hanushek, senior fellow at the Hoover Institution at Stanford University, is to allow free competition among schools for students. Such competition would allow schools that provide a quality education to flourish while punishing schools that provide a poor education. A review of all available empirical studies of school voucher programs — a school choice policy that allows students to take public dollars with them to schools that they choose — found that the majority of studies showed that voucher programs improved student outcomes and public schools. Unfortunately, education vouchers are not a viable option in Missouri because they might violate the state constitution’s Blaine Amendment. Increased choice frequently produces cost savings. In Washington, D.C., for example, charter school students are outperforming traditional public schools while operating at a per-pupil cost of $11,000, compared to the $17,000 per-pupil expenditure of traditional public schools. Options are limited in Missouri because state law restricts the creation of charter schools to the cities of Saint Louis and Kansas City. Furthermore, Missouri has many rural areas without a critical mass of students to support the infrastructure of multiple schools. In fact, two-thirds of Missouri’s school districts have fewer than 1,000 students. 
For students whose educational choices are limited by geography, restrictive laws, financial constraints, or some combination of the three, a new approach is necessary to give them the benefits of educational competition and course diversity. Virtual schools and distance learning can offer these benefits to nearly all of Missouri’s students. Full Case Study (PDF)
<urn:uuid:89ce024c-bc75-4553-95a7-003404a431d3>
CC-MAIN-2013-20
http://showmeinstitute.org/publications/case-study/education/582-virtual-learning-beyond-brick-and-mortar.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959625
428
3.171875
3
First-born children may have a higher risk of developing high blood pressure or diabetes than their siblings, according to a new study by the University of Auckland’s Liggins Institute in New Zealand. The study documents that first-born children have higher daytime blood pressure and are less able to absorb sugars into their bodies than their younger siblings. First-born children experienced a 21 percent drop in insulin sensitivity and a 4 mmHg increase in blood pressure. “Although birth order alone is not a predictor of metabolic or cardiovascular disease, being the first-born child in a family can contribute to a person’s overall risk,” said Wayne Cutfield, MBChB, DCH, FRACP, of the University of Auckland. The study involved 85 healthy children aged 4 to 11, 32 of whom were first-born. Children were selected as participants because insulin sensitivity can be affected by puberty and adult lifestyles. Researchers measured their height, weight, body composition, and fasting lipid and hormonal profiles. Researchers speculated that the metabolic differences between first-born children and their siblings might be caused by changes in the mother’s uterus during pregnancy. After the changes occur, the mother’s body may increase nutrient flow to the fetus during subsequent pregnancies. “Our results indicate first-born children have these risk factors, but more research is needed to determine how that translates into adult cases of diabetes, hypertension and other conditions,” Cutfield said. The article, “First-born Children Have Reduced Insulin Sensitivity And Higher Daytime Blood Pressure Compared To Later-born Children,” appears in the March 2013 issue of The Endocrine Society’s "Journal of Clinical Endocrinology & Metabolism."
<urn:uuid:e1d8047d-c9db-45b9-a303-99731db94a29>
CC-MAIN-2013-20
http://www.examiner.com/article/higher-risk-of-diabetes-metabolic-disorders-may-be-linked-to-birth-order?cid=rss
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955044
358
3.046875
3
An anticipated feature, available only in Photoshop CS and above, is the Text on a Path tool. The path can be either an open or closed path, as well as a vector filled path. Start a new document, with preset options of your choice. Select either the Pen Tool (P), the Line Tool, the FreeForm Pen Tool, or any of the Shape Tools with the ‘Paths’ Options Bar feature active. Draw a Path (Open Path Example): In this example, let’s draw a sine curve using the Pen Tool with Paths active. See the example below for creating a simple ‘s’ curve. Prepare Type Settings: Next, select the Type Tool (T) from the Toolbar (also open the ‘Window > Character’ palette and set your desired type parameters), then place the cursor at the beginning of the open path as I have captured below. Notice the cursor turns into an I-beam, in anticipation for you to begin the type on a path. Click to begin and type on the path as I have animated below. Move Text Along Path: To move the type along the path, first select the ‘Path Selection Tool (P)’ from the Toolbar and simply click and drag at the beginning of the type as I have animated below. Notice the cursor now becomes a double-sided arrow facing in the direction of your type. Re-Position Text Along a Path: To drop the text to the bottom of the path, select the Path Selection Tool (P), then click and hold, then drag the I-beam below. (Takes a little getting used to.) Type on a Closed Path: This example uses the Ellipse Tool (U) to create the circular path with the Options Bar ‘Paths’ feature enabled. Type on a Vector Shape: This example uses the ‘Rounded Rectangle Tool (U)’ with the Options Bar ‘Shape Layers’ feature active.
<urn:uuid:1cbd2f96-8dde-4102-a28e-b5d00d361bd2>
CC-MAIN-2013-20
http://www.heathrowe.com/text-on-a-path-in-photoshop-cs/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.862311
418
2.65625
3
Much of the excitement in genetics today comes from its lively offspring, "genomics." This newcomer specializes in large-scale analyses of all the genetic material in the genomes of organisms ranging from bacteria to mammals. Genomics is expected to provide the functional meaning of newly revealed DNA sequences: What do these genes really do? And that precise knowledge, in turn, heralds a revolution in the diagnosis, monitoring, and treatment of diseases. Meanwhile, new DNA-sequencing techniques using high-speed robots are flourishing. So are ingenious combinations of YACs (yeast artificial chromosomes), BACs (bacterial artificial chromosomes), PACs (fragments of DNA in a vector derived from a bacteriophage known as P1), and MACs (mammalian artificial chromosomes), which supply genetic fodder for the machines that do the sequencing.

A Gifted Young Patient Battles Cystic Fibrosis

Jeff Pinard, the young man with cystic fibrosis who was seeking his own genetic flaw as a college student, is now 31 and hanging in there. After going home to his parents in Grand Rapids, Michigan, he was hospitalized again because of pancreatic problems that produced excruciating pain. These problems remain unsolved, despite a variety of treatments. Until recently, he did computer work for an electric company, mostly from home. But he has had to stop because of repeated crises that landed him in the hospital. He remains optimistic, however, and according to his mother, "he's had some periods of time when he has been without pain." How long? "Oh, a couple of weeks at a time ...." Lap-Chee Tsui of the University of Toronto finally identified and sequenced Pinard's second, milder CF mutation, adding it to the list of more than 850 known mutations in the CF gene. But it is difficult to count CF mutations these days, Tsui says, because "many mutations are now found in atypical diseases, such as male infertility and pancreatitis." 
A major study of pancreatitis led by Jonathan Cohn of Duke University Medical Center and published in the New England Journal of Medicine recently concluded that many adults who suffer from so-called "idiopathic" pancreatitis (pancreatitis of unknown cause) actually have cystic fibrosis. The authors add that these findings "will change how physicians treat patients with this condition."

A Natural Antibiotic

The major symptom of CF is lung infection. In 1996, Michael Welsh, an HHMI investigator who teaches medicine and physiology at the University of Iowa, discovered why these infections occur, and offered a new approach to treatment that is still being developed. Welsh had a longtime interest in epithelia, the sheets of cells that line the internal and external surfaces of the body, including those lining the airway. When the CFTR gene was identified, he was among the first to examine the role of the protein made by this gene. "We found out that it's actually a chloride channel, through which salt moves across the membrane," Welsh says. "That was very satisfying, because then you could begin to tie together the physiology (defective epithelia) and the gene product, the chloride channel." Then his team made an intriguing discovery: Normal epithelial tissue can kill a large number of bacteria, while similar tissue from people with CF fails to do so, or even allows the bacteria to multiply. Welsh guessed that the fluid covering the airway normally contains factors such as defensins, molecules that are part of our innate, nonspecific defense system. He wondered whether these were also present in people with CF. To his surprise, he found natural antibiotic substances both in healthy people and in those with CF. In CF patients, however, the substances' activity was greatly reduced by the abnormally high salt concentration resulting from the defective CFTR channel. When Welsh lowered the salt concentration, even the epithelia of CF patients became able to kill bacteria. 
The team's conclusion: Drugs that reduce the salt concentration in airway fluid may help treat or prevent the sometimes fatal lung infections of CF patients. Other antibiotic drugs that resemble defensins may also be developed for this purpose. As for his experiments with gene transfer, "they remain just that: experiments," says Welsh. "We can deliver the normal CFTR gene, but we cannot deliver it efficiently enough," he explains. "The problem is the delivery. We need to go back to the lab and try to make it work better."

"We are slowly moving closer and closer to implementation of genetic screening for CF," says Arthur Beaudet of the Baylor College of Medicine. Several labs around the country now provide such tests. "And we can get 50 different mutations on a single test for as little effort as one," Beaudet says. He adds that the tests have become so sensitive that "somewhat over 90 percent of CF carriers would be correctly identified." Therefore the tests would detect more than 81 percent of the couples at risk. Beaudet believes that newly married couples should be given a set of prepared mouth swabs to take home, so that they can test themselves at their leisure. Further tests would then be necessary only if both members of the pair are carriers of CF. About 40 specialized centers worldwide offer in vitro fertilization to avoid genetic diseases. At the Illinois Masonic Hospital in Chicago, for example, Charles Strom and Yuri Verlinsky screen the eggs of mothers who are carriers of CF before fertilizing them with the husband's sperm. In this analysis they use only the eggs' polar bodies, which would be cast off anyway, Strom explains. If the test indicates the egg is free of the CF mutation, the doctors proceed with fertilization. "More than 16 healthy children have been born to CF carriers with this method," Strom reports. The method has now been extended to a variety of genetic diseases, including hemophilia, thalassemia, and sickle cell anemia. 
New Findings About Brain Disorders

There was great rejoicing when Nancy Wexler's quest for the cause of Huntington's disease finally succeeded in 1993, after nearly eight years of effort described as "a nightmare of false leads, confounding data, and backbreaking work." The faulty gene was named huntingtin. The guilty mutation turned out to code for an extra-long, repeated stretch of glutamine, an amino acid in huntingtin, the protein made by this gene. But no one knew how the expanded glutamine repeats cause brain neurons to sicken and die. Several other "triplet-repeat" diseases are known. They all attack some part of the nervous system, and all of them are still mysterious. Scientists have begun to search for clues to the function of huntingtin in the proteins that interact with it. "The beauty of having the Huntington's disease gene in hand is that we are now able to place it in animals and learn its effects," says Wexler. In 1996, Gillian Bates and her team at Guy's Hospital, London, put fragments of the human HD gene into mice for the first time. The mice developed HD-like symptoms two months after birth and died soon after. Researchers then discovered that the abnormal form of huntingtin produces misfolded proteins, which stick together in toxic clumps inside patients' brain cells. Next, working with mouse models of HD, Columbia University scientists Ai Yamamoto and Rene Hen found that shutting off the production of the abnormal protein not only halted the progression of the disease, but actually cleared some of the toxic clumps. Fruit flies were also enlisted in the fight. In 1998, George Jackson of UCLA's Department of Neurology inserted fragments of the HD gene into the large nerve cells in the eye of a fruit fly. He found that, just as in human beings, the cells' fate depended on the number of glutamine repeats in the HD gene's DNA. The eyes of flies whose gene had only two repeats remained normal. 
Those with 75 repeats were normal for a month, but then began to degenerate slowly. When the flies had 120 repeats, their eyes suffered massive cell destruction. Wexler is greatly encouraged by these findings. She points out that "the fly eye is a perfect laboratory to test the effects of drugs that will protect the eye and prevent degeneration." And because of many similarities between HD and other neurodegenerative conditions, including Alzheimer's and Parkinson's diseases, scientists hope that the findings from one of these areas will advance research in the others.

The Viking Genes

Other gene defects uncovered in recent years include mutations predisposing people to such widespread ailments as breast cancer, familial polyposis of the colon, and Alzheimer's. Many family groups have helped in these searches. Scientists now look forward to working with the biggest genetic trove of all--the Viking gene pool, which can be found in very pure form among the 170,000 people of Iceland. Some of these families can be traced back for hundreds of years, and their records will soon be available to researchers. As DNA-based biology expands, so does the need for large groups of people with detailed and accurate records.

The Dolan DNA Learning Center at Cold Spring Harbor Laboratory, an interactive primer on genetics and molecular biology.
National Human Genome Research Institute (NHGRI), a description of the Human Genome Project and status report.
Ethical, Legal and Social Issues (ELSI) and the Human Genome Project, information from the Department of Energy about the ethical, legal and social issues surrounding the Human Genome Project. Information also available from NHGRI.
National Organization for Rare Disorders (NORD), a federation of voluntary health organizations dedicated to assisting those with rare disorders.
The Genetic Alliance, a consortium of support groups for individuals with genetic conditions and their families, as well as advocacy and public education.
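The triplet-repeat idea at the heart of the HD story can be illustrated with a short sketch: count the longest uninterrupted run of CAG codons (each of which codes for glutamine) in a DNA string. The sequences below are made-up toy examples, not real huntingtin sequence, and the "healthy" versus "disease-associated" labels are only illustrative of the repeat-number principle described in the article.

```python
import re

def longest_cag_repeat(dna):
    """Return the length, in repeats, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Toy sequences: a short repeat tract versus a greatly expanded one.
short_tract = "ATG" + "CAG" * 20 + "CCTTAA"
long_tract = "ATG" + "CAG" * 75 + "CCTTAA"
print(longest_cag_repeat(short_tract), longest_cag_repeat(long_tract))  # 20 75
```

The same counting logic underlies the fly experiments above: two repeats, 75 repeats, or 120 repeats are just different lengths of this one run, yet they produce dramatically different outcomes.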
<urn:uuid:6f86c2b9-aa0b-4d7a-bc1e-513778f2ab1a>
CC-MAIN-2013-20
http://www.hhmi.org/genetictrail/h100.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95368
2,119
2.984375
3
Most colds go away in a few days. Some things you can do to take care of yourself with a cold include:
· Get plenty of rest and drink fluids.
· Over-the-counter cold and cough medicines may help ease symptoms in adults and older children. They do not make your cold go away faster, but can help you feel better. Over-the-counter (OTC) cough and cold medicines are not recommended for children under age 4.
· Antibiotics should not be used to treat a common cold.
Many alternative treatments have been tried for colds, such as vitamin C, zinc supplements, and echinacea. Talk to your doctor before trying any herbs or supplements.
The fluid from your runny nose will become thicker and may turn yellow or green within a few days. This is normal, and not a reason for antibiotics. Most cold symptoms usually go away within a week. If you still feel sick after 7 days, see your health care provider to rule out a sinus infection, allergies, or other medical problem. Colds are the most common trigger of wheezing in children with asthma.
Fashner J, Ericson K, Werner S. Treatment of the common cold in children and adults. Am Fam Physician. 2012;86(2):153-159.
Singh M, Das RR. Zinc for the common cold. Cochrane Database of Systematic Reviews 2011, Issue 2. Art. No.: CD001364.
Reviewed by Linda J. Vorvick, MD, Medical Director and Director of Didactic Curriculum, MEDEX Northwest Division of Physician Assistant Studies, Department of Family Medicine, UW Medicine, School of Medicine, University of Washington. Also reviewed by A.D.A.M. Health Solutions, Ebix, Inc., Editorial Team: David Zieve, MD, MHA, Bethanne Black, Stephanie Slon, and Nissi Wang.
http://www.stjoesoakland.org/body_annarbor.cfm?id=431&action=detail&AEArticleID=000678&AEProductID=Adam2004_5117&AEProjectTypeIDURL=APT_1
A condition precedent is a legal term for something that must occur before another thing occurs. In other words, it is a condition that must precede or predate a specified event. It is, in effect, the catalyst that causes something else to occur. The idea of a condition precedent is common in contract law. A contract can stipulate that something will occur if and only if another event occurs. That other specified event is the condition precedent. If there is a dispute as to whether the condition was fulfilled, a court can resolve the dispute by looking at the language in the contract describing the condition precedent. The court can then determine whether the condition was fulfilled. When including condition precedents in either contracts or wills, the terms of the condition must be clear. This means that the contract or legal document must state exactly what the condition is. If the condition is not clearly stated, the court must interpret the condition in light of the parties' intent. The court normally looks at the legal document as a whole in order to determine if the condition was fulfilled. A condition precedent is also common in estate planning, and in establishing trusts. For example, a parent can set up a trust for his or her child, but make receipt of the money in the trust contingent on the child fulfilling a condition. This condition is referred to as the condition precedent on inheritance. Such a condition can also be established in a will. For example, a parent can stipulate that a child is to inherit only if that child is married at the time, or if that child has graduated from college. Likewise, in a trust context, a parent can stipulate that a child is to begin earning income or receiving assets from the trust only upon fulfilling certain criteria. The criterion that the child must fulfill, such as graduating from college, is thus a condition that must precede inheritance.
These conditions can be anything that a person wishes, as long as they do not violate public policy. For example, a parent could not stipulate that a child must commit murder before inheriting, because fulfilling the condition would violate the law. When a condition is stated as a requirement for something to occur, normally the contract or other legal doctrine will stipulate what happens if the condition is not fulfilled. For example, if a contract becomes active only when a condition is fulfilled, the terms of the contract will often state what is to happen if the condition is never fulfilled. The same is true when condition precedents are included in wills or trust documents.
http://www.wisegeek.com/what-is-condition-precedent.htm
May 10, 2010 Immediately after we celebrated the birth of the National Zoo’s baby kiwi bird (Apteryx mantelli) in March, the first question that came to mind was “What are you going to call it?” (Maybe that was just on my mind.) But keepers at the Zoo were saving that honor for Roy Ferguson, the ambassador to the United States from New Zealand, the kiwi’s native country. On Friday, the zoo told us Ferguson had an answer: The bird will be called Hiri (“HEE-ree”), a name that, in New Zealand’s native language of Māori, means “important and great.” There are only 12 female kiwi birds in zoos outside of New Zealand, which means Hiri is one of the few birds that can help increase the species’ captive population. Zoo keepers say her genes will make her a valuable breeder. Hiri isn’t available for public viewing right now, but you can see her and her adorable beak on the zoo’s Kiwi Cam. Or, visit Hiri’s oldest brother, Manaia, at 11 a.m. every Monday, Wednesday and Friday at the zoo’s Meet a Kiwi program in the Bird House.
http://blogs.smithsonianmag.com/aroundthemall/2010/05/have-you-ever-meet-a-kiwi-who-was-just-named-hiri-down-by-the-zoo/
Pope John III A Roman surnamed Catelinus, d. 13 July, 574. He was of a distinguished family, being the son of one Anastasius who bore the title of illustris . The year of his birth is not recorded, but he was consecrated pope seemingly on 17 July, 561. Owing to the necessity of waiting for imperial confirmation of his election, an interval of five months elapsed between the death of Pelagius I and the consecration just noted. Although John reigned nearly thirteen years very little is known of his pontificate. It fell during the stormy times of the Lombard invasion, and practically all the records of his reign have perished. He would seem, however, to have been a magnanimous pontiff, zealous for the welfare of the people. An inscription still to be seen in the fifteenth century testified that "in the midst of straits he knew how to be bountiful, and feared not to be crushed amidst a crumbling world". Two most unworthy bishops, Salonius of Embrun and Sagittarius of Gap, had been condemned in a synod at Lyons (c. 567). They succeeded, however, in persuading Guntram, King of Burgundy, that they had been condemned unjustly, and appealed to the pope. Influenced by the king's letters, John decided that they must be restored to their sees. It is to be regretted that the papal mandate was put into effect. The most important of the acts of this pope were those connected with the great general, Narses. Unfortunately the "Liber Pontificalis" is enigmatic regarding them. By feminine intrigue at the court of Constantinople, a charge of treason was trumped up against the general, and, in consequence, the only man capable of resisting the barbarians was recalled. It is quite possible that Narses may then have invited the Lombards to fall upon Italy ; but it is perhaps more probable that, hearing of his recall, they invaded the country. Knowing that Narses was the hope of Italy, John followed him to Naples, and implored him not to go to Constantinople. 
The general hearkened to the voice of the pope, and returned with him to Rome (571). But seemingly the court party in the city was too strong for Narses and the pope. John retired to the catacomb of Prætextatus, where he remained for many months. He even held ordinations there. On the death of Narses (c. 572), John returned to the Lateran Palace. His sojourn in the catacombs gave him a great interest in them. He put them in repair, and ordered that the necessaries for Mass should be sent to them from the Lateran. John died 13 July, 574, and was buried in St. Peter's.
Copyright © Catholic Encyclopedia. Robert Appleton Company New York, NY.
http://www.catholic.org/encyclopedia/view.php?id=6356
Independent report to the Australian Government Minister for the Environment and Heritage Beeton RJS (Bob), Buckley Kristal I, Jones Gary J, Morgan Denise, Reichelt Russell E, Trewin Dennis (2006 Australian State of the Environment Committee), 2006 The Australian Antarctic Territory (AAT) is ‘that part of the territory in the Antarctic seas that comprises all the islands and territories, other than Adélie Land, situated south of the 60th degree south latitude and lying between the 160th degree east longitude and the 45th degree east longitude’. This includes Australia’s sub-Antarctic Territory of Heard Island and the McDonald Islands, and Macquarie Island. Antarctica and the surrounding Southern Ocean are important to Australia because of their historic associations, influence on regional and global climate processes, and contribution to biodiversity, as well as the economic value of the Southern Ocean fisheries. Data from long-term monitoring provide a platform for better understanding the functioning of global systems, and predicting climate and other atmospheric trends. Antarctica and its surrounding oceans are dominated and shaped by the global climate. Five main areas of local human activity have the potential to impact adversely on the Antarctic environment, although the intensity of impact is low. These are the conduct of scientific research, logistic support operations, tourism, construction of buildings and infrastructure, and commercial harvesting of living resources. Australia has nine permanent scientific stations; an extremely low density when considered in the context of the almost 6 million square kilometres of the Australian Antarctic Territory (AAT). Nevertheless, there is a local environmental impact of the day to day operation of the stations. 
Tourist visits to the Australian sub-Antarctic islands and the AAT have not increased, in contrast with an increase in visitors of 10 per cent a year to other parts of Antarctica outside Australia’s jurisdiction. A variety of marine and terrestrial species and ecosystems are found in the Territory, and their vulnerability to human pressures is not fully known. Some species have been exploited over the last 200 years, and a number of those have not yet recovered. There are also difficult practical challenges in the long-term conservation of historic sites and objects. Despite these difficulties, scientists need to find ways of conducting research and sharing and aggregating data that will give a better overall picture of the state of the Antarctic environment.
http://environment.gov.au/soe/2006/publications/report/antarctic.html
Vitamin toxicity is a condition in which a person develops symptoms as side effects from taking massive doses of vitamins. Vitamins vary in the amounts that are required to cause toxicity and in the specific symptoms that result. Vitamin toxicity, which is also called hypervitaminosis or vitamin poisoning, is becoming more common in developed countries because of the popularity of vitamin supplements. Vitamins are organic molecules in food that are needed in small amounts for growth, reproduction, and the maintenance of good health. Some vitamins can be dissolved in oil or melted fat. These fat-soluble vitamins include vitamin D, vitamin E, vitamin A (retinol), and vitamin K. Other vitamins can be dissolved in water. The water-soluble vitamins include folate (folic acid), vitamin B12, biotin, vitamin B6, niacin, thiamin, riboflavin, pantothenic acid, and vitamin C (ascorbic acid). Taking too much of any vitamin can produce a toxic effect. However, megadoses with the fat-soluble vitamins are more likely to become toxic than with water-soluble vitamins because fat-soluble vitamins are often stored in the body while excess water-soluble vitamins are usually excreted in the urine. Vitamins A and D are the most likely to produce hypervitaminosis in large doses, while riboflavin, pantothenic acid, biotin, and vitamin C appear to be the least likely to cause problems. Vitamins in medical treatment Vitamin supplements are used for the treatment of various diseases or for reducing the risk of certain diseases. For example, moderate supplements of folic acid appear to reduce the risk for certain birth defects such as neural tube defects, and possibly reduce the risk of cancer. Therapy for diseases brings with it the risk for irreversible vitamin toxicity only in the case of vitamin D. This vitamin is toxic at levels that are only moderately greater than the recommended dietary allowance (RDA). 
Niacin is commonly used as a drug for the treatment of heart disease, but niacin is far less toxic than vitamin D. Vitamin toxicity is not a risk with medically supervised therapy using any of the other vitamins. With the exception of folic acid supplements, the practice of taking vitamin supplements by healthy individuals has little or no relation to good health. Most adults in the United States can obtain enough vitamins by eating a well-balanced diet. It has, however, become increasingly common for people to take vitamins at levels far greater than the RDA. These high levels are sometimes called vitamin megadoses. Megadoses are harmless for most vitamins. But in the cases of a few of the vitamins—specifically, vitamins D, A, and B6—megadoses can be harmful or fatal. Researchers have also started to look more closely at megadoses of vitamins C and E, since indirect evidence suggests that these two vitamins may reduce the risks of cancer, heart disease, and aging. It is not yet clear whether taking megadoses of either vitamin C or vitamin E has any influence on health. Some experts think that megadoses of vitamin C may protect people from cancer. On the other hand, other researchers have gathered indirect evidence that vitamin C megadoses may cause cancer when combined with smoking. VITAMIN D. Vitamins D and A are the most toxic of the fat-soluble vitamins. The symptoms of vitamin D toxicity are nausea, vomiting, pain in the joints, and loss of appetite. The patient may experience constipation alternating with diarrhea, or have tingling sensations in the mouth. The toxic dose of vitamin D depends on its frequency. In infants, a single dose of 15 milligrams (mg) or greater may be toxic, but it is also the case that daily doses of 1.0 mg over a prolonged period may be toxic. In adults, a daily dose of 1.0 to 2.0 mg of vitamin D is toxic when consumed for a prolonged period. A single dose of about 50 mg or greater is toxic for adults. 
The immediate effect of an overdose of vitamin D is abdominal cramps, nausea, and vomiting. Toxic doses of vitamin D taken over a prolonged period of time can result in irreversible deposits of calcium crystals in the soft tissues of the body that may damage the heart, lungs, and kidneys. The dietary reference intake (DRI) suggests an upper tolerable limit of 25 micrograms (mcg) per day for children and 50 mcg per day for adults. The DRI is between 5–15 mcg from childhood to adulthood in the absence of adequate sunlight. Older adults have a requirement on the higher end of the scale due to generally reduced sun exposure. VITAMIN A. Vitamin A toxicity can occur with long-term consumption of 20 mg of retinol or more per day. The symptoms of vitamin A overdosing include accumulation of water in the brain (hydrocephalus), vomiting, tiredness, constipation, bone pain, and severe headaches. The skin may acquire a rough and dry appearance, with hair loss and brittle nails. Vitamin A toxicity is a special issue during pregnancy. Expectant mothers who take 10 mg vitamin A or more on a daily basis may have an infant with birth defects. These birth defects include abnormalities of the face, nervous system, heart, and thymus gland. It is possible to take in toxic levels of vitamin A by eating large quantities of certain foods. For example, about 30 grams of beef liver, 500 grams of eggs, or 2,500 grams of mackerel would supply 10 mg of retinol. VITAMIN E. Megadoses of vitamin E may produce headaches, tiredness, double vision, and diarrhea in humans. Studies with animals fed large doses of vitamin E have revealed that this vitamin may interfere with the absorption of other fat-soluble vitamins. The term absorption means the transfer of the vitamin from the gut into the bloodstream. Thus, large doses of vitamin E consumed over many weeks or months might result in deficiencies of vitamin D, vitamin A, and vitamin K. 
The DRI suggests an upper tolerable limit between 200–800 mg per day for children and teenagers, depending on age (younger children have requirements on the lower end of the scale), and 1000 mg per day for adults. The DRI is 15 mg per day for adults and pregnant women. VITAMIN K. Prolonged consumption of megadoses of vitamin K (menadione) results in anemia, which is a reduced level of red blood cells in the bloodstream. When large doses of menadione are given to infants, they result in the deposit of pigments in the brain, nerve damage, the destruction of red blood cells (hemolysis), and death. A daily injection of 10 mg of menadione into an infant for three days can kill the child. This tragic fact was discovered during the early days of vitamin research, when newborn infants were injected with menadione to prevent a disease known as hemorrhagic disease of the newborn. Today, a different form of vitamin K is used to protect infants against this disease. FOLATE. Folate occurs in various forms in food. There are more than a dozen related forms of folate. The folate in oral vitamin supplements occurs in only one form, however—folic acid. Large doses of folic acid (20 grams/day) can eventually result in kidney damage. Folate is considered, however, to be relatively nontoxic, except in cases where folate supplementation can lead to pernicious anemia. The DRI suggests an upper tolerable limit between 300–800 mcg per day for children and teenagers, depending on age (younger children have requirements on the lower end of the scale), and 1000 mcg per day for adults. The DRI is 400 mcg per day for adults and slightly lower in children; 600 mcg during pregnancy and 500 mcg while lactating. VITAMIN B12. Vitamin B12 is important in the treatment of pernicious anemia. Pernicious anemia is more common among middle-aged and older adults; it is usually detected in patients between the ages of 40 and 80. 
The disease affects about 0.1% of all persons in the general population in the United States, and about 3% of the elderly population. Pernicious anemia is treated with large doses of vitamin B12. Typically, 0.1 mg of the vitamin is injected each week until the symptoms of pernicious anemia disappear. Patients then take oral doses of vitamin B12 for the rest of their life. Although vitamin B12 toxicity is not an issue for patients being treated for pernicious anemia, treatment of these patients with folic acid may cause problems. Specifically, pernicious anemia is often first detected because the patient feels weak or tired. If the anemia is not treated, the patient may suffer irreversible nerve damage. The problem with folic acid supplements is that the folic acid treatment prevents the anemia from developing, but allows the eventual nerve damage to occur. VITAMIN B6. Vitamin B6 is clearly toxic at doses about 1000 times the RDA. Daily doses of 2–5 grams of vitamin B6 taken over a prolonged period can damage the nervous system. VITAMIN C. Large doses of vitamin C are considered to be toxic in persons with a family history of or tendency to form kidney stones or gallbladder stones. Kidney and gallbladder stones usually consist of calcium oxalate. Oxalate occurs in high concentrations in foods such as cocoa, chocolate, rhubarb, and spinach. A fraction of the vitamin C in the body is normally broken down to produce oxalate. A daily supplement of 3.0 grams of vitamin C has been found to double the level of oxalate that passes through the kidneys and is excreted into the urine. The DRI for vitamin C in adults is between 75–90 mg per day, slightly more during pregnancy. The DRI suggests an upper tolerable limit between 400–1200 mg per day for children and teenagers, depending on age (younger children have requirements on the lower end of the scale), and 2000 mg per day for adults. NIACIN. The DRI for niacin is 14–16 mg per day in adults. Niacin comes in two forms, nicotinic acid and nicotinamide. Either form can satisfy the adult requirement for this vitamin. Nicotinic acid, however, is toxic at levels of 100 times the RDA. It can cause flushing of the skin, nausea, diarrhea, and liver damage. Flushing is an increase in blood passing through the veins in the skin, due to the dilation of arteries passing through deeper parts of the face or other parts of the body. In spite of the side effects, however, large doses of nicotinic acid are often used to lower blood cholesterol in order to prevent heart disease. Nicotinic acid results in a lowering of LDL-cholesterol (so-called bad cholesterol), an increase in HDL-cholesterol (so-called good cholesterol), and a decrease in plasma triglycerides. Treatment involves daily doses of 1.5–4.0 grams of nicotinic acid per day. Flushing of the skin occurs as a side effect when nicotinic acid therapy is started, but may disappear with continued therapy. The DRI suggests an upper tolerable limit between 10–30 mg per day for children and teenagers, depending on age (younger children have requirements on the lower end of the scale), and 35 mg per day for adults. The diagnosis of vitamin toxicity is usually made on the basis of the patient's dietary or medical history. Questioning the patient about the use of vitamin supplements may shed light on some physical symptoms. The doctor can confirm the diagnosis by ordering blood or urine tests for specific vitamins. When large amounts of the water-soluble vitamins are consumed, a large fraction of the vitamin is absorbed into the bloodstream and promptly excreted into the urine. The fat-soluble vitamins are more likely to be absorbed into the bloodstream and deposited in the fat and other tissues. In the cases of both water-soluble and fat-soluble vitamins, any vitamin not absorbed by the intestines is excreted in the feces. Megadoses of many of the vitamins produce diarrhea, because the non-absorbed nutrient draws water out of the body and into the gut, resulting in the loss of this water from the body.
In all cases, treatment of vitamin toxicity requires discontinuing vitamin supplements. Vitamin D toxicity needs additional action to reduce the calcium levels in the bloodstream because it can cause abnormally high levels of plasma calcium (hypercalcemia). Severe hypercalcemia is a medical emergency and is treated by infusing a solution of 0.9% sodium chloride into the patient's bloodstream. The infusion consists of 2–3 qt (liters) of salt water given over a period of one to two days. The prognosis for reversing vitamin toxicity is excellent for most patients. Side effects usually go away as soon as overdoses are stopped. The exceptions are severe vitamin D toxicity, severe vitamin A toxicity, and severe vitamin B6 toxicity. Too much vitamin D leads to deposits of calcium salts in the soft tissue of the body, which cannot be reversed. Birth defects due to vitamin A toxicity cannot be reversed. Damage to the nervous system caused by megadoses of vitamin B6 can be reversed, but complete reversal may require a recovery period of more than a year. Health care team roles Health care professionals should familiarize themselves with the symptoms of vitamin toxicities in order to successfully diagnose toxic levels. Health care professionals can direct patients in learning about the recommended requirements for each vitamin so that toxicities do not pose a risk. The DRI can be referred to for information regarding recommended intakes for individuals, estimated average requirements, and upper tolerable limits. The healthiest way to acquire vitamins is through good nutrition via food. Following the Dietary Guidelines for Americans, published by the U.S. Department of Agriculture and Health and Human Services, can provide a broad overall view of good nutrition. The Food Guide Pyramid was created by the U.S. Department of Agriculture to help Americans choose foods from each food grouping. The food pyramid, developed by nutritionists, provides a visual guide to healthy eating.
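The upper-limit guidance above can be sketched as a simple dose check. This is an illustrative example only, not medical software: the table name and function below are hypothetical, and the limits are just the adult values quoted in this article, not the complete Institute of Medicine tables.

```python
# Illustrative adult tolerable upper intake levels per day, taken from the
# figures quoted in this article. Hypothetical demonstration only: the table
# is incomplete and this is not medical advice.
ADULT_UPPER_LIMIT = {
    "vitamin_d_mcg": 50,    # mcg/day
    "vitamin_e_mg": 1000,   # mg/day
    "folate_mcg": 1000,     # mcg/day
    "vitamin_c_mg": 2000,   # mg/day
    "niacin_mg": 35,        # mg/day
}

def check_dose(vitamin, daily_dose):
    """Report whether a daily dose exceeds the article's quoted adult limit."""
    limit = ADULT_UPPER_LIMIT.get(vitamin)
    if limit is None:
        return f"no limit on record for {vitamin}"
    if daily_dose > limit:
        return f"{vitamin}: {daily_dose} is above the adult upper limit of {limit}"
    return f"{vitamin}: {daily_dose} is within the adult upper limit of {limit}"
```

For example, the 3.0-gram vitamin C supplement mentioned earlier (3000 mg) would be flagged as above the 2000 mg adult upper limit, while the niacin DRI of 14–16 mg falls well under the 35 mg limit.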
Vitamin toxicity can be prevented by minimizing the use of vitamin supplements or by only taking a dose within recommended levels of the DRI or RDA. If vitamin D supplements are being used on a doctor's orders, monitoring the levels of plasma calcium helps prevent toxicity. The development of hypercalcemia with vitamin D treatment indicates that the patient is at risk for vitamin D toxicity.
Absorption—The transfer of a vitamin from the digestive tract to the bloodstream.
Ascorbic acid—Another name for vitamin C.
Dietary reference intakes (DRI)—These standards explain the daily amounts of energy, protein, minerals, and fat-soluble and water-soluble vitamins needed by healthy males and females, from infancy to old age.
Hypercalcemia—A condition marked by abnormally high levels of calcium in the blood.
Hypervitaminosis—Another name for vitamin toxicity.
Megadose—A very large dose of a vitamin, taken by some people as a form of self-medication.
Menadione—A synthetic form of vitamin K, sometimes called vitamin K3.
Pernicious anemia—A rare disorder in which the body does not absorb enough vitamin B12 from the digestive tract, resulting in an inadequate amount of red blood cells produced.
Retinol—Another name for vitamin A.
Food and Nutrition Board. Recommended Dietary Allowances, 10th ed. Washington, DC: National Academy Press, 1989.
Institute of Medicine. Dietary Reference Intakes: Applications in Dietary Assessment. Washington, DC: National Academy Press, 2001.
Institute of Medicine. Dietary Reference Intakes: Risk Assessment (Compass Series). Washington, DC: National Academy Press, 1999.
Larson-Duyff, Roberta. The American Dietetic Association's Complete Food & Nutrition Guide. New York: John Wiley & Sons, 1998.
Mahan, L. Kathleen, and Sylvia Escott-Stump. Krause's Food, Nutrition, & Diet Therapy. London: W. B. Saunders Co., 2000.
Mindell, Earl, and Hester Mundis. Earl Mindell's Vitamin Bible for the 21st Century. London: Warner Books, 1999.
Rodwell-Williams, Sue.
Essentials of Nutrition and Diet Therapy. London: Mosby-Year Book, 1999.
American Dietetics Association. "Women's Health and Nutrition—Position of ADA and Dietitians of Canada." Journal of the American Dietetic Association (1999): 99: 738-51.
Azais-Braesco, V., and G. Pascal. "Vitamin A in Pregnancy: Requirements and Safety Limits." American Journal of Clinical Nutrition (2000): 71: 1325S-33.
Mills, J. L. "Fortification of Foods with Folic Acid—How Much is Enough?" New England Journal of Medicine (2000): 342: 1442-45.
Traber, Maret G. "Vitamin E: Too Much or Not Enough?" American Journal of Clinical Nutrition 73 (2001): 997-98.
American Dietetic Association. 216 W. Jackson Blvd., Chicago, IL 60606-6995. (312) 899-0040. <http://www.eatright.org/>.
Food and Nutrition Information Center Agricultural Research Service, USDA. National Agricultural Library, Room 304, 10301 Baltimore Avenue, Beltsville, MD 20705-2351. (301) 504-5719. (301) 504-6409. <email@example.com>.
Food and Nutrition Professionals Network. <http://nutrition.cos.com>.
Crystal Heather Kaczkowski, M.Sc.
http://www.healthline.com/galecontent/vitamin-toxicity-1
This archive of companion sites to NOVA broadcasts is no longer being updated. To see new content, go to NOVA's beta site.
The Big Energy Gamble: Can California's ambitious plan to cut greenhouse gases actually succeed?
Darwin's Darkest Hour: A two-hour drama on the crisis that forced Darwin to publish his theory of evolution.
Seven doctors, 21 years... Saving lives is only part of the story.
An acclaimed photographer teams up with scientists to document the runaway melting of arctic glaciers.
Hubble's Amazing Rescue: The unlikely story of how the world's most beloved telescope was saved.
The Incredible Journey of the Butterflies: Follow the 2,000-mile migration of monarchs to a sanctuary in the highlands of Mexico.
Meet the monitors, the largest, fiercest and craftiest lizards on Earth.
Megabeasts' Sudden Death: Scientists propose a radical new idea of what killed off mammoths and other large animals at the end of the Ice Age.
Oliver Sacks explores how the power of music can make the brain come alive.
June 30, 2009 (NOVA scienceNOW): Visit a factory that grows diamonds, learn how experts identified the source of the 2001 anthrax attacks, hear amazing results from pitch-correction software, and meet a computer scientist who wants to harness the brainpower of 500 million people.
July 7, 2009 (NOVA scienceNOW): Join astronomers hunting for Earth-like planets, see how computers distinguish authentic art from forgeries, meet a spider biologist who studies sexual cannibalism, and learn about genes that may be involved in causing autism.
July 14, 2009 (NOVA scienceNOW): Watch how an "exercise pill" turns couch-potato mice into athletes, explore a controversial new theory of what killed the dinosaurs, meet the first Latino-American astronaut, and find out why the beautiful northern lights signal a threat to our electronic society.
July 21, 2009 (NOVA scienceNOW): Discover why picky eaters may have a genetic excuse, learn about a new strategy for capturing carbon dioxide from the atmosphere, see just how intelligent marine mammals can be, and meet a biomedical engineer who has figured out a way to make tiny livers in her lab.
July 28, 2009 (NOVA scienceNOW): Follow a NASA satellite looking for water on the moon, see what ancient salt deposits reveal about life 250 million years ago, learn how bird brains are remarkably similar to our own, and meet a climatologist who digs for clues to climate change in the world's highest glaciers.
August 18, 2009 (NOVA scienceNOW): Explore the controversies behind genetic testing and genome sequencing, learn about algae fuel, follow an expedition to the Arctic Ocean seafloor, and meet a woman engineer designing prosthetic limbs controlled by human thought.
August 25, 2009 (NOVA scienceNOW): Get an astronaut's view of the Hubble repair mission, find out why cowbirds are called "gangster birds," meet a Mexican immigrant farmworker-turned-brain surgeon, and learn how neuroscientists are finding ways to erase memories.
September 1, 2009 (NOVA scienceNOW): Learn about a massive earthquake potential in the U.S. Midwest, meet a South Korean geophysicist with unique talents, and more.
Why do huge swarms of rats overrun a bamboo forest in India once every half-century?
The Spy Factory: Examine the high-tech eavesdropping carried out by the National Security Agency and the pitfalls of surveillance in an age of terrorism.
What Are Dreams? Psychologists and brain scientists have new answers to an age-old question.
The text search engine allows queries to be formed from arbitrary Boolean expressions containing the keywords "AND", "OR", and "NOT", and grouped with parentheses.
- You can also use * as a wildcard returning many forms of a word.
- This search supports the standard Boolean operators and syntax: And (and, &); Or (or, |); Not (and not, &!); and related precedence operators such as (), "", etc.
- Search is not case sensitive: you can type your query in uppercase or lowercase.
- You may search for any word except for those in the exception list (for English, this includes a, an, and, as, and other common words). Words in the exception list are ignored during a search. (Words in the exception list are treated as placeholders in Exact Phrase searches.)
- Punctuation marks such as the period ( . ), colon ( : ), semicolon ( ; ), and comma ( , ) are ignored during a search.
- To use specially treated characters ( & ), ( | ), ( ^ ), ( # ), ( @ ), ( $ ), ( ( ), ( ) ) in a query, enclose your query in quotation marks ( " ).

Wildcard examples:
- act* finds documents containing words starting with 'act', like acts, acting, actor, actress, activate, etc.
- *act finds documents containing words ending with 'act', like contact, contract, fact, etc.
- *act* finds documents containing words beginning or ending with 'act', like actor, contact, actress, facts, activate, contract, etc.

Boolean examples:
- information retrieval - finds documents containing 'information' or 'retrieval'
- information or retrieval - same as above
- information and retrieval - finds documents containing both 'information' and 'retrieval'
- information not retrieval - finds documents containing 'information' but not 'retrieval'
- (information not retrieval) and WAIS - finds documents containing 'WAIS', plus 'information' but not 'retrieval'
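The operator semantics described above can be sketched in a few lines. This is a hypothetical illustration in Python, not the site's actual engine: each document is modeled as a set of lowercased words, the Boolean operators become set operations over result sets, and the * wildcard is matched with the standard-library fnmatch module. The sample documents (d1, d2, d3) are invented for the example.

```python
import fnmatch

# Toy corpus: each document is a set of lowercased words.
DOCS = {
    "d1": {"information", "retrieval", "systems"},
    "d2": {"information", "theory"},
    "d3": {"wais", "information", "networks"},
}

def match_term(term, words):
    # '*' acts as a wildcard, so 'act*', '*act', and '*act*' all work.
    if "*" in term:
        return any(fnmatch.fnmatch(w, term) for w in words)
    return term in words

def docs_matching(term):
    # Search is case-insensitive: lowercase the query term before matching.
    return {d for d, words in DOCS.items() if match_term(term.lower(), words)}

# Boolean operators combine result sets; parentheses in a query just
# correspond to the order in which these calls are nested.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return set(DOCS) - a

# 'information and retrieval'
print(AND(docs_matching("information"), docs_matching("retrieval")))  # {'d1'}
# '(information not retrieval) and WAIS'
print(AND(AND(docs_matching("information"), NOT(docs_matching("retrieval"))),
          docs_matching("WAIS")))  # {'d3'}
```

The same combinators cover every example query in the list above; a real engine would add a parser in front of them, but the evaluation model is just these set operations.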
What Makes a Tragic Hero Tragic?

All tragedies need a tragic hero or heroine, but what makes a character tragic? Who are the most famous tragic heroes and what do they have in common?

Tragedy: When The Feeling's Gone and You Can't Go On

My apologies to The Bee Gees. First things first, what is a tragedy? Well, in theatrical and literary terms, it's a devastating event, or string of events, that causes the downfall of a protagonist.

What's That You Say, Aristotle?

Aristotle's views on tragedy have dictated every medium of drama. So, what makes a tragic hero or heroine? To answer that question, we need to turn to the authority on drama: Aristotle. In Poetics, Aristotle sketched out the template from which he believed true tragedy was made. He also suggested that there are some rules to which all tragic heroes must conform.

The tragic hero or heroine must evoke pity or fear in an audience. In order to do this effectively, the tragic protagonist must fit a particular character type. He, or she, must be a person to whom we can relate - therefore, a tragic hero cannot be unrealistically virtuous. However, he or she cannot be evil either, otherwise we will not feel empathy for their plight. In fact, a tragic hero must be morally blameless for his misfortune (this part of Aristotle's prescription has been interpreted rather more loosely by some playwrights, as you'll see below). A tragic protagonist meets with a downfall and, often, death, which is precipitated by a fatal flaw (hamartia) that causes him or her to perpetrate some unintended, although usually pretty horrific, act.

Five Famous Tragic Heroes and Heroines

To give all that some context, let's take a look at some of the most famous tragic protagonists and explore whether or not they fit into Aristotle's template. Aristotle's favourite tragic hero was Oedipus - in fact, as far as he was concerned, Oedipus Rex is the most perfect example of tragedy.
Of course, we should bear in mind that he has missed out on the subsequent two millennia of plays, so perhaps he would change his mind if he were around now. Anyway, Oedipus is, on the face of it, an all-around good guy. However, it is his arrogance, or hubris, which causes him to leave his presumed parents' home, kill his real father and marry a beautiful older woman, who turns out to be his mother.

Macbeth and Lady Macbeth

Macbeth is problematic in terms of fitting the Aristotelian prescription, because he is arguably not on morally safe ground in killing Duncan. However, he is not all bad, and it is his hamartia of ambition that culminates in his downfall. By no stretch of the imagination can he be described as blameless in his downfall, but it seems that Shakespeare foregoes this part of Aristotle's view on tragedy in favour of creating a tragic hero who is extremely human. Subsequently, despite the fact that his actions are morally dubious (to say the least), his downfall still prompts empathy (and perhaps a little fear) from an audience.

Phedre, a 17th-century tragic drama from playwright Jean Racine borrowing from Greek mythology (Phaedra), offers another tragic protagonist who is not morally pure, thanks to her incestuous desires. However, this is the result of a curse - so perhaps we can forgive her for it. It is the hereditary curse, coupled with her husband's infidelity, that leads her to make a series of unwise decisions, which culminate in her inescapable downfall.

Moving further away from Aristotle, we shift into 'modern tragedy' territory. However, Arthur Miller's All My Sons remains faithful to many of Aristotle's views, including the three unities.
The main difference in modern tragedies is that the tragic hero can now be an 'ordinary' man or woman. Previously, it was the downfall of a man of high status that constituted tragedy. For three years, Joe Keller lies about his involvement in shipping damaged aircraft parts, which caused the deaths of 21 airmen during World War II. On one sunny afternoon, the truth finally emerges and Joe justifies his actions by telling his family that he did it for them. His efforts to prioritise moral obligations then lead him to commit suicide.

Willy Russell's Mrs Johnstone is a typical Liverpudlian single mother, struggling to get by but doing the best she can for her children. When she is forced to give one of her twin boys away, it is superstition which causes her to remain silent. Years later, in a desperate bid to stop the son she kept from murdering his twin brother, she blurts out the truth.

So, as you can see, tragic heroes and heroines differ greatly. However, they all have a few things in common: things that comply with Aristotle's vision of tragedy and things that, quite frankly, make them miserable. But that's why we love them!
An earthquake, a tsunami, a nuclear meltdown -- residents of Japan's northeast coast suffered through three intertwined disasters after a massive 9.0 magnitude temblor struck off the coast on March 11, 2011. TOKYO — Like the persistent tapping of a desperate SOS message, the updates keep coming. Day after day, the operators of the wrecked Fukushima Dai-ichi nuclear power plant have been detailing their struggles to contain leaks of radioactive water. The leaks, power outages and other glitches have raised fears that the plant — devastated by a tsunami in March 2011 — could even start to break apart during a cleanup process expected to take years. The situation has also attracted the attention of the International Atomic Energy Agency, which sent a team of experts to review the decommissioning effort last month. They warned Japan may need longer than the projected 40 years to clean up the site. A full report is expected to be released later this month. The discovery of a greenling fish near a water intake for the power station in February that contained some 7,400 times the recommended safe limit of radioactive cesium only served to heighten concern. There was also some reassuring news in February, when a report by the World Health Organization said Fukushima had caused “no discernible increase in health risks” outside Japan and “no observable increases in cancer above natural variation” in most of the country. But for the most affected areas, the report said the lifetime risks of various cancers were expected to increase. For example, baby boys were predicted to have up to a 7 percent greater chance of getting leukemia in their lifetime and for baby girls the lifetime risk of breast cancer could be up to 6 percent higher than normal.
Independent nuclear expert John Large — who has given evidence on the Fukushima disaster to the U.K. parliament and written reports about it for Greenpeace — said there would be hundreds of tons of “intensely radioactive” material in the plant. He said normally robots could be sent in to remove the fuel relatively easily, but this was difficult because of the damage caused by the tsunami. Large said the plant was close to the water table, so it was difficult to stop water getting in and out. “Until you can stop that transfer, you will not contain the radioactivity. That will go on for years and years until they contain it,” he said. "The structures of containment start breaking down. Engineered structures don’t last long when they are put in adverse conditions." Large added: "It may have some marked effect on the health of future generations in Japan. What it will create is a Fukushima generation — like in Nagasaki and Hiroshima - where girls particularly will have difficulty marrying because of the stigma of being brought up in a radiation area." Leaks into the sea would not only affect the marine environment, Large said, as tiny radioactive particles would be washed up on the beach, dried in the sun and then blown over the surrounding countryside by the wind. Japanese activists are also worried by the ongoing leaks from the plant. The Associated Press reported that "runoff ... and a steady inflow of groundwater seeping into the basement of their damaged buildings produce about 400 tons of contaminated water daily at the plant." According to the plant's operator, 280,000 tons of contaminated water has been stored in tanks there. Hisayo Takada, energy campaigner with Greenpeace Japan, complained no real progress had been made.
“It’s still a very fragile situation and measures implemented by the government and [power company] TEPCO are only temporary solutions,” she said. "The issue with the contaminated water is very serious and we're very concerned. And we're very angry because it’s been two years and they've been saying that everything's safe." Greenpeace has been testing food sold in supermarkets, and to date has not found “radiation levels higher than government guidelines,” Takada said. But she said the “land and sea will never return to the way it was before the accident.” One man who knows this all too well is cattle farmer Masami Yoshizawa. He lives in the Namie area, which was once inside a 12-mile, mandatory evacuation zone but is now among the places where people have been allowed to return. He tends his herd of 350 cows as “a living symbol of protest.” “As long as they're alive, I will keep them to show to the world -- these cows that have been exposed to radiation, cows that are no longer marketable, and that I’m being told to have slaughtered,” said Yoshizawa, 59. “For us farmers, it’s impossible for us to return to work in Namie. Our community will disappear. It’s going to become like Chernobyl … Only the elderly who say they don't care about the radiation will return. Children will never return,” he said. The nuclear industry in the U.S.
argues its safety standards are higher than at Fukushima. Steve Kerekes, a spokesman for the Nuclear Energy Institute, said it was “incredibly unlikely” that a similar accident could happen in the U.S. Significant safety improvements were made in the U.S. after Fukushima, the Sept. 11 terrorist attacks and the last major nuclear incident in America at Three Mile Island in 1979, he said. “Our layers of defense extend beyond what the Japanese had in place,” he said. “We’re now well into our fifth or sixth layer of back-up defenses to ensure there would not be – regardless of the cause – a serious accident that would jeopardize public safety.” A survey for the institute in February found that 68 percent of Americans supported nuclear energy. “[Support] did drop for about six to eight months after the Fukushima accident … it hasn’t quite reached the pre-Fukushima historic highs, but we have rebounded to a considerable extent,” Kerekes said. Part of this support comes from those who see nuclear energy as key in the fight against climate change. Kerekes pointed to a report by climatologist James Hansen — until recently head of NASA’s Goddard Institute — that said nuclear power had stopped the release of massive amounts of greenhouse gases and saved 1.8 million deaths related to air pollution. “Every technology has pros and cons. We feel when you look at the benefits of nuclear energy, it’s very effective, round-the-clock electric supply,” Kerekes said. “As we look to help try to drive our economy and provide jobs that people need, there’s a strong role for nuclear energy going forward. We believe that’s widely recognized on a bipartisan basis.” It remains to be seen whether this support will be eroded by the drip, drip of leaks from Fukushima. The Associated Press contributed to this report.
The Human Genome Project was completed in 2003, 50 years after the discovery of the structure of DNA and 17 years after an influential debate at the annual Cold Spring Harbor Laboratory Symposium about the Project's feasibility. The 2003 Symposium was dedicated to examining what has been learned so far from the human genome sequence. This book contains over sixty contributions from the world's leaders in this field and covers genome structure and evolution, methods of data analysis, lessons from species comparison, and the application of sequence data to the understanding of disease. Purchasers of the hard cover edition of this book are entitled to access to the Symposia website. The site contains the full text of the written communications from the 2003 Symposium and the Symposia held in 1998 through 2002 (Volumes LXIII-LXVII). Subscribers to the site also gain access to archive photographs and selected papers from the 60-year history of the Annual Symposium, and will have the opportunity to receive as-it-happens text, audio, and video reporting from the 2004 symposium to be held June 2nd-7th on Epigenetics.
The themes, symbolism, and aesthetic forms of Akira Kurosawa’s films owe their origins to the ideas and sensibilities that captured his imagination as a young man. These include Marxism, which caught the attention of the Japanese intelligentsia in the twenties and thirties; classical Russian novels, which mesmerized the country’s cultural elite; impressionist painting, which rocked the contemporary art world; and the sport of kendo, which Kurosawa practiced as a young boy. In 1928, when Kurosawa was eighteen years old, officers of the Japanese army in Manchuria assassinated the warlord Zhang Zuolin. Society was in turmoil. A year later, the Great Depression struck, and as Marxist thinking carried the day, Kurosawa joined the Proletariat Artists’ League. Though he later renounced his belief in political organizations and actions as effective means to correct social ills, Kurosawa never denied the populist slant of his films. He said it was youthful passion that brought him to join a left-wing organization, but his compassion for the plight of the lower classes and his practice of engaging class differences as dramatic structure are readily discernible in Seven Samurai (1954), Ikiru (1952), High and Low (1963), and Dodes’ka-den (1970). Another major influence on Kurosawa was his elder brother, Heigo, who was addicted to the novels of Fyodor Dostoyevsky and Maksim Gorky. Additionally, he introduced Akira to Western art and the auteur cinema of Fritz Lang, John Ford, Vsevolod Pudovkin, and Sergei Eisenstein. Heigo, however, was to commit suicide when Akira was twenty-three years old. In his memoir Something Like an Autobiography, Kurosawa wrote about his brother’s profound influence on his development in art and literature, and especially in nurturing his passion for Dostoyevsky.
Their only difference, he wrote, was that “my brother was pessimistic and negative, and I was optimistic and positive.” One time, Kurosawa met an actor who knew his brother, and the actor told him, “You are exactly like your brother, only he’s the negative, and you’re the positive print.” From Dostoyevsky, Kurosawa inherited the concept of redemption. As had Dostoyevsky’s czarist Russia, Kurosawa’s Japan was going through momentous economic changes and had to brace itself against an impending catastrophe. The tortures of historical change produced in the artist a humanitarian ideal, to seek redemption through acts of self-sacrifice. In Seven Samurai, the samurai display great perseverance in protecting the farmers, their social inferiors. In the closing sequence, as the farmers joyously plant rice seedlings and sing, the surviving samurai stand by their comrades’ grave, on a mound, and sigh, “The victory belongs to those peasants, not to us.” Besides Dostoyevsky (whose novel The Idiot Kurosawa adapted to the screen in 1951), Gorky was also a significant influence. Kurosawa penned an adaptation of his The Lower Depths, bringing to the screen Gorky’s insights into lowly human behavior born out of evil, cruelty, and poverty. The warmth and moderation in human nature so celebrated in Yasujiro Ozu’s films have no place in Kurosawa’s works. There is instead much affinity with Gorky in matters concerning the contradictions and innate antagonism in human nature, as well as the fierce struggle for survival. This also explains why Kurosawa was fond of the films of Kenji Mizoguchi, particularly those of the 1930s. Kurosawa’s early training in Western painting and kendo, both under his father’s supervision, was also instrumental in his creative life. Van Gogh’s and Gauguin’s dense, layered brushstrokes and sensitivities find their glorious way into Kurosawa’s screen images, evident in their composition, outline, and emotional vibrancy. 
His is a strong and robust emotion that favors the seasons of winter and summer and the plain flavor of daily life. Meanwhile, the sport of kendo endowed Kurosawa with a high-spirited heroism, complete with an unbending faith in the pursuit of perfection. An individual hero, powerful and carrying within him a humanitarian ideal bequeathed by literature and politics, goes on a quest to put society on a just path: such is the philosophical backbone of Kurosawa’s Bushido—or “way of the warrior”—cinema. Taiwanese film scholar and critic Peggy Chiao has published more than forty-five books and founded the China Express Film Awards in 1989. She has also produced and written many Taiwanese films, including Betelnut Beauty (2001), Beijing Bicycle (2001), and The Drummer (2007). This piece was originally published in the Criterion Collection’s 2006 edition of Seven Samurai.
So I’m sure you’ve read a thing or two about all those crazy electronic voting machines being inaccurate. One thing I find slightly perplexing is why the misrepresented votes seem to always be in favor of the Republican party. I don’t get it. It’s not like the voting machine companies would be so blatant or stupid to try to rig an election so outright. Especially in a world that is already so suspicious of electronic machines. But if it were purely a bug, wouldn’t it be equally likely that Democrats benefit? Of course this could always be explained by the fact that perhaps there is a procedure in place for inputting candidates and Republicans and Democrats get placed into the system in a specific order (such as Democrats being added in first). Who knows. So with the constant attention those digital voting machines get, a lot of people ask, “WHAT is so difficult about writing software that tallies votes?” Now I’m not one to study the Diebold machines, but I thought it would be interesting to pick at the problem. First of all, the votes must be logged. But not just any log. It must be secure and immune from tampering. And when I say “tamper,” I am talking about from everybody. That includes the developers, the database administrators, the voters, and the polling staff. I can only begin to imagine that they use a bunch of one way hardware encryption and md5 checksums. The votes would need to be isolated from each other from the data integrity perspective: if vote #35252 breaks the system, all prior votes (#1 through #35251) must remain unscathed. Although most modern databases use transactions to ensure data integrity, I would imagine there is no fool proof means without creating a replica of the vote on a second or third physical location. Of course, such data replication causes problems in the event data is inconsistent. What happens if the primary fails and the vote was only recorded on one of the two slaves. Do you count that half vote? 
What if a replication error had occurred where one slave copied something differently from the primary? Which is right? These things happen (database corruption) and they usually tend to clump up together to result in catastrophic failures. Purposeful Fraud Issues Let’s attack this from another angle. The main culprit to election day problems will probably be human “error.” An electronic machine must protect against this. Unlike a punch card that the actual human physically pokes, a digital machine does the card punching for you (on its hard drive), which is almost like telling someone to punch in your vote as you specify. There’s been instances of a programmer placing bugs in slot machines that gave them jackpots if they bet in a certain order. There have been cases of system administrators leaving back doors into the servers. There’s a huge list of historical events that show that no system, no matter how hard a company tries, is secure from malicious employees. But that is exactly what this system must be designed to fight. How would you ensure it is safe? Peer code reviews? Multi-part passwords that require three separate people with three separate passwords to authenticate? Physical keys, like the one you see in movies, where both people have to have different keys turned at the same time to open a machine? Okay, so let’s say you somehow secure your employees. The problem doesn’t stop there. I’m setting up the machines. “Let’s see,” I say to myself with a grin, “Kerry is going to be candidate 1, and Bush will be candidate 2… for now. At the end of the night, I go back and say, “Oops, I meant 1 equates to Bush and 2 equates to Kerry!” With any regular database, this is entirely possible, and everybody’s votes just got reversed. Of course a smart voting machine would never let you change around the names for a created record. But then again, hackers don’t need to worry about that. 
So the voting machine company decides that you “can’t” change the name of a candidate after it’s been put into the system. What happens if I were to put in a second “Bush” to dilute his votes between his mystical twin? Or what happens if I create a new candidate half way through the election under his name? Well, in some instances, the software might just show him twice (this is good) or in others, it would show him once (this is very bad). In crappier software, that of course means voters would be voting for one OR the other “Bush,” but nobody would know exactly which. Of course the voting company would protect us from ourselves by ensuring candidates can’t be added in after the machine is shipped out. But therein lies another problem. Let’s say you’re running the voting company that is running an election across a few dozen districts. Of course, all the votes must be tallied. A “Bush” vote in one county must group up with a “Bush” vote in another. But how? The human answer is to use the name, but realistically, we know that another “Bush” might be running under a different position in some counties. You can’t just use the name as the qualifier because it is not unique. So you would use IDs, I presume. But of course this means every machine must use an ID that is not internal to it. You would say, “All 1′s are Kerry’s and all 2′s are for Bush!” Now that this is decided, you would have shipped out all of the machines to only accept votes for Kerry = 1 and Bush = 2. And when the machine gets back, you would save it into the main system as 1 = Kerry and 2 = Bush. But where’s the sanity check? Who knows what happened while that box was out there in the wild. How do you know that 1 is indeed still representing Kerry for that box? How do you know that everybody that voted “Kerry” on that box got saved in as a “1″? This is even more of a problem if you do the counting right in the same place that the voting is taking place. 
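One cheap guard against the "oops, 1 meant Bush" trick is to seal the candidate roster before the machine ships and compare seals when it comes back. A minimal sketch follows; the roster values reuse the post's hypothetical example, and it assumes the recorded seal lives somewhere the machine itself cannot rewrite:

```python
import hashlib, json

def roster_seal(roster):
    # Canonical JSON (sorted keys) so the same ID -> name mapping
    # always produces the same digest, regardless of insertion order.
    return hashlib.sha256(json.dumps(roster, sort_keys=True).encode()).hexdigest()

# Seal recorded at headquarters before the machine ships.
shipped = {"1": "Kerry", "2": "Bush"}
seal = roster_seal(shipped)

# The machine comes back claiming a roster with the labels swapped.
returned = {"1": "Bush", "2": "Kerry"}
assert roster_seal(returned) != seal                      # mismatch: reject the tally
assert roster_seal({"1": "Kerry", "2": "Bush"}) == seal   # untouched roster checks out
```

The seal only proves the mapping is the one that shipped; it says nothing about whether votes were recorded under the right IDs in the first place, which is the next problem the post raises.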
And even if you did use names, despite it being a horrible idea, how do you know that a “Kerry” vote got saved as “Kerry”? For all you know, there is a bug, and all Kerry votes are getting saved as “Bush” and all Bush votes are getting saved as “Nader” because someone forgot that array indices start at 0, not 1 (theoretical technical explanation for how these bugs could arise). So of course, that means you would write a binary log of all activities that box experienced. But what is this log for? Auditing? Shouldn’t auditing be happening at every step of the way regardless? If anything, problems are much harder to catch in the digital version of voting so this audit trail would rarely if ever be used except in the most extreme cases. Okay, so I’ve convinced you that it should be used all the time, right? Okay, but then what? Is it being replicated? Is it safe from incomplete transactions? Will a corrupted insert break the entire file? What happens if the power cuts out right as it is writing a record? Is the whole file toast? Suddenly you realize the log file must also use a database to ensure its integrity. Possibly on a separate process to ensure it is isolated from the main vote records. But what the hell is the point of all this? If there is going to be a discrepancy, shouldn’t it have been caught during testing? Why go through all this trouble double logging and replicating all of this data? The last point is the most important. You’ll notice that through simple logic, we suddenly had to have tons of auditing overhead to do something so simple. And despite your best testing efforts, things that should be absolutely positively without error are still being audited to ensure their integrity. So what happens when you overlook one of these “no-brainer” assumptions? You get voter fraud. This only covers some theoretical problems that I might face when trying to put together a voting machine.
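The "arrays start at 0" failure mode is easy to demonstrate. This toy tally (candidate order and vote counts invented for the illustration) produces exactly the shift described above: forget the -1 when converting 1-based ballot positions to 0-based indices, and every Kerry vote lands on Bush while Bush's land on Nader:

```python
candidates = ["Kerry", "Bush", "Nader"]   # array indices 0, 1, 2

def tally(positions, off_by_one=False):
    counts = {name: 0 for name in candidates}
    for pos in positions:                       # pos is 1-based on the ballot
        idx = pos if off_by_one else pos - 1    # the bug: forgetting the -1
        # The modulo wraps out-of-range indices instead of crashing, which is
        # exactly the kind of silent masking that lets a bug like this ship.
        counts[candidates[idx % len(candidates)]] += 1
    return counts

votes = [1, 1, 1, 2]                  # three voters pick Kerry, one picks Bush
print(tally(votes))                   # {'Kerry': 3, 'Bush': 1, 'Nader': 0}
print(tally(votes, off_by_one=True))  # {'Kerry': 0, 'Bush': 3, 'Nader': 1}
```

Every individual vote was "recorded," the totals still sum to the turnout, and nothing crashed, which is why an end-to-end audit trail has to check attribution, not just counts.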
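As for the "power cuts out mid-record" worry, the classic mitigation is to frame each log record with its length and a checksum, so a torn final write is detected and discarded on recovery while every earlier record survives. A sketch, with a file format invented for the illustration:

```python
import binascii, struct

def frame(payload: bytes) -> bytes:
    # Record = 4-byte big-endian length + 4-byte CRC32 + payload.
    return struct.pack(">II", len(payload), binascii.crc32(payload)) + payload

def recover(raw: bytes):
    # Re-read the log from the front; stop at the first short or corrupt
    # record, keeping everything before it intact.
    records, off = [], 0
    while off + 8 <= len(raw):
        length, crc = struct.unpack_from(">II", raw, off)
        payload = raw[off + 8 : off + 8 + length]
        if len(payload) < length or binascii.crc32(payload) != crc:
            break                     # torn or corrupt tail: discard it
        records.append(payload)
        off += 8 + length
    return records

# Power dies 7 bytes into writing the third vote:
log = frame(b"vote:1") + frame(b"vote:2") + frame(b"vote:1")[:7]
print(recover(log))                   # [b'vote:1', b'vote:2']
```

This is essentially what database write-ahead logs do, which is the post's point: even the audit log ends up needing database-grade machinery.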
I would assume a well-funded corporation would generate a list of problems 10x this length. While tallying votes may be simple in concept, if your application must be 200% bug free and hacker proof, developing the application becomes immensely difficult. This still doesn’t explain my original thought about the Republican vote bias though.
Cucumis sativus, the garden cucumber, is a widely cultivated plant in the gourd family (Cucurbitaceae), which includes squash, and in the same genus as the muskmelon and cantaloupe. The cucumber likely originated in India, where it appears to have been cultivated for more than 3,000 years, then spread to China. The Romans likely introduced it throughout Europe. Hundreds of cultivars of varying size and color are now grown in warm areas worldwide, commercially and in home gardens. Cucumber is a frost-sensitive annual—its heat requirement is greater than that for most common vegetables, and in northern climates, it is often grown in greenhouses or hoop houses. It has a hairy climbing, trailing, or creeping stem, and is often grown on frames or trellises. Leaves are hairy and have 3–5 lobes; branched tendrils at leaf axes support climbing. Plants are usually monoecious (separate male and female flowers on the same plant), but varieties show a range of sexual systems. Female flowers are yellow with 5 petals, and develop into a cylindrical fruit, which may be as large as 60 centimeters (24 in) long and 10 centimeters (3.9 in) in diameter. The color ranges from green to yellow to whitish; in many varieties, fruits are bicolored with longitudinal stripes from stem to apex. Some varieties produce seedless fruit without pollination, but others are most productive with pollination by various bee species. Hives of honeybees, Apis mellifera, are often transported to cucumber fields just before flowering time, but bumblebees (Bombus spp.) and other bee species can also serve as pollinators. The numerous varieties of cucumbers have been categorized in diverse ways. One general classification is to group them as “slicing,” which are large and smooth- but somewhat tough-skinned and generally eaten when green to avoid a bitter flavor; “pickling,” which are usually smaller, with prickly skins; and “burpless,” which include seedless varieties as well as long, narrow, Asian types.
When mature, the cucumber fruit is 90% water, and is not particularly high in nutrients, but its flavor and texture have made it popular for use as a fresh addition to salads, as well as pickled and prepared in relishes. In Africa, cucumber seeds are used to make an oil for use in salads and cooking. Cucumbers are also used in skin tonics and other beauty aids. In 2009, total production of cucumbers and gherkins (which can refer to a cucumber variety but also to fruit of the related Cucumis anguria) was 60.6 million tons, harvested from 2 million hectares. China was by far the largest producer, with a harvest of 44.3 million tons; Turkey, Iran, and the Russian Federation followed, producing 1–2 million tons, and the U.S. ranked 5th, with 888 thousand tons. Within the U.S., Florida, California, Georgia, and Michigan are generally leading producers. (Encyclopedia Britannica 1993, FAOSTAT 2011, Hedrick 1919, Kirkbride 1993, Whittaker and Davis 1962, Wikipedia 2011)
- Encyclopedia Britannica. 1993. “Cucumber.” Micropedia 3: 776–7. Chicago: Encyclopedia Britannica, Inc.
- FAOSTAT. 2011. FAOSTAT 2011. Searchable online database from the Food and Agriculture Organization of the United Nations. Retrieved 20 November 2011 from http://faostat.fao.org/site/567/DesktopDefault.aspx?PageID=567#ancor.
- Hedrick, U.P. 1919. Sturtevant’s Notes on Edible Plants. State of New York, Dept. of Agriculture, 27th Annual Report, Vol 2., Part II. Albany, NY. Available online from GoogleBooks: http://books.google.com.
- Kirkbride, J.H., Jr. 1993. Biosystematic Monograph of the Genus Cucumis (Cucurbitaceae). Boone, NC: Parkway Publishers. 159 p.
- Whittaker, T.S., and G.N. Davis. 1962. Cucurbits: Botany, Cultivation, and Utilization. New York: Interscience Publishers. 249 p.
- Wikipedia. 2011. “Cucumber.” Wikipedia, The Free Encyclopedia. 31 Oct 2011, 09:37 UTC. 14 Nov 2011 from http://en.wikipedia.org/w/index.php?title=Cucumber&oldid=460565145.
Natural Pregnancy Information
Image caption: Natural pregnancy means taking care of your body, including exercising and staying hydrated.
Pregnancy is natural. Work with it naturally. Being pregnant and giving birth are natural life experiences for which a woman's body is well designed. In most of the world, women labor and give birth with midwives, as they have throughout history. Midwife care has been proven to be a safe, nurturing alternative to physician-attended hospital birth. A woman's body is innately prepared with the strength, stamina, and ability to nourish a safe and natural pregnancy and childbirth. By supporting the body's own instinctive knowledge, unnecessary medical intervention can often be avoided. Natural pregnancy includes creating an internal and external environment of healthy, positive elements: healthy eating, appropriate exercise, listening to positive birth stories, gathering knowledge, planning the ideal care, and partnering with a caregiver who can lead you through each step safely and confidently. This caregiver can be a midwife.
Space and Time: Inertial Frames
A “frame of reference” is a standard relative to which motion and rest may be measured; any set of points or objects that are at rest relative to one another enables us, in principle, to describe the relative motions of bodies. A frame of reference is therefore a purely kinematical device, for the geometrical description of motion without regard to the masses or forces involved. A dynamical account of motion leads to the idea of an “inertial frame,” or a reference frame relative to which motions have distinguished dynamical properties. For that reason an inertial frame has to be understood as a spatial reference frame together with some means of measuring time, so that uniform motions can be distinguished from accelerated motions. The laws of Newtonian dynamics provide a simple definition: an inertial frame is a reference-frame with a time-scale, relative to which the motion of a body not subject to forces is always rectilinear and uniform, accelerations are always proportional to and in the direction of applied forces, and applied forces are always met with equal and opposite reactions. It follows that, in an inertial frame, the center of mass of a system of bodies is always at rest or in uniform motion. It also follows that any other frame of reference moving uniformly relative to an inertial frame is also an inertial frame. For example, in Newtonian celestial mechanics, taking the “fixed stars” as a frame of reference, we can determine an (approximately) inertial frame whose center is the center of mass of the solar system; relative to this frame, every acceleration of every planet can be accounted for (approximately) as a gravitational interaction with some other planet in accord with Newton's laws of motion. This appears to be a simple and straightforward concept.
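The claim that the center of mass of an interacting system is always at rest or in uniform motion lends itself to a numerical check. The sketch below is our own illustration (the function names and parameters are invented, not from the source): two point masses coupled by a spring-like force obeying the third law, integrated with a simple symplectic Euler step.

```python
import numpy as np

def simulate_two_body(m1, m2, x1, x2, v1, v2, k=1.0, dt=1e-4, steps=20000):
    """Integrate two point masses joined by a linear spring (the forces
    are equal and opposite, per Newton's third law) with a symplectic
    Euler step, and record the center of mass at every step."""
    com = []
    for _ in range(steps):
        f = -k * (x1 - x2)            # force on body 1; body 2 feels -f
        v1 += (f / m1) * dt
        v2 += (-f / m2) * dt
        x1 += v1 * dt
        x2 += v2 * dt
        com.append((m1 * x1 + m2 * x2) / (m1 + m2))
    return np.array(com)

com = simulate_two_body(1.0, 3.0, x1=0.0, x2=1.0, v1=0.5, v2=-0.1)
deltas = np.diff(com)
# Despite the internal oscillation, the center of mass advances by the
# same amount every time step, i.e. it is in uniform motion.
print(np.allclose(deltas, deltas[0], atol=1e-9))
```

Because the two internal forces cancel exactly at every step, the total momentum (and hence the center-of-mass velocity) is conserved by construction, whatever the details of the interaction.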
By inquiring more narrowly into its origins and meaning, however, we begin to understand why it has been an ongoing subject of philosophical concern. It originated in a profound philosophical consideration of the principles of relativity and invariance in the context of Newtonian mechanics. Further reflections on it, in different theoretical contexts, had extraordinary consequences for 20th-century theories of space and time. - 1. Relativity and Reference Frames in Classical Mechanics - 2. Inertial Frames in the 20th Century: Special and General Relativity - 2.1 Inertial Frames in Newtonian Spacetime - 2.2 The Conflict Between Galilean Relativity and Modern Electrodynamics - 2.3 Special Relativity and Lorentz Invariance - 2.4 Simultaneity and Reference-Frames - 2.5 From Special Relativity and Lorentz Invariance to General Relativity and General Covariance - 2.6 The Equivalence of Inertia and Gravity - 2.7 The Equivalence Principle and General Covariance - 2.8 The Extension of the Relativity Principle - 2.9 From Inertial Frames to Curved Spacetime - Other Internet Resources - Related Entries The term “reference frame” was coined in the 19th century, but it has a long prehistory, beginning, perhaps, with the emergence of the Copernican theory. The significant point was not the replacement of the earth by the sun as the center of all motion in the universe, but the recognition of both the earth and the sun as merely possible points of view from which the motions of the celestial bodies may be described. This implied that the basic task of Ptolemaic astronomy—to represent the planetary motions by combinations of circular motions—could take any point to be fixed, and that, as Copernicus suggested in the opening arguments of “On the revolutions of the heavenly spheres,” the choice of any particular point required some justification on other than astronomical grounds. 
As the basic programme of Ptolemy and Copernicus gave way to that of early classical mechanics, this equivalence of points of view was made more precise and explicit. Galileo demonstrated that the Copernican view does not contradict our experience of a seemingly stable earth, through a principle that, in the precise form that it takes in Newtonian mechanics, has become known as the “principle of Galilean relativity”: mechanical experiments will have the same results in a system in uniform motion that they have in a system at rest. Therefore the experiments claimed as evidence against Copernicus—e.g., that a stone dropped from a tower falls to the base of the tower, instead of being left behind—would happen just as they do whether the earth were moving or not, provided that the motion is sufficiently uniform. See Figure 1.
Figure 1: Galileo's Argument
If the earth is rotating sufficiently uniformly, a stone dropped from the tower will fall straight to the base, just as a stone dropped from the mast of a uniformly moving ship will fall to the foot of the mast. In both cases the stone's vertical motion will be smoothly composed with its horizontal motion. Hence a sufficiently uniform motion will be indistinguishable from rest.
Leibniz, later, articulated a more general “equipollence of hypotheses”: in any system of interacting bodies, any hypothesis that any particular body is at rest is equivalent to any other. Therefore neither Copernicus' nor Ptolemy's view can be true—though one may be judged simpler than the other—because both are merely possible hypothetical interpretations of the same relative motions. This principle clearly defines (what we would call) a set of reference frames, differing in their arbitrary choices of a resting point or origin, but agreeing on the relative positions of bodies at any moment and their changing relative distances through time.
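Galileo's tower argument is simple enough to verify with elementary kinematics. The sketch below is our own illustration (not from the text): it computes, in the ground frame, the horizontal gap between the falling stone and the tower base at impact. The gap vanishes whatever the uniform speed of the earth, because the stone shares the tower's horizontal velocity.

```python
import math

def drop_from_tower(height, ground_speed, g=9.81):
    """Horizontal gap between a dropped stone and the tower base at
    impact, computed in the ground frame. The stone is released at rest
    relative to the tower and so shares its horizontal velocity."""
    t_fall = math.sqrt(2 * height / g)    # time to fall from the tower top
    stone_drift = ground_speed * t_fall   # horizontal distance the stone covers
    base_drift = ground_speed * t_fall    # horizontal distance the base covers
    return stone_drift - base_drift

print(drop_from_tower(50.0, 0.0))    # earth at rest: 0.0
print(drop_from_tower(50.0, 12.0))   # earth moving uniformly: still 0.0
```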
For Leibniz and many others, this general equivalence was a matter of philosophical principle, founded in the metaphysical conviction that space itself is nothing more than an abstraction from the geometrical relations among bodies. In some form or other it was a widely shared tenet of the 17th-century “mechanical philosophy”. Yet it was flatly incompatible with physics as Leibniz himself, and the other “mechanists,” actually conceived it. For the basic program of mechanical explanation depended essentially on the concept of a privileged state of motion, as expressed in the assumption that bodies maintain a state of rectilinear motion until acted upon by an external cause. Thus their fundamental conception of force, as the power of a body to change the state of another, likewise depended on this notion of a privileged state. This dependence was clearly exhibited in the vortex theory of planetary motion, in which every orbit was explained by the balance between the planet's inherent centrifugal tendency (its tendency to follow the tangent to the orbit) and the pressure of the surrounding medium. For this reason, the notion of a dispute between “relativists” or “relationists” and “absolutists” or “substantivalists”, in the 17th century, is a drastic oversimplification. Newton, in his controversial Scholium on space, time, and motion, was not merely asserting that motion is absolute in the face of the mechanists' relativist view; he was arguing that a conception of absolute motion was already implicit in the views of his opponents—that it was implicit in their conception, which he largely shared, of physical cause and effect. The general equivalence of reference-frames was implicitly denied by a physics that understood forces as powers to change the states of motion of bodies. Newton therefore held that physics required the conception of absolute space, a distinguished frame of reference relative to which bodies could be said to be truly moving or truly at rest. 
Assuming, as both Newton and Leibniz did, that states of motion could be distinguished by their causes and effects, the distinguished status of this frame of reference is physically well founded—and metaphysically well-founded for a metaphysics that, like Newton's or Leibniz's, takes force to be a well-founded notion. On Leibniz's conception of force, in particular, a given force is required to generate or to maintain a given velocity—for objects “passively” resist motion, but maintain their states of motion only by “active” force—so that, on dynamical grounds, “every body truly does have a certain amount of motion, or, if you will, force.” This implies that there is in principle a distinguished frame of reference in which the velocities of bodies correspond to their true velocities, i.e. to the amounts of moving force that they truly possess, and it implies that in any frame that is in motion relative to this one, bodies will not have their true velocities. In short, such a conception of force, if it could be applied physically, would give a precise physical application of Newton's conception of absolute space. The difficulty with Newton's view of absolute space comes from the Newtonian conception of force. If force is defined and measured solely by the power to accelerate a body, then obviously the effects of forces—in short, the causal interactions within a system of bodies—will be independent of the velocity of the system in which they are measured. So the existence of a set of equivalent “inertial frames” is imposed from the start by Newton's laws. Suppose that we determine for the bodies in a given frame of reference—say, the rest frame of the fixed stars—that all observable accelerations are proportional to forces impressed by bodies within the system, by equal and opposite actions and reactions among those bodies. 
Then we know that these physical interactions will be the same in any frame of reference that is in uniform rectilinear motion relative to the first one. Therefore no Newtonian experiment will be able to determine the velocity of a body, or system of bodies, relative to absolute space. In other words, there is no way to distinguish absolute space itself from any frame of reference that is in uniform motion relative to it. Newton thought that a coherent account of force and motion requires a background space consisting of “places” that “all keep given positions in relation to one another from infinity to infinity” (1726, p. 412). But the laws of motion enable us to determine an infinity of such spaces, all in uniform rectilinear motion relative to each other, and furnish no way of singling out any one as “immovable space.” Oddly enough, no one in the 17th century, or even before the late 19th century, expressed this equivalence of reference-frames more clearly than Newton himself. Newton explicitly derived it from the laws of motion as Corollary V: When bodies are enclosed in a given space, their motions in relation to one another are the same whether the space is at rest or whether it is moving uniformly straight forward without circular motion. (1726, p. 423.) This is the first clear statement of the Galilean relativity principle. It implied that the dispute between the heliocentric and geocentric views of the universe was mistakenly framed: the proper question about “the system of the world” was not “which body is at rest in the center?” but “where is the center of gravity of the system, and which body is closest to it?” For in a system of orbiting bodies, only their common center of gravity will be unaccelerated, and by Corollary V, the motions of the bodies in the system will be the same, whether its center of gravity is at rest or in uniform rectilinear motion. 
The system is indeed approximately Keplerian, since the sun has by far the greatest mass and is therefore little disturbed from the center of gravity, which is therefore very close to the common focus of the approximately Keplerian ellipses in which the planets orbit the sun. But by Corollary V, the nearly-Keplerian structure of the system is completely independent of the system's state of motion in absolute space. The Galilean relativity principle thus expressed the insight that different states of uniform motion, or different uniformly-moving frames of reference, determine only different points of view on the same physically objective quantities, namely force, mass, and acceleration. We can see this insight expressed more explicitly in Newton's understanding of inertia. For Leibniz (among others), as we saw, moving force, the power of a body to change the motion of another, was determined by velocity. It was therefore seen as an active power, fundamentally different from the passive power of a resting body to resist any change of position. Newton, in contrast, understood the “force of inertia” as a Galilei-invariant quantity: [A] body exerts this force only during a change of its state, caused by another force impressed upon it, and the exercise of this force is, depending on viewpoint, both resistance and impetus: resistance in so far as the body, in order to maintain its state, strives against the impressed force, and impetus in so far as the same body, yielding only with difficulty to the force of a resisting obstacle, endeavors to change the state of that obstacle. Resistance is commonly attributed to resting bodies and impetus to moving bodies; but motion and rest, in the popular sense of the term, are distinguished from each other only by point of view, and bodies commonly regarded as being at rest are not always truly at rest. (1726, p. 404–05.)
Newton thus recognized the powers distinguished by Leibniz as the same thing seen from different points of view. Newton understood the Galilean principle of relativity with a degree of depth and clarity that eluded most of his “relativist” contemporaries. It may seem bizarre, therefore, that the notion of inertial frame did not emerge until more than a century and a half after his death. He had identified a distinguished class of dynamically equivalent “relative spaces,” in any of which true forces and masses, accelerations and rotations, would have the same objectively measured values. Yet these spaces, though empirically indistinguishable, were not equivalent in principle; evidently Newton conceived them as moving with various velocities in absolute space, though those velocities could not be known. Why should not he, or someone, have recognized the equivalence of these spaces immediately? This is not the place for an adequate answer to this question, if indeed one is possible. For much of the 20th century, the accepted answer was that of Ernst Mach: Newton lived in an age “deficient in epistemological critique,” and so was unable to draw the conclusion that these empirically indistinguishable spaces must be equivalent in every meaningful sense, so that no one of them deserves even in principle to be designated as “absolute space.” Yet even those whom the 20th century credited with more sophisticated epistemological views, such as Leibniz, evidently had difficulties understanding force and inertia in a Galilei-invariant way, despite a philosophical commitment to relativity. Perhaps it suffices to say that to abandon the intuitive association of force or motion with velocity in space, and to accept an equivalence-class structure as the fundamental spatiotemporal framework, requires a level of abstraction that became possible only with the extraordinary development of mathematics, especially of a more abstract view of geometry, that took place in the 19th century. 
(See geometry: in the 19th century.) In the 17th century only Christiaan Huygens came close to expressing such a view; he held that not velocity, but velocity-difference, was the fundamental dynamical quantity. He therefore understood, for example, that the “absoluteness” of rotation had nothing to do with velocity relative to absolute space, but arose from the difference of velocity among different parts of a rotating body—a difference which would, evidently, be the same irrespective of the velocity of the body as a whole in absolute space. But of this Huygens gave only the merest suggestion, in manuscripts that remained unpublished for two centuries. (See Stein 1977.) The concept of inertial frame therefore emerged only in the late 19th century, when, as we shall see, it did not seem to be of any great immediate importance. Meanwhile, the relativity principle was understood as the equivalence of uniform states of motion, but any system in such a state was implicitly understood to have a definite, though unknown and unknowable, velocity in absolute space. Euler (1748), for example, defended Newton's conceptions of space and time against the thesis that space and time are ideal, and motion merely relative; his broad argument was that metaphysics had no standing to criticize conceptions that are required by the established laws of physics. Yet he noted that the laws of motion permit us to determine, not the velocity of any motion in space, but only the absolute sameness of direction of an inertial trajectory over time, and the equality of time-intervals in which an inertially-moving particle moves equal distances. To Euler, these irreducibly spatial and temporal aspects of the laws of motion implied that space and time could not possibly be ideal. Like Newton, therefore, he upheld both the relativity of velocity and the reality of absolute space. The inconsistency of such a theory can be seen in two ways.
On the one hand, we can see it as a fundamental incoherence, even if, again, we excuse those who held it on the grounds of the limited mathematical tools available to them. On the other hand, it does represent a deep appreciation of the indistinguishability of velocities in absolute space, and a consequent effort to make sure that the actual treatment of actual physical systems is not undermined by this uncertainty. Newton hoped to analyze the dynamical interactions that hold the solar system together; he wanted to show that his dynamical account, and the view of “the frame of the system of the world” that emerges from it, is a matter of “reasoning from phenomena” rather than of plausible conjecture. It was therefore a very circumspect, even prescient, move on his part to demonstrate, through his use of Corollaries IV and V, that the analysis is completely independent of any conceivable translation of the system in absolute space. The development of this concept began with a renewed critical analysis of the notion of absolute space, for reasons not anticipated by Newton's contemporary critics. Its starting point was a critical question about the law of inertia: relative to what is the motion of a free particle uniform and rectilinear? If the answer is “absolute space,” then the law would appear to be something other than an empirical claim, for no one can observe the trajectory of a particle relative to absolute space. Two quite different answers to the question were offered in 1870, in the form of revised statements of the law of inertia. Carl Neumann proposed that when we state the law, we must suppose that there is a body somewhere in the universe—the “body Alpha”—with respect to which the motion of a free particle is rectilinear, and that there is a time-scale somewhere relative to which it is uniform (Neumann 1870).
Ernst Mach (1883) claimed that the law of inertia, and Newton's laws generally, implicitly appeal to the fixed stars as a spatial reference-frame, and to the rotation of the earth as a time-scale; at least, he held, such is the basis for any genuine empirical content that the laws have. The notion of absolute space, it followed, was only an unwarranted abstraction from the practice of measuring motions relative to the fixed stars. Mach's proposal had the advantage of a clear empirical motivation; Neumann's “body Alpha” seemed no less mysterious than absolute space, and almost sounds comical to the modern reader. But Neumann's discussion of a time-scale was somewhat more fruitful. He noted that the law of inertia defines a time-scale: equal intervals of time are those in which a free particle travels equal distances. Such a definition is another aspect of the Newtonian theory first made explicit by Euler (1748). Neumann also noted, however, that this definition is quite arbitrary. For, in the absence of a prior definition of equal times, any motion whatever can be stipulated to be uniform. It is no help to appeal to the requirement of freedom from external forces, since the free particles presumably are known to us only by their uniform motion. We have a genuine empirical claim only when we state of at least two free particles that their motions are mutually proportional; equal intervals of time can then be defined as those in which two free particles travel mutually proportional distances. Neumann's definition of a time-scale directly inspired Ludwig Lange's conception of “inertial system,” introduced in 1885. An inertial coordinate system ought to be one in which free particles move in straight lines. But any trajectory may be stipulated to be rectilinear, and a coordinate system can always be constructed in which it is rectilinear. And so, as in the case of the time-scale, we cannot adequately define an inertial system by the motion of one particle.
Indeed, for any two particles moving anyhow, a coordinate system may be found in which both their trajectories are rectilinear. So far the claim that either particle, or some third particle, is moving in a straight line may be said to be a matter of convention. We must define an inertial system as one in which at least three non-collinear free particles move in noncoplanar straight lines; then we can state the law of inertia as the claim that, relative to an inertial system so defined, the motion of any fourth particle, or arbitrarily many particles, will be rectilinear. The notions of inertial system and Neumann's time-scale, which Lange called an “inertial time-scale,” may be combined as follows: relative to a coordinate system in which three free particles move in straight lines and travel mutually-proportional distances, the motion of any fourth free particle will be rectilinear and uniform. The questionable Newtonian concepts of absolute rotation and acceleration, Lange proposed, could now be replaced by the concepts of “inertial rotation” and “inertial acceleration,” i.e. rotation and acceleration relative to an inertial system and inertial time-scale. See Figures 2 and 3.
Figure 2: Neumann's Time-Scale: By Newton's first law, a particle not subject to forces travels equal distances in equal times. But which particles are free of forces? This might appear to be a matter of convention. Either P1 or P2 can be arbitrarily stipulated to be at the origin of a system of coordinates, and to serve as the measure of equal times. But I can say of two particles with different velocities: in intervals of time in which one moves a given distance d1, the other moves a proportional distance d2 = kd1 (where k is a constant; i.e., d2/d1 = k). Or I can compare a particle to a freely rotating planet: in intervals of time through which the planet rotates through equal angles, the particle moves equal distances.
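Neumann's criterion of mutual proportionality is easy to illustrate with a few invented trajectories (the functions and sample values below are our own, chosen only for the example): two free particles cover mutually proportional distances over any intervals of the time parameter, while an accelerated particle fails the test.

```python
def distance_ratios(times, x1, x2):
    """Ratios d1/d2 of the distances two particles cover over each
    successive interval of the chosen time parameter."""
    return [(x1(b) - x1(a)) / (x2(b) - x2(a))
            for a, b in zip(times, times[1:])]

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
free_a = lambda t: 2.0 * t + 1.0   # free particle, velocity 2
free_b = lambda t: 5.0 * t - 3.0   # free particle, velocity 5
accel = lambda t: t * t            # accelerated particle

# Two free particles cover mutually proportional distances (constant ratio k):
print(distance_ratios(ts, free_a, free_b))   # [0.4, 0.4, 0.4, 0.4]
# An accelerated particle fails the criterion (the ratio drifts):
print(distance_ratios(ts, accel, free_b))    # [0.2, 0.6, 1.0, 1.4]
```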
Figure 3: Lange's Definition of ‘inertial system’ (1885): An inertial system is a coordinate system with respect to which three free particles, projected from a single point and moving in non-coplanar directions, move in straight lines and travel mutually-proportional distances. The law of inertia then states that relative to any inertial system, any fourth free particle will move uniformly.
At about the same time, apparently unaware of the work of Mach, Neumann, and Lange, James Thomson expressed the content of the law of inertia, and the appropriate frame of reference and time-scale (“dial-traveller”), somewhat differently:
For any set of bodies acted on each by any force, a REFERENCE FRAME and a REFERENCE DIAL-TRAVELLER are kinematically possible, such that relatively to them conjointly, the motion of the mass-centre of each body, undergoes change simultaneously with any infinitely short element of the dial-traveller progress, or with any element during which the force on the body does not alter in direction nor in magnitude, which change is proportional to the intensity of the force acting on that body, and to the simultaneous progress of the dial-traveller, and is made in the direction of the force. (Thomson 1884, p. 387)
More simply, an inertial reference-frame is one in which Newton's second law is satisfied, so that every acceleration corresponds to an impressed force. Thomson did not reject the term “absolute rotation,” holding instead that it has to be understood as rotation relative to a reference frame that satisfies his definition. The definition does not express, as Lange's does, the degree of arbitrariness involved in the construction of an inertial system by means of free particles. Moreover, like Lange's, it leaves out a crucial condition for an inertial system as we understand it: all forces must belong to action-reaction pairs.
Otherwise we could have, as on a rotating sphere, merely apparent (centrifugal) forces that are, by definition, proportional to mass and acceleration, and so the rotating sphere would satisfy Thomson's definition. Therefore the definition needs to be completed by the stipulation that to every action there is an equal and opposite reaction. (This completion was actually proposed by R.F. Muirhead in 1887.) But, so completed, Thomson's definition has two advantages over Lange's. First, by appealing to Newton's second law instead of his first, it shows that we can apply the notion of inertial frame without having to consider the question whether there really are any free particles in nature. Second, it exhibits more clearly an essential point about the relation between the laws of motion and the inertial frames: that the laws assert the existence of at least one inertial frame. The original question, “relative to what frame of reference do the laws of motion hold?” is revealed to be wrongly posed. For the laws of motion essentially determine a class of reference frames, and (in principle) a procedure for constructing them. For the same reason, a skeptical question that is still commonly asked about the laws of motion—why is it that the laws are true only relative to a certain choice of reference frame?—is also wrongly posed. If Newton's laws are true, then we can construct an inertial frame; their truth doesn't depend on our ability to construct such a frame in advance. 
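The rotating-sphere worry can be made concrete with a small numerical sketch (our own construction, not from the source): a particle at rest in an inertial frame, and hence subject to no force, nonetheless acquires a coordinate acceleration of magnitude rω² when described in a frame rotating with angular velocity ω, and the corresponding "force" mrω² has no reaction partner.

```python
import math

def rotating_frame_coords(r, omega, t):
    """Coordinates, in a frame rotating at angular velocity omega about
    the origin, of a point fixed at (r, 0) in the inertial frame."""
    return (r * math.cos(omega * t), -r * math.sin(omega * t))

def coord_accel(r, omega, t, h=1e-4):
    """Central-difference estimate of the coordinate acceleration."""
    pts = [rotating_frame_coords(r, omega, t + k * h) for k in (-1, 0, 1)]
    ax = (pts[0][0] - 2 * pts[1][0] + pts[2][0]) / h**2
    ay = (pts[0][1] - 2 * pts[1][1] + pts[2][1]) / h**2
    return math.hypot(ax, ay)

# A force-free particle nonetheless "accelerates" in the rotating frame,
# with magnitude r * omega**2 -- an apparent force with no reaction partner.
print(coord_accel(r=2.0, omega=3.0, t=0.7))   # approximately 18.0
```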
By the early years of the 20th century, this notion of inertial system seems to have been widely accepted, even if the specific works of Lange and Thomson were largely forgotten; in writing “On the electrodynamics of moving bodies” in 1905, Einstein took it to be obvious to his readers that classical mechanics does not require a single privileged frame of reference, but an equivalence-class of frames, all in uniform motion relative to each other, and in any of which “the equations of mechanics hold good.” Two inertial frames with coordinates (x, y, z, t) and (x′, y′, z′, t′) are related by the Galilean transformations,
x′ = x − vt
y′ = y
z′ = z
t′ = t
(assuming that the x axis is defined to be the direction of their relative motion). These transformations clearly preserve the invariant quantities of Newtonian mechanics, i.e. acceleration, force, and mass (and therefore time, length, and simultaneity). As far as Newtonian mechanics was concerned, then, the problem of absolute motion was completely solved; all that remained was to express the equivalence of inertial frames in a simpler geometrical structure. The lack of a privileged spatial frame, combined with the obvious existence of privileged states of motion—paths defined as rectilinear in space and uniform with respect to time—suggests that the geometrical situation ought to be regarded from a four-dimensional spatiotemporal point of view. The structure defined by the class of inertial frames can be captured in the statement that spacetime is a four-dimensional affine space, whose straight lines (geodesics) are the trajectories of particles in uniform rectilinear motion. See Figure 4.
Figure 4: Inertial Trajectories as Straight Lines of Spacetime
The uniformly moving particle will travel the same distance in the same intervals. A particle that accelerates after t1 will move a greater distance during t2 and therefore its path in spacetime changes direction.
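That the Galilean transformations preserve acceleration can be checked directly. The sketch below is our own (the quadratic trajectory is an arbitrary example): the coordinate x changes from frame to frame, but the second time derivative, and with it force and mass, does not.

```python
def boost(x, t, v):
    """Galilean boost along x with relative velocity v: (x, t) -> (x - v*t, t)."""
    return x - v * t, t

def accel(traj, t, h=1e-3):
    """Central-difference estimate of d^2 x / dt^2 at time t."""
    return (traj(t - h) - 2 * traj(t) + traj(t + h)) / h**2

traj = lambda t: 0.5 * 3.0 * t**2 + 2.0 * t + 1.0   # constant acceleration 3
boosted = lambda t: boost(traj(t), t, v=10.0)[0]    # same motion, boosted frame

# Position and velocity differ between the two frames, but the
# acceleration is the same in both:
print(accel(traj, 1.0), accel(boosted, 1.0))   # both approximately 3.0
```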
That is, spacetime is a structure whose automorphisms—the Galilean transformations that relate one inertial frame to another—are equivalent to affine transformations: they take straight lines into straight lines (i.e. an inertial motion in one inertial frame will be an inertial motion in any other inertial frame, and likewise for an accelerating or rotational motion), and parallel lines into parallel lines (i.e. uniformly-moving particles or observers who are relatively at rest in one frame will also be relatively at rest in another). (See Stein 1967, Ehlers 1973, and Friedman 1983 for further explanation.) An inertial frame can be characterized as a family of parallel straight lines “filling” spacetime, representing the possible trajectories of a family of free particles that are relatively at rest. See Figure 5: Each of these families of straight lines, F1 and F2, represents the trajectories of a family of free particles that are relatively at rest, and therefore each defines an inertial frame. Relative to each other, the frames defined by F1 and F2 are in uniform motion. Each of the surfaces S is a “hypersurface of absolute simultaneity” representing all of space at a given moment; evidently (given the Galilean transformations) two inertial frames will agree on which events in spacetime are simultaneous. From this we can see that the assertion that an inertial frame exists imposes a global structure on spacetime; it is equivalent to the assertion that spacetime is flat. As we can see from the Galilean transformations, distinct inertial frames will agree on time and simultaneity. Therefore, in the four-dimensional picture, the decomposition of spacetime into hypersurfaces of absolute simultaneity is independent of the choice of inertial frame. Another way of putting this is that Newtonian spacetime is endowed with a projection of spacetime onto time, i.e. a function that identifies spacetime points that have the same time-coordinate. 
Similarly, absolute space arises from a projection of spacetime onto space, i.e. a function that identifies spacetime points that have the same spatial coordinates. See Figure 6.

Figure 6. Left: The relation of simultaneity “decomposes” spacetime into 3-dimensional pieces, each representing “all of space at a given time,” by projecting spacetime onto time, i.e., by identifying spacetime points that have the same time coordinates. Right: Similarly, one can think of the notion of “same place” as projecting spacetime onto space, i.e., by identifying spacetime points that have the same spatial coordinates; each of the trajectories thus singled out represents “a given place at all times.”

But this latter projection is arbitrary: while it assumes that we can identify the same time at different spatial locations, Newtonian mechanics provides no physical way of identifying the same spatial point at different times. Thus the equivalence of inertial frames can be thought of as the arbitrariness of the projection of spacetime onto space, any such projection being, essentially, the arbitrary choice of some particular inertial frame as a rest-frame.

| Here is a spacetime diagram of motions relative to the inertial frame in which O1, O2, and P are at rest. This can be seen as arising from the projection of each of their inertial trajectories onto a single point of space. | Here is the same situation viewed from an inertial frame in which O3 and P′ are at rest. Now O1, O2, and P are in uniform motion. |
| O1 and O2 are at rest | O3 is at rest |
| O3 is in uniform motion | O1 and O2 are in uniform motion |
| O4 is accelerating any old way | O4 is accelerating any old way |
| O5 and O6 are revolving around their common centre of gravity P, which is at rest | O5 and O6 are revolving around their common centre of gravity P, which is in uniform motion |
| O7 and O8 are revolving around their centre of gravity P′, which is in uniform motion | O7 and O8 are revolving around their centre of gravity P′, which is at rest |

By the time that this representation of the Newtonian spacetime structure was developed, however, the Newtonian conception of inertial frame had been essentially overthrown. First, 19th-century electrodynamics raised again the question of a privileged frame of reference: the conception of light as an electromagnetic wave in the ether implied that the rest-frame of the ether itself should play a distinguished role in electrodynamical phenomena. On the one hand, physicists such as Maxwell and Lorentz were careful to point out that velocity relative to the ether was not equivalent to absolute velocity, and that the state of motion of the ether itself was necessarily unknown—in other words, that this conception of light did not violate the classical principle of relativity. On the other hand, the existence of such a preferred frame made the equivalence of inertial frames correspondingly less interesting, even if it was true in principle. This is why the appearance of the idea of inertial frame in the 1880's, as I suggested earlier, was not of pressing physical interest to the majority of physicists, and seemed to be a mere philosophical sidelight. The attempts to measure the effects of motion relative to the ether commanded considerably more attention.
Second, the abandonment of the ether—following the failure of attempts to measure velocity relative to the ether and, more generally, the apparent independence of all electrodynamical phenomena from motion relative to the ether—did not vindicate the Newtonian inertial frame, but required a dramatically revised conception. Special relativity might be said to have applied the relativity principle of Newtonian mechanics to Maxwell's electrodynamics, by eliminating the privileged status of the rest-frame of the ether and admitting that the velocity of light is independent of the motion of the source. As Einstein expressed it, “the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good.” (1905, p. 38.) But as Einstein also pointed out, the invariance of the velocity of light and the principle of relativity, at least in its Galilean form, are incompatible. It simply makes no sense, according to Galilean relativity, that any velocity should appear to be the same in inertial frames that are in relative motion. Einstein solved this difficulty through his analysis of simultaneity: frames in relative motion can agree on the velocity of light only if they disagree on simultaneity; only the relativity of simultaneity makes possible the invariance of the velocity of light. This means that the transformations between inertial frames that preserve the velocity of light will not preserve simultaneity. These are the Lorentz transformations (again taking the x axis as the direction of relative motion):

x′ = γ(x − vt)
y′ = y
z′ = z
t′ = γ(t − vx/c²)

where γ = 1/√(1 − v²/c²) and c is the velocity of light. Evidently these transformations do not preserve length and time, and so the invariant quantities of Newtonian mechanics, which presuppose invariant measures of length and time, must now depend on the choice of inertial frame. By the same token, the notions of force, mass, and acceleration can no longer be appealed to in the definition of an inertial frame.
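Both features—the invariance of the velocity of light and the relativity of simultaneity—can be verified in a short numerical sketch of the standard Lorentz transformation formulas (written here with c = 1; the code and its values are illustrative, not from the text):

```python
import math

# Sketch: the Lorentz transformation preserves the speed of light but not
# simultaneity. Units are chosen with c = 1; all values are illustrative.
C = 1.0

def lorentz(x, t, v, c=C):
    """Transform an event (x, t) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

v = 0.6  # relative frame velocity, as a fraction of c

# A light signal: events on the worldline x = c*t still move at c
# in the primed frame.
for t in (1.0, 2.0, 5.0):
    x = C * t
    xp, tp = lorentz(x, t, v)
    assert math.isclose(xp / tp, C)

# Two events simultaneous in the unprimed frame (same t, different x)
# are not simultaneous in the primed frame.
(_, t1), (_, t2) = lorentz(0.0, 0.0, v), lorentz(10.0, 0.0, v)
print(t1, t2)  # unequal primed times: simultaneity is frame-relative
```

The Galilean transformation, applied to the same pair of events, would have returned equal primed times; the disagreement here is exactly the relativity of simultaneity described above.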
The definition must instead appeal to the invariant quantities of electrodynamics: an inertial frame is one in which light travels equal distances in equal times in arbitrary directions. What seems impossible, from the point of view of Galilean relativity, is that a frame that moves uniformly relative to such a frame should also satisfy the definition. But that, again, rests on the assumption that two inertial frames will have a common measure of simultaneity. If, as Einstein asserts, the only reasonable definition of simultaneity is one provided by light signals, then there is no determination of simultaneity that will give the same results in different inertial frames. The spacetime structure that is implied by special relativity is thus an affine space, like Newtonian spacetime, but it is not objectively divided into hypersurfaces of absolute simultaneity; the sets of simultaneous events for any inertial frame are the hyperplanes orthogonal to the trajectories that determine that frame. In other words, the choice between two inertial frames determines a choice between two distinct divisions of spacetime into space and time. See Figure 8.

Figure 8: The inertial frames F and F′ are in relative motion, and therefore, as the Lorentz transformations indicate, they disagree on simultaneity. F and F′ thus determine distinct decompositions of spacetime into instantaneous spaces, S and S′, respectively.

The details of Einstein's argument and the structure of Minkowski spacetime can be found elsewhere (see, e.g., Einstein 1951 and Geroch 1978). Here only one more point is worth making. It could be argued that Einstein's and Lorentz's views are completely equivalent. That is, we could assume that there is indeed a privileged frame of reference, and that the apparent invariance of the velocity of light is explained by the effects on bodies of their motion through the ether (the Lorentz contraction and time dilation).
This purported distinction between empirically indistinguishable frames has often been criticized on straightforward methodological grounds, but it could be (and surely has been) argued that it is more intuitively plausible than the relativity of simultaneity. After all, knowing that (as Einstein showed) the Lorentz contraction can be derived from the invariance of the velocity of light does not, by itself, entitle us to say which of the two is the more convincing starting-point. This is why it is so important that Einstein's 1905 paper begins with a critical analysis of the entire notion of a frame of reference. It is tacitly assumed by Lorentz's theory, and classical electrodynamics generally, that we have a reference-frame in which we can measure the velocity of light. But how is such a reference-frame determined? The distances between points in space can only be determined if it is possible to determine which events are simultaneous. In practice this is always done by light-signalling, if only in the informal sense that we identify simultaneous events when we see them at the same time. But if the spatial frame of reference is determined by light-signals, and is then to be used to measure the speed of light, we would appear to be going in a circle; the underlying assumption must be that, while light-signalling is useful and practical, it is not essential to the definition of simultaneity, and that there is a fact of the matter about which events are simultaneous that is independent of this method of signalling. This assumption was actually made explicit by James Thomson. He recognized—alone, apparently, before Einstein—that the measurement of distance involves the difficulty as to imperfection of our means of ascertaining or specifying, or clearly idealizing, simultaneity at distant places. For this we do commonly use signals by sound, by light, by electricity, by connecting wires or bars, and by various other means. 
The time required in the transmission of the signal involves an imperfection in human powers of ascertaining simultaneity of occurrences at distant places. It seems, however, probably not to involve any difficulty of idealizing or imagining the existence of simultaneity. Probably it may not be felt to involve any difficulty comparable to that of attempting to form a distinct notion of identity of place at successive times in unmarked space. (1884, p. 380). In other words, Thomson assumed that it was not a difficulty in principle, like the difficulty of determining rest in absolute space. But Einstein showed that it was precisely the same kind of difficulty, and that determinations of simultaneity involve reference to an arbitrary choice of reference-frame, just as much as determinations of velocity. Einstein's conclusion is, of course, entirely contingent on the empirical facts of electrodynamics; it could have been avoided if there were in nature a useful signal of some kind whose transmission would provide a criterion of absolute simultaneity, so that the same events would be determined to be simultaneous in all inertial frames. Or, experiments might have been able to reveal the dependence of the velocity of light on the state of motion of the source. Then synchronization by light-signals could still have been regarded as a mere practical substitute for a notion of absolute simultaneity that stood on independent grounds, empirically as well as conceptually. But as Einstein saw, because of the apparent independence of the velocity of light of the motion of the source, even “idealizing or imagining the existence of simultaneity” involves light-signaling more essentially than anyone could have realized. Unless some other criterion of simultaneity is provided, therefore, the establishment of a spatial frame of reference involves light-signaling in an essential way. 
In the absence of such a criterion the speed of light cannot be, as Lorentz supposed, empirically measured against the background of an inertial frame; in that case the only empirically sound definition of an inertial frame is the one that appeals to the speed of light. It may seem surprising that, after this insightful analysis of the concept of inertial frame and its role in electrodynamics, Einstein should have turned almost immediately to call that concept into question. But he had a compelling combination of physical and philosophical motives to do so. On the physical side, he realized (along with many others) that special relativity would require some fundamental revision of the Newtonian theory of gravity. On the philosophical side, he became convinced, largely by his reading of Mach (1883), that the central role of inertial frames was an “epistemological defect” that special relativity shared with Newtonian mechanics. (Einstein 1916, pp. 112–113.) Only relative motions are observable, yet both of these theories purport to identify a privileged state of motion and use it to explain observable effects (such as centrifugal forces). Coordinate systems are not observable, yet both of these theories assign a fundamental physical role to certain kinds of coordinate system, namely, the inertial systems. In either theory, inertial coordinates are distinguished from all others, and the laws of physics are said to hold only relative to inertial coordinate systems. In an epistemologically sophisticated theory, both of these problems would be solved at once: the new theory would only refer to what is observable, which is relative motion; it would admit arbitrary coordinate systems, instead of confining itself to a special class of system. Why, after all, should any genuine physical phenomenon depend on the choice of coordinate system? 
Another way of putting the same point is to say that, in Newtonian mechanics and special relativity, rotation is “absolute” because the transformations between inertial frames (Galilean or Lorentzian) preserve rotational states. Thus the “absoluteness” of rotation arises precisely from singling out one type of frame, by one type of transformation, instead of allowing arbitrary transformations and arbitrary frames. Einstein held that this epistemological insight had a natural mathematical representation in the principle of general covariance, or the principle that the laws of nature are to be invariant under arbitrary coordinate transformations. More precisely, what this means is that coordinate transformations are no longer required (as in the affine spaces of Newtonian mechanics and special relativity) to take straight lines to straight lines, but only to preserve the smoothness of curves (i.e. their differentiability). The general theory of relativity was intended to be a generally covariant account of spacetime, and its general covariance was intended to express the general relativity of motion. And the theory came into being because Einstein perceived a deep connection between this project and that of finding a relativistic theory of gravitation. The philosophical motivations and implications of Einstein's view are dealt with elsewhere. (See, for example, the entries on Einstein's philosophy of science; the hole argument; and early philosophical interpretations of general relativity.) We will consider here only the bearing of general relativity on the notion of an inertial frame. It is questionable whether Einstein succeeded in establishing the general relativity of motion, but it is clear that general relativity undermines the concept of inertial frame in important respects. 
This arises from the equivalence principle: that inertial mass—the quantity that enters into Newton's second law, and that is a measure of a body's resistance to acceleration—is equivalent to gravitational mass, the quantity that enters into Newton's law of universal gravitation. A more empirical way of expressing it is that all bodies fall with the same acceleration in the same gravitational field, or, the trajectory of a body in a given gravitational field will be independent of its mass and composition. This is the principle that Newton tested by constructing pendulums with wooden boxes as their bobs, which he would fill with different materials in order to see whether those differences made a difference to the speed of falling; they didn't. Eötvös made more precise tests in the late 19th century, and established the principle to much greater accuracy; these are the results on which Einstein would have relied. Newton also tested the principle for bodies whose masses differ greatly, by observing that Jupiter and its four moons all received precisely the same acceleration from the sun's gravitational field. The equivalence principle suggests, however, that a freely-falling frame of reference is physically indistinguishable from an inertial frame. Newton had already noticed this, and indeed he stated it, more or less, in Corollary VI to the laws of motion: If bodies are moving in any way whatsoever with respect to one another and are urged by equal accelerative forces along parallel lines, they will all continue to move with respect to one another in the same way as they would if they were not acted on by those forces. (1726, p. 423.) For example, he was able to treat the system of Jupiter and its moons as if it were (nearly) at rest or moving uniformly in a straight line, because the attractive force of the sun acts (almost) equally on every part of the system. 
See Figure 9.

Figure 9: Newton's Corollary VI. What seem, within a given system, like equal and parallel accelerations may be, on a larger scale, unequal and converging on some distant massive object; e.g., the system of Jupiter and its moons is falling toward the sun, but “locally” the accelerations are very nearly equal and parallel, and may therefore be neglected.

He even applied this reasoning to the entire solar system, in order to justify treating it as an isolated system: if there were any outside force acting on it, it must have been acting more or less equally and in parallel directions on all parts of the system.

It may be alleged that the sun and planets are impelled by some other force equally and in the direction of parallel lines; but by such a force (by Cor. VI of the Laws of Motion) no change would happen in the situation of the planets to one another, nor any sensible effect follow; but our business is with the causes of sensible effects. Let us, therefore, neglect every such force as imaginary and precarious, and of no use in the phenomena of the heavens…. (1729, vol. 2, p. 558)

Now, it is a familiar fact that in an orbiting spacecraft, bodies behave as if no forces were acting on any of them (as if they were “weightless”), because the attraction of the earth acts equally on all of them. But these phenomena are not, by themselves, evidence that no phenomena are capable of distinguishing an inertial frame from a falling frame. Einstein was willing to generalize the equivalence principle, and to conclude that the classical idea of a distinguished class of frames of reference has no physical basis. Any frame that we might regard as inertial might be, for all we can tell by experiment, in free fall. By the same token, any frame that is uniformly accelerating is indistinguishable from one that is at rest in a uniform gravitational field.
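Corollary VI can be put in rough numbers. In the sketch below (the gravitational parameter and distances are illustrative values I am supplying, not data from the text), two nearby bodies fall toward a distant point mass; their accelerations are so nearly equal and parallel that, relative to their common free fall, only a tiny “tidal” residue remains—which is why a freely-falling frame locally mimics an inertial one.

```python
# Sketch: Corollary VI / the equivalence principle in numbers. Two nearby
# bodies fall toward a distant point mass; in the freely-falling frame only
# a small tidal residue survives. GM, R, and d are illustrative values.

GM = 1.0e20  # gravitational parameter of the distant mass
R = 1.0e9    # distance from the mass to the centre of the local system
d = 1.0e5    # separation of the two local bodies along the same line

def accel(r):
    """Newtonian gravitational acceleration toward the mass at distance r."""
    return GM / r**2

a_near = accel(R - d / 2)
a_far = accel(R + d / 2)
a_common = accel(R)

# The common acceleration is shared by everything in the local system and
# so is unobservable from within; only the difference remains.
tidal = a_near - a_far
print(a_common)          # 100.0: the shared free-fall acceleration
print(tidal / a_common)  # ~2e-4: the relative tidal residue
```

Shrinking the local separation d shrinks the residue proportionally, which is the numerical counterpart of the claim that the equivalence holds “locally.”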
Suppose that you are in a box at rest on the earth; you and everything in the box, by the equivalence principle, will be urged downward with the same acceleration g (= 9.8 meters/second/second). Now suppose that the box itself is in empty, gravity-free space, but accelerating upward (i.e. in the direction of its roof) with an acceleration of the same magnitude g. Obviously, because of their inertia, bodies in the box, including your own body, will exert the same force—have the same “weight”—on the floor as if the box were at rest and sitting on the earth. To get a clearer idea of the physical significance of the equivalence principle, and its connection with general covariance, consider the Newtonian procedure for analyzing motion in the solar system, here sketched very roughly:

- Determine the accelerations of all the planets relative to the fixed stars.
- Using the laws of motion, their corollaries, and all the propositions proved from these in Book I of Principia, derive from the accelerations the forces needed to produce them; in particular, derive from the orbits the centers of those orbits, and the masses of the bodies needed to produce those forces. This crucially involves the law of action and reaction, for otherwise it would be impossible to break down the total acceleration of any planet into the components contributed by particular other planets; the earth's acceleration, for example, is the sum of its accelerations toward all the other planets, and each individual acceleration is part of an action-reaction pair involving some other planet.
- When we understand the mutual interactions among the planets, we are in a position to estimate their relative masses. In Newton's case, this was necessarily restricted to the planets with satellites, because only in those cases could he compare the accelerations they determine at given distances and so deduce the differences in mass.
By this reasoning he estimated the ratios of the Sun's mass to those of Jupiter (1067 to 1), Saturn (3021 to 1), and the earth, and was able to calculate that the center of mass of the entire solar system would never be more than one solar diameter from the center of the Sun.

- Having found the center of mass, we have in principle determined an inertial frame: by Corollary IV to the laws of motion, the center of mass will be at rest or moving uniformly in a straight line. That is, the mutual actions of the bodies in the system will not change the state of motion of the center of mass. And having determined an inertial frame, we are in a position to say that the accelerations relative to the center of mass frame are the true accelerations.

One might think that the problem of relativity arises right from the start: the reliance on the fixed stars already seems to introduce an arbitrary assumption that threatens to vitiate Newton's procedure as an account of the true motions. But the framework of the fixed stars, initially just taken for granted, turns out to be justified in the course of the analysis. If it turns out that all the accelerations relative to the fixed stars can be analyzed into action-reaction pairs involving bodies within the system, leaving no “leftover” accelerations that need to be traced to some yet-unknown influence, then we can conclude that the stars are a suitable (sufficiently inertial) frame of reference after all. (By the later 19th century, observations became sufficiently precise to reveal that there is in fact a leftover acceleration, namely the famous extra precession of Mercury. But that could not affect Newton's analysis in 1687.) In contrast, had we chosen the earth as a frame of reference, we would find that there are accelerations relative to this frame—e.g. Coriolis and centrifugal accelerations—that don't satisfy the law of action and reaction. The relativistic aspect of this situation arises from the equivalence principle.
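Newton's comparison of central masses from the accelerations their satellites receive can be sketched with rough modern orbital figures. Packaged as Kepler's third law, each circular orbit yields the mass of its central body; the orbital radii and periods below are approximate modern values supplied for illustration (they are assumptions, not data from the text), and the resulting ratio comes out close to Newton's estimate of 1067 to 1.

```python
import math

# Sketch: estimating the Sun/Jupiter mass ratio from satellite orbits,
# via Kepler's third law M = 4*pi^2 * r^3 / (G * T^2). Orbital figures
# are rough modern values, supplied here for illustration.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_mass(r, T):
    """Mass implied by a circular orbit of radius r (m) and period T (s)."""
    return 4 * math.pi**2 * r**3 / (G * T**2)

m_sun = central_mass(1.496e11, 3.156e7)      # Earth's orbit about the Sun
m_jupiter = central_mass(4.217e8, 1.529e5)   # Io's orbit about Jupiter

print(round(m_sun / m_jupiter))  # on the order of 1050: near Newton's 1067
```

The point of the sketch is Newton's method, not the particular numbers: satellites act as accelerometers for their primaries, so relative masses follow from orbital data alone, without weighing anything.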
Newton's Corollary VI said that the inertial frame we construct by this procedure is effectively indistinguishable from one in which all the bodies are undergoing equal and parallel accelerations caused by some force that acts equally on all of them; the equivalence principle asserts that gravity is such a force. In following the Newtonian procedure for constructing an inertial frame, we have constructed a frame which might be, for all we can determine empirically, falling in the gravitational field of some other system. Here again, as in his use of Corollary V, we can see that Newton was being remarkably circumspect about his frame of reference: he needed to show that his analysis of the forces at work, and his conclusion about the nearly-heliocentric structure of the system, are not affected by any unknown forces acting on the system as a whole, and his appeal to Corollary VI precisely satisfies this need. By the same token, however, the accelerations relative to this frame cannot be known to be the “true accelerations”; they may be accelerations relative to a freely-falling trajectory just in case the center of mass is itself freely falling, in which case they have to be added to the gravitational acceleration of the center of mass before we can arrive at the true accelerations. But the acceleration of the center of mass may have to be added to some larger acceleration—and so on. This means that we can't know the true strength of the gravitational field by observing the motions in this frame. The only hope of doing so would be to include all the mass in the universe in one dynamical system; if we knew the center of mass of the entire universe, we could rule out the possibility that something else is exerting an accelerative force, since by hypothesis there would be nothing else. We can see the significance of this more clearly by looking at the equations of motion (in a very simplified form). 
Newton's equation of motion for a particle subject to no force asserts that it moves uniformly, with zero acceleration. Obviously, in a gravitational field, the particle's acceleration will depend on the field. In effect, we are accounting for the trajectory of the falling particle by “decomposing” it into two parts, the part determined by its natural tendency to move uniformly in a straight line, and the part contributed by the gravitational field. But by the analysis of the equivalence principle, determining the inertial part—and therefore determining the gravitational part—depends on our assumption that the center of mass frame is inertial rather than freely falling. And this assumption is arbitrary; that is, it amounts to an arbitrary choice of the coordinate system in which we define the equation of inertial motion. This implies that the gravitational field depends on the coordinate system in precisely the same way. The principle of general covariance, then, acquires its physical significance in conjunction with the equivalence principle. By itself, it says that the geometrical structures of spacetime don't depend on the coordinates in which we express them, or on the set of points that we may think comprises spacetime. This is an important principle, but it doesn't recommend general relativity over other theories, since special relativity and Newtonian mechanics also involve spacetime structures that can be defined in a generally-covariant way, through the same kinds of coordinate-independent mathematical objects that we use in general relativity. Combined with the equivalence principle, however, it implies that a central Newtonian idea—that gravity is a force causing deviations from uniform rectilinear motion—is based on an arbitrary choice of coordinates. For a trajectory that satisfies all empirical criteria for being inertial in a particular frame of reference—e.g. 
the trajectory of the center of mass in our example—may be freely falling relative to some other trajectory that satisfies the same criteria. By contrast, a freely-falling trajectory is a freely falling trajectory in any coordinate system; it is only the decomposition of it into its inertial and gravitational parts that will be different in different coordinate systems. General covariance is thus not an argument against privileged states of motion, as Einstein had hoped it would be. It is an argument that the privileged states of motion should not be mere artifacts of our choice of coordinates, i.e. that they should be coordinate-independent. Precisely what this means depends, then, on what physical means we have at our disposal to identify states of motion other than by simply setting down coordinates. Combined with the equivalence principle, it is an argument for regarding gravitational free-fall as the privileged state of motion, rather than as a forced deviation from the privileged state of motion. And in this way it provides an argument for spacetime curvature. As we saw, in Newtonian and Minkowski spacetime the inertial trajectories are, by definition, the straight lines or geodesics of spacetime. And the flatness of spacetime consists in the fact that these geodesics behave like straight lines in a flat space or surface: parallel geodesics remain parallel, and non-parallel geodesics do not accelerate relative to one another. (In any inertial frame, the motion of any other inertial frame appears uniform.) By the equivalence principle, however, free-fall trajectories satisfy all empirical criteria for being inertial trajectories, and so the distinction between the two types of trajectory depends on the mere choice of coordinates. General covariance suggests, then, that the free-fall trajectories ought to be identified as the inertial trajectories—and therefore, as the geodesics of spacetime. 
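The frame-dependence of the decomposition into inertial and gravitational parts can be made concrete in a toy calculation. In the sketch below (names and values are illustrative), the same falling body is described once from a frame at rest on the ground, where its motion splits into an inertial part plus a gravitational deviation, and once from a frame released into free fall, where the gravitational part has been absorbed into the frame itself; recomposing the two descriptions recovers the one invariant trajectory.

```python
# Sketch: the split of a falling body's motion into an "inertial" part and
# a "gravitational" part depends on the frame, though the trajectory itself
# does not. Names and values are illustrative.

g = 9.8  # m/s^2

def height_ground_frame(t, z0=100.0):
    """Trajectory in a frame at rest on the ground: inertial part z0,
    plus a gravitational deviation -g*t^2/2."""
    return z0 - 0.5 * g * t**2

def height_falling_frame(t, z0=100.0):
    """Same body described in a frame released into free fall at t = 0:
    it simply stays put; the 'gravitational part' has vanished."""
    inertial_part = z0
    return inertial_part

def frame_offset(t):
    """Where the falling frame's origin sits in ground coordinates."""
    return -0.5 * g * t**2

for t in (0.0, 1.0, 2.0):
    # Recompose: falling-frame description + frame offset = ground description.
    assert height_falling_frame(t) + frame_offset(t) == height_ground_frame(t)
print("same trajectory, different decompositions")
```

Which part of the motion counts as “gravitational” is thus an artifact of the chosen coordinates, exactly as the text argues; only the full trajectory is coordinate-independent.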
But if free-fall trajectories are the geodesics of spacetime, then spacetime is curved. For the free-fall trajectories exhibit relative accelerations, and the relative acceleration of geodesics is a defining characteristic of curved geometry. The curvature of the earth's surface, for example, is revealed in the fact that geodesics that begin in parallel directions can begin to approach one another—for example, two lines of longitude can both be perpendicular to the equator, but converge on one another as they approach the poles. And since the relative accelerations of falling bodies depend on the distribution of mass, as we already knew from Newton's theory, we now conclude not only that spacetime is curved, but that its curvature is determined by the distribution of mass. (For further explanation see Geroch 1978.) The curvature of spacetime, finally, determines the status of inertial frames in general relativity. The statement that all reference-frames, rather than just inertial frames, are equivalent is a misleading way of describing the situation; rather, the variable curvature of spacetime makes the imposition of a global inertial frame impossible. So the status of the latter is like the status of a plane rectangular coordinate system on the surface of the earth. Over a sufficiently small area, the coordinate plane may be a good approximation to the surface, but over increasingly large areas it diverges increasingly from the contours of the earth. And if two such coordinate systems, with their origins at different points on the earth, are extended until they meet, they will be seen to be “disoriented” relative to one another. In contrast, a flat plane can be so coordinatized, and coordinate systems originating at different points can be smoothly combined into one system. Similarly, in the affine spaces of Newtonian and special-relativistic physics, any inertial coordinate system can be extended over the whole of spacetime. 
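The convergence of initially parallel geodesics can be computed directly for the example of the sphere. The sketch below (longitudes chosen arbitrarily for illustration) tracks the distance between two meridians of a unit sphere, both perpendicular to the equator, as they run toward the pole:

```python
import math

# Sketch: geodesic convergence on a curved surface. Two meridians on a
# unit sphere are both perpendicular to the equator -- "initially
# parallel" -- yet the distance between them shrinks toward the pole.

def separation(delta_longitude, latitude):
    """Arc distance between two meridians at a given latitude (unit sphere),
    measured along the circle of that latitude."""
    return delta_longitude * math.cos(latitude)

dlon = 0.1  # radians apart at the equator
for lat in (0.0, math.pi / 6, math.pi / 3, math.pi / 2 - 1e-9):
    print(round(separation(dlon, lat), 4))
# The printed separations decrease from 0.1 toward 0: initially-parallel
# geodesics converge, the signature of positive curvature. On a flat
# plane the separation would stay constant at 0.1.
```

This is the quantitative content of the remark that the relative acceleration of geodesics is a defining characteristic of curved geometry.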
And in any system so extended, the trajectory of every other inertial observer will be a uniform rectilinear motion. But if spacetime is variably curved, according to the distribution of mass and energy, local inertial systems will be “disoriented” relative to one another; indeed, the degree of this “disorientation” is one of the measures of curvature. And an inertially-moving—i.e. freely falling—particle will in that case be accelerating in the local inertial system of another freely-falling particle. Thus there are inertial trajectories, but no extended inertial systems. See Figures 10–13.

Figure 10: This Cartesian coordinate system can evidently be simply “set down” over the plane below. Any coordinate system defined at any point of the plane can be smoothly extended over the entire plane.

Figure 11: “Magnified” View of Flat “Local” Coordinate Systems on a Curved Surface. This arbitrary curved surface won't allow for the global laying down of a coordinate system, but must be coordinatized in small overlapping pieces, which generally won't be parallel to one another.

Figure 12: In a flat spacetime, the rest-frame of any inertial observer can be “extended” over all of spacetime in such a way that, in this global inertial frame, the trajectory of every other inertial observer will be an inertial trajectory.

Figure 13: In a curved spacetime, inertial trajectories will be relatively accelerated; indeed the relative acceleration of geodesics is a measure of curvature. Therefore the local inertial frame of any freely-falling observer cannot be extended into a global frame in which all other inertial observers are moving uniformly. The inertial frames of different freely-falling observers will be, like local coordinate systems on a curved surface, “disoriented” relative to one another.

One could try to express this idea with Einstein's remark about the need to “free oneself from the idea that coordinates must have an immediate metrical meaning.” (Einstein 1949, p. 67.) But even this might be misleading.
Einstein evidently was thinking that, in general relativity, coordinates, and coordinate transformations, no longer represent the possible displacements of rigid bodies or the transport of ideal clocks. The insight underlying this is that the notion of rigid displacement—therefore of rigid coordinate system, and inertial frame—imposes a priori a degree of uniformity, or symmetry, on spacetime; the displacement of bodies without change of dimension, and the transport of an ideal clock without distortion of time-intervals, require a homogeneous space. And so rigid displacement cannot be a basic principle in a theory in which spacetime curvature varies according to the distribution of mass and energy. The possibility of a rigid displacement, and therefore the existence of an inertial frame, can only arise a posteriori, as the result of a peculiar distribution of mass-energy (for example, in a universe empty of mass and energy, or with a highly symmetrical distribution). The serious defect in the notion of inertial frame is not that it makes an arbitrary distinction among coordinate systems—for the distinction is quite as genuine as the distinction between flat and curved spacetime—but that it extends indefinitely over spacetime a structure that, in our universe, only corresponds approximately to very small regions.
- DiSalle, R. (1988). Space, Time and Inertia in the Foundations of Newtonian Physics. Unpublished Ph.D. Dissertation, University of Chicago.
- ––– (1991). “Conventionalism and the origins of the inertial frame concept.” In PSA 1990. East Lansing: The Philosophy of Science Association.
- ––– (2002). “Newton's philosophical analysis of space and time.” Forthcoming in The Cambridge Companion to Newton. Cambridge: Cambridge University Press.
- Earman, J. (1989). World Enough and Spacetime: Absolute and Relational Theories of Motion. Boston: M.I.T. Press.
- Ehlers, J. (1973). “The nature and structure of space-time.” In The Physicist's Conception of Nature.
Edited by Jagdish Mehra, 71–95. Dordrecht: Reidel.
- Einstein, A. (1905). “On the electrodynamics of moving bodies.” In Einstein, et al. (1952), pp. 35–65.
- Einstein, A. (1916). “The foundation of the general theory of relativity.” In Einstein, et al. (1952), pp. 109–164.
- Einstein, A. (1949). “Autobiographical notes.” In P.A. Schilpp, ed., Albert Einstein, Philosopher-Scientist. Chicago: Open Court.
- Einstein, A. (1951). Relativity: The Special and the General Theory. R. Lawson, tr. New York: Crown Publishers Inc.
- Einstein, A., H. A. Lorentz, H. Minkowski, and H. Weyl (1952). The Principle of Relativity. W. Perrett and G.B. Jeffery, trs. New York: Dover Books.
- Euler, L. (1748). “Réflexions sur l'espace et le temps.” Histoire de l'Academie Royale des sciences et belles lettres 4 (1748): 324–33.
- Friedman, M. (1983). Foundations of Space-Time Theories. Princeton: Princeton University Press.
- Geroch, R. (1978). General Relativity from A to B. Chicago: University of Chicago Press.
- Lange, L. (1885). “Ueber das Beharrungsgesetz.” Berichte der Königlichen Sachsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-physische Classe 37 (1885): 333–51.
- Leibniz, G. (1970). Philosophical Papers and Letters. Edited by Leroy Loemker. Dordrecht: Reidel.
- Mach, E. (1872). Die Geschichte und die Wurzel des Satzes von der Erhaltung der Arbeit. Prague: J.G. Calve'sche K.-K. Univ-Buchhandlung.
- Mach, E. (1883). Die Mechanik in ihrer Entwickelung, historisch-kritisch dargestellt. 2nd edition. Leipzig: Brockhaus.
- Minkowski, H. (1908). “Space and time.” In Einstein, et al. (1952), pp. 75–91.
- Misner, C., K. Thorne, and J.A. Wheeler (1973). Gravitation. San Francisco: Freeman.
- Muirhead, R.F. (1887). “The laws of motion.” Philosophical Magazine, 5th series, 23: 473–89.
- Neumann, C. (1870). Ueber die Principien der Galilei-Newton'schen Theorie. Leipzig: B. G. Teubner.
- Newton, I. (1726). The Principia: Mathematical Principles of Natural Philosophy, tr. I. Bernard Cohen and Anne Whitman.
Berkeley and Los Angeles: University of California Press, 1999.
- Newton, I. (1729). Sir Isaac Newton's Mathematical Principles of Natural Philosophy and his System of the World. 2 vols. Edited by Florian Cajori. Translated by Andrew Motte. Berkeley: University of California Press, 1962.
- Seeliger, H. (1906). “Über die sogenannte absolute Bewegung.” Sitzungs-Berichte der Bayerische Akademie der Wissenschaft: 85–137.
- Stein, H. (1967). “Newtonian space-time.” Texas Quarterly 10: 174–200.
- Stein, H. (1977). “Some philosophical prehistory of general relativity.” In Foundations of Space-Time Theories. Edited by John Earman, Clark Glymour, and John Stachel, 3–49. Minnesota Studies in the Philosophy of Science, Vol. 8. Minneapolis: University of Minnesota Press.
- Thomson, J. (1884). “On the law of inertia; the principle of chronometry; and the principle of absolute clinural rest, and of absolute rotation.” Proceedings of the Royal Society of Edinburgh 12: 568–78.
- Torretti, R. (1983). Relativity and Geometry. Oxford: Pergamon Press.
Does this test have other names? What is this test? This test looks for bacteria or other organisms in a wound. The test is used to find out if a wound is infected. It can also identify the type of organism that's causing the infection. This test requires a small sample of cells or fluid from a wound. Then the sample is cultured and looked at under a microscope to look for bacteria or other organisms. An infected wound may need special treatment, such as antibiotics. The antibiotics stop the infection and keep it from spreading to other areas of the body. Treating the infection also helps the wound to heal. Why do I need this test? You may need this test if your doctor suspects that your wound is infected or if you were bitten by an animal, insect, or another person. Symptoms of an infected wound include: Swelling or a sudden lump under the skin Pus or bad-smelling fluid draining from the wound Skin around the wound that feels hot to the touch Bumps near the wound that look like boils, pustules, spider bites, or a rash In more advanced infections, you may also have exhaustion, confusion, fever, and chills. What other tests might I have along with this test? Your doctor may also order these tests: Blood tests, including those to check liver function, blood proteins, and blood sugar, as well as a complete blood count, or CBC What do my test results mean? Many things may affect your lab test results. These include the method each lab uses to do the test. Even if your test results are different from the normal value, you may not have a problem. To learn what the results mean for you, talk with your health care provider. Normal results are negative, meaning that no organisms grew in the culture from your wound. A positive result means that bacteria or other organisms did grow and that your wound is infected. From your test results, your doctor can determine what's causing the infection and give you the best antibiotic to treat it. How is this test done? 
This test requires a swab of the fluid or cells from an open wound. Your health care provider will carefully clean the wound and flush out any dirt with water. Then he or she will collect a sample using a long cotton swab to gently wipe the wound. If the wound isn't oozing, your doctor may moisten the swab with a sterile saline solution. Does this test pose any risks? This test poses no known risks. What might affect my test results? Other factors aren't likely to affect your results. How do I get ready for this test? You don't need to prepare for this test.
You are needed on a case. Detective Penn Pincher has requested that you serve as his assistant. It seems that someone has recently noticed that the American dollar coin with Susan B. Anthony on its face has not been seen in quite a while. You are to investigate and prepare a report for the top brass concerning what happened to the Susan B. Anthony coin. Your task is to solve the mystery of the missing Susan B. Anthony dollars. To get started you'll need to do some research about money: what it is and what its uses are. The information you gather through the process will give you a foundation for solving the mystery. Use your Agent's Notebook to aid you in your investigation. Before you start on your investigation, you'll get some help from a money specialist (your teacher) who has been hired to hold a workshop for all the agents on this case. This workshop will cover the fundamentals of what money is and why it is used. Be sure to record any relevant information from this workshop in your Agent's Notebook. You'll need to know the history of the Susan B. Anthony dollar in order to solve the mystery. Read this biography of Susan B. Anthony, a good source of information about the Susan B. Anthony coin. Knowledge of how money is made and circulated is important for this case. View this page on U.S. Currency to learn about the life of a dollar. Be sure to record your findings in your Agent's Notebook. JUST IN! Eyewitnesses have reported sightings of a golden-colored coin with the same characteristics as the Susan B. Anthony dollar. Investigate this new coin; your research on it will affect your investigation of the disappearance of the Susan B. Anthony coin. Our intelligence department has been able to find a source describing the new coin. View the U.S. Mint website and look at the Native American $1 Coin Act page for more details. Answer the final questions in your Agent's Notebook.
These questions will help you to prepare the final report to be submitted to the top brass. Not just anything will serve as money: if it's money, it has to be durable, divisible, portable, and widely accepted. Believe it or not, you could make your own money! However, you may not copy the money that is circulated and printed by the Federal Reserve, and you may not identify money you make as a Federal Reserve Note. So, for the most part, you probably won't find people or merchants willing to accept your homemade money as payment for goods and services. Complete your Agent's Notebook. Prepare a report for the top brass regarding the failure of the Susan B. Anthony dollar. Also, provide your recommendations regarding the new golden coin.
Wi-Fi, though technically advanced, can be very prone to security issues. An unsecured Wi-Fi network can allow almost anyone within range to access your personal accounts and possibly hack your computer or use it for unlawful activities. It is imperative that you protect your Wi-Fi from intruders and set up a way to detect anyone using it without authorization. The first and foremost step is to set a security password on your router, which prevents casual unauthorized access. However, that alone is not strong enough, as passwords can also be cracked. There is a program called Zamzom Wireless Network that can help you view the number of computers logged onto your Wi-Fi network.
- Before downloading the program, shut down all extra computers and devices connected to your wireless router.
- On the computer that you are still using, download the Zamzom wireless network tool from the Zamzom website.
- Install the Zamzom wireless network tool and then launch it.
- Have the software scan the computer; it will report the computer's IP address, the computer name, and the Media Access Control (MAC) address.
- Allow the Zamzom wireless network tool to scan the entire network using either the fast or the deep scan.
- After the scan, you should see only your own computer's IP address. If more than one appears, it means someone else is using your network. Do make sure all your other devices are shut down first; otherwise you'll end up flagging your own devices.
- Remember, though, that the program does not tell you exactly who is using the network; it only shows the IP address. But since Wi-Fi has a limited range, anyone using your network must be fairly close to you.
With this simple software you can run periodic checks on your network and prevent intrusive activity.
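The kind of check described above can be illustrated in a few lines of code. The sketch below is not part of the Zamzom tool; it merely parses `arp -a`-style output (the operating system's table of recently seen network devices) into (IP, MAC) pairs, and the sample addresses are invented for illustration:

```python
import re

def list_lan_devices(arp_output):
    """Return (ip, mac) pairs found in `arp -a`-style output.

    Illustrative sketch only -- a real scanner (like the tool described
    above) probes the network actively rather than reading a text dump.
    """
    # Matches lines like: "? (192.168.1.7) at a4:5e:60:c1:22:01 on en0"
    pattern = re.compile(
        r"\((?P<ip>\d{1,3}(?:\.\d{1,3}){3})\)\s+at\s+"
        r"(?P<mac>(?:[0-9a-f]{1,2}:){5}[0-9a-f]{1,2})",
        re.IGNORECASE,
    )
    return [(m.group("ip"), m.group("mac")) for m in pattern.finditer(arp_output)]

# Invented sample output: the router plus one known computer.
sample = """\
? (192.168.1.1) at 00:11:22:33:44:55 on en0 ifscope [ethernet]
? (192.168.1.7) at a4:5e:60:c1:22:01 on en0 ifscope [ethernet]
"""
devices = list_lan_devices(sample)
```

If the list contains more entries than the devices you own plus the router, something else is on the network, which is the same reasoning the article applies to the scan results.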
However, not all intruders act with bad intent; sometimes a clumsy neighbor simply gets onto your network by mistake.
Undescended testes (cryptorchidism) are testes that remain in the abdomen instead of descending into the scrotum just before birth. About 3 of every 100 boys have undescended testes at birth. Most testes descend on their own within about 6 months. Boys born prematurely are much more likely to have the condition as are boys whose family members had undescended testes. Half of the boys with the condition have an undescended testis only on the right side, and one fourth are affected on both sides. Undescended testes cause no symptoms. However, undescended testes can become twisted in the abdomen (testicular torsion—see Penile and Testicular Disorders: Testicular Torsion), impair sperm production later in life, and increase the risk of hernia and testicular cancer. Surgery is usually performed to bring the testes down into the scrotum if the testes remain undescended at 1 year of age. Retractile (hypermobile) testes are descended testes that easily move back and forth between the scrotum and the abdomen. Retractile testes do not lead to cancer or other complications. The testes usually stop retracting by puberty and do not require surgery or other treatment. Last full review/revision February 2009 by Elizabeth J. Palumbo, MD
The Kwoma, Nukuma, and Yessan-Mayo are closely related peoples living in the hills north of the middle Sepik River in northern New Guinea. These groups share a unique art tradition associated with a religious cult centered on the cultivation of yams. A sequence of three rituals, called "yena," "minja," and "nokwi," each involving a different type of figure, is performed in honor of the yam harvest. Carved wooden heads such as this one are created for yena, the first of these harvest rituals. With bulging eyes and pendulous noses, yena heads represent ancient and powerful spirits. During the ritual, village men assemble a pile of yams within the ceremonial house. The sticklike bases of the yena are inserted into the yam pile and the heads are decorated with brightly colored leaves, feathers, and other ornaments. The men then dance and sing to honor the yena spirits. At the conclusion of the ritual, the display is dismantled, the heads stored away, and preparations for the next ritual begun.
- Main article: Alcoholic intoxication Drunkenness is the state of being intoxicated by consumption of alcohol to a degree that mental and physical faculties are noticeably impaired. Common symptoms may include slurred speech, impaired balance, poor coordination, flushed face, reddened eyes, reduced inhibition, hiccuping, and uncharacteristic behavior. Drunkenness can result in temporary experience of a wide range of emotion, ranging from anger, sadness, and depression to euphoria, lightheartedness and joviality. Consuming excessive amounts of alcohol may lead to a hangover the next day. Addiction researcher Griffith Edwards points out the dual chemical and psycho-cultural influences on the behaviour of a drunken person: "Intoxication with alcohol is a temporary chemically induced mental disorder where the intoxicated person is generally not out of touch with reality, but will still respond to what culture dictates." Laws on drunkenness vary between countries. In the United States, for example, it is commonly a minor offense for an individual to be so intoxicated in a public place that he or she is unable to care for his or her own safety or the safety of others. This degree of intoxication is considerably higher than the standard for driving under the influence of alcohol or drugs ("drunk driving"), which commonly requires intoxication to the degree that mental and physical faculties are impaired. In the United States, United Kingdom, Mexico, New Zealand, Republic of Ireland and Canada, this is legally defined as a blood alcohol content (BAC) of 0.08% or greater for operating a motor vehicle. In countries such as Australia, the BAC limit is lower at 0.05%. Additionally, the U.S.
Federal Aviation Administration prohibits pilots from operating aircraft with any BAC greater than 0.04%, or operating an aircraft after consuming any alcoholic beverage within 8 hours. A legally drunk person on public property may also be taken into custody for public intoxication in many jurisdictions, even when not operating a vehicle. There are often many legal restrictions relating to sale and supply of alcohol, and particularly relating to those persons under 18 years of age (19 or 21 in some jurisdictions) or to somebody who is already intoxicated. However in some countries such as Austria, Germany, Switzerland, the Netherlands and Denmark, customers can buy alcoholic drinks such as beer, cider or wine from the age of 16 years, although not spirits. Many religious groups permit the consumption of alcohol but prohibit intoxication. Some prohibit alcohol consumption altogether. In Islam, there is an absolute prohibition on the consumption of all alcoholic beverages, and intoxication is considered as an abomination in the Qur'an and Hadith. Islamic schools of law (Madh'hab) have interpreted this as a strict prohibition of the consumption of all types of alcohol while allowing the use of cannabis and hashish. The Catechism of the Catholic Church states in paragraph 2290: "The virtue of temperance disposes us to avoid every kind of excess: the abuse of food, alcohol, tobacco, or medicine. Those incur grave guilt who, by drunkenness or a love of speed, endanger their own and others' safety on the road, at sea, or in the air." The Church does not prohibit the use of alcohol in moderation; and indeed, the ritual use of alcoholic altar wine during the Mass is central to the Roman Catholic liturgy. Many Protestant Christian denominations prohibit drunkenness due to the Biblical passages condemning it (for instance, Proverbs 23:21, Isa. 28:1, Hab. 2:15) but many allow moderate use of alcohol (see Christianity and alcohol). - Bales, Robert F. 
Attitudes toward Drinking in the Irish culture. In: Pittman, David J. and Snyder, Charles R. (Eds.) Society, Culture and Drinking Patterns. NY: Wiley, 1962, pp. 157–187.
- Gentry, Kenneth L., Jr. God Gave Wine: What the Bible Says about Alcohol. Lincoln, Calif.: Oakdown, 2001.
- "Out of It. A Cultural History of Intoxication" by Stuart Walton. (Penguin Books, 2002) ISBN 0-14-027977-6
- "Modern Drunkard" magazine - a humorous magazine about drink and the art of getting drunk
- Famous Drinking Quotes - a collection of quotes about drinking from famous alcohol enthusiasts
- ↑ Griffith Edwards. Alcohol: The World's Favourite Drug. 1st US ed. Thomas Dunne Books, 2002. ISBN 0-312-28387-3. p. 57.
- 2. Sigmund, Paul. St. Thomas Aquinas On Politics And Ethics. W.W. Norton & Company, Inc., 1988, p. 77.
- 3. http://khidr.org/cannabis.htm
This page uses Creative Commons Licensed content from Wikipedia.
"For those of us who work at the United States Department of Justice, everyday is September 12, 2001. Everyday is that day after. Everyday requires renewed commitment to combating and preventing terrorism." - Former Attorney General Alberto R. Gonzales October 25, 2006 Joint Terrorism Task Forces (JTTFs) are small cells of highly trained, locally based, passionately committed investigators, analysts, linguists, SWAT experts, and other specialists from dozens of U.S. law enforcement and intelligence agencies. It is a multi-agency effort led by the Justice Department and FBI designed to combine the resources of federal, state, and local law enforcement. The National JTTF was established in July 2002 to serve as a coordinating mechanism with the FBI's partners. Some 40 agencies are now represented in the NJTTF, which has become a focal point for information sharing and the management of large-scale projects that involve multiple partners. Learn more about the National Joint Terrorism Task Force Learn more about the Joint Terrorism Task Forces
When you drop something onto sand you are used to seeing it crater the sand a little and stop with a thud. New research shows that if a falling object is dense enough, it will reach a terminal velocity in the granular material and keep going forever. It's a highly unexpected and counterintuitive result, but it has been demonstrated by dropping metal balls into polystyrene beads. The researchers dropped ping pong balls filled with metal of different masses into a 5-meter-deep silo of polystyrene balls. They attached a thread with marks along it to each ping pong ball so that a high-speed camera could capture the motion as the ball dropped. With this setup, the researchers could achieve 2-millimeter precision in their depth measurements. Above a certain mass, the ping pong balls continued to fall all the way to the bottom of the tube, having reached a constant (terminal) velocity. In this regime, the polystyrene beads seemed to be acting just like a fluid. The researchers also answer the question of how massive and dense a ping-pong-ball-sized object would need to be to keep falling indefinitely in sand: about 14 kg, or a density of about 400 g/cm³. That is about 400 times the density of water, and no known material on Earth is anywhere near that dense. So unless you discover some crazy new material, have no fear about dropping your keys at the beach and having them sink until they hit bedrock.
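The figure quoted above is easy to check with a little arithmetic. The sketch below assumes a standard 40 mm diameter ping pong ball; the exact geometry used in the study may differ slightly:

```python
import math

# Volume of a standard 40 mm (radius 2.0 cm) ping pong ball.
radius_cm = 2.0
volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3  # about 33.5 cm^3

# A 14 kg object of that size would have this density.
mass_g = 14_000.0
density_g_cm3 = mass_g / volume_cm3  # about 418 g/cm^3

# Water is 1 g/cm^3, so this is roughly 400 times the density of water,
# consistent with the figure quoted in the article.
ratio_to_water = density_g_cm3 / 1.0
```

For comparison, the densest known element, osmium, is only about 22.6 g/cm³, which is why no real object of this size could sink indefinitely.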
Ornithodoros species are soft ticks (Family Argasidae). Included in this genus are several important vectors of tick-borne relapsing fever in humans, as well as parasites of domestic and wild mammals. Like other argasids, Ornithodoros ticks have multihost life cycles. Argasid ticks have two or more nymphal stages, each requiring a blood meal from a host. Unlike the ixodid (hard) ticks, which stay attached to their hosts for up to several days while feeding, most argasid ticks are adapted to feeding rapidly (for about an hour), then dropping off the host. Two Ornithodoros species of public health concern in the United States, Ornithodoros hermsi and O. turicata, are vectors of tick-borne relapsing fever (TBRF) spirochetes. In Africa, Ornithodoros moubata (and possibly several related species) are important vectors of TBRF spirochetes (Cutler et al. 2009).
Kolleru Lake is the largest freshwater lake and is located in Andhra Pradesh. Kolleru lies between the Krishna and Godavari deltas and covers an area of 308 km². The lake serves as a natural flood-balancing reservoir for these two rivers. The lake is fed directly by water from the seasonal Budameru and Tammileru streams, and is connected to the Krishna and Godavari systems by over 68 inflowing drains and channels. It serves as a habitat for migratory birds and supports the livelihood of the fishermen and the riparian population in the area. The lake was notified as a wildlife sanctuary in November 1999 under India's Wild Life (Protection) Act, 1972, and designated a wetland of international importance in November 2002 under the international Ramsar Convention. Thousands of fish tanks were dug up inside the wetland, converting the lake into a mere drain. Apart from this, farmers had changed the land-use pattern of the lake. This caused heavy pollution, even making it difficult for local people to get drinking water. The total area of the lake converted to aquaculture ponds grew to 99.73 km² in 2004, from 29.95 km² in 1967. The area under agricultural practice in the wetland also increased, from 8.40 km² in 1967 to 16.62 km² in 2004. Sewage inflow from the towns of Eluru, Gudivada and even Vijayawada, along with industrial effluents, pesticides and fertilizers from the Krishna-Godavari delta region, contaminates the lake. Eleven major industries release about 7.2 million litres of effluents into the lake every day. In 1982, the Andhra Pradesh government set up the Kolleru Lake Development Committee (KLDC), which drew up a Rs 300-crore master plan for Kolleru. It also called for the creation of a Kolleru lake development authority to check encroachments, regulate and monitor pollution, clear the lake of weeds, and use the weeds as compost and raw material to produce biogas. Dr. T.
Patanjali Sastry, President, Environment Centre, Danavaipeta, Rajahmundry, moved the High Court to protect the ecosystem of the Kolleru lake. Later on, the fishermen's association also filed another PIL, claiming that the ecosystem was degraded not by the fish tanks but by sewage coming from the industries and the residential areas. The court gave precedence to the ecology of the lake. In 2006, the Central Empowered Committee (CEC), appointed by the Supreme Court, directed the state to remove all sorts of encroachment, including the fish tanks. This caused a huge hue and cry among the fishing community. The government is undertaking many projects to restore the lake's former glory. The lake was the subject of several PILs (W.P. Nos. 33567 of 1998, 23210 of 1999, 4350 and 4375 of 2000 and 2354 and 12497 of 2001) in the High Court of Andhra Pradesh from 1998 to 2001. The High Court passed a judgement in 2001. In 2005, Shri Pranay Waghray approached the Supreme Court of India for implementation of the High Court's judgement. A writ petition was filed by Dr. T. Patanjali Sastry, President, Environment Centre, Danavaipeta, Rajahmundry, in the High Court, challenging the action of the respondents in not stopping the discharge of effluents from the industries that have come up in the vicinity of Kolleru lake and in permitting the construction of houses and roads in the catchment area of the lake, and seeking to direct the respondents to take appropriate steps to restore the lake to its pristine glory. Writ petitions were also filed by the Kolleru Fishermen and Small-Scale Farmers Association and other organizations in the High Court, complaining that the Government was not taking steps to stop pollution of the lake due to discharge of effluents from industrial units and untreated drainage from municipalities.
They also said that the ecological imbalance of the lake was not due to fish tanks, but only due to the neglect of the lake by the Government and its failure to check the pollution caused by several industries and the municipal corporations. The Government issued a notification constituting the Kolleru Wildlife Sanctuary and defining its boundaries and margins. Because of the enforcement of GO Ms No 120, through which the State government declared the lake a wildlife sanctuary, the rights of nearly two lakh people, who are basically fishermen, were at stake. The government made it clear that the right of the local fishermen to fish by traditional methods was not taken away, but that aquaculture in the form of any tank was prohibited. Notice was issued that demolition of all fish tanks in the area would commence from April 20, 2006. The Dr. Ambedkar Harijan Fishermen Cooperative Society filed a PIL in the High Court challenging the validity of GO Ms. No. 120. In the same year, the Dr. Ambedkar Cooperative Collective Farming Society Limited filed another PIL in the High Court challenging the validity of GO Ms. No. 120. The High Court dismissed the writ petition of 1999, saying that a wetland ecosystem cannot be exploited to the detriment of people at large for temporary gains. The Bench dealt with the contention that the notification was ultra vires, and said that the Government had the power to issue the notification. The Bench directed the Government to adhere to the standards laid down by the Ministry of Environment regarding lakes and effluents, and all writ petitions were disposed of. In March, the Kolleru Fishermen Cooperative Society moved the National Human Rights Commission (NHRC) and Amnesty International, seeking to protect the livelihood of nearly two lakh people in the Plus Five contour of Kolleru Lake.
The Kolleru Fishermen Cooperative Society moved to challenge the validity of the GO Ms No 120 in the Supreme Court, seeking to protect the people's right to life in the area. The ground was being prepared to file a writ petition in the Supreme Court, alleging that the Government, emboldened by the ruling by the High Court in defence of the need for a wildlife sanctuary in the lake, was trying to enforce the GO Ms No 120 without any forethought on the implications of evicting the local people at the cost of their right to life. In March, restoration work for the lake started. In November the aqua farmers and local people staged a rasta roko and threatened to commit mass suicide if their tanks were demolished. The district administrator took serious action against the agitators. It was decided that a regulator would be constructed at a cost of Rs.30 crores on Upputeru at the mouth of Kolleru to provide a straight cut for the lake water to flow into the sea. The State Government will seek a Rs.600 crores loan from the World Bank and other external sources for rehabilitation of the lake to its old glory, with the removal of encroachments including aqua tanks up to the 10 ft. contour level. Shri Pranay Waghray approached the Supreme Court for implementation of the HC judgment. The Central Empowered Committee (CEC), appointed by the Supreme Court to monitor environmental issues in the country, directed the state government to remove all the encroachments on Kolleru Lake. About 31,000 acres of fish ponds had been removed from the lake. In February, over 30 big leaseholders each owning fishponds of more than 100 acres gave separate written undertakings to the Krishna District Administration that they would vacate the lake before March 3. In March, the Kolleru Fishermen Cooperative Society challenged the validity in the Supreme Court, seeking to protect the people's right to life in the area.
He said that the government was following the Supreme Court orders without any forethought on the implications of evicting the local people at the cost of their right to life. In June, Lok Satta, an NGO, said that the destruction of fish tanks in the Kolleru lake area could not bring back the past glory of the lake. It said that the massive operation to bring down these tanks would rob thousands of people of their livelihoods. It said industrial effluents and municipal sewage, and not fish tanks, were the major sources of pollution of the lake. Lok Satta has taken up the issue of Kolleru to ensure that the people receive fair compensation and a relief and rehabilitation package. In December, the state government sanctioned Rs 15 crores for rehabilitation of the fishermen affected by the cleaning of the lake. The forest department, which started work on demarcating the area of the sanctuary, is hamstrung because of the non-cooperation of the revenue officials. The forest officials had to stop work in some pockets following opposition and threats from tank owners. The main opposition party, Telugu Desam, is backing the tank owners in some stretches and even took out a rally in Akivedu demanding a stop to the demarcation work. The eco-tourism project of the Andhra Pradesh Tourism Development Corporation (APTDC) for Kolleru Lake became a reality in February, with the corporation committing Rs. 1.5 crore to the project. In June it was decided that Kolleru lake would be developed at a cost of nearly Rs 860 crore over five years. The international NGO, Wetlands International South Asia, prepared proposals at the behest of the state government. Of the total cost proposed for development, Rs 500 crore had been allocated for water management works. In September the state government decided that Kolleru Lake would be transformed into a bird-watching destination. The state would be taking up this major project to develop it as an eco-tourism destination.
A Rs.9-crore project was sanctioned by the Union Government. In January, the state government put pressure on the forest department to finalize a proposal to reduce the sanctuary area of the famous Kolleru Lake from the present plus-five to the plus-three contour level. In April, Telugu Desam Party (TDP) president N Chandrababu Naidu said that the lake would be regularized and the surplus land from the lake would be distributed to the poor farmers. In September, the state government decided to protect the lake up to contour-3.
<urn:uuid:8c151107-ee87-4f3f-a383-f4176e27446b>
CC-MAIN-2013-20
http://www.cseindia.org/node/2564
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955562
2,125
3.25
3
Footage captured by NHK and Discovery Channel in July 2012 shows a giant squid in the sea's depths.NHK/NEP/Discovery Channel Footage captured by NHK and Discovery Channel in July 2012 shows a giant squid in the sea near Chichi island.NHK/NEP/Discovery Channel The elusive giant squid, which can grow to a monstrous 26 feet in length and is likely the source of the Nordic legend of the kraken, has been captured on film at last. The creature spends its days trawling the depths of the Pacific Ocean, at a depth where there is little oxygen or light and crushing pressure from the immense weight of the water above. It was spied by Japan’s National Science Museum, working in tandem with Japanese broadcaster NHK and the Discovery Channel. The first recorded footage of the giant squid in its natural habitat will air on the Discovery Channel as “Monster Squid: The Giant Is Real,” on Sunday, Jan. 27 at 10PM EST. But every creature needs a good name. What should we call the tentacled terror?
<urn:uuid:d7233a49-be21-44c7-a761-08cf7c3fb830>
CC-MAIN-2013-20
http://www.foxnews.com/science/2013/01/08/help-us-name-kraken/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+foxnews%2Fscitech+%28Internal+-+SciTech+-+Mixed%29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940701
229
2.734375
3
Why hasn’t commercial air travel gotten any faster since the 1960s? Specified cruising speeds for commercial airliners today range between about 480 and 510 knots, compared to 525 knots for the Boeing 707, a mainstay of 1960s jet travel. Why? “The main issue is fuel economy,” says Aeronautics and Astronautics professor Mark Drela. “Going faster eats more fuel per passenger-mile. This is especially true with the newer ‘high-bypass’ jet engines with their large-diameter front fans.” Observant fliers can easily spot these engines, with air intakes nearly 10 feet across, especially on newer long-range two-engine jetliners. Older engines had intakes that were less than half as wide and moved less air at higher speeds; high-bypass engines achieve the same thrust with more air at lower speed by routing most of the air (up to 93 percent in the newest designs) around the engine’s turbine instead of through it. “Their efficiency peaks are at lower speeds, which causes airplane builders to favor a somewhat slower aircraft,” says Drela. “A slower airplane can also have less wing sweep, which makes it smaller, lighter and hence less expensive.” The 707’s wing sweep was 35 degrees, while the current 777’s is 31.6 degrees. There was, of course, one big exception: the Concorde flew primarily trans-Atlantic passenger routes at just over twice the speed of sound from 1976 until 2003. Product of a treaty between the British and French governments, the Concorde served a small high-end market and was severely constrained in where it could fly. An aircraft surpassing the speed of sound generates a shock wave that produces a loud booming sound as it passes overhead; fine, perhaps, over the Atlantic Ocean, but many countries banned supersonic flights over their land. The sonic-boom problem “was pretty much a show-stopper for supersonic transports,” says Drela. Some hope for future supersonic travel remains, at least for those able to afford private aircraft. 
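The cruise-speed gap is easy to put in passenger terms. A back-of-the-envelope sketch, where the route length and the "modern" cruise speed are illustrative assumptions (a rough New York–London great-circle distance, and the middle of the 480–510 knot range), ignoring winds, climb, and descent:

```python
# Rough comparison of cruise times at 1960s vs. modern jetliner speeds.
# The 3,440 nm route length and the modern cruise speed are illustrative
# assumptions, not figures from the article.

def cruise_hours(distance_nm, speed_knots):
    """Hours to cover a distance at a constant cruise speed."""
    return distance_nm / speed_knots

route_nm = 3440                          # ~New York to London, nautical miles
t_707 = cruise_hours(route_nm, 525)      # Boeing 707-era cruise speed
t_modern = cruise_hours(route_nm, 490)   # mid-range of today's 480-510 knots

print(f"707-era:    {t_707:.2f} h")
print(f"modern:     {t_modern:.2f} h")
print(f"difference: {(t_modern - t_707) * 60:.0f} minutes")
```

On a seven-hour crossing, the slowdown costs roughly half an hour, which high-bypass engines buy back many times over in fuel per passenger-mile.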
Several companies are currently developing supersonic business jets. Their smaller size and creative new “boom-shaping” designs could reduce or eliminate the noise, and Drela notes that supersonic flight’s higher fuel burn per passenger-mile will be less of an issue for private operators than airlines. “But whether they are politically feasible is another question,” he notes. For now, it seems, travelers will have to appreciate the virtues of high-bypass engines, and perhaps bring along a good book. – Peter Dunn
<urn:uuid:cba3776c-cd20-4972-9d60-139cdd5d6e1b>
CC-MAIN-2013-20
http://engineering.mit.edu/live/news/188-why-hasnt-commercial-air-travel-gotten-any-faster
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960248
568
3.171875
3
Guidelines for Constructing Objective Tests

The following information provides only brief guidelines for constructing different types of test questions as well as for formatting tests. Writing good test questions requires understanding of the do's and don'ts, careful attention, practice, and time. One also needs to recognize that each type of question has advantages and disadvantages so that an informed choice can be made about what kinds of questions are appropriate to use for a given assessment.

1. Download the test for this activity by clicking on the link below.
2. Take the test, then identify its errors.
3. Read the material presented in this web page. After reading the material, you will be presented with a second activity that is a continuation of this activity.
4. Participate in a threaded discussion about the errors in this test.

Download Activity Test in Adobe Acrobat format
View test (pop-up)

Completion Questions

Advantages:
- A lot of vocabulary can be assessed in a minimal time
- Construction is relatively easy

Disadvantages:
- The understanding assessed is likely to be trivial (recall/knowledge level)
- Difficult to avoid ambiguity in constructing questions
- Scoring requires careful reading for unanticipated but correct answers

Do:
- Leave only important terms blank.
- Keep items brief.
- Limit the number of blanks per statement to one, at the most two for older students.
- Limit the response called for to single words or very brief phrases.
- Try to put the blanks near the end of the statement (or better yet, see 9 and 4 below).
- Try to ensure that only one term fits each blank.
- Indicate the units if the answer called for involves a numerical value.
- Give students credit for unanticipated yet correct responses.
- Number the blanks and provide lines down the right-hand side of the page, all of the same length, for students to write their answers. (If you're left-handed, put them on the left.)

Don't:
- Lift statements directly from the book.
- Use "a" or "an" before a blank; make it "a/an" if needed to make it grammatical.
- Count a misspelled or non-grammatical answer entirely wrong. Let students know in advance that spelling and grammar count.
- Provide lines in the statement, e.g., "water and alcohols are ____________ molecules." Instead, use numbered blanks that are all of the same length, e.g., "water and alcohols are examples of –1– molecules" and "water in its gaseous state is called –2– ."

Comments:
- These questions would be appropriate for quick checks of essential vocabulary.
- The point value assigned should be minimal.
- Consider reworking such questions into short answer format.

True/False Questions

Advantages:
- Many topics can be covered in the time available for students to respond.
- These questions are quickly and easily scored.

Disadvantages:
- The understanding assessed is likely to be trivial (recall/knowledge level).
- It is difficult to avoid ambiguity in constructing these questions.
- The odds of guessing a correct answer are 50:50.
- Better students tend to read too much into the questions.

Do:
- Use a single point that determines the truth of the statement. An example violation: The cm is larger than the mm and the mm is larger than the dm.
- Take care with grammar and spelling.
- Use a single clause, simply and directly stated; if two clauses are used, the main clause should be true and the subordinate clause true or false. An example violation: Lilies are considered annuals because their bulbs live from year to year.
- Have approximately half of the statements true and half false. It is easier to start with all true statements, then go back and change some to false statements.
- Use a random pattern in the sequence of answers, e.g., ttfft is okay, tftft is not.

Don't:
- Use tricky questions.
- Use unnecessary words and complicated content.
- Use statements directly from the text. Rephrase them so students must at least have comprehended the material as opposed to recognizing it.
- Avoid negatives; this means not just words like not or none, but negative prefixes and suffixes as well. Double negatives are never grammatical.
- Avoid specific determiners. Statements with words such as generally, may, most, often, should, and usually are generally true. Statements with words such as all, alone, only, no, none, never, and always are generally false.

Variations: (Be careful with your directions and teach students how to answer in advance.)
- Antonyms and synonyms.
- True/false/can't say as options.
- True/false/converse as options.
- True/false with correction of an underlined word if statement is false.
- True/false with diagrams, maps, drawings, etc.

Comments:
- These questions are okay for quick checks of vocabulary and concepts.
- The point value assigned should be minimal.

Matching Questions

Advantages:
- A large number of related ideas can be addressed in a short period of time.
- Answers are easily and quickly scored.

Disadvantages:
- Such questions are restricted to recognition of simple understandings.
- Clues are difficult to avoid.
- A common error is lack of consistency of relationship throughout the question.

Do:
- Make certain that the relationship between the stems and the responses is the same throughout the question. For example, all of the items might be things OR events, but a combination of things and events is inappropriate. (Warning: this is more difficult than it appears!)
- State the specific relationship between the stems and responses in the directions to the question. Check that it fits each stem and its response. If it doesn't, rework the question.
- Put the stems ("question") column on the left and number them.
- Put the blanks for students to record their answers next to the stems (or on an answer sheet).
- Put the responses (answers) column on the right and letter them (capital letters).
- Order the responses in some logical fashion, e.g., alphabetically.
- Make the stems longer than the responses.
- Use between five and ten stems.
- Provide more responses than needed (about 40-50% more than stimuli).
Don't:
- Split a matching question between pages.
- Fail to check that directions state a relationship and that it is correct across the entire question.
- Provide more than one correct response for a single stem, unless you've been very clear with the directions and have taught students to do this kind of question in advance.
- Change the grammar across stems and responses, e.g., between plural and singular forms.

Formatting matching tests (This is not a question, just an illustration of formatting. For an example, download the Sound test at the bottom of this page - MS Word.)

| ___ 1. This is the longer stem to the left. | A. Lettered response |
| ___ 2. The stems have the blanks. | B. Logical order |
| | C. More responses than needed |
| | D. Shorter responses |

Comments:
- Consider using photographs, diagrams, graphs that illustrate structures.
- Sometimes one graphic can be used twice, e.g., names and functions of structures.
- Having just a few responses when some can be used twice (or more) is okay.

Multiple Choice Questions

Advantages:
- A large number of ideas can be addressed in a short period of response time.
- These questions are easily and quickly scored.
- Questions can elicit responses from all cognitive levels, from knowledge to evaluation.
- Questions can be improved over time by analyzing them in light of student responses.

Disadvantages:
- It is time-consuming to write good items, especially those at higher cognitive levels.
- Test-wise and English-fluent students tend to be favored.

Do:
- Use the same number of distractors (wrong answers) for every question.
- Use plausible distractors that are related to the stem and are similar in character; tricky, cute, and 'throw-away' ones are anathema.
- Have all distractors (and the correct answer) about the same length.
- Use correct grammar; if the stem is an incomplete sentence, each distractor should be grammatically consistent with it and complete the sentence.
- Put all of the distractors in a single column, not side by side or in two columns.
- Use reasonable vocabulary and avoid wordiness and ambiguity.
- State the authority to be used in items calling for judgment.
- Vary the position of the correct answer (the tendency is to make it B or C).
- Examine questions carefully for subtle clues in word choice or phrasing.
- Don't use specific determiners in distractors, such as all, none, only, and alone, because they usually indicate an incorrect answer. Likewise, avoid generally, often, usually, most, and may because they often indicate the correct answer.
- Avoid negatives, including less obvious ones, such as without, because they can confuse or be missed by students; highlight the negative word if you find you must use one.
- Don't provide clues in the stem, such as "a" or "an" at the end; put these articles with the distractors.
- Avoid using "all of the above" and "none of the above." If you do use them, make them as frequently the incorrect answers as they are the correct answers.

Sources for distractors:
- Typical errors or misconceptions (keep track of these as you teach).
- Misstated relationships, where the correct terms are connected with a wrong relationship.
- Combine conclusions and explanations such that both are right, the former is wrong, the latter is wrong, both are wrong.

Comments:
- Using a graphic may make writing more challenging questions easier, e.g.,
- Analyses are easily done on difficulty level and whether distractors are working; some machine scoring programs can compute these.

Formatting Tests

The organization and appearance of the test on paper should facilitate student understanding of what is being asked of him/her and how to respond.
- Provide complete directions, both for the overall test and for each item format. Directions include guidance for what is expected in the response as well as how to respond.
- Group like questions, e.g., put all of the multiple choice questions together.
- Never split a question between pages.
- Avoid splitting like questions between pages.
- Indicate the point value for each question.
- Sequence the questions from those that are quickly answered to those requiring more time to answer, i.e., the essay questions are almost invariably last.
- Use plenty of "white space" to set off directions, questions and answers, and sections of the test. Don't crowd things together.

1. On the plus side, this means that the test items can be used from year to year, there is often a savings of paper, and there is less paper for you to deal with and carry around. On the negative side, students may misplace answers (there are things to do that help them avoid this) and they will not have them for future study, e.g., for final exams.

Constructing A Test

Test questions should require students to show that they can use what they have learned. The best questions ask them to apply their understanding and to use it to analyze, synthesize, and evaluate novel instances of the concepts. If the instances are the same as used in instruction, students are only being asked to recall. Questions that ask students simply to remember (knowledge level) and/or to do simple translations (comprehension) are best used for knowledge that should be retained for a lifetime.

- Write the questions as you teach–or even before you teach! Why?
- Ask questions that address significant learning outcomes. Why?
- Weight (put value on) questions that address significant outcomes.
- Weight questions as well according to time spent on the topic(s) addressed.
- Weight questions as well on the student thought and effort that go into answering them.
- Ask a variety of kinds of questions. Why?
- Recognize that a test is a sample of students' learning. Implication?
- Make alternative forms of the test to deter cheating and to provide for make-up testing.
- Consider using a grid of topics by types of questions/thinking to structure the overall test.*

* What does the following table tell us about what was taught and how it will be tested if the guidelines above have been followed?
| Application - 50% | Comprehension - 20% | Analysis - 20% | Knowledge - 10% |

Making Tests Accessible

This 3D image of the inner ear would be useful to include in the Sound Test if you had the accompanying 3D model for use with visually impaired students. This would allow both groups to be tested on the same material using the same model. Something to consider. If you use a model to test a visually impaired learner, you need to make sure that the model you are using to test the learner is the same model that was used to learn the material. This will eliminate the possible effect variance may have on the learner's ability to succeed.

The adjacent Sound test is a good example of a test that is accessible to people with visual impairments. The test utilizes diagrams and pictures as well as a good-sized font. Other techniques that can be used to make tests accessible include adding texture. For example, gluing a piece of string to a waveform graphic on a test will give a tactile representation of the sound. Download the sound test and see if you can spot any other techniques that would increase the accessibility of the test for people with visual or other impairments.

Test what you've learned

1. If you didn't look at the sample test at the top of the page, view the test now (pop-up), or download the test in .pdf format. Read the questions carefully. Then, using what you have learned, rewrite the following questions so that they meet the guidelines detailed in the sections above:
- question 3 – matching *
- question 5 – completion
- question 7 – multiple choice
- question 8 – true/false
- question 10 – essay
*you may have to radically alter question 3 to fit the guidelines.
2. Format your rewritten questions into a test and post your results on Blackboard. Discuss the posted tests with your classmates and select the test with the best formatting.
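The item analyses mentioned earlier (difficulty level and whether distractors are working) are simple to compute once you have student responses. A minimal sketch for one multiple-choice item; the responses and answer key below are made-up illustration data, not output from any particular scoring program:

```python
# Minimal item analysis for one multiple-choice question: the difficulty
# index (proportion of students answering correctly) and a count of how
# often each distractor was chosen.
from collections import Counter

def item_analysis(responses, key):
    """Return (difficulty, distractor_counts) for one item."""
    counts = Counter(responses)
    difficulty = counts[key] / len(responses)  # 0 = very hard, 1 = very easy
    distractors = {opt: n for opt, n in counts.items() if opt != key}
    return difficulty, distractors

responses = list("BBADBCBBDBABBBCB")  # 16 students' answers to item 1
difficulty, distractors = item_analysis(responses, key="B")

print(f"difficulty index: {difficulty:.2f}")
print("distractor counts:", distractors)
# A distractor that nobody ever picks isn't working and should be replaced.
```

Run over a whole answer file, this kind of tally is what lets questions "be improved over time by analyzing them in light of student responses."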
<urn:uuid:2fb00083-ca10-466a-b3cf-7423b3a94234>
CC-MAIN-2013-20
http://oct.sfsu.edu/assessment/measuring/htmls/objective_tests.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.900909
3,080
4.40625
4
Entanglement goes macroscopic

Sep 3, 2003

Quantum entanglement is a phenomenon usually associated with the microscopic world. Now, however, physicists from the Universities of Chicago and Wisconsin in the US and University College London have seen its effects in the bulk properties of a magnetic material for the first time. The researchers believe that their work has implications both for understanding quantum magnetism and for building quantum computers – where entanglement is the key to the increased power of such devices (S Ghosh et al. 2003 Nature 425 48). Entanglement is a feature of quantum mechanics that allows particles with two distinct quantum states to share a much closer relationship than classical physics allows. If two particles are entangled, then we can know the state of one particle by measuring the state of the other. For example, if one particle has spin 'up' then the other automatically has spin 'down'. Entanglement is crucial for quantum computing and teleportation, but its effects are not generally seen beyond the scale of subatomic particles. Thomas Rosenbaum at the University of Chicago and colleagues performed their experiment on a single crystal of a simple magnetic salt that contains lithium, holmium, yttrium and fluorine (figure 1). The holmium atoms in this salt all behave like tiny magnets and, in the absence of a magnetic field, their magnetic moments point in random directions. When a field is applied, however, the moments align with the direction of the field (figure 2). The researchers measured the ease with which the magnetic moments aligned with the field at different temperatures. They then compared this 'susceptibility' to the material's ability to absorb heat and found that the two properties were very different. The susceptibility increases smoothly as the sample cools while the heat absorption varies in a more irregular way.
This is in contrast to ordinary materials and, according to the researchers, can only be explained if there is quantum mechanical mixing – or entanglement – of the different magnetic states in the system. This is because entanglement effects contribute much more strongly to the susceptibility than to the heat absorption. To confirm their findings, the researchers combined their experimental results with computer simulations and theory. The salt's susceptibility was found to match theoretical values that had been predicted to take quantum entanglement into account. The researchers say that their work shows that entanglement can occur in a disordered solid that is far from perfect. "We see these dense, solid state magnets as promising systems for both fundamental quantum mechanics and potential quantum computing applications," Rosenbaum told PhysicsWeb. "The challenge remains to manipulate the entanglement to perform actual quantum logic operations."

About the author

Belle Dumé is Science Writer at PhysicsWeb
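The "measure one, know the other" behavior described in the article can be checked numerically for the simplest entangled system, the two-spin singlet state. This is a generic textbook example, not a model of the lithium-holmium salt studied in the experiment:

```python
import math

# Singlet state |psi> = (|01> - |10>)/sqrt(2) of two spin-1/2 particles,
# written as amplitudes over the joint basis [|00>, |01>, |10>, |11>]
# (0 = spin up, 1 = spin down).
amp = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]

# The probability of each joint measurement outcome is |amplitude|^2.
probs = [a * a for a in amp]

p_same = probs[0] + probs[3]      # both spins give the same result
p_opposite = probs[1] + probs[2]  # the spins give opposite results

print(f"P(same result)     = {p_same:.1f}")      # 0.0
print(f"P(opposite result) = {p_opposite:.1f}")  # 1.0
# Measuring one spin 'up' therefore guarantees the other is 'down'.
```

No classical pair of coins prepared in advance shows this perfect anticorrelation along every measurement axis; that stronger-than-classical correlation is what the bulk susceptibility measurement detected indirectly.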
<urn:uuid:0cc8c42c-e1f9-4f7f-9ea2-2e1b1aac5354>
CC-MAIN-2013-20
http://physicsworld.com/cws/article/news/2003/sep/03/entanglement-goes-macroscopic
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927309
571
3.34375
3
Epiphany 3. The brilliant youth worker illustrated the Pauline text with a string of soft yarn. The thin wool as a single strand was loosely wound and easily broken by a teenager passing a finger through it. When the yarn was wound around the hands into a strand of more than ten it was virtually unbreakable. Along with Paul's wise insights into the heart of Christian (and other) community, I've always liked Franklin's remarks to the assembled delegates in Philadelphia at the beginning of the American Revolution: "Either we hang together or assuredly we shall hang separately." If Paul addresses the means and power of community, Jesus, quoting Isaiah, gives us our vision and mission statement: "Bring good news to the poor, sight to the blind, liberation for the oppressed and imprisoned, and a reordering of wealth and power." It's hard. Many of us get so far out front on issues that when we look behind there is no one there. But it all seems to fit together like a weaving when we let other strong hearts lead. Each thread: the higher vision, the gifts of the community, and the vulnerability of the priest and the community leaders, all weaving together the threads of their own beautiful and broken humanity to strive for the higher gifts. And don't these gifts include, foremost of all, compassion for the poor and each other? It doesn't seem we get anywhere without the ability to walk in each other's shoes.
<urn:uuid:2ccddb62-89bf-486b-82fb-e35fb02aec23>
CC-MAIN-2013-20
http://storiesfromapriestlylife.wordpress.com/2012/10/13/3rd-epiphany-year-c-doesnt-seem-we-get-anywhere-without-walking-in-each-others-shoes/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944643
310
2.5625
3
Before we proceed to building a proxy geometry, we need to clean up the stl file. In particular, we need to separate the skull and jaw into two distinct geometries, while ensuring that they have no holes – i.e. that they remain watertight. A useful tool to find bridge connections is the select connected command, or whatever it's called in your software of choice. One method is select all connected, which will show that they are connected, and the grow method of selecting neighbors, which helps pinpoint where the bridges are. Above is an example of this grow method as the polys connect across what we would wish to be distinct geometries. Now that you know where the problem area is, you can get in there and select the connecting polygons and delete them. Note that there are many ways to do this – a truism for all of the steps in this process – and it's generally advisable to build on your skillset. A programmer, for example, can code and might understand mathematics. So she might write a script that selects polygons based on an occlusion algorithm. I, on the other hand, can shovel. So I go in there and dig out one poly after the next, in glorious hands-on intimacy. 3D has something for everyone, and you'll surely be bringing your own skills to the table. You'll undoubtedly run into single shared vertices… thinking a full edge has to be shared in order to unite the two meshes, but no… such is the injustice of 3D clean-up. Eventually you'll achieve that satiating moment when – upon double-clicking to select all connected – only the skull or jaw lights up. Enjoy it… … because now you have to climb back down into the trenches, closing all the holes you've just made. Make that look like… …this. Over and over again. It goes by rather quickly however. Forgot to mention that:
> finding holes is quickly done by selecting boundary edges.
If this isn't supported by your software, select edges with >4 poly associations; this will often work.
> filling the holes can be done easily by selecting the edge, creating a poly and triangulating. This maintains stl support for future printing etc. You can also bridge using an extrapolation of the edge polys' normals, which maintains the anticipated surface shape in cases where the hole is so large that the first technique would result in a flat dent. Both techniques are likely within the range of acceptable deviance.
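The two inspection steps described above – growing out a connected component and finding the boundary edges that rim a hole – can be sketched in a few lines. This is a toy illustration, not how any particular 3D package implements it, and the two-triangle mesh at the bottom is made-up test data:

```python
from collections import defaultdict

def connected_components(faces):
    """Group face indices into components that share at least one vertex."""
    # Union-find over vertices, then bucket faces by their root vertex.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)
    for f in faces:
        for v in f[1:]:
            union(f[0], v)
    comps = defaultdict(list)
    for i, f in enumerate(faces):
        comps[find(f[0])].append(i)
    return list(comps.values())

def boundary_edges(faces):
    """Edges used by exactly one triangle -- the rim of a hole."""
    count = defaultdict(int)
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n == 1]

# Two triangles sharing edge (1, 2): one component, four boundary edges.
faces = [(0, 1, 2), (1, 3, 2)]
print(connected_components(faces))   # [[0, 1]]
print(sorted(boundary_edges(faces)))
```

A watertight mesh has no boundary edges at all, so an empty result from `boundary_edges` is the quick sanity check that your hole-closing pass is done.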
<urn:uuid:3070a2e3-4664-45f5-a384-c90cbe4faf89>
CC-MAIN-2013-20
http://www.drip.de/?p=2581
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94885
530
2.8125
3
Quick Overview on Trademarks The following points are the bare minimum everyone should know if they're involved in selecting, buying or using a trademark. - A trademark is a brand name or other symbol used to sell goods or services. - Not all marks receive the same protection, some cannot be registered, others are more expensive to register, while yet others receive strong protection and can be registered more easily. - Trademarks need not be registered to be protected, but registration gives you certain advantages. Prior to registering or even using a trademark, you should search to see if your mark will infringe a pre-existing mark. Registration is more complex than would seem necessary, and more difficult than many cheap providers make it sound. - There are important rules for using, or letting others use, your trademark. Failure to adhere to these rules may diminish the strength of your mark, or may even lead to complete abandonment of your trademark rights inadvertently. - Trademark law is one of the most subjective and complex areas of law today. There are many unexpected court decisions and surprising rejections from the Trademark Office. You must be willing to accept some amount of risk, and a great deal of frustration in order to create & preserve a strong trademark into the future. The rewards for doing so are substantial. A trademark is often considered a business's most valuable asset.
Located on the Arabian Peninsula, Oman is an Islamic country practicing Ibadhism, a more moderately conservative version of the religion. Although the Portuguese conquered parts of Oman's coastal region in the 1500s, and the Persians controlled part of the nation at one time, Oman has long been an independent country. During the 18th century, Great Britain and France fought over Oman's rich natural resources, but by the 19th century, Oman had allied itself with the British. Oil was discovered in 1954, and since that time, political unrest has forced more than one leader out of power. The US and Oman have had a military cooperation agreement since 1980, and relations were further strengthened by Oman's contributing troops to the Gulf War and the US-led invasion of Iraq in 2003. In 2006, Oman signed a free trade agreement with the United States, similar to NAFTA and CAFTA. It entered into force on January 1, 2009, and will increase bilateral trade and investment between Oman and the US. This trade agreement has resulted in criticism from those who believe that it will lead to Oman becoming a sweatshop for the garment industry, with low wages and terrible working conditions. There are also strong protests against the US allowing Oman to own and run its ports in the area. In a worst-case scenario, Oman would have the ability to countermand homeland security laws and impose its own rules of order, as was the case in the Dubai Ports agreement with the United Arab Emirates.

Lay of the Land: In southwest Asia, the sultanate of Oman occupies the southeastern edge of the Arabian Peninsula between Yemen to the southwest and the United Arab Emirates to the north. Oman first adopted Islam in the 7th century, but by the following century, Ibadhism, a form of Islam differing from Shiism or other orthodox schools of Sunnism, became the dominant form practiced in the country.
Ibadhism is known for its "moderate conservatism," including its willingness to establish new rulers according to consensus and consent. Today, Oman is still the only Muslim country with a majority Ibadhi population. According to the US Department of State, the US signed a treaty of friendship and navigation with Muscat in 1833, which was one of the first of its kind with an Arab state. On December 20, 1958, this treaty was replaced by the Treaty of Amity, Economic Relations, and Consular Rights. In March 2005, the US and Oman launched negotiations on a Free Trade Agreement that were successfully concluded in October 2005. The FTA was signed on January 19, 2006, and was put into effect on January 1, 2009. According to the US Census Bureau, imports from the US to Oman generally increased between 2005 and 2010, from $555 million to $772.7 million. Exports also increased from $570.7 million to $1.1 billion. This amounts to a positive US trade balance of $328.8 million in 2010.

US-Oman Free Trade Pact Raises Alarms

First on the State Department's list of problems with Oman's human rights record is the fact that Oman's citizens do not have the right to change their government. Furthermore, the government restricted freedoms of speech, press, assembly, religion, and association. Despite legislated equality for women, discrimination and domestic violence persisted due to social and cultural factors. The government restricted the activity of nongovernmental organizations (NGOs) and did not permit domestic human rights groups to operate in the country. There was a lack of sufficient legal protection and enforcement to secure the rights of migrant workers. There were reports that expatriate laborers, particularly domestic workers, were placed in situations amounting to forced labor and that some suffered abuse. Gary A. Grappo was sworn in as US Ambassador to the Sultanate of Oman on March 6, 2006.
Grappo holds a BS in mathematics from the US Air Force Academy, an MS in geodesy and survey engineering from Purdue University, and an MBA from the Stanford University Graduate School of Business.

The embassy in Muscat was established Jul 4, 1972, with Clifford J. Quinlan as Chargé d'Affaires ad interim.

William A. Stoltzfus, Jr.
Appointment: Feb 29, 1972
Presentation of Credentials: Apr 17, 1972
Termination of Mission: Appointment terminated, Jul 16, 1974
Note: Also accredited to Bahrain, Kuwait, Qatar, and the United Arab Emirates; resident at Kuwait.

William D. Wolle
Appointment: Jun 20, 1974
Presentation of Credentials: Jul 17, 1974
Termination of Mission: Left post Apr 25, 1978

Marshall W. Wiley
Appointment: Oct 11, 1978
Presentation of Credentials: Nov 7, 1978
Termination of Mission: Left post May 19, 1981

John R. Countryman
Appointment: Aug 27, 1981
Presentation of Credentials: Oct 14, 1981
Termination of Mission: Left post Jul 29, 1985

George Cranwell Montgomery
Appointment: Aug 1, 1985
Presentation of Credentials: Sep 11, 1985
Termination of Mission: Left post Jan 18, 1989

Richard Wood Boehm
Appointment: Nov 22, 1988
Presentation of Credentials: Nov 12, 1989
Termination of Mission: Left post Oct 31, 1992
Note: Commissioned during a recess of the Senate; recommissioned after confirmation on Oct 10, 1989.

David J. Dunford
Appointment: Oct 9, 1992
Presentation of Credentials: Nov 1, 1992
Termination of Mission: Left post Jun 21, 1995

Frances D. Cook
Appointment: Dec 19, 1995
Presentation of Credentials: Jan 2, 1996
Termination of Mission: Left post Jan 10, 1999

John Bruce Craig
Appointment: Oct 26, 1998
Presentation of Credentials: Feb 15, 1999
Termination of Mission: Left post Sep 22, 2001
Note: Robert W. Dry served as Chargé d'Affaires Sep 2001–Nov 2001

Richard L. Baltimore, 3rd
Appointment: Oct 3, 2002
Presentation of Credentials: Nov 5, 2002
Termination of Mission: Left post Mar 17, 2006

Hunaina Sultan Ahmed Al-Mughairy became ambassador of Oman to the United States on December 2, 2005.

Career diplomat Greta C. Holtz was nominated by President Barack Obama on May 24, 2012 to be the next U.S. ambassador to Oman. Holtz received a Bachelor of Science in political science from Vanderbilt University (1982) and a Master of Arts in international relations from the University of Kentucky (1984). She entered the Foreign Service in March 1985. Her overseas postings sent her to the U.S. missions in Saudi Arabia, Yemen, Tunisia, Syria and Turkey, where she was the consul in the city of Adana (2002-2003). While serving as consul in Tunisia in 1991, Holtz became suspicious about the passport of an American who appeared at the U.S. embassy to apply for a birth certificate and passport for his infant daughter. Sure enough, William Patrick Alston turned out to be a murderer who had escaped from a prison in Pennsylvania 11 years earlier. Holtz alerted various authorities and Alston was rearrested. In 2004 Holtz earned a Master of Science degree in national security studies from the National War College. From 2004-2006 Holtz was the State Department's coordinator for the Organization for Security and Cooperation in Europe. She was the director of The Middle East Partnership Initiative, managing the State Department's democracy promotion program in the Middle East (2006-2007). Holtz served as head of the Office of Provincial Affairs at the U.S. embassy in Baghdad, Iraq (2009-2010), which included running the U.S. Provincial Reconstruction Teams. Her previous assignment before becoming ambassador was deputy assistant secretary of state for public diplomacy and strategic communications in the Bureau of Near Eastern Affairs. Holtz speaks Arabic, Turkish, and French.
-Noel Brinkerhoff, David Wallechinsky

Official Biography (State Department)
State Department Cables (cablegatesearch.net)
Ulysses S. Grant > Chapter VII
The Mexican War – The Battle of Palo Alto – The Battle of Resaca de la Palma – Army of Invasion – General Taylor – Movement on Camargo

While General Taylor was away with the bulk of his army, the little garrison up the river was besieged. As we lay in our tents upon the sea-shore, the artillery at the fort on the Rio Grande could be distinctly heard. The war had begun. There were no possible means of obtaining news from the garrison, and information from outside could not be otherwise than unfavorable. What General Taylor's feelings were during this suspense I do not know; but for myself, a young second-lieutenant who had never heard a hostile gun before, I felt sorry that I had enlisted. A great many men, when they smell battle afar off, chafe to get into the fray. When they say so themselves they generally fail to convince their hearers that they are as anxious as they would like to make believe, and as they approach danger they become more subdued. This rule is not universal, for I have known a few men who were always aching for a fight when there was no enemy near, who were as good as their word when the battle did come. But the number of such men is small. On the 7th of May the wagons were all loaded and General Taylor started on his return, with his army reinforced at Point Isabel, but still less than three thousand strong, to relieve the garrison on the Rio Grande. The road from Point Isabel to Matamoras is over an open, rolling, treeless prairie, until the timber that borders the bank of the Rio Grande is reached. This river, like the Mississippi, flows through a rich alluvial valley in the most meandering manner, running towards all points of the compass at times within a few miles. Formerly the river ran by Resaca de la Palma, some four or five miles east of the present channel.
The old bed of the river at Resaca had become filled at places, leaving a succession of little lakes. The timber that had formerly grown upon both banks, and for a considerable distance out, was still standing. This timber was struck six or eight miles out from the besieged garrison, at a point known as Palo Alto ("Tall trees" or "woods"). Early in the forenoon of the 8th of May, as Palo Alto was approached, an army, certainly outnumbering our little force, was seen, drawn up in line of battle just in front of the timber. Their bayonets and spearheads glistened in the sunlight formidably. The force was composed largely of cavalry armed with lances. Where we were the grass was tall, reaching nearly to the shoulders of the men, very stiff, and each stock was pointed at the top, and hard and almost as sharp as a darning-needle. General Taylor halted his army before the head of column came in range of the artillery of the Mexicans. He then formed a line of battle, facing the enemy. His artillery, two batteries and two eighteen-pounder iron guns, drawn by oxen, were placed in position at intervals along the line. A battalion was thrown to the rear, commanded by Lieutenant-Colonel Childs, of the artillery, as reserves. These preparations completed, orders were given for a platoon of each company to stack arms and go to a stream off to the right of the command, to fill their canteens and also those of the rest of their respective companies. When the men were all back in their places in line, the command to advance was given. As I looked down that long line of about three thousand armed men, advancing towards a larger force also armed, I thought what a fearful responsibility General Taylor must feel, commanding such a host and so far away from friends. The Mexicans immediately opened fire upon us, first with artillery and then with infantry. At first their shots did not reach us, and the advance was continued. As we got nearer, the cannon balls commenced going through the ranks.
They hurt no one, however, during this advance, because they would strike the ground long before they reached our line, and ricochetted through the tall grass so slowly that the men would see them and open ranks and let them pass. When we got to a point where the artillery could be used with effect, a halt was called, and the battle opened on both sides. The infantry under General Taylor was armed with flint-lock muskets, and paper cartridges charged with powder, buck-shot and ball. At the distance of a few hundred yards a man might fire at you all day without your finding it out. The artillery was generally six-pounder brass guns throwing only solid shot; but General Taylor had with him three or four twelve-pounder howitzers throwing shell, besides his eighteen-pounders before spoken of, that had a long range. This made a powerful armament. The Mexicans were armed about as we were so far as their infantry was concerned, but their artillery only fired solid shot. We had greatly the advantage in this arm. The artillery was advanced a rod or two in front of the line, and opened fire. The infantry stood at order arms as spectators, watching the effect of our shots upon the enemy, and watching his shots so as to step out of their way. It could be seen that the eighteen-pounders and the howitzers did a great deal of execution. On our side there was little or no loss while we occupied this position. During the battle Major Ringgold, an accomplished and brave artillery officer, was mortally wounded, and Lieutenant Luther, also of the artillery, was struck. During the day several advances were made, and just at dusk it became evident that the Mexicans were falling back. We again advanced, and occupied at the close of the battle substantially the ground held by the enemy at the beginning. In this last move there was a brisk fire upon our troops, and some execution was done. One cannon-ball passed through our ranks, not far from me. 
It took off the head of an enlisted man, and the under jaw of Captain Page of my regiment, while the splinters from the musket of the killed soldier, and his brains and bones, knocked down two or three others, including one officer, Lieutenant Wallen, hurting them more or less. Our casualties for the day were nine killed and forty-seven wounded. At the break of day on the 9th, the army under Taylor was ready to renew the battle; but an advance showed that the enemy had entirely left our front during the night. The chaparral before us was impenetrable except where there were roads or trails, with occasionally clear or bare spots of small dimensions. A body of men penetrating it might easily be ambushed. It was better to have a few men caught in this way than the whole army, yet it was necessary that the garrison at the river should be relieved. To get to them the chaparral had to be passed. Thus I assume General Taylor reasoned. He halted the army not far in advance of the ground occupied by the Mexicans the day before, and selected Captain C. F. Smith, of the artillery, and Captain McCall, of my company, to take one hundred and fifty picked men each and find where the enemy had gone. This left me in command of the company, an honor and responsibility I thought very great. Smith and McCall found no obstruction in the way of their advance until they came up to the succession of ponds, before described, at Resaca. The Mexicans had passed them and formed their lines on the opposite bank. This position they had strengthened a little by throwing up dead trees and brush in their front, and by placing artillery to cover the approaches and open places. Smith and McCall deployed on each side of the road as well as they could, and engaged the enemy at long range. Word was sent back, and the advance of the whole army was at once commenced. As we came up we were deployed in like manner.
I was with the right wing, and led my company through the thicket wherever a penetrable place could be found, taking advantage of any clear spot that would carry me towards the enemy. At last I got pretty close up without knowing it. The balls commenced to whistle very thick overhead, cutting the limbs of the chaparral right and left. We could not see the enemy, so I ordered my men to lie down, an order that did not have to be enforced. We kept our position until it became evident that the enemy were not firing at us, and then withdrew to find better ground to advance upon. By this time some progress had been made on our left. A section of artillery had been captured by the cavalry, and some prisoners had been taken. The Mexicans were giving way all along the line, and many of them had, no doubt, left early. I at last found a clear space separating two ponds. There seemed to be a few men in front and I charged upon them with my company. There was no resistance, and we captured a Mexican colonel, who had been wounded, and a few men. Just as I was sending them to the rear with a guard of two or three men, a private came from the front bringing back one of our officers, who had been badly wounded in advance of where I was. The ground had been charged over before. My exploit was equal to that of the soldier who boasted that he had cut off the leg of one of the enemy. When asked why he did not cut off his head, he replied: "Some one had done that before." This left no doubt in my mind but that the battle of Resaca de la Palma would have been won, just as it was, if I had not been there. There was no further resistance. The evening of the 9th the army was encamped on its old ground near the Fort, and the garrison was relieved. The siege had lasted a number of days, but the casualties were few in number. Major Jacob Brown, of the 7th infantry, the commanding officer, had been killed, and in his honor the fort was named.
Since then a town of considerable importance has sprung up on the ground occupied by the fort and troops, which has also taken his name. The battles of Palo Alto and Resaca de la Palma seemed to us engaged, as pretty important affairs; but we had only a faint conception of their magnitude until they were fought over in the North by the Press and the reports came back to us. At the same time, or about the same time, we learned that war existed between the United States and Mexico, by the acts of the latter country. On learning this fact General Taylor transferred our camps to the south or west bank of the river, and Matamoras was occupied. We then became the Army of Invasion. Up to this time Taylor had none but regular troops in his command; but now that invasion had already taken place, volunteers for one year commenced arriving. The army remained at Matamoras until sufficiently reinforced to warrant a movement into the interior. General Taylor was not an officer to trouble the administration much with his demands, but was inclined to do the best he could with the means given him. He felt his responsibility as going no further. If he had thought that he was sent to perform an impossibility with the means given him, he would probably have informed the authorities of his opinion and left them to determine what should be done. If the judgment was against him he would have gone on and done the best he could with the means at hand without parading his grievance before the public. No soldier could face either danger or responsibility more calmly than he. These are qualities more rarely found than genius or physical courage. General Taylor never made any great show or parade, either of uniform or retinue. In dress he was possibly too plain, rarely wearing anything in the field to indicate his rank, or even that he was an officer; but he was known to every soldier in his army, and was respected by all. 
I can call to mind only one instance when I saw him in uniform, and one other when I heard of his wearing it. On both occasions he was unfortunate. The first was at Corpus Christi. He had concluded to review his army before starting on the march and gave orders accordingly. Colonel Twiggs was then second in rank with the army, and to him was given the command of the review. Colonel and Brevet Brigadier-General Worth, a far different soldier from Taylor in the use of the uniform, was next to Twiggs in rank, and claimed superiority by virtue of his brevet rank when the accidents of service threw them where one or the other had to command. Worth declined to attend the review as subordinate to Twiggs until the question was settled by the highest authority. This broke up the review, and the question was referred to Washington for final decision. General Taylor was himself only a colonel, in real rank, at that time, and a brigadier-general by brevet. He was assigned to duty, however, by the President, with the rank which his brevet gave him. Worth was not so assigned, but by virtue of commanding a division he must, under the army regulations of that day, have drawn the pay of his brevet rank. The question was submitted to Washington, and no response was received until after the army had reached the Rio Grande. It was decided against General Worth, who at once tendered his resignation and left the army, going north, no doubt, by the same vessel that carried it. This kept him out of the battles of Palo Alto and Resaca de la Palma. Either the resignation was not accepted, or General Worth withdrew it before action had been taken. At all events he returned to the army in time to command his division in the battle of Monterey, and served with it to the end of the war. The second occasion on which General Taylor was said to have donned his uniform, was in order to receive a visit from the Flag Officer of the naval squadron off the mouth of the Rio Grande.
While the army was on that river the Flag Officer sent word that he would call on the General to pay his respects on a certain day. General Taylor, knowing that naval officers habitually wore all the uniform the law allowed on all occasions of ceremony, thought it would be only civil to receive his guest in the same style. His uniform was therefore got out, brushed up, and put on, in advance of the visit. The Flag Officer, knowing General Taylor's aversion to the wearing of the uniform, and feeling that it would be regarded as a compliment should he meet him in civilian's dress, left off his uniform for this occasion. The meeting was said to have been embarrassing to both, and the conversation was principally apologetic. The time was whiled away pleasantly enough at Matamoras, while we were waiting for volunteers. It is probable that all the most important people of the territory occupied by our army left their homes before we got there, but with those remaining the best of relations apparently existed. It was the policy of the Commanding General to allow no pillaging, no taking of private property for public or individual use without satisfactory compensation, so that a better market was afforded than the people had ever known before. Among the troops that joined us at Matamoras was an Ohio regiment, of which Thomas L. Hamer, the Member of Congress who had given me my appointment to West Point, was major. He told me then that he could have had the colonelcy, but that as he knew he was to be appointed a brigadier-general, he preferred at first to take the lower grade. I have said before that Hamer was one of the ablest men Ohio ever produced. At that time he was in the prime of life, being less than fifty years of age, and possessed an admirable physique, promising long life. But he was taken sick before Monterey, and died within a few days.
I have always believed that had his life been spared, he would have been President of the United States during the term filled by President Pierce. Had Hamer filled that office his partiality for me was such, there is but little doubt I should have been appointed to one of the staff corps of the army – the Pay Department probably – and would therefore now be preparing to retire. Neither of these speculations is unreasonable, and they are mentioned to show how little men control their own destiny. Reinforcements having arrived, in the month of August the movement commenced from Matamoras to Camargo, the head of navigation on the Rio Grande. The line of the Rio Grande was all that was necessary to hold, unless it was intended to invade Mexico from the North. In that case the most natural route to take was the one which General Taylor selected. It entered a pass in the Sierra Madre Mountains, at Monterey, through which the main road runs to the City of Mexico. Monterey itself was a good point to hold, even if the line of the Rio Grande covered all the territory we desired to occupy at that time. It is built on a plain two thousand feet above tide water, where the air is bracing and the situation healthy. On the 19th of August the army started for Monterey, leaving a small garrison at Matamoras. The troops, with the exception of the artillery, cavalry, and the brigade to which I belonged, were moved up the river to Camargo on steamers. As there were but two or three of these, the boats had to make a number of trips before the last of the troops were up. Those who marched did so by the south side of the river. Lieutenant-Colonel Garland, of the 4th infantry, was the brigade commander, and on this occasion commanded the entire marching force. One day out convinced him that marching by day in that latitude, in the month of August, was not a beneficial sanitary measure, particularly for Northern men.
The order of marching was changed and night marches were substituted with the best results. When Camargo was reached, we found a city of tents outside the Mexican hamlet. I was detailed to act as quartermaster and commissary to the regiment. The teams that had proven abundantly sufficient to transport all supplies from Corpus Christi to the Rio Grande over the level prairies of Texas, were entirely inadequate to the needs of the reinforced army in a mountainous country. To obviate the deficiency, pack mules were hired, with Mexicans to pack and drive them. I had charge of the few wagons allotted to the 4th infantry and of the pack train to supplement them. There were not men enough in the army to manage that train without the help of Mexicans who had learned how. As it was the difficulty was great enough. The troops would take up their march at an early hour each day. After they had started, the tents and cooking utensils had to be made into packages, so that they could be lashed to the backs of the mules. Sheet-iron kettles, tent-poles and mess chests were inconvenient articles to transport in that way. It took several hours to get ready to start each morning, and by the time we were ready some of the mules first loaded would be tired of standing so long with their loads on their backs. Sometimes one would start to run, bowing his back and kicking up until he scattered his load; others would lie down and try to disarrange their loads by attempting to get on the top of them by rolling on them; others with tent-poles for part of their loads would manage to run a tent-pole on one side of a sapling while they would take the other. I am not aware of ever having used a profane expletive in my life; but I would have the charity to excuse those who may have done so, if they were in charge of a train of Mexican pack mules at the time.
Panel painting "Lady of the Rocks" by the artist Leonardo da Vinci. (The second version of the "Virgin of the Rocks" currently hangs in the National Gallery in London.)

The original commission entrusted to Leonardo da Vinci for the panel Madonna of the Rocks came from a Catholic religious society calling itself "the spirit of the pure," which needed a painting to adorn the centerpiece of the three-section altarpiece in its church, San Francesco, in Milan. The society gave Leonardo specific dimensions and detailed requirements for what should be present in the painting: the Virgin Mary, the infant John the Baptist, the Christ Child, and the angel Uriel, all sheltering in a cave. Although da Vinci did what was asked, when he delivered the work the group's reaction was disgust and horror: the painting was full of strange and disturbing details, and the details he composed into the picture sparked outrage. Da Vinci calmed the anxiety of the religious association by painting a second copy of the Madonna of the Rocks, this one entirely acceptable to the Catholic Church. It is this second version that is preserved in the National Gallery in London under the title "Virgin of the Rocks."
We think it might be absurdly vain, but wouldn't it be fun to give everyone in your family a chocolate modeled after your mug this holiday season? [Eok.gnah] has already worked out a system to make this possible. It consists of three parts: scanning your head and building a 3D model from it, using that model to print a mold, and molding the chocolate itself. He used 123D to scan his face. No mention of hardware, but this face scanning rig would be perfect for it. He then cleaned up the input and used it to make a mold model by subtracting his face from a cube in OpenSCAD. That needs to be sliced into layers for the 3D printer, and he used the Slic3r program which has been gaining popularity. Finally the mold was printed and the face was cast with molten chocolate. We'd suggest using a random orbital sander (without sand paper) to vibrate the bottom of the mold. This would have helped to evacuate the bubble that messed up his nose. You know, it doesn't have to be your face. It could be another body part, even an internal one… like your brain!
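The mold-making step described above is a single boolean difference. A minimal OpenSCAD sketch of the idea (the file name and block dimensions here are hypothetical placeholders, not from the original project):

```openscad
// Carve the scanned face out of a solid block to form the mold cavity.
difference() {
    // Solid mold blank; size it to fit the scan.
    cube([80, 80, 40]);
    // Sink the cleaned-up face mesh into the top of the block.
    translate([40, 40, 40])
        import("face_scan.stl");
}
```

Slic3r can then slice the exported STL of this mold for printing.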
New Caledonia is a group of islands in the South Pacific. The main island, Grande Terre, is surrounded by the world's largest lagoon and sits inside the second largest coral reef on Earth. There are smaller islands around the main island, including the Loyalty Islands group and the stunning Isle of Pines. New Caledonia is also one of the world's top "biodiversity hotspots," with over 76% of its plants being endemic. Noumea, the biggest city, shows off New Caledonia's French culture, and outside of it are areas that are still relatively untouched, with the native Kanak culture still intact.
Science Fair Questions

Coming up with good science fair questions can seem overwhelming. There are so many different topics from which to choose. Botany, Zoology, Medicine, Computer Science, and many other fields of study are fair game for science fairs and other academic events. When entering a science fair, you get to choose a question that you want to study.

How Can I Think of a Science Fair Question?

When considering different science fair questions, keep in mind that you should select a topic that can be answered with research. Try to come up with an original question rather than choosing from among the most common science fair topics. It is also important to select something that interests you. You are more likely to enjoy working on the project and do a better job if you are interested in the question that you are trying to answer.

What are Some Example Science Fair Questions?

Chemistry is a popular science fair question topic. Consider the following possible questions related to this field:

Maybe you're interested in finding out which metals are the most malleable. To dive into this question you need to set up a testing environment where you can record how much something bends or can be shaped. You'll need heat and instruments to shape the metals and something to take their temperature. You'll need to take measurements, record the temperature, and keep track of other relevant variables.

Have you wondered how much pressure or strain it takes to break stuff? Why not make it into a science fair question? You'll need a number of materials to break and a device that measures pounds per square inch. Your science lab probably has this type of device. If not, you can devise different ways to break stuff. Whatever you do, make sure the testing is the same throughout. For instance, you can drop a bowling ball on things to check the strength of materials, but ensure that you drop the same ball from the same height for each thing in order to keep the results accurate.
Do plants always grow towards sunlight? To answer this question, you can get different types of plants and keep a daily record of their growth. You will have to measure height and angle to the sun. Since most science fair questions have a testing phase of one or two months, you should have plenty of time to collect enough data to draw a conclusion.

This question is more advanced, but perhaps you can answer “Can engines run on more than just gasoline?” Solicit the help of someone knowledgeable in engine mechanics if you can. Then test different liquids to see whether the engine will run on them, or thin the liquids with a little gasoline to see whether a diluted mixture works.

What Do I Do Now?

After all the experimenting and fact-finding missions, the final step is to compete. You can choose anything from a science fair at your school to county fairs and state-sanctioned science fairs. If you are interested in competing in upper-division science fairs, it's a good idea to start small and get some practice with easier questions and experiments before heading to the bigger leagues of county and state fairs, which require more detailed experiments and more sophisticated questions and research techniques.
<urn:uuid:545cf272-6a30-45cb-a39e-3f5d742ae1fc>
CC-MAIN-2013-20
http://adviceopedia.org/skins/referAFriend.php?title=Science_Fair_Questions
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948836
649
3.53125
4
by Tetsuo Kogawa

Radio art or radioart is a new genre of art, and I think it is the most advanced genre among the arts using electronics. The first international festival of “radio art” was held in Dublin, Ireland, August 12 to 18, 1990. At the time, however, radio art was mostly considered an art using existing radio stations with their regular transmission facilities. The difference would lie in the contents and the audio facilities. In short, radio art was merely a new member of the family of radio programs. This is not meant to slight the numerous admirable works of sound and music made using radio stations. I am talking about what the concept of radio art or radioart is, or should be. As long as we use this term, it should express something newer than the existing genre. In order to rethink this point, let's use “radioart” rather than “radio art” from now on. What is radioart? Who is radioart? The popular meaning of “radio” has been a tool for receiving radio signals. There is and can be radioart using such a tool. More positively, radioart would be involved in wireless transmission. However, such transmission remains within the function of broadcasting. A radio station broadcasts. Broadcast means to 'cast broadly'. Sometimes broadcasting is done not so broadly; it is called "narrowcasting". But it still casts. Broadcasting has been seeking a broader and broader range of transmission, toward nation-wide, worldwide and space-wide (satellite) broadcasting. Broadcasting presumes two dualistic elements: sender and receiver, or transmitting and receiving. These two elements must be brought into accordance by tuning the input and the output. In this accordance, messages are said to be delivered from one point to another. Broadcasting is considered a point-to-point relationship, and it has been seeking to expand the distance between such points as far as possible. The development of recent electronic technology has made it easy to expand that distance.
Digital broadcasting is supposed to enable a perfect accordance of the input and output. However, the difference between the input and the output never disappears unless an extreme and forced abstraction or simplification is introduced. One of the most obvious examples of such forced abstraction would be Morse code communication. Even this simplified communication has to rely on a process of interpreting the signals. As long as a person operates the sending and receiving, arbitrariness and redundancy intervene in the communication. In this sense, radio has been seized with a modernist paranoia of accordance of I/O by tuning. Digital technology is expected to satisfy such a paranoiac dream. The Internet already shows that very different things are happening in the contemporary electronic medium. Potentially, the Internet erases the difference between the sender and the receiver. It proves that the paired concepts of "sender" and "receiver" are obsolete. It is therefore relevant to think about radio transmission from the perspective of the computer. In fact, the computer is a transmitter. This transmitter does not deliver anything. The computer does not deliver messages but can duplicate anything, omnipresently. You could say that a message is virtually delivered from one place to another, but the fact is that duplications appear in different places. The computer does not extend messages from one place to another; it samples locally and can proliferate the samples globally. I have called this function "translocal". Unlike a "global medium" such as satellite broadcasting, which covers a global zone with homogeneous contents, the Internet can infinitely multiply local units and simultaneously duplicate them remotely. The computer creates a polymorphous space where different units of semiological signs relate to each other and are interwoven. I was involved in Mini FM, miniaturized FM radio stations that were very popular in Japan in the 80s.
From the early 80s to the mid 80s, hundreds of stations appeared across the country. Before the new name of "Mini FM", we called our activity "free radio", deeply influenced by the Italian free radio scene of the late 70s. Among various types of stations, we started our Mini FM station “Radio Home Run” as narrowcasting for local communities and venues where artists and activists got together. “Radio Home Run” connotes “over the border”. While being involved in Mini FM, I found that a certain limitation of service area created a new form of communication. It was amazing that a walking-distance radio was not a childish attempt but provided a different form of communication: the station was not only a transmitting place but also a totally unconventional space for artists, activists, students and bohemians. The programs might have been poor and ill-organized, but the space vitalized new emotions among the participants. Meanwhile I became familiar with Félix Guattari's concepts of "micro politics" and "molecular revolution", and was able to confirm that Mini FM had unconsciously stepped into such dimensions. In my understanding of Guattari-Deleuze, the "molecular" is the minimum unit of singularity and multiplicity. Unless you change things at this level, nothing will change at all. In 1985, Guattari came to Japan and visited Radio Home Run. He recognized that our attempt was seeking a kind of his "molecular revolution" and "schizoanalysis", in his idiosyncratic terms. I became aware that Mini FM could reveal every detail and triviality of what we were thinking and feeling. In 1995, I started my website called “Polymorphous Space”, which has the subtitle “Translocal Weaving Connections”. My experience with the Internet confirmed that the authentic function of the transmitter (computer) is not to cast but to vitalize, and that the size of transmission, "local or global", is not so important. Every local unit of transmission is translocal, and it contains something global in it.
This is quite natural in the realm of organic cells from the perspective of molecular biology. Now we can say that Mini FM was a transitional form from broadcasting radio to something that overcomes it. One of the new things is that the action of transmission itself can be considered a collective performance art. Also, Mini FM let me find that in radio the point is not the type of contents but the size of transmission. I then became interested in how far the size could be minimized. Finally I arrived at the notion that a hand-sized radio transmission could be possible. I insist on "hands" because hands are the minimal unit of our body insofar as they have the dual functions of touching and being touched. Also, the concept of art derives from techne in ancient Greek and meant 'hand-work'. Therefore I can say that a minimal Mini FM could be a modest model of radioart. The concept of "radio art" is quite old. Since the Futurists' interest in radio in the thirties, many artists and theoreticians have been involved in radio from the perspective of art. However, as I mentioned, most of them relied on already existing radio (broadcasting) stations. The point was the contents that the stations carried. It was "art radio" instead of "radio art". They considered radio a medium, just as paper is for a book. Radio technology was secondary. John Cage was one of the earliest artists to use radio technology for creating new sound pieces and performance art. But even Cage used radio as a tool for music and sound art. Instead of a historic and scholastic consideration of "radio art", I would like to plunge into examining the concept itself. When does radio turn into radioart, beyond being a medium? For a newspaper, for instance, paper is a medium. So plastic or a liquid crystal display (LCD) can be substituted for it. How and when does paper become an art? It is when the material of "paper" changes itself into a different material.
Whatever you write and draw on a sheet of paper, it remains a medium. Therefore such attempts create not paperart but art on the paper. And when you crumple it up, it becomes garbage. Adorno argued that “all post-Auschwitz culture, including its urgent critiques, is garbage”. This “garbage” (Müll) is, however, not a worthless thing but a new material of art in Adorno's critical perspective. In my interpretation, post-modern arts (arts after modernism) start with Adorno's “garbage”. His argument advocated “trash art”. But considering his critiques of the electronic mass media such as radio and television, we can argue that the most post-modern material as “garbage” would be airwaves. In thinking about how airwaves as garbage become an art, the aforesaid example of paper might help us. When a sheet of paper is crumpled, it becomes garbage and at the same time it has many folds. They damage the material as writing/drawing paper but change this material into another. Gilles Deleuze provides an interesting understanding of the fold, although in relation to Leibniz's monadology: "A labyrinth is said, etymologically, to be multiple because it contains many folds. The multiple is not only what has many parts but also what is folded in many ways." This argument is very suggestive because it locates multiplicity not in the contents of the material but in the material itself. Parts create a multiplicity of contents, but they do not change the material itself. They are only parasites on the material. In the example of airwaves, contents/parts are parasites on the airwaves: that's why broadcasting airwaves are called a "carrier". Radioart starts by intervening directly in the material called "airwaves". Airwaves are one class among the different and various kinds of waves. These waves are sometimes audible and visible, and sometimes inaudible and invisible. Conventionally, airwaves are classified as EHF, SHF, UHF, VHF, HF, LF, VLF and so on.
Whatever the frequency, every airwave radiates. Radio is radiation. Radioart tries to intervene in radiation by electromagnetic transmission and to create a certain form of waves. The form of radiation is oscillation. Waves oscillate. When our body and senses can have an appropriate 'resonance', we can perceive the waves. But waves oscillate even without our perception. Waves oscillate by themselves. Waves swing back and forth, up and down, omnidirectionally. Waves themselves don't convey anything. They are a Heraclitean free play of waves: panta rhei. Radioart is a way to be involved in such an oscillation of airwaves. As long as we don't expect any kind of telepathy or extrasensory perception, we need a detection interface that enables us to perceive them as vibrations, sounds, or lights. But these are not the signs of messages; they are the signs of an "inner life" of waves. In his conversation with Daniel Charles, John Cage made an interesting comment using the example of an ashtray: an ashtray, he said, is in a state of vibration, but we can't hear those vibrations. In an anechoic chamber, "I'm going to listen to its inner life thanks to a suitable technology, which surely will not have been designed for that purpose." After the Mini FM movement was over, along with my attempts to use Internet radio and radio parties with micro transmitters, my challenge in radioart has been to shrink the size of transmission to the minimum. How to decide the minimum? Given that I commit myself to transmitting as an artist, a man of techne, i.e., a creating-transforming subject using hands, the minimum size in which the airwaves and my body act together should be the distance I can reach with my hands stretched out full-length. That is a one-meter radius. I built a transmitter that can cover a one-meter radius and tried to create a kind of folds of airwaves. Recently, the radio landscape (I would prefer to call it 'radioscape') has become popular, especially in VLF.
You can use it for creating new sound art pieces. But it would be more interesting to consider it as a play with airwaves. This “play” is just like what Heidegger wrote about Heraclitus: “Why does it play, the great child of world-play Heraclitus brought into view in the aion? It plays, because it plays.” The sounds would be only an index of how you played with airwaves. Radioart is a process art rather than an object art. You cannot fix the live process as it was. In fact, it's very hard to control even a one-meter radius of electromagnetic field. We cannot perfectly control even our own hands. Therefore we have to 'release' ourselves toward the things themselves: the airwaves themselves. Regarding this different sense of play in my radioart, I can differentiate my performance from sound art, experimental music and noise music. Although it concerns experimental music, Michael Nyman's description may be even more appropriate to radioart: Duchamp once said that ‘the point was to forget with my hand…I wanted to put painting once again at the service of my mind.’ The head has always been the guiding principle of Western music, and experimental music has successfully taught performers to remember with their hands, to produce and experience sounds physiologically. Experimental music has been gradually forgetting “their hands” since digital devices began to fascinate musicians. The situation of our hands has been drastically changing with the appearance of VR technology and robotics. Nobody today can deny being a cyborg in one way or another. So the horizon between “my hands” and “my mind” has become seamless. But I would like to insist that the point would be to forget with my mind rather than with my hands. [in Acoustic.Space nr.
7: SPECTROPIA illuminating investigations in the electromagnetic spectrum, 2008, Riga, Liepaja, pp. 128-135]
Some of the data can be read and listened to on my website: http://anarchy.translocal.jp/radioart/1990IRAF.html
"Weil der Mensch als geschichtlicher er selbst ist, muss sich die Frage nach seinem eigenen Sein wandeln aus der Form: >>Was ist der Mensch?<< in die Form: >>Wer ist der Mensch?<<", Martin Heidegger, Einführung in die Metaphysik, Max Niemeyer Verlag, 1957, p. 110.
Martin Heidegger deconstructs the Western metaphysics that modern science and technology are based on. He argues that this metaphysics anticipates such a truth as "the true, whether it be a matter or a proposition, is what accords, the accordant" [das Wahre, sei es eine wahre Sache oder ein wahrer Satz, ist das, was stimmt, das Stimmende] (Vom Wesen der Wahrheit, Vittorio Klostermann, p. 7; Pathmarks, trans. William McNeill, Cambridge University Press, p. 138).
A detailed story is in "Mini FM: Performing Microscopic Distance (An E-Mail Interview with Tetsuo Kogawa)", At a Distance: Precursors to Art and Activism on the Internet, edited by Annmarie Chandler and Norie Neumark, The MIT Press, 2005, pp. 190-209.
More theoretical analyses are in several of my articles at http://anarchy.translocal.jp/non-japanese/index.html
Félix Guattari, La révolution moléculaire, Éditions Recherches, 1977. The recordings of my three interviews with him are at http://anarchy.translocal.jp/guattari/index.html
"Alle Kultur nach Auschwitz, samt der dringlichen Kritik daran, ist Müll.", Negative Dialektik, suhrkamp taschenbuch wissenschaft 113, 1966, p. 359.
Tetsuo Kogawa, "Trash-art in the Age of Digital Ash", http://anarchy.translocal.jp/non-japanese/19990808trash-art.html, The Look From the East, MediaArtLab, Moscow, 2000, pp. 169-175.
"Adorno's <<Strategy of Hibernation>>", The Look From the East, MediaArtLab, Moscow, 2000, pp. 70-77.
Gilles Deleuze, The Fold: Leibniz and the Baroque, trans. Tom Conley, University of Minnesota Press, 1993, p. 3.
John Cage, For the Birds, Marion Boyars, 1976/1995, pp. 220-221.
Martin Heidegger, Der Satz vom Grund, Günther Neske, p. 188 ["Warum spielt das von Heraklit im aion erblickte grosse Kind des Weltspieles? Es spielt, weil es spielt. Das >>Weil<< versinkt im Spiel. Das Spiel ist ohne >>Warum<<. Es spielt, dieweil es spielt."]; trans. Reginald Lilly, The Principle of Reason, Indiana University Press, p. 113.
Michael Nyman, Experimental Music: Cage and Beyond, Second Edition, Cambridge University Press, 1999, p. 14.
*This article is based on my lecture-performances at AV Festival 2008 in Newcastle, England, and the Deep Wireless Festival of Radio & Transmission Art in Toronto, Canada. I would like to express my appreciation to Honor Harger, Darren Copeland and Nadene Thériault-Copeland for giving me a chance to rethink radioart.
<urn:uuid:72e825c4-88fb-499d-8916-af8a4723b07c>
CC-MAIN-2013-20
http://anarchy.translocal.jp/radioart/20080710AcousticSpaceIssue_7.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.932612
3,800
2.6875
3
"Web Application Obfuscation: '-/WAFs.. Evasion.. Filters//alert(/Obfuscation/)-'" by Mario Heiderich, Eduardo Alberto Vela Nava, Gareth Heyes, David Lindsay
Elsevier, Syngress | 2011 | ISBN: 1597496049, 9781597496049 | 290 pages | PDF/DjVu | 2/3 MB

This book takes a look at common Web infrastructure and security controls from an attacker's perspective, allowing the reader to understand the shortcomings of their security systems. Web applications are used every day by millions of users, which is why they are one of the most popular vectors for attackers. Obfuscation of code has allowed hackers to take one attack and create hundreds, if not millions, of variants that can evade your security measures. Find out how an attacker would bypass different types of security controls, how these very security controls introduce new types of vulnerabilities, and how to avoid common pitfalls in order to strengthen your defenses.

- Looks at security tools like IDS/IPS that are often the only defense in protecting sensitive data and assets
- Evaluates Web application vulnerabilities from the attacker's perspective and explains how these very systems introduce new types of vulnerabilities
- Teaches how to secure your data, including info on browser quirks, new attacks and syntax tricks to add to your defenses against XSS, SQL injection, and more

Contents
About the Authors
About the Technical Editor
Chapter 1: "Introduction"
Chapter 2: "HTML"
Chapter 5: "CSS"
Chapter 6: "PHP"
Chapter 7: "SQL"
Chapter 8: "Web application firewalls and client-side filters"
Chapter 9: "Mitigating bypasses and attacks"
Chapter 10: "Future developments"

CHAPTER 2 HTML
- History and overview
- The document type definition
- The doctype declaration
- Why markup obfuscation?
- Basic markup obfuscation
- Structure of valid markup
- Playing with the markup
- Advanced markup obfuscation
- Broken protocol handlers
- End of statement
- The execScript function in VBScript
- The jscript.compact value
- The jscript.encode value
- The execScript function in JScript

CHAPTER 5 CSS
- Rulesets and selectors
- UI redressing attacks
- Attacks using the CSS attribute reader
- Remote stylesheet inclusion attacks

CHAPTER 6 PHP
- History and overview
- Obfuscation in PHP
- PHP and numerical data types

CHAPTER 7 SQL
- SQL: A short introduction
- Relevant SQL language elements
- Strings in SQL

CHAPTER 8 Web application firewalls and client-side filters
- Bypassing client-side filters
- Denial of service with regular expressions

CHAPTER 9 Mitigating bypasses and attacks
- Protecting against code injections
- HTML injection and cross-site scripting
- Server-side code execution
- Protecting the DOM

CHAPTER 10 Future developments
- Impact on current applications
- Current security model of the web
- Extending same origin policy
- New attributes for Iframe
- The text/html-sandboxed content type
- The X-Frame-Options header
- The X-XSS-Protection header
- The Strict-Transport-Security header
- The Content-Security-Policy header
- The flash plug-in
- The Java plug-in
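To make the blurb's point about attack variants concrete, here is a small Python sketch (not from the book, and deliberately benign) that counts the upper/lower-case spellings of a single filtered token. It illustrates why naive case-sensitive signature blacklists fail: one six-letter keyword alone has 2^6 spellings, before any of the encoding tricks the book catalogues.

```python
from itertools import product

def case_variants(token):
    """Yield every upper/lower-case spelling of `token`."""
    pools = [(c.lower(), c.upper()) if c.isalpha() else (c,) for c in token]
    for combo in product(*pools):
        yield "".join(combo)

variants = list(case_variants("script"))
print(len(variants))   # 2**6 = 64 spellings of one six-letter token
```

A filter that normalizes input (e.g. lowercases it) before matching defeats this particular trick, which is one reason the book's mitigation chapters stress canonicalization over pattern blacklists.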
<urn:uuid:0f9db27a-ccd6-4f37-aacf-7bc0b496ba1f>
CC-MAIN-2013-20
http://ebooksmio.com/development-programming/14667-web-application-obfuscation-wafs-evasion-filters-2.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.658964
688
2.640625
3
November 12, 2009
By Skyler Dillon

In the three-day International Genetically Engineered Machine (iGEM) competition last week at the Massachusetts Institute of Technology, the University of Nevada, Reno team presented a project that could potentially put a big dent in mosquito-transmitted diseases such as malaria and West Nile virus that cause more than one million deaths worldwide each year. Although this was the team's first year in the competition, their work earned a bronze medal, placing them in the company of other bronze-level teams such as MIT, Brown University and Cornell University. "The team had a great time," said Christie Howard, biochemistry and molecular biology professor and one of the team's advisors. "I think what impressed them most was meeting teams from all over the world and being able to compete on the same level as all of them." The team's goal was to transform the genes that synthesize cinnamon oil into E. coli and duckweed. The genetically transformed duckweed could then be used as an eco-friendly mosquito killer, both as a larvicide and insecticide. The idea for the project, and the plan to execute it, was developed by the 10 undergraduate students on the team. "I'm probably most proud of their self-motivation," said Howard. "We've been working on this since June, and the students did a great job of staying focused and getting the work done." Howard says the most impressive project the team encountered at the competition was that of the overall winner, Cambridge, which engineered E. coli to change color in response to environmental hazards as a detection system to guard against dangers such as heavy metal contaminants.
Other project examples from the 110 teams from around the world include Stanford University's method of balancing T-cell populations in patients, which could be therapeutic for those with immune disorders such as cancer or AIDS. Spain's University of Valencia developed a "bio-screen" consisting of living cells that generate light in response to electrical impulses. The Nevada iGEM team made enough significant progress toward their goal of completing the genetic pathways in duckweed to earn a medal, not an easy feat for first-time iGEM competitors. "We have about $33 left in our account," said Howard. "So it was a little hard to finish. But we made a lot of progress." Some team members will continue work on the duckweed project if funding becomes available. Plans for Nevada's entry into next year's competition are already in the works. In the next couple of weeks, students will continue their regular Wednesday meetings to start brainstorming project ideas and fundraising plans for 2010. Though many of the team's current students will graduate before the next competition, Howard is excited to see some new faces benefit from the project.
<urn:uuid:29765f2b-b206-49c2-8dda-06ce1c4d26de>
CC-MAIN-2013-20
http://www.unr.edu/nevada-today/news/2009/university-takes-bronze-in-igem-competition
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976763
706
2.953125
3
Integrated Pest Management and Missouri's Agriculture
Department of Agronomy

Integrated pest management (IPM) has different meanings for everyone who works in the agricultural environment. It can be thought of as a systematic approach to solving pest problems by applying our knowledge about pests to prevent them from damaging crops. Beginning in 1972, the U.S. Department of Agriculture made funding available to the states to develop an IPM network through the extension system. MU's IPM program has been in place since the mid-1970s. IPM programs originally focused mainly on insects and their control but today consider all categories of pests. IPM programs take action to manage pests when their numbers are likely to exceed acceptable levels; that is, a management measure is taken in consideration of the level of pest damage, the revenue losses resulting from the damage, and the cost of treatment. This concept is known as the economic threshold — a cornerstone of the IPM process. Economic thresholds have been determined for many of the agricultural pests that occur regularly in Missouri. These actions are designed to reduce economic damage caused by pests, yet limit the negative effects on beneficial organisms and the environment. Simply applying pesticides to crops is not IPM; however, pesticide use is recognized as a legitimate management tactic within IPM.

The importance of IPM

Agriculture plays a key role in Missouri's economy with over $4.6 billion in annual farm cash receipts. The state ranks second nationally with approximately 110,000 farms producing a diverse range of crop and livestock commodities. Missouri also produces significant minor crops such as apples, peaches, grapes, tobacco and cucurbits. According to the Missouri Agricultural Statistics Service, the state ranks in the nation's top ten in the production of hay, sorghum, soybeans, rice, grapes, watermelon, corn, cattle, turkeys, swine and broilers.
Because of the state's geographical location and climate, agricultural production occurs in a diverse range of ecosystems. Management of weeds, insects and diseases is necessary for profitable production. Missouri agricultural producers report a heavy reliance on pesticides for managing major pests. Cotton producers in southeast Missouri indicate that 82 percent of their pest management decisions are based on actual field surveys. The surveyed fields showed a gain of 50 pounds per acre in cotton lint yield when compared with acreage that was not surveyed, resulting in a net benefit of $12.2 million for the state. With such economic incentives, Missouri's growers are encouraged to practice sound IPM measures.

Environmental and social

Missouri's citizens are concerned about pesticide and nutrient movement into surface water and groundwater, food safety, and effects on nontarget organisms and on the health of farm workers. A healthy environment sustains agricultural production and the livestock and humans living there. A degraded environment with depleted soil resources, poor water and air quality and destroyed wildlife habitat does not. IPM can help to resolve many of the issues associated with the interaction between Missouri's rural and urban populations and promises definite benefits for both.

IPM program goals

Four national objectives have been identified for the IPM program:
- Safeguard human health and the environment through improved application of IPM strategies and systems.
- Increase the range of benefits to enterprises and individuals through improved use of IPM strategies and systems.
- Increase the supply and dissemination of information and knowledge about IPM strategies and systems.
- Enhance collaboration between public, private and nonprofit stakeholders to foster improved use of IPM strategies and systems.
MU's IPM program has specific objectives related to local agricultural interests:
- Train and provide support for regional extension specialists to serve clientele on the local level.
- Provide training for growers, consultants and IPM professionals in the private sector.
- Develop educational materials to aid in the pest management decision-making process for commodities and pests relevant to Missouri.
- Monitor and document changes in pest management practices.

Ultimately, meeting these objectives will be instrumental in increasing agricultural profitability while minimizing negative environmental effects.

Steps of effective IPM

Putting a successful IPM program into action in the agricultural industries involves the following five steps:
- Identify key pests and the damage they cause.
- Monitor pest populations on a regular basis.
- Determine the potential for economic loss or significant reduction of aesthetic value.
- Choose the proper management tactic or combination of tactics.
- Evaluate the effectiveness of the management plan.

Identify pests correctly

Proper identification of a pest is important for several reasons. It may not be an economically detrimental pest after all, and no control measures will be necessary. Not all insects are pests; some are natural predators or parasites that help to control pest species. The proper selection of a pesticide depends on correct identification of the pest and, in some cases, its life stage.

Monitor for pest outbreaks

Rather than calendar-based treatments, IPM stresses scouting practices to detect pests and determine if action is necessary. Time constraints and the lack of trained, competent personnel can make it a challenge to carry out a scouting program. If damage can be detected before a serious pest population becomes established, then several problems can be prevented.
For example, research has shown that pesticide treatment of a soybean field is justified economically when an average of one soybean podworm per foot of row can be detected. Before pesticides are applied, scouting may show that rates lower than the maximum registered rates can be applied to achieve acceptable control of small insects. Several practical considerations can save time in a scouting program. Knowing a pest's habits and habitat can save time in the monitoring program. For example, grain sorghum is most susceptible to corn earworm attack during the two-week period following pollination, so scouting for this pest should begin about one week after pollination. Wheat planted adjacent to tall fescue pastures may be especially attractive to true armyworm infestation; such areas can be watched more frequently and closely. The anticipated time of pest development can alert a pest manager to the most opportune times for scouting. Degree-day modeling accumulates heat units on days when average temperatures exceed the threshold for development and activity of a particular pest. By tracking degree-days, pest managers can predict when the pest will appear and damage will occur.

Establishing thresholds for control measures

In the original IPM models that were developed in agricultural environments, control measures were based on an economic threshold. To justify treatment, pest populations or pest damage had to exceed this threshold. For many of Missouri's common agronomic insect pests, thresholds have been developed as a result of many years of research. These thresholds are dynamic and often depend on crop and pest growth stage. For example, treatment of first-generation European corn borer is justified when 50 percent of corn plants show leaf feeding and larvae are present. For the second generation, treatment is justified when 50 percent of plants have larvae on the first leaf above and below the ear.
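The degree-day idea described in the monitoring section can be sketched in a few lines of Python. This is a simplified illustration, not MU's actual model: the base temperature, target accumulation, and the week of daily highs and lows below are all hypothetical values chosen for the example.

```python
def daily_degree_days(t_min, t_max, base=50.0):
    """Simple-average degree-days for one day: heat accumulated above
    the pest's developmental threshold (base, in degrees F)."""
    avg = (t_min + t_max) / 2.0
    return max(0.0, avg - base)

def accumulate(readings, base=50.0, target=300.0):
    """Sum daily degree-days over (t_min, t_max) pairs; return the
    1-based day on which the target is reached, plus the running total."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(readings, start=1):
        total += daily_degree_days(t_min, t_max, base)
        if total >= target:
            return day, total
    return None, total  # target not reached in this period

# Hypothetical week of spring temperatures (degrees F)
week = [(48, 66), (50, 70), (55, 75), (60, 80), (58, 76), (52, 68), (45, 60)]
print(accumulate(week, base=50.0, target=80.0))  # -> (7, 81.5)
```

Real degree-day programs refine this with methods such as the single-sine calculation and pest-specific base temperatures, but the scheduling logic, accumulate heat units and act when a target is crossed, is the same.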
If there are health and safety threats or legal concerns associated with a certain pest, thresholds are more clearly defined. For example, even in low numbers the striped blister beetle is lethal to horses, so its presence in alfalfa hay for horses is not tolerated. In other instances, pest acceptance levels may be higher because of social or cultural factors or because of concerns about the costs or hazards of the pest management methods used.

A variety of integrated pest management tactics are available:

- Abiding by local, state and federal guidelines, such as quarantines, designed to prevent the spread of pests.
- Using beneficial organisms, such as natural pest predators, parasites and diseases, to suppress pest organisms. Alfalfa producers who have managed for greater numbers of beneficial insects are now experiencing fewer and less severe problems from the alfalfa weevil, and insecticidal control of aphids is rarely needed in Missouri cotton because beneficial insects normally control them.
- Using crop rotation, cultivation, sanitation and other farm practices that reduce persistent pest problems. Surveys indicate that crop rotation is the top cultural practice used to manage weed and insect pests.
- Using barriers, traps, trap crops, or adjustments to planting location or timing to evade or diminish pest pressure. Planting wheat after the "Hessian fly free dates" is a classic method of avoiding damage from that pest.
- Choosing resistant plant materials to avoid pest problems. One of the most common and successful strategies for managing soybean cyst nematode is to select and incorporate resistant varieties into the crop rotation scheme.
- Using pesticides to prevent or suppress a pest outbreak. Chemicals used in IPM programs are chosen to be as specific to the pest as possible and are applied at the lowest effective rate.
The pesticide should be short-lived in the environment, least toxic to beneficial organisms, and alternated with other chemical modes of action to help prevent the development of resistant pest populations.

The success of an integrated pest management program depends on evaluation of its results. What worked well, which aspects need improvement, and which should be eliminated? What are the benefits of the program in financial return and in environmental or social value?

IPM successes in Missouri

Black cutworm forecasting

In Missouri, the black cutworm is a migratory pest that can cause economic damage to corn. Rather than applying preventive preplant insecticides to all corn fields, rescue treatments can be applied only to fields with active and damaging infestations. Using trap count data to determine the arrival of black cutworm moths in Missouri, and degree-day modeling to calculate the predicted date of the damaging larval stage of this pest, corn producers and crop professionals are notified when to scout fields. With this timely scouting information and current economic thresholds, informed decisions can be made to treat or not to treat. The program avoids needless insecticide applications, producing both economic and environmental benefits.

Release of beneficial weevils to control musk thistle

Musk thistle is an introduced noxious weed infesting Missouri's pastures and forage crops (Figures 1 and 2). Moderate infestations of this weed are estimated to cause yield losses of nearly 25 percent. Specific natural enemies can aid in regulating the spread of musk thistle. The musk thistle weevil is one such natural enemy: its larvae feed in the receptacle of the developing flower, disrupting seed formation. A native of Europe, like musk thistle itself, the weevil was studied extensively to ensure that it would not damage economic plants.
In 1975, entomologists with the USDA-ARS Biological Control of Insects Research Laboratory in Columbia, Missouri, released 490 musk thistle weevils near Marshfield in Webster County. Since then, the weevils have been found as far as 22 miles from the five-acre pasture where the original release was made (Figure 3). Extensive research at the release site shows the weevil can contribute to a 50 to 95 percent reduction in numbers of thistles. Thus, the importation and release of natural enemies offers another way to reduce infestations of musk thistle. The advantages of this biological control program are:

- It is inexpensive.
- It poses no threat to nontarget organisms.
- Once established, the weevils move into adjoining infested areas on their own.
- It requires little additional effort once the weevil is established, while other controls must be applied periodically.

Parasitic wasps for control of cereal leaf beetle

The cereal leaf beetle is an imported pest that arrived in Missouri in 1972. Two of the beetle's natural enemies, tiny parasitic wasps that attack the beetle's eggs and larvae, were found in the mid-1990s. To promote the spread of these two beneficial organisms, populations have been reared in two field insectaries located on the property of the MU Agricultural Experiment Station. In recent years, parasitized host beetles have been relocated to several oat and wheat fields for release. It is estimated that the cereal leaf beetle could cause yield losses of 40 percent in wheat and 60 percent in oats if left unchecked.

Growers and consultants are increasingly aware that their ability to continue producing depends on favorable public perception of their practices. Part of the solution is to adopt IPM. It is also important to remember that as knowledge and technology evolve, so will IPM programs.

IPM1003, new November 2000
In its mark-up of the Defense Authorization bill for Fiscal 2013, the Strategic Forces Subcommittee of the House Armed Services Committee lauded the prior accomplishments of the Airborne Laser Test Bed program. It then went further by directing the Missile Defense Agency to provide a report by 31 July 2012 on the costs that would be involved in returning the Airborne Laser aircraft to an operational readiness status to continue technology development and testing, and to be ready to deploy in an operational contingency, if needed, to respond to rapidly developing threats from North Korea.

This Airborne Laser program, instituted in 1996, envisioned mounting a chemical laser in a modified Boeing 747-400F aircraft to destroy enemy missiles in their boost phase before they could deploy their nuclear weapons and countermeasures. After spending about $5.2 billion on the program over 16 years, the Missile Defense Agency announced its termination in February 2012, and advised that the modified aircraft would be mothballed and retired to the aircraft bone yard in Arizona. Lt. General Patrick O’Reilly, Director of the Missile Defense Agency, noted that a new generation of smaller and far more powerful unmanned solid-state lasers, capable of operating at higher altitudes, would be developed over the decade in the hope of creating an operationally effective anti-missile laser program.

A basic problem with the Airborne Laser is that the effective range of the weapon is limited by the attenuation of the beam as it passes through the atmosphere. During a House Armed Services Committee hearing on 13 May 2009, then Secretary of Defense Robert Gates stated that “the reality is that you would need a laser something like 20 to 30 times more powerful than the chemical laser in the plane … to be able to get any distance from the launch site to fire.
So right now, the ABL would have to orbit inside the borders of Iran in order to be able to shoot down that missile in the boost phase.” The conclusion regarding its lack of effectiveness was not limited to the Iran case. The Secretary added that “nobody in uniform that I know … believes that this is a workable concept.”

Nor are current prospects for the Airborne Laser any brighter in combatting missiles launched from North Korea as well as Iran. In a letter dated as recently as 30 April 2012 to the chair and ranking member of the same Strategic Forces Subcommittee of the House Armed Services Committee that is advocating revival of the Airborne Laser, the co-chairs of the “National Academy of Sciences National Research Council Committee on an Assessment of Concepts and Systems for U.S. Boost-Phase Missile Defense in Comparison to Other Alternatives” stated that “the defense cannot be based close enough to the threat during the boost-phase to kill it, even with the most optimistic assumptions about technical performance.”

Why, given the consistently negative evaluations, would the House Subcommittee want to revive this discredited weapons system? Hopefully, common sense will prevail, especially in this period of fiscal stringency, and we will allow the Airborne Laser 747 aircraft to rest in peace in the aircraft bone yard.

For additional reading on this topic, please see:

- The Airborne Laser from Theory to Reality: An Insider’s Account
- The Airborne Laser: Shooting Down What’s Going Up
- The Indo-US Strategic Relationship and Pakistan’s Security
Avoid Tree and Utility Conflict - Plan Ahead

For Immediate Release
For Further Information Contact Sonia Garth: (217) 355-9411 Ext 217

CHAMPAIGN, Ill. - Look up, look down. Follow this advice from the International Society of Arboriculture (ISA) before deciding what type of tree to plant and where to plant it. Proper tree and site selection will provide trouble-free beauty and pleasure for years to come.

One of the most important things to consider is the location of utility lines. "Trees that are small now can create significant problems in the future as they grow into maturity and into power lines," says Derek Vannice, Executive Director of the Utility Arborist Association (UAA). The location of utility lines should have a direct impact on tree and site selection. Both overhead lines and underground lines need to be considered.

Look up - Overhead Lines

Overhead lines for utilities such as electric, telephone, or cable television are the easiest to see but the most taken for granted. These lines may appear harmless but can be extremely dangerous. Children or adults climbing trees that have grown too close to utility lines can be severely injured or even killed if they accidentally come in contact with the wires.

If tall-growing trees are planted under utility lines, they require pruning to maintain clearance, because branches making contact with the wires can cause service interruptions. Utility pruning can leave the tree with an unnatural appearance. According to Vannice, "Planting a tall growing tree under a power line will not allow the tree to realize its proper size and form."

Proper selection and placement of trees around overhead utilities can help eliminate power outages, which reduces expenses for utilities and ratepayers. Correct selection will also eliminate potential public safety hazards and improve the appearance of landscapes.
Look down - Underground Lines

Potential problems that are much harder to recognize are those involving underground utilities such as water, sewer, and natural gas. Trees are much more than just what you can see: the root area of a tree is usually larger than the branch spread above ground. Tree roots and underground lines usually coexist without problems. However, if a tree is planted near one of these lines and the line later needs to be dug up for repairs, the result can be damage to the tree's root system.

The most important thing to remember is to determine the location of utility lines before planting. Often these lines are closer to the surface than we think, so verify their location with the utility company before digging the hole. Accidentally digging into a line can cause serious personal injury as well as costly interruption of utility service.

Planting Trees Around Homes

This illustration indicates approximately where trees should be planted in relation to utility lines.

- Tall Zone - Appropriate for trees that grow as tall as 60 feet. They should be planted at least 35 feet from the house to allow for root development and to minimize damage to the house.
- Medium Zone - Appropriate for trees that grow up to 40 feet tall. They should have planting areas at least four to eight feet wide. These trees provide decoration or framing for your house.
- Low Zone - For trees that grow no more than 20 feet tall, planted in the area extending at least 15 feet on either side of the utility wires. Low zone trees are good for areas with limited growing space, such as narrow planting areas (less than four feet wide).

Right Tree - Right Place

Planning before planting can help ensure that the right tree is planted in the right place. Proper tree selection and placement enhances your property value, prevents costly and sometimes unsightly maintenance trimming, and lowers the risk of damage to your home and property.
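The three planting zones described above amount to a simple lookup by a tree's expected mature height. The sketch below is just an illustrative encoding of that guidance; the function name and return format are invented, while the heights and distances are the ones stated in this release.

```python
# Map a tree's expected mature height (feet) to the planting zone
# guidance above. Hypothetical helper for illustration only.

def planting_zone(mature_height_ft):
    if mature_height_ft <= 20:
        return ("Low Zone",
                "Plant in the area extending at least 15 ft on either side "
                "of the utility wires; suits narrow areas under 4 ft wide.")
    if mature_height_ft <= 40:
        return ("Medium Zone",
                "Needs a planting area at least 4 to 8 ft wide, "
                "clear of overhead lines.")
    return ("Tall Zone",
            "Plant at least 35 ft from the house, well away from "
            "overhead lines, to allow for root development.")

zone, advice = planting_zone(60)
print(zone)  # -> Tall Zone
```

A nursery catalog's listed mature height is the input here; the point of the rule is that the decision is made before planting, not after the tree has grown into the wires.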
If you need help selecting the proper tree, consult a nursery, an ISA Certified Arborist, or an ISA Certified Arborist/Utility Specialist. For more information on tree selection and new tree planting, or to find a Certified Arborist, visit www.treesaregood.org. To learn more about trees and utilities, go to www.utilityarborist.org.

The International Society of Arboriculture (ISA), headquartered in Champaign, Ill., is a nonprofit organization supporting tree care research around the world. As part of ISA's dedication to the care and preservation of shade and ornamental trees, it offers the only internationally recognized certification program in the industry. For more information, or to find an ISA Certified Arborist, visit
lightning, electrical discharge accompanied by thunder, commonly occurring during a thunderstorm. The discharge may take place between one part of a cloud and another part (intracloud), between one cloud and another (intercloud), between a cloud and the earth, or earth and cloud; more rarely observed is the electrical discharge sometimes called "upward lightning," a superbolt between a cloud and the atmosphere tens of thousands of feet above the cloud. Lightning may appear as a jagged streak (forked lightning), as a vast flash in the sky (sheet lightning), or, rarely, as a brilliant ball (ball lightning). Illumination from lightning flashes occurring near the horizon, often with clear skies and the accompanying thunder too distant to be audible, is referred to as heat lightning.

Charges are believed to accumulate in cloud regions as ice particles and droplets collide and transfer electric charges, with smaller, lighter ice particles and droplets carrying positive charges higher and heavier particles and droplets carrying negative charges lower. In a lightning strike on the ground, a negatively charged leader propagates from a negatively charged cloud region in a series of steps toward the ground; as it nears the ground, a positively charged streamer rises to meet it. When the streamer meets the leader, an electrical discharge flows along the completed channel, creating the lightning flash. Long-lasting lightning flashes with lower current are more damaging to nature and humans than shorter flashes with higher currents.

Lightning may also be produced in snowstorms or in ash clouds created by volcanic eruptions. Space probes have photographed lightning on Jupiter and recorded indications of it on Venus, Saturn, Uranus, and Neptune. Benjamin Franklin, in his kite experiment (1752), proved that lightning and electricity are identical. See also lightning rod.
The relentless pursuit of extreme energy

Yes, the oil spewing up from the floor of the Gulf of Mexico in staggering quantities could prove one of the great ecological disasters of human history. Think of it, though, as just the prelude to the Age of Tough Oil, a time of ever increasing reliance on problematic, hard-to-reach energy sources. Make no mistake: we’re entering the danger zone. And brace yourself, the fate of the planet could be at stake.

It may never be possible to pin down the precise cause of the massive explosion that destroyed the Deepwater Horizon drilling rig on April 20th, killing 11 of its 126 workers. Possible culprits include a faulty cement plug in the undersea oil bore and a disabled cutoff device known as a blow-out preventer. Inadequate governmental oversight of safety procedures undoubtedly also contributed to the disaster, which may have been set off by a combination of defective equipment and human error.

But whether or not the immediate trigger of the explosion is ever fully determined, there can be no mistaking the underlying cause: a government-backed corporate drive to exploit oil and natural gas reserves in extreme environments under increasingly hazardous operating conditions.

The New Oil Rush and Its Dangers

The United States entered the hydrocarbon era with one of the world’s largest pools of oil and natural gas. The exploitation of these valuable and versatile commodities has long contributed to the nation’s wealth and power, as well as to the profitability of giant energy firms like BP and Exxon. In the process, however, most of our easily accessible onshore oil and gas reservoirs have been depleted, leaving only less accessible reserves in offshore areas, Alaska, and the melting Arctic.
To ensure a continued supply of hydrocarbons -- and the continued prosperity of the giant energy companies -- successive administrations have promoted the exploitation of these extreme energy options with a striking disregard for the resulting dangers. By their very nature, such efforts involve an ever increasing risk of human and environmental catastrophe -- something that has been far too little acknowledged.

The hunt for oil and gas has always entailed a certain amount of risk. After all, most energy reserves are trapped deep below the Earth’s surface by overlying rock formations. When punctured by oil drills, these are likely to erupt in an explosive release of hydrocarbons, the well-known “gusher” effect. In the swashbuckling early days of the oil industry, this phenomenon -- familiar to us from movies like There Will Be Blood -- often caused human and environmental injury. Over the years, however, the oil companies became far more adept at anticipating such events and preventing harm to workers or the surrounding countryside.

Now, in the rush to develop hard-to-reach reserves in Alaska, the Arctic, and deep-offshore waters, we’re returning to a particularly dangerous version of those swashbuckling days. As energy companies encounter fresh and unexpected hazards, their existing technologies -- largely developed in more benign environments -- often prove incapable of responding adequately to the new challenges. And when disasters occur, as is increasingly likely, the resulting environmental damage is sure to prove exponentially more devastating than anything experienced in the industrial annals of the nineteenth and early twentieth centuries.

The Deepwater Horizon operation was characteristic of this trend. BP, the company which leased the rig and was overseeing the drilling effort, has for some years been in a rush to extract oil from ever greater depths in the Gulf of Mexico.
The well in question, known as Mississippi Canyon 252, was located in 5,000 feet of water, some 50 miles south of the Louisiana coastline; the well bore itself extended another 13,000 feet into the earth. At depths this great, all work on the ocean floor has to be performed by remotely-controlled robotic devices overseen by technicians on the rig. There was little margin for error to begin with, and no tolerance for the corner-cutting, penny-pinching, and lax oversight that appears to have characterized the Deepwater Horizon operation. Once predictable problems did arise, it was, of course, impossible to send human troubleshooters one mile beneath the ocean’s surface to assess the situation and devise a solution.

Drilling in Alaska and the Arctic poses, if anything, even more perilous challenges, given the extreme environmental and climatic conditions to be dealt with. Any drilling rigs deployed offshore in, say, Alaska’s Beaufort or Chukchi Seas must be hardened to withstand collisions with floating sea ice, a perennial danger, and capable of withstanding extreme temperatures and powerful storms. In addition, in such hard-to-reach locations, BP-style oil spills, whether at sea or on land, will be even more difficult to deal with than in the Gulf. In any such situation, an uncontrolled oil flow is likely to prove lethal to many species, endangered or otherwise, which have little tolerance for environmental hazards.

The major energy firms insist that they have adopted ironclad safeguards against such perils, but the disaster in the Gulf has already made a mockery of such claims, as does history. In 2006, for instance, a poorly-maintained pipeline at a BP facility ruptured, spewing 267,000 gallons of crude oil over Alaska’s North Slope in an area frequented by migrating caribou.
(Because the spill occurred in winter, no caribou were present at the time and it was possible to scoop up the oil from surrounding snow banks; had it occurred in summer, the risk to the caribou herds would have been substantial.)

If It’s Oil, It’s Okay

Despite obvious hazards and dangers, as well as inadequate safety practices, a succession of administrations, including Barack Obama’s, have backed corporate strategies strongly favoring the exploitation of oil and gas reservoirs in the deep waters of the Gulf of Mexico and other environmentally sensitive areas. On the government’s side, this outlook was first fully articulated in the National Energy Policy (NEP) adopted by President George W. Bush on May 17, 2001. Led by former Halliburton CEO Vice President Dick Cheney, the framers of the policy warned that the United States was becoming ever more dependent on imported energy, thereby endangering national security. They called for increased reliance on domestic energy sources, especially oil and natural gas. “A primary goal of the National Energy Policy is to add supply from diverse sources,” the document declared. “This means domestic oil, gas, and coal.”

As the NEP made clear, however, the United States was running out of conventional, easily tapped reservoirs of oil and natural gas located on land or in shallow coastal waters. “U.S. oil production is expected to decline over the next two decades, [while] demand for natural gas will most likely continue to outpace domestic production,” the document noted. The only solution, it claimed, would be to increase exploitation of unconventional energy reserves -- oil and gas found in deep offshore areas of the Gulf of Mexico, the Outer Continental Shelf, Alaska, and the American Arctic, as well as in complex geological formations such as shale oil and gas.
“Producing oil and gas from geologically challenging areas while protecting the environment is important to Americans and to the future of our nation’s energy security,” the policy affirmed. (The phrase in italics was evidently added by the White House to counter charges -- painfully accurate, as it turned out -- that the administration was unmindful of the environmental consequences of its energy policies.)

First and foremost among the NEP’s recommendations was the development of the pristine Arctic National Wildlife Refuge, a proposal that generated intense media interest and produced widespread opposition from environmentalists. Equally significant, however, was its call for increased exploration and drilling in the deep waters of the Gulf, as well as the Beaufort and Chukchi Seas off northern Alaska. While drilling in the Arctic National Wildlife Refuge was, in the end, blocked by Congress, an oil rush to exploit the other areas proceeded with little governmental opposition.

In fact, as has now become evident, the government’s deeply corrupted regulatory arm, the Minerals Management Service (MMS), has for years facilitated the awarding of leases for exploration and drilling in the Gulf of Mexico while systematically ignoring environmental regulations and concerns. Common practice during the Bush years, this was not altered when Barack Obama took over the presidency. Indeed, he gave his own stamp of approval to a potentially massive increase in offshore drilling when on March 30th -- three weeks before the Deepwater Horizon disaster -- he announced that vast areas of the Atlantic, the eastern Gulf of Mexico, and Alaskan waters would be opened to oil and gas drilling for the first time.

In addition to accelerating the development of the Gulf of Mexico, while overruling government scientists and other officials who warned of the dangers, the MMS also approved offshore drilling in the Chukchi and Beaufort Seas.
This happened despite strong opposition from environmentalists and native peoples who fear a risk to whales and other endangered species crucial to their way of life. In October, for example, the MMS gave Shell Oil preliminary approval to conduct exploratory drilling on two offshore blocks in the Beaufort Sea. Opponents of the plan have warned that any oil spills produced by such activities would pose a severe threat to endangered animals, but these concerns were, as usual, ignored. (On April 30th, 10 days after the Gulf explosion, final approval of the plan was suddenly ordered withheld by President Obama, pending a review of offshore drilling activities.)

A BP Hall of Shame

The major energy firms have their own compelling reasons for a growing involvement in the exploitation of extreme energy options. Each year, to prevent the value of their shares from falling, these companies must replace the oil extracted from their existing reservoirs with new reserves. Most of the oil and gas basins in their traditional areas of supply have, however, been depleted, while many promising fields in the Middle East, Latin America, and the former Soviet Union are now under the exclusive control of state-owned national oil companies like Saudi Aramco, Mexico’s Pemex, and Venezuela’s PdVSA.

This leaves the private firms, widely known as international oil companies (IOCs), with ever fewer areas in which to replenish their supplies. They are now deeply involved in an ongoing oil rush in sub-Saharan Africa, where most countries still allow some participation by IOCs, but there they face dauntingly stiff competition from Chinese companies and other state-backed companies. The only areas where they still have a virtually free hand are the Arctic, the Gulf of Mexico, the North Atlantic, and the North Sea. Not surprisingly, this is where they are concentrating their efforts, whatever the dangers to us or to the planet.

Take BP.
Originally known as the Anglo-Persian Oil Company (later the Anglo-Iranian Oil Company, still later British Petroleum), BP got its start in southwestern Iran, where it once enjoyed a monopoly on the production of crude petroleum. In 1951, its Iranian holdings were nationalized by the democratic government of Mohammed Mossadeq. The company returned to Iran in 1953, following a U.S.-backed coup that put the Shah in power, and was finally expelled again in 1979 following the Islamic Revolution.

The company still retains a significant foothold in oil-rich but unstable Nigeria, a former British colony, and in Azerbaijan. However, since its takeover of Amoco (once the Standard Oil Company of Indiana) in 1998, BP has concentrated its energies on the exploitation of Alaskan reserves and tough-oil locations in the deep waters of the Gulf of Mexico and off the African coast.

“Operating at the Energy Frontiers” is the title of BP’s Annual Review for 2009, which proudly began: “BP operates at the frontiers of the energy industry. From deep beneath the ocean to complex refining environments, from remote tropical islands to next-generation biofuels -- a revitalized BP is driving greater efficiency, sustained momentum and business growth.” Within this mandate, moreover, the Gulf of Mexico held center stage. “BP is the leading operator in the Gulf of Mexico,” the review asserted.
“We are the biggest producer, the leading resource holder and have the largest exploration acreage position… With new discoveries, successful start-ups, efficient operations, and a strong portfolio of new projects, we are exceptionally well placed to sustain our success in the deepwater Gulf of Mexico over the long run.” Clearly, BP’s top executives believed that a rapid ramp-up in production in the Gulf was essential to the company’s long-term financial health (and indeed, only days after the Deepwater Horizon explosion, the company announced that it had made $6.1 billion in profits in the first quarter of 2010 alone).

To what degree BP’s corporate culture contributed to the Deepwater Horizon accident has yet to be determined. There is, however, some indication that the company was in an unseemly rush to complete the cementing of the Mississippi Canyon 252 well -- a procedure that would cap it until the company was ready to undertake commercial extraction of the oil stored below. It could then have moved the rig, rented from Transocean Ltd. at $500,000 per day, to another prospective drill site in search of yet more oil.

While BP may prove to be the principal villain in this case, other large energy firms -- egged on by the government and state officials -- are engaged in similar reckless drives to extract oil and natural gas from extreme environmental locations. These companies and their government backers insist that, with proper precautions, it is safe to operate in these conditions, but the Deepwater Horizon incident shows that the more extreme the environment, the more unlikely such statements will prove accurate.

The Deepwater Horizon explosion, we assuredly will be told, was an unfortunate fluke: a confluence of improper management and faulty equipment. With tightened oversight, it will be said, such accidents can be averted -- and so it will be safe to go back into the deep waters again and drill for oil a mile or more beneath the ocean’s surface.
Don’t believe it. While poor oversight and faulty equipment may have played a critical role in BP’s catastrophe in the Gulf, the ultimate source of the disaster is big oil’s compulsive drive to compensate for the decline in its conventional oil reserves by seeking supplies in inherently hazardous areas -- risks be damned. So long as this compulsion prevails, more such disasters will follow. Bet on it.

Michael T. Klare is a professor of peace and world security studies at Hampshire College. His most recent book is Rising Powers, Shrinking Planet: The New Geopolitics of Energy. A documentary movie version of his previous book, Blood and Oil, is available from the Media Education Foundation.

Copyright 2010 Michael T. Klare
<urn:uuid:6331a7fc-b400-46a4-b825-8b86fdf1c59f>
CC-MAIN-2013-20
http://www.resilience.org/stories/2010-05-19/relentless-pursuit-extreme-energy
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95747
3,072
2.53125
3
Roses are believed to be 35 million years old. Rosa gallica officinalis, growing wild in Asia, was first cultivated in the gardens of the Persians and Egyptians 5,000 years ago, and roses were later grown in Greek and Roman gardens, from which they migrated to France. It wasn't until the late eighteenth century, however, that cultivated roses were introduced into Europe from China. Throughout human history roses have come to symbolize love and beauty, but they have also been claimed as symbols of war and political power, especially in regard to the British Empire and the famous conflict known as the War of the Roses. Beloved by gardeners the world over, references to the rose in art and literature abound, from Shakespeare, “That which we call a rose by any other name would smell as sweet,” to Robert Burns, “Oh, my luve’s like a red, red rose,” and Gertrude Stein, “a rose is a rose is a rose.”
<urn:uuid:82f440d6-bb89-4a15-a839-078f371d3995>
CC-MAIN-2013-20
http://oberondesign.com/journal-covers/pocket-moleskine-covers/wild-rose-pocket-moleskine-cover.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96393
228
3.234375
3
What is a heart-lung machine? A heart-lung machine—also called a cardiopulmonary bypass machine—is a device that takes over the function of the body’s heart and lungs during open heart or traditional surgery. The machine circulates the essential oxygen-rich blood to the brain and other vital organs during open-heart surgery, allowing the cardiac surgery team to operate on a heart that is blood-free and still. When the surgery is complete, the heart is restarted and the heart-lung machine is disconnected. The heart-lung machine intercepts the blood at the right atrium (upper heart chamber) before it passes into the heart. Using a pump, the machine delivers the blood to a reservoir, which adds oxygen to the blood. The pump then sends the oxygen-rich blood to the aorta and through the rest of the body. The machine, which is operated by a trained and certified specialist called a perfusion technologist, also removes carbon dioxide and other waste products from the blood and delivers anesthesia and medications into the recirculated blood. Also, in some cases, it cools the blood. Cool blood lowers the body’s temperature, which helps to further protect the brain and other vital organs during surgery. If you or someone close to you is experiencing a potential heart problem, call 911 in case of an emergency. For more information, please contact us at 541-222-7218 or 888-240-6484.
<urn:uuid:22bd2e49-9417-480e-a84f-c819609ed211>
CC-MAIN-2013-20
http://www.peacehealth.org/sacred-heart-riverbend/services/heart-and-vascular/surgery/Pages/heart-lung-machine.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903823
309
3.5
4
A baker’s dozen is a familiar expression that has been around for generations and even centuries. Why has the baker’s dozen endured as a perpetual phrase? For ideas, products, even industries to perpetuate, they must connect to a sense of truth or emotional certainty. There are two values the baker’s dozen phrase aligns with, no matter what the conception. Those two values are integrity and generosity. But does baker’s dozen mean value? Are bakers just being generous? Is it because they are guilty of trying to cheat me? Have they been cheating me? Are bakers not giving me what I am expecting or paying for? The phrase has morphed over time to mean different things and serve different purposes, but continues to be understood by most generations. So where did it originate? The phrase is widely believed—I say only widely because it is described by Wikipedia as “having its source in the 13th century in one of the earliest English statutes called the Assize of Bread and Ale. The phrase was instituted during the reign of Henry III when Henry revived an ancient statute that regulated the price of bread according to the price of wheat.” Bakers (or brewers) who shorted their customers could be fined, flogged or subjected to other penalties. Unfortunately, the phrase appears to have been born out of a lack of integrity, not of all bakers but of enough of them that regulation was required for compliance. Is there better news for the baking industry earlier in history? Most world cultures perceived bread as foundational to a healthy diet and important for cultural strength. Bread was valued by the customer, and the expectation was that there would be fairness. Some bakers in practice, however, would short-weigh the product. This became a normal way of life, and the unfair practice reached the level of governmental officials. But it required leadership in this culture to correctly punish this act, even severely at times. 
How can bakers ensure that history doesn’t repeat itself? Is the new economic difficulty providing opportunity to yield to the temptation not to serve our customers appropriately? Roman history may also provide some guidelines to follow. Even though bakery products were still a dietary staple, it was during this time that baking became a profession. Circa 168 B.C., the Bakers Guild was formed. This guild of bakers united together to form Collegium Pistorum, which became a skilled profession where idea sharing, best practices, teaching and educating the art of the baking craft were highly valued. The group was so honored and of such high repute that one of its representatives held a seat in the Roman Senate. Evidence indicates these incorporated bakers regulated themselves through a certification process, by which they protected their profession/craft and at the same time separated themselves from deceptive practices. They were trusted. But more than just pure trust is needed to earn a customer’s business and respect. A folk story, “Baker’s Dozen: A Saint Nicholas Tale,” by Aaron Shepard, is a children’s story on generosity. In the story, Van Amsterdam was an honest baker, well-known for his products. He would give the customer exactly what they paid for: Nothing more, nothing less. One day, a mysterious woman changed his business paradigm by “cursing” his business until baker Van Amsterdam realized the value of generosity. He no longer saw a dozen as 12 but as an opportunity to grow customer value. He was rewarded financially and with success when he understood the mysterious lady’s meaning. How can we create greater value today? How do we share our profession in industry organizations and protect our businesses? How do we guarantee our quality, food safety and cleanliness? How can we serve customers more effectively? We have too much to lose when we aren’t honest in business dealings and don’t show the customer sincere efforts of integrity. 
We have too much to lose when we lack professionalism with our customers and industry partners. We must always seek to give customers what they want, but never forget to give customers what they need. Customers will most value our businesses when their needs are met and their desires are heard and delivered. We must treat each customer, supplier, employee and baker with honesty. History can repeat itself. But with integrity and generosity, we can reward the customer with value. Customers will perceive our business with a positive outlook and award us with their version of a baker’s dozen: Repeat business. SF&WB
<urn:uuid:7660dacc-a9b7-4ac9-bb99-3b037fa0f123>
CC-MAIN-2013-20
http://www.snackandbakery.com/articles/85829-a-baker-s-dozen-how-do-customers-perceive-this-today-
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972976
948
3.484375
3
Four different gases are in a mixture. Determine how many moles of each gas are present and add them all together to do the calculation in the problem. The exact chemical identity of the gases is not material to most gas law problems. The ChemTeam provides study resources in all standard topics for students in high school and Advanced Placement chemistry.
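The mole-bookkeeping described above can be sketched in a few lines. The mole amounts, volume, and temperature below are made-up illustrative values, not from the video; the point is that only the total moles matter for the ideal gas calculation (Dalton's law combined with PV = nRT):

```python
# Total pressure of an ideal gas mixture: only total moles matter,
# not the chemical identity of each component (Dalton's law + PV = nRT).

R = 0.08206  # ideal gas constant, L*atm/(mol*K)

def total_pressure(moles, volume_l, temp_k):
    """Return total pressure (atm) of an ideal gas mixture."""
    n_total = sum(moles)  # add the moles of every gas together
    return n_total * R * temp_k / volume_l

# Four gases with illustrative mole amounts in a 10 L vessel at 298 K
mixture = [0.25, 0.50, 1.00, 0.75]
print(round(total_pressure(mixture, volume_l=10.0, temp_k=298.0), 2))  # 6.11
```

Swapping any one gas for another with the same mole amount leaves the answer unchanged, which is exactly the point the summary makes.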
<urn:uuid:9e52aaa3-2256-476d-8885-47107b144b14>
CC-MAIN-2013-20
http://blip.tv/chemteam/ideal-gas-law-vii-3497473
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903882
105
3.53125
4
Best-selling Indian author Arundhati Roy is leading new protests against damming the Narmada River. In a recent six-day "Rally for the Valley," Roy, along with celebrities and activists, made a last-ditch attempt to stop the controversial Sardar Sarovar Dam from filling up. The project started in the 1940s as part of a development vision by India's first Prime Minister, Jawaharlal Nehru. Due to complex legal and logistical arguments between states sharing the river, the project was delayed until 1979. Major funding came first from the World Bank, but after intense protests and an independent review, the Bank pulled out in 1993. "Resettlement and rehabilitation of all those displaced is not possible. Environmental impacts have not been properly considered or adequately addressed," it concluded. The Indian Government has taken over funding the dam, the most expensive and second-largest dam in the world. Four years ago, courts stopped construction on the project in response to vigorous protests. But on February 18 of this year, the Indian Supreme Court sanctioned further construction, well aware that crucial rehabilitation conditions were simply not going to be met. The Supreme Court is even questioning Roy's criticism of their decision, saying that freedom of expression has limits. So far, close to US$2 billion has been spent on the dam, and the government believes it would be a waste of money to stop now. Sardar Sarovar is the first and largest in the Narmada Project, which includes 30 more large dams, 135 medium dams and 3,000 small dams along the river. When completed, the Sardar Sarovar dam will be 450 feet high, submerge 100,000 acres of land and displace a quarter million people. The displacement probably wouldn't be so bad if the "oustees," as they are called, actually received the adequate replacement land they deserve. But the government appears to be in no mood to give in to villagers' demands. 
Many oustees say they will stay on their land, as they would rather die than move away from their ancestral homes. The dam is intended to bring drinking water to 40 million people, irrigate farms and generate electricity. But Roy points to India's bitter experience with 3,000 dams already built, which have displaced 50 million people in the last 50 years. She says the failure of those dams to deliver promised results should be the best argument against the Narmada project. Supporters of the dam include farmers and politicians from the areas that will be irrigated after the reservoir fills. "Sardar Sarovar Dam--a ray of hope for thirsty millions in western India," read signs along the nearby road, regardless, it seems, of the human cost.
<urn:uuid:d3d671c2-f6f9-4056-a6db-5d01816480fa>
CC-MAIN-2013-20
http://hinduismtoday.com/modules/smartsection/print.php?itemid=4307
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96598
561
2.671875
3
Clean Water - Biodiesel - Frequently Asked Questions Biodiesel is a cleaner burning diesel fuel made from natural renewable sources such as soy or vegetable oils instead of petroleum oil. It is formulated to improve diesel engine performance and yet be non-toxic and biodegradable. Q: Is biodiesel safe to put in my engine? Yes, using biodiesel at a 20 percent blend with 80 percent petroleum diesel requires no engine modification, but achieves significant performance, emissions and aesthetic benefits. Biodiesel has been tested by government agencies, universities, transit authorities, and private industry in the United States, Canada, and Europe. Biodiesel is also recognized as an alternative fuel by the Department of Energy and the Environmental Protection Agency. Q: What are the benefits of using biodiesel? The first thing you'll notice is a cleaner smelling exhaust - some users have compared it to the smell of French fries. Biodiesel will also reduce emissions, provide a cleaner burning exhaust, improve lubrication, improve cetane levels, and help clean injectors, fuel lines, pumps and tanks. It is also safer to store and transport since it has a higher flash point than traditional diesel and is classified only as combustible, not flammable or explosive. Consider keeping a handy five-gallon container on board as emergency fuel. Q: How do I use biodiesel? Biodiesel can be used as a fuel additive and mixed with petroleum diesel, with a 20 percent blend being most popular. If you use biodiesel at higher blends, modifying fuel lines might be advisable. To determine how much fuel is 20 percent of your tank, divide the total volume by five. That's how much biodiesel you need to add to make a 20 percent blend in a full tank. Q: Is biodiesel safe for the environment? Biodiesel is safer for both the air and water. In its pure form, it is non-toxic and biodegradable, which is especially important in sensitive or protected waterway areas. 
It is also free of sulfur and aromatics, which reduces harmful emissions. When added to petroleum diesel, it makes fuel burn cleaner. However, any fuel spill still needs to be reported and cleaned up in accordance with U.S. Coast Guard regulations. Q: Will biodiesel harm my boat in any way? You should know that at higher blend levels, biodiesel's solvent properties, over time, may begin to react with certain types of rubber gaskets and hoses. You should also be aware that biodiesel will clean fuel tanks and lines of built-up residues, which will then accumulate in the fuel filter - you may have to change your fuel filters more frequently when first using biodiesel. Also, because of its solvent properties, you should promptly wipe up any spills that occur on your boat with soap and water so that your gelcoat and teak are not affected. Q: Where can I find biodiesel in my area? Click here to find a biodiesel distributor near you.
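The divide-by-five arithmetic quoted in the FAQ for a 20 percent blend can be sketched as follows. The tank sizes are illustrative, and the general formula also handles other blend percentages:

```python
# Gallons of biodiesel needed to reach a given blend in a full tank.
# For B20 (20 percent), this reduces to total tank volume divided by five.

def biodiesel_needed(tank_gallons, blend_percent=20.0):
    """Volume of biodiesel for the desired blend in a full tank."""
    return tank_gallons * blend_percent / 100.0

print(biodiesel_needed(25))   # 5.0  gal (25 / 5)
print(biodiesel_needed(100))  # 20.0 gal (100 / 5)
```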
<urn:uuid:f2e14056-153f-4af8-ba73-88091fd8b8d1>
CC-MAIN-2013-20
http://www.boatus.com/foundation/cleanwater/biodiesel.asp
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945798
624
2.9375
3
Healthy Farms - Healthy Agriculture Farm Assessment and Biosecurity Planning When developing guidelines, keep in mind the risks to your farm's biosecurity. Tailor management practices to address those risks, identifying the critical control points you should address. Start by asking the following questions about management practices on your farm: - Do we show any animals at fairs, shows, or exhibitions? Do we buy animals at these events? - Do we use bulls, rams, or bucks on our farm that we didn't raise ourselves? - Do heifers ever intermingle with heifers from another farm? - Do we buy replacement animals? - Do visiting vehicles cross the tracks of feed delivery or manure hauling equipment on our farm? - Do we have a designated area where haulers pick up cull animals and deadstock? - Do we host school tour groups or encounter non-agricultural visitors who want to look around? - Do any foreigners or people who have traveled outside of North America visit our farm? - Do veterinarians and other agri-service personnel arrive with clean boots and sanitize them before working with our animals? - Do we control flies and other insects, rodents, domestic animals, birds, and wildlife? Use the Risk Assessment Questionnaire and Scorecard. Used in conjunction, these documents will help you identify which actions will protect your farm against the greatest risks. Discuss ideas with your veterinarian or extension specialists. Incorporate these actions into a written plan. Involve your partners and employees in the process and communicate the reason for following each step of the plan. Implement the plan. Post appropriate signs, secure entryways to facilities as needed, and enforce the plan consistently. Make sure your friends and neighbors understand why they, too, must follow the plan. Reassess the plan annually. Your experience and new information may cause you to revise your plan. Reviewing your plan will help renew your commitment to biosecurity. 
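A scorecard of the kind referenced above can be sketched as a simple weighted tally of "yes" answers. The question wording and weights below are illustrative only, not taken from the actual ISU Risk Assessment Questionnaire and Scorecard:

```python
# Hypothetical biosecurity scorecard: each "yes" to a risk question adds
# weighted points; the total flags which practices to address first.
# Questions and weights are illustrative, not from the real ISU scorecard.

RISK_QUESTIONS = {
    "show or buy animals at fairs/exhibitions": 3,
    "use off-farm bulls, rams, or bucks": 2,
    "heifers intermingle with another farm's": 2,
    "visitor vehicles cross feed/manure routes": 1,
    "no designated deadstock pickup area": 2,
}

def risk_score(answers):
    """Total weighted risk from a {question: True/False} mapping."""
    return sum(RISK_QUESTIONS[q] for q, yes in answers.items() if yes)

farm = {q: False for q in RISK_QUESTIONS}
farm["show or buy animals at fairs/exhibitions"] = True
farm["no designated deadstock pickup area"] = True
print(risk_score(farm))  # 5
```

A higher total points to the critical control points worth writing into the plan first.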
Last modified March 17 2011 10:24 PM
<urn:uuid:fd51ddaf-3750-4d36-9f88-39e1a224a9cc>
CC-MAIN-2013-20
http://www.uvm.edu/~ascibios/?Page=assessment.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917532
423
2.875
3
The graph below shows the velocities of two objects of equal mass as a function of time. Forces FA, FB, ... FF acted on the objects during intervals A, B, ... F. Which of the statements below are correct descriptions of the magnitudes of the forces? (If A and E are, and the others are not, enter TFFFT).
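The graph itself is not reproduced here, so the problem cannot be answered outright, but the physics it tests can be sketched. On any interval where v(t) is a straight line, Newton's second law gives F = m * Δv/Δt, so ranking the force magnitudes reduces to ranking the magnitudes of the slopes. The breakpoints below are hypothetical:

```python
# Force magnitude on each interval of a piecewise-linear v(t) graph:
# F = m * dv/dt, so on a straight segment |F| = m * |Δv| / Δt.
# Times and velocities below are hypothetical, not from the actual graph.

def interval_forces(times, velocities, mass):
    """Force magnitude on each interval from piecewise-linear v(t)."""
    forces = []
    for i in range(len(times) - 1):
        dv = velocities[i + 1] - velocities[i]
        dt = times[i + 1] - times[i]
        forces.append(abs(mass * dv / dt))
    return forces

# Example: one object, three intervals of differing slope
t = [0.0, 1.0, 2.0, 3.0]
v = [0.0, 2.0, 2.0, -1.0]
print(interval_forces(t, v, mass=1.0))  # [2.0, 0.0, 3.0]
```

With equal masses, a steeper segment always means a larger force, regardless of the sign of the slope.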
<urn:uuid:f73b1cd3-cc9f-4892-bd0f-8843fcbede10>
CC-MAIN-2013-20
http://www.chegg.com/homework-help/questions-and-answers/graph-shows-velocities-objects-equal-mass-function-time-forces-fa-fb--ff-acted-objects-int-q2953874
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.844822
78
4.0625
4
By Mahdi Al-Kaisi, Department of Agronomy; Mark Hanna, Department of Agricultural and Biosystem Engineering; Mark Licht, Extension Field Agronomist Normally early spring soil moisture is a challenge when the soil profile is fully charged. Depending on the amount of snow we receive and duration of winter, there is a tendency for producers to enter fields at less-than-ideal soil conditions, especially when there is a short window for conducting field operations. Soil compaction caused by field traffic and machinery increases with high soil moisture. Over the past decade the size of Iowa farms has increased, leading to larger and heavier equipment. However, equipment size is only one factor among many causes of the soil compaction problem. Rushing to the field when the soil is wet, combined with the weight of equipment and traffic pattern in the field, can increase chances for severe soil compaction. Conducting field operations during wet field conditions compounds the amount of compaction occurring. Maximum soil compaction occurs when soil moisture is at or near field capacity (Figure 1) because soil moisture works as a lubricant between soil particles under heavy pressure from field equipment. Figure 1. Relationship between soil moisture and potential soil compaction. Indications of soil compaction during and immediately following a normal rainfall include slow water infiltration, water ponding, high surface runoff, and soil erosion. Additionally, soil compaction can be diagnosed by stunted plant growth, poor root system development (Photo 1), and potential nutrient deficiencies (i.e., reduced potassium uptake). These soil compaction symptoms are a result of increased bulk densities that affect the ideal proportion of air and water in the soil. Photo 1. Effect of soil compaction on root growth at three different soil bulk densities: Low, 0.7 g/cm3; Medium, 1.1 g/cm3; High, 1.6 g/cm3. 
The most efficient way to verify soil compaction is to use a tile probe, spade, or penetrometer to determine a relative soil density. Soil moisture conditions can have a significant effect on penetration resistance. For example, penetration resistance is much higher in dry soil than in wet soil because soil water acts as a lubricant for soil particles. Therefore, it is wise to determine soil compaction early in the season, or to compare observations and measurements from suspected areas with adjacent areas that have little chance of soil compaction due to traffic patterns.

Management decisions to minimize soil compaction

The most effective way to minimize soil compaction is to avoid field operations when soil moisture is at or near field capacity. Soil compaction will be less severe when soil tillage, fertilizer application and planting operations occur when the field is dry. Soil moisture can be determined using a hand ball test or observing a soil ribbon test. Properly adjusted tire size and correct air pressure for the axle load being carried is a second management tool. Larger tires with lower air pressure allow for better flotation and reduce pressure on the soil surface. Additionally, using larger tires that are properly inflated increases the "footprint" on the soil. A third management decision is to use the same wheel tracks to minimize the amount of land traveled across. Most damage occurs with the first pass of the implement. Using controlled traffic patterns can be done effectively by using implements that have matched wheel-tread configuration for soil preparation, planting, row cultivation, spraying and harvesting. Soil compaction can be a serious problem for Iowa farmers, but with proper farm management, compaction can be minimized. Remember to hold off soil tillage operations until soil conditions are drier than field capacity and look into the benefits of conservation tillage systems.

Top 10 Reasons to Avoid Soil Compaction
1. Causes nutrient deficiencies
2. Reduces crop productivity
3. Restricts root development
4. Reduces soil aeration
5. Decreases soil available water
6. Reduces infiltration rate
7. Increases bulk density
8. Increases sediment and nutrient losses
9. Increases surface runoff
10. Damages soil structure

Source: Iowa State University Extension publication PM 1901b - Understanding and Managing Soil Compaction -- Resource Conservation Practices

Mahdi Al-Kaisi is an associate professor in agronomy with research and extension responsibilities in soil management and environmental soil science. Mark Hanna is an extension agricultural engineer in agricultural and biosystems engineering with responsibilities in field machinery. Mark Licht is an Iowa State University Extension field agronomist serving Calhoun, Carroll, Crawford, Greene, Ida, Monona, and Sac counties.
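The tire "footprint" point made in the article can be illustrated with a rough back-of-the-envelope calculation. Surface pressure is approximately axle load divided by contact area, so a larger, properly inflated tire spreads the same load over more soil. The load and contact areas below are hypothetical, not values from the article:

```python
# Rough sketch of why larger, lower-pressure tires reduce surface
# compaction: the same axle load over a larger contact patch means
# lower ground pressure. Loads and areas are illustrative only.

def ground_pressure(axle_load_lb, contact_area_in2):
    """Approximate surface pressure (psi) under one axle."""
    return axle_load_lb / contact_area_in2

load = 12000.0  # lb on one axle (hypothetical)
print(ground_pressure(load, 600.0))   # small footprint  -> 20.0 psi
print(ground_pressure(load, 1200.0))  # doubled footprint -> 10.0 psi
```

Doubling the contact patch halves the surface pressure, which is the flotation benefit the article attributes to larger, correctly inflated tires.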
<urn:uuid:9a03a0b2-864a-42bb-aa84-e1f8e055f193>
CC-MAIN-2013-20
http://www.extension.iastate.edu/CropNews/2009/Issues/20090302.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.922803
947
3.140625
3
Colorado - From FamilySearch Wiki
- This article is about the western state of the U.S. For other uses, see Colorado (disambiguation).
Welcome to Colorado, The Centennial State
Most prestatehood settlers of Colorado began arriving at the time of the gold rush of 1858. They came from the northeastern and midwestern states, especially New York, Illinois, Missouri, Ohio, and Pennsylvania. Some came from the New Mexico Territory, and a few settlers came from the southern states, the Pacific Coast, and from other countries including England, Ireland, Germany, Sweden, Scotland, and Wales. Latter-day Saint settlements were made in the San Luis Valley in the 1870s and 1880s. The Plains Indians of Colorado, including the Arapaho, the Cheyenne, the Kiowa, and the Comanche, had largely been removed to Indian Territory in Oklahoma by 1870. The Ute Indians living in western Colorado did not give up their lands to white settlement until after 1880, when most of them were moved to reservations in Utah.
Carbonate | Greenwood | Guadalupe | Uncompahgre
- Find which county a town is in, what town a cemetery is in, even where a post office or building is by using the United States Geographical Survey's Geographical Names Information System.
- David Rumsey Map Collection is a large online collection of rare, old, antique historical atlases, globes, maps, charts plus other cartographic treasures.
- The Colorado GenWeb Project has a wealth of information and is a part of the larger USGenWeb Project. The USGenWeb Project provides internet information on every county in every state in the United States.
<urn:uuid:32fe0697-b421-476a-8f4f-eaecd118f124>
CC-MAIN-2013-20
http://www.familysearch.org/learn/wiki/en/index.php?title=Colorado&oldid=279790
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926875
426
2.859375
3
When I was little, I loved Benjamin Elkin’s story of The Big Jump, in which a young boy finds a stray dog he hopes to keep. The boy and the pup become fast friends, but unfortunately, in this land only kings are allowed to own dogs. The king, who can spring from the ground to the top of his castle in one leap, promises the boy he may keep the pup if he too can jump to the top of the castle. Well, the motivated lad goes home to practice but, try as he might, he can only scale two boxes. And then . . . an idea strikes! Returning to the palace, he finally does succeed in jumping to the top. How on earth does he accomplish this? Well, the king has never told him he must do it in a single bound! So the clever boy takes it one…step…at…a…time! Delighted by the boy’s “out of the box” approach, the king awards him the coveted dog. Writing is a lot like this. Our kids want to make The Big Jump, leaping from blank paper to final draft in one stride. But when they realize that their target is more reachable by taking smaller steps, they begin to believe they can do it. And in the end, they achieve a worthy goal: a polished composition they’re proud to share with others. . . . . . Do you struggle with teaching, editing, and grading your teen’s writing? Are you looking for ways to integrate the steps of the writing process into your lesson plans? Perhaps WriteShop is the answer. Visit www.writeshop.com and poke around. About WriteShop and Parent Testimonials may be good places to begin.
<urn:uuid:a5d6adc2-d16e-414f-bc29-801aee0c69bb>
CC-MAIN-2013-20
http://www.writeshop.com/blog/2008/09/05/inch-by-inch-a-cinch/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962183
368
2.6875
3
In February 2012, the US Supreme Court granted certiorari to once again decide the constitutionality of affirmative action in higher education. Next term, the Court will hear the case of Fisher v. University of Texas at Austin, which arises from a complaint made by Abigail Noel Fisher alleging that she was denied admission to the University of Texas while minority students with lower grade point averages and standardized test scores were admitted. In its decision below, the US Court of Appeals for the Fifth Circuit upheld the university's affirmative action program by finding that it passed strict scrutiny. The Supreme Court will be called on to address whether the University's program, which gives minority students an advantage on undergraduate college applications, violates the Equal Protection Clause of the Fourteenth Amendment. The issue of affirmative action stems from the divisive period of the Civil Rights Movement during the 1950s and 1960s. In the midst of civil strife and tense race relations, the US Congress, with the encouragement of President Lyndon B. Johnson, passed a series of laws which are popularly known as the Civil Rights Act of 1964. One section of that document has particular bearing on the issue of affirmative action — Title VII. Title VII prohibits discrimination by covered employers on the basis of race, color, religion, sex or national origin, and has at its heart the principle of the Equal Protection Clause of the Fourteenth Amendment, which states that "no state shall ... deny to any person within its jurisdiction the equal protection of the laws." However, the law alone did not end the inequality. Supplementing Title VII, Johnson declared in his Executive Order 11246 that all federal contractors be required to take affirmative action in hiring practices towards minorities. Subsequently, the practice became integrated into higher education in public universities. 
The first true challenge to the policy in regard to college admissions came in 1978 in the decision in Bakke v. Regents of the University of California. In Bakke, an applicant to the University of California Davis School of Medicine was denied admission while other, less academically qualified minority candidates were granted admission. The Supreme Court decided that the University's quota system for admitting minorities was far too rigid and thus violated the Equal Protection Clause of the Fourteenth Amendment. However, Justice Powell went further to state that despite the ruling, diversity in higher education was, in fact, a compelling interest justifying continued affirmative action. Many years and much debate later, two cases changed the standard of review for affirmative action cases. The cases of Freeman v. Pitts and Adarand Constructors, Inc. v. Pena, falling within three years of one another, illustrated the court's renewed interest in the policy. In Freeman, the court found that not only was diversity a compelling interest in pursuing affirmative action in higher education, but remedying past racial injustice also met this benchmark. Then, in Adarand, the court found that the standard of review for federal race- and ethnicity-based programs would be strict scrutiny, requiring:
- (1) a compelling governmental interest in promoting or restraining a certain action, and
- (2) that the action be narrowly tailored to that end.
The decision in Adarand reaffirmed the 1989 case of City of Richmond v. Croson, which applied the standard to state-based challenges. The next cases of importance came a decade later in Gratz v. Bollinger and Grutter v. Bollinger in 2003, collectively known as the Michigan cases. In Gratz, the two petitioners Jennifer Gratz and Patrick Hamacher — both white residents of the state of Michigan — were denied admission to the University of Michigan's undergraduate program. 
The petitioners filed suit against the university in 1997, claiming that their Fourteenth Amendment rights to equal protection were infringed upon. They sought declaratory and injunctive relief. The Court heard the case in conjunction with another case that had been brought against the University's law school, Grutter v. Bollinger. In Grutter, the petitioner Barbara Grutter was similarly denied admission to the University's law school based on the school's affirmative action policy that gave minorities an advantage in admissions. The Court split its decision on the two cases. In Gratz, the Court held that Michigan's point-based admissions system was too rigid and gave too much weight to race. However, the Court diverged from this opinion in Grutter, where it held that the more holistic approach to admissions utilized by the university's law school was constitutionally valid and that there was still a necessity of promoting diversity in higher education. Justice O'Connor, writing the opinion of the Court, stated:
The Court takes the Law School at its word that it would like nothing better than to find a race-neutral admissions formula and will terminate its use of racial preferences as soon as practicable. The Court expects that 25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today.
While the decision in Grutter held that affirmative action in higher education was still supported by a legitimate governmental interest, only four years later a fractured Court dealt a blow to the legal rationale underpinning that decision. In the case of Parents Involved in Community Schools v. Seattle School District No. 1, the Court found in its plurality opinion that remedying past racial injustice was no longer a sufficiently compelling governmental interest. 
Additionally, the Court held that denying a student admission to the school of their choice based on a pursuit of racial diversity violated that student's equal protection rights. However, Justice Kennedy, writing in concurrence, stated that "diversity, depending on its meaning and definition, is a compelling educational goal a school district may pursue." Further, while this case represented a changing view on the program, the Court has long held that affirmative action in higher education is uniquely privileged. In recent years, strongly competing opinions have emerged on the rationale underlying affirmative action in higher education. Former presidents of Princeton and Harvard, William G. Bowen and Derek Bok, respectively, are strong proponents of the policy. In their 1998 book, The Shape of the River, they offer empirical evidence showing the benefits of the policy for minorities. They cite reasons including increased college access for minorities, increased earning power of graduates and popular support for affirmative action as the basis for the continued necessity of the policy. Opponents of the program, including Marie Gryphon of the Cato Institute, have called their statistics into question and systematically undercut their arguments. Ms. Gryphon argued not only that their conclusions were based on flawed representative samples but also that the policy is actually detrimental to minority students. According to Gryphon, affirmative action produces no concrete benefits to minority groups but instead produces several significant harms. First, a phenomenon called the "ratchet effect" occurs when the preferences at a handful of top schools, including state flagship institutions, worsen racial disparities in academic preparation at all other American colleges and universities.
This occurs because top schools are able to create a class that is both racially diverse and academically equivalent, while less selective schools are forced to make greater concessions in order to create a racially diverse student body. Here, the schools are forced to accept less academically prepared minority candidates in order to achieve racial diversity because of the dwindling pool of applicants. This gap in preparation combines with other negative factors to create disparate graduation rates between minority and non-minority groups. Ms. Gryphon cites recent sociological research concluding that admission preferences hurt campus race relations. According to the studies, this in turn harms minority students' performance by activating fears of confirming negative group stereotypes, lowering grades and reducing college completion rates among minority students. Finally, Gryphon argues that the benefit of affirmative action programs may not be as great as previously thought. She states that recent research shows that skills, not credentials, can narrow socioeconomic gaps between white and minority families. Therefore, policymakers should end the harmful practice of racial preferences in college admissions. Instead, they should work to close the critical skills gap by implementing school choice reforms and setting higher academic expectations for students of all backgrounds. Further complicating the issue is the recent accidental release of the academic information of students at Baylor Law School, including their GPA and LSAT scores. However, even with hard data, there is still disagreement over how significant an advantage was given to those students. Some sources view the advantage as minuscule while others view it as significant.
Despite disagreement over the impact of the program, the incident has brought the subject of affirmative action to the forefront of the legal community's attention once again and will likely play some role in the upcoming case. Ultimately, the issue of the continued implementation of affirmative action is going to come down to the decision of a Court bearing little resemblance to the one that upheld the program in Grutter. Two justices who signed on to the opinion, Justices Stevens and Souter, have been replaced with ideologically comparable successors in Justices Sotomayor and Kagan. However, Kagan has recused herself due to her role in the case as former US solicitor general. Likewise, Chief Justice Rehnquist has been replaced with conservative Chief Justice John Roberts. However, O'Connor, the opinion writer for Grutter, has since retired and has been replaced by the more conservative Justice Samuel Alito. Remaining on the Court are Justices Thomas, Scalia and Kennedy, who dissented in Grutter, and Justices Breyer and Ginsburg, who wrote concurrences. The 5-4 Grutter majority seems to have dwindled to a 5-3 split in the other direction. The fate of affirmative action, the policy that Justice O'Connor predicted would stand for 25 years from her opinion in Grutter, will likely face tough opposition in the upcoming term. With the changing membership of the Court, the recent case law, and the recent research and events, it is not inconceivable that the decades-old practice could come to an end. James Craig earned his B.A. in political science and history from the University of Pittsburgh in May 2011. He is currently an associate editor of JURIST's Social Media service. The opinions expressed herein are solely those of the author and do not necessarily represent those of JURIST or any other organization.
Suggested citation: James Craig, The Unsure Fate of Affirmative Action, JURIST - Dateline, May 18, 2012, http://jurist.org/dateline/2012/05/james-craig-affirmative-action.php This article was prepared for publication by Elizabeth Imbarlina, the head of JURIST's student commentary service. Please direct any questions or comments to her at firstname.lastname@example.org
<urn:uuid:473d43c9-1246-4ffa-8f8a-e1865b0d855d>
CC-MAIN-2013-20
http://jurist.org/dateline/2012/05/james-craig-affirmative-action.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962181
2,158
3
3
How Healthy are Meat Substitutes? If you’ve seen meat substitutes at the local grocery store, you may be wondering how healthy they are for your diet. Here’s a closer look at the fat, protein and sodium content of meat substitutes. What Are Meat Substitutes? As the name might suggest, meat substitutes are products that are high in protein, and therefore can be used in place of beef, fish, chicken and other meat sources. The most well-known type of meat substitute is tofu, which is made from ground soybeans. However, a variety of beans are also high in protein, and thus are sometimes referred to as meat substitutes. One of the most important issues in any discussion of the nutrition of meat substitutes is their fat content. In general, when compared to beef, chicken and other types of meat, meat substitutes are relatively low in fat. The small amount of fat that does exist in meat substitutes is typically unsaturated. In contrast to saturated fats, which have been linked to the development of obesity, cardiovascular disease, and other serious conditions, unsaturated fats have actually been found to be effective in the treatment and prevention of some of these chronic conditions. Since there is not a whole lot of fat to begin with in meat substitutes, the calorie content is also relatively low. The goal of meat substitutes is to serve as an effective replacement for meats. In order to do this successfully, meat substitutes must be able to provide a high amount of dietary protein. When studied, meat substitutes such as tofu and garbanzo beans were found to be higher in protein content than some other forms of meat. Protein is essential for a number of bodily functions. Besides keeping your hair and nails looking healthy and shiny, protein helps to maintain and promote new muscle growth. Without adequate muscle development, you would be more likely to experience falls, breaks and fractures.
In order to get a complete picture of the nutritional status of meat substitutes, you have to look at the sodium content, which reflects how much dietary salt you consume when you eat a serving. Sodium can be detrimental to good health in a number of ways. Salt has been linked to the development of cardiovascular disease, and it can also contribute to bloating and weight gain. In comparison to beef, chicken and especially pork, meat substitutes are extremely low in sodium.
<urn:uuid:3366a890-dce6-4d7a-91f8-4ec5201f9451>
CC-MAIN-2013-20
http://www.3fatchicks.com/how-healthy-are-meat-substitutes/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955745
549
3.140625
3
Paget's Disease of Bone Paget's disease is a long-lasting (chronic) disorder that causes abnormal bone growth. Paget's disease most often affects the bones in the pelvis, spine, skull, chest, and legs. In healthy people, bone is constantly being replaced as bone tissue is broken down and absorbed into the body, then rebuilt with new cells. In the early stages of Paget's disease, bone tissue breaks down faster than it rebuilds. To make up for this breakdown process, the body speeds up the rebuilding process. This new bone, though, is often weak and brittle, causing it to break (fracture) more easily. Most cases of Paget's disease do not cause symptoms. But the most common symptoms, when they occur, are bone pain and deformed bones (bowed legs, enlarged skull or hips, or a curved backbone). Paget's disease is most common in middle-aged and older adults. It may be treated with medicines or, in rare cases, with surgery. eMedicineHealth Medical Reference from Healthwise To learn more visit Healthwise.org
<urn:uuid:f7a1cf2d-6ba1-4a2f-bd5f-d7fe1bed8742>
CC-MAIN-2013-20
http://www.emedicinehealth.com/script/main/art.asp?articlekey=133957&ref=128775
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935801
275
3.609375
4
Ophthalmologists are physicians who specialize in the care of the eyes. They conduct examinations to determine the quality of vision and the need for corrective glasses or contact lenses. Ophthalmologists also check for the presence of any disorders such as glaucoma or cataracts. They may also perform surgery to treat glaucoma, cataracts, retinal detachment or obstruction. An ophthalmologist has a broad knowledge of general medicine and also years of clinical training and experience in the diagnosis and treatment, both medical and surgical, of diseases and injuries that affect vision. An ophthalmologist should not be confused with an optometrist, who is licensed only to examine eyes and prescribe corrective lenses. Often patients of ophthalmologists seek care only after their vision has been impaired or when there is significant pain. Because serious damage may have taken place by then, it pays to heed early warning signs and to seek out expert advice promptly. Some of the signs are fuzzy vision, double vision, halos, crossed eyes, "cobwebs," "floaters," flashes of light, sensitivity to light, inflamed eyes, white pupil and "cat's-eye" pupil. It is advisable for adults past the age of 40 to have periodic checkups for glaucoma and, in later years, checkups for cataracts every two years. Patients who have sickle-cell anemia or diabetes should have their eyes examined every six months.
<urn:uuid:d2d4d413-919c-4988-92a2-9b6a10766f88>
CC-MAIN-2013-20
http://www.glenrosemedicalcenter.com/ophthalmology.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940795
314
3.09375
3
What is spina bifida? Spina bifida is a birth defect. Most children who have spina bifida do not have problems from it. It occurs when the bones of the spine (vertebrae) do not form properly around part of the baby’s spinal cord. It can affect how the skin on the back looks. And in severe cases, it can make walking or daily activities hard to do without help. The disease can be mild or severe. - The mild form is the more common form. It usually does not cause problems or need treatment. You can't see the defect. So most people don't know they have it until they get a back X-ray for another reason. - The severe forms are less common. There are two types: - Meningocele (say "muh-NIN-juh-seel"). Fluid leaks out of the spine and pushes against the skin. You may see a bulge in the skin. In many cases, there are no other symptoms. - Myelomeningocele (say "my-uh-loh-muh-NIN-juh-seel"). Although this is the most rare and severe form of spina bifida, it is the form most people mean when they say "spina bifida." Part of the spinal nerves push out of the spinal canal, and you may see a bulge in the skin. The nerves are often damaged, which can cause problems with walking, bladder or bowel control, and coordination. In some babies, the skin is open and the nerves are exposed. What causes spina bifida? The exact cause of this birth defect is not known. Experts think that genes and the environment are part of the cause. For example, women who have had one child with spina bifida are more likely to have another child with the disease. Women who are obese or who have diabetes are also more likely to have a child with spina bifida. What are the symptoms? Your child’s symptoms will depend on how severe the defect is. With a mild defect, your child may have no symptoms or problems. Or your child might have a dimple, a birthmark, or a hairy patch on his or her back.
In severe cases, you may see nerves coming out of your child’s back or swelling on the spine. A child with a severe defect may have nerve damage that affects daily living. The child may have little or no feeling in the legs, feet, or arms. And he or she may not be able to move those parts of the body. Children with a severe defect are sometimes born with fluid buildup in the brain (hydrocephalus). They may also have this problem after birth. It can cause seizures, intellectual disability, or sight problems. Some children also get a curve in the spine, such as scoliosis. Many children who have severe spina bifida develop an allergy to latex (a type of rubber). How is spina bifida diagnosed? A pregnant woman can have a blood test (maternal serum triple or quadruple screen) and a fetal ultrasound to check for spina bifida and other problems with the fetus. If test results suggest a birth defect, she can choose to have an amniocentesis. This test helps confirm if spina bifida exists. But the test also has risks, such as a chance of miscarriage. After birth, doctors can tell if a baby has spina bifida by how the baby’s back looks. The doctor may do an X-ray, an MRI, or a CT scan to see if the defect is mild or severe. How is it treated? Treatment depends on how severe the defect is. Most children with spina bifida have only a mild defect and may not need treatment. But a child with a severe defect may need surgery. If your child has problems from nerve damage, he or she may need a brace or a wheelchair, physical therapy, or other aids. There are things you can do to support your child: - Help your child be active and eat healthy foods. - Go to all scheduled doctor visits. - Talk to your doctor about early treatment.
Most children who have spina bifida and their parents work with people such as physical therapists or occupational therapists starting soon after the baby is born. Therapists can teach parents and caregivers how to do exercises and activities with the child. - Keep your child away from latex products if he or she has a latex allergy. - If your child has bladder control problems, help him or her use a catheter each day. It can help prevent infection and kidney damage in your child. - If your child has little or no feeling in the limbs and can't sense pain, he or she may get injured and not know it. You may need to check your child’s skin each day for cuts, bruises, or other sores. - When your child is ready to go to school, talk with teachers and other school workers. Public schools have programs for people ages 3 through 21 with special needs. - Take good care of yourself so you have the energy to enjoy your child and attend to his or her needs. - Ask for help from support groups, family, and friends when you need it. How can you prevent spina bifida? Before and during pregnancy, a woman can help prevent spina bifida in her child. - Get plenty of folic acid each day. Eat foods rich in folic acid, such as fortified breakfast cereals and breads, spinach, and oranges. Your doctor may recommend that you also take a daily vitamin with folic acid or a folic acid supplement. - If you take medicine for seizures or acne, talk with your doctor before you get pregnant. Some of these medicines can cause birth defects. - Don't drink alcohol while you are pregnant. Any amount of alcohol may affect your baby’s health. - Don't let your body get too hot in the first weeks of pregnancy. For example, don't use a sauna or take a very hot bath. And treat high fevers right away. The heat could raise your baby’s risk for spina bifida. All foods made from grains and sold in the United States have folic acid added.
It helps prevent children from being born with spina bifida. By: Healthwise Staff. Last revised: March 21, 2011. Medical review: John Pope, MD - Pediatrics; Louis Pellegrino, MD - Developmental Pediatrics
<urn:uuid:d7d0a1ed-65c1-41c8-8714-256a7460002d>
CC-MAIN-2013-20
http://www.pamf.org/health/healthinfo/index.cfm?A=C&hwid=hw169956
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933586
1,482
3.40625
3
The Allee effect is a biological phenomenon in which the per capita population growth of a species (or a population within that species) drops when the total number of members of the species drops. Stated differently, each female gives birth to more offspring when density is higher within a population. Named after American zoologist Walter Clyde Allee, the effect changed the common understanding of population growth. At the time of his studies, it was believed that populations would, in fact, thrive at a certain lower population, because more resources would be available to those fewer specimens. In other words, population growth would slow with higher numbers, and grow with smaller numbers. However, the work of Allee (and others) demonstrated that as population drops, so does the number of available mates—and the amount of group protection—and thus, the population growth slows. Conversely, the more members of a population there are, the faster growth occurs. Audience effect & drive theory The audience effect is the effect an audience has on a person—or a group of people—who are attempting to perform a certain task while being watched. An effect first studied by psychologists in the 1930s, it primarily shows up in two opposite extremes; many performers (athletes in particular) will actually raise their level of play when a large crowd is watching, while others will succumb to stress and self-consciousness and end up performing worse than their true talent level. In 1965, social psychologist Robert Zajonc suggested that the drive theory could account for the audience effect. Zajonc suggested that what determines whether a passive audience causes a positive or a negative effect on the performer depends upon the relative “easiness” of the task being performed. If the performer believes that she should win a fight, for instance, the audience effect will tend to motivate her to perform at a high level.
If she is unsure to begin with, the audience effect may facilitate a loss due to lower self-esteem. Related to the audience and drive effects is the Pygmalion Effect, which connects the positive expectations placed upon a performer to the resulting high quality of that performance. Named after the classic George Bernard Shaw play Pygmalion (upon which the film “My Fair Lady” is based), and sometimes called the “Rosenthal effect,” the effect is essentially a type of self-fulfilling prophecy. The opposite of the Pygmalion effect, which states that lower expectations may lead to lower performance levels and success, is known as the “golem effect.”The effects of Pygmalion have been studied at length in the world of athletics, business, and especially education. In business, the effect is seen most often in the way managers get results based on their expectations of their own employees; as former business professor J. Sterling Livingston noted in his studies of the effect, “The way managers treat their subordinates is subtly influenced by what they expect of them.” Similarly, the research conducted by Robert Rosenthal and Lenore Jacobson on the Pygmalion effect in the classroom suggests that when teachers expect higher performance from certain students, those students would more likely than not deliver. Landau-Kleffner Syndrome is an odd disorder; children who suffer from it—generally between the ages of five and seven—frequently lose the ability to properly express and understand language. Some people with this syndrome also suffer from seizures, and scientists are yet to understand why the disorder occurs. It’s all made stranger by the fact that the children usually develop their language skills just fine, and then seem to lose them randomly. Certain speech therapies can be helpful in managing the condition, but it is fairly difficult to treat. Aboulomania isn’t a very well-known disorder; essentially, it involves the occasional onset of crippling indecision. 
Aboulomania sufferers are normal in practically every other way, physically and mentally—they simply run into very serious problems whenever they’re faced with certain choices, to the extent that they struggle to regain normal function. Some aboulomania sufferers face incredible difficulties in everyday life, finding it nearly impossible to do simple things; even wondering whether or not they should go out for a walk can paralyze them with indecision. Many sufferers report that their incapacity to do what they want comes in spite of that fact that they’re aware of being physically fine—and so they seem to be imprisoned by the inability to fulfill their own will. Mary Hart Syndrome If you wanted bizarre, then you’re about to get it. It turns out that there are reported cases of people experiencing seizures upon hearing the voice of Mary Hart, a TV personality. A doctor who studied one of these claims said that the woman concerned really did fall into a seizure at the sound of Hart’s voice; he reported that the woman would also grip her head, looking distracted and confused. It is important to note, however, that this strange syndrome seems only to affect those who already have seizures for other reasons. There is only one type of amnesia Many people are only familiar with the popular culture version of amnesia, whereby someone forgets their past but can remember new things with no trouble; this is indeed a real condition, and is known as retrograde amnesia. Of course, the movies in which someone gets a bump on the head, forgets everything, and then remembers it later with another bump, are absurd. There is also a second form of amnesia most people haven’t heard of called anterograde amnesia, and this one is arguably much worse than the first one. If you are struck by anterograde amnesia, you’ll be able to remember your past—but you won’t be able to form any new long-term memories. 
Dreams play an important role in psychological counseling Some people believe that dreams are a valid way of understanding people’s problems, and that they are therefore used regularly in therapy—but this is not the case. This particular misconception has been propagated mostly by movies and television, which often show the fictional therapist lying their client down on a couch and asking him about his dreams. It’s important to note that dreams supposedly had a lot to do with the unconscious, according to Freud’s theories, and were very important. Many of Freud’s theories relating to dreams involved very young children. But more recent research shows that dreams in younger children have very little detail or subtext. Freud contributed greatly to psychology—but it’s generally agreed that his theories about human sexuality and dreams are total nonsense. You should not comfort babies too much when they cry This one is a bit controversial; there are a few contrasting opinions on it, to say the least. Some self-help authors have come up with the idea that parents should simply let their baby howl away. Most researchers, however, don’t think that comforting a baby will bring it any harm; some studies have even shown that ignoring a crying baby could well have detrimental effects. Importantly, some researchers have found that—at least during the first few months of its life—you should always console a crying baby. “The fear of walking or standing.” Imagine the implications of such a fear: the mere thought of standing or walking around fills you with utter terror. How in the world do you live a normal life? You certainly can’t travel around in a motorized chair all the time. Unfortunately for ambulophobes, human flying has not yet been achieved, either. It would seem that an individual suffering from this devastating phobia would be forced to confront their fear many, many times, every single day of their life. That doesn’t sound like fun. 
“The fear of making decisions.” As you can see, some phobias have profound psychological consequences. If someone is deathly afraid of making a decision, then how do they go about life? Do they instruct others to make a decision for them? Isn’t that a decision in itself? Do they simply follow a real life equivalent of stream-of-consciousness, simply “going with the flow”, and not interfering with the normal course of events? But isn’t THAT a decision, too? Decidophobes must be in a constant state of mental flux; as long as they contemplate a decision, they shouldn’t experience fear. It’s the act of actually making the decision that terrifies them. This essentially means that any sort of personal interaction with the world requires a decidophobe to overcome traumatizing fear. “The fear of knowledge.” What? The fear of knowledge? Indeed. No school. No education. No introduction to any new facts of any sort. Developing epistemophobia is akin to placing a cognitive cap on your development. You can’t learn anymore, unless you’re willing to withstand unrelenting terror throughout the entire process, which would obviously impair your ability to even comprehend the new material in the first place.
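The Allee effect described at the start of this piece can be sketched with a toy model. A common textbook form (an illustrative assumption, not taken from this text) multiplies logistic growth by a threshold term, so that per capita growth turns negative below a critical population size A:

```python
def allee_step(n, r=0.1, a=20.0, k=100.0, dt=0.1):
    """One Euler step of a strong-Allee-effect model:
        dN/dt = r * N * (N/A - 1) * (1 - N/K)
    Growth is negative below the threshold A and positive
    between A and the carrying capacity K."""
    return n + dt * r * n * (n / a - 1.0) * (1.0 - n / k)

def simulate(n0, steps=5000, **kwargs):
    """Iterate the model forward from an initial population n0."""
    n = n0
    for _ in range(steps):
        n = allee_step(n, **kwargs)
    return n

small = simulate(10.0)  # starts below the threshold A: collapses
large = simulate(30.0)  # starts above it: recovers toward K
```

With these illustrative parameters, a population starting at 10 dies out while one starting at 30 grows toward the carrying capacity of 100, matching Allee's observation that growth slows, rather than speeds up, as numbers drop.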
<urn:uuid:39bd0221-d67d-473d-accd-b2d0c72ed7a1>
CC-MAIN-2013-20
http://beben-eleben.tumblr.com/tagged/psychology
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962782
1,884
3.8125
4
TRAVELING INTO THE UNKNOWN ALL INFORMATION ON MARS IN THIS SITE HAS BEEN GATHERED FROM THE ESA AND NASA Mars has always held a fascination for people all around the world. It is one of our closest planetary neighbors and the outermost of the planets with hard, rocky terrain, orbiting just inside the asteroid belt and the giant gas planets such as Jupiter and Saturn. Since the first telescopic observations of Mars in the early 1600s, we have supposed that Mars is more like Earth than any other planet in the Solar System. Dr. Mark Hammergren from the Adler Planetarium in Chicago, Illinois also thinks that Mars is similar to the Earth. Click here to read an interview with him! The exploration of Mars began in earnest when the first robotic spacecraft started traveling there in the mid-1960s -- though not all that many missions actually turned out to be successful. The first spacecraft to fly past Mars was a Soviet mission launched in 1962, but it fell silent, losing contact with Earth before it actually reached Mars. Mariner 3, the United States' first Mars mission, failed because it was unable to unfold its solar panels and never reached Mars. Finally, the first successful flyby of Mars came in 1965, when Mariner 4 came within 6,118 miles of Mars and returned 21 close-up photos. Afterward, spacecraft returned images of a desolate, frosty planet, packed with giant craters similar to the ones seen on the Moon. Still, changes were to come, as robotic exploration in the 1970s showed Mars with different and new possibilities -- and not quite so sinister. A view of the not so sinister Mars Getting to Mars has never been easy for space agencies, including NASA and the ESA. The first attempt to land on Mars, by the Soviets in 1971, suffered a braking-rocket failure and crashed, returning no data. A companion mission landed successfully and returned barely 20 seconds of video before failing.
The United States successfully placed a satellite in Martian orbit in 1971 (Mariner 9, the first spacecraft from Earth to orbit another planet), then landed two Viking spacecraft in 1976. Robotic exploration suggests that Mars resembles two different worlds that have been put together: from latitudes around the equator to the south, ancient highlands scattered with channels suggest that water may once have flowed across the red planet. The northern third of the planet lies lower, with a much smoother surface -- perhaps the floor of a very old sea or the product of immeasurable lava flows or vast deposition of dust-bowl sediments. Thanks to recent discoveries from NASA's Mars Global Surveyor and Odyssey spacecraft, scientists now believe a giant ice sheet may lie under the northern plains. By searching for clear evidence of the history of water on Mars, future spacecraft will uncover many secrets of Mars in the past, present and future. Was there ever life on Mars? Will there be life on Mars in the future? Scientists continue to search for the answer.
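For readers who think in metric, the Mariner 4 flyby distance quoted earlier (6,118 miles) converts as follows, using the exact definition of the international mile:

```python
MILES_TO_KM = 1.609344  # exact: 1 international mile = 1.609344 km

mariner4_miles = 6118                        # closest approach, from the text
mariner4_km = mariner4_miles * MILES_TO_KM   # about 9,846 km
print(round(mariner4_km))                    # prints 9846
```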
<urn:uuid:2ea74555-6251-4136-a1d5-896aaf45e89c>
CC-MAIN-2013-20
http://library.thinkquest.org/03oct/01785/Space%20Travel%20Website%203/Pages/mars1.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933874
674
3.703125
4
Dictionary Meaning and Definition on 'Ascription' - assigning some quality or character to a person or thing; "the attribution of language to birds"; "the ascription to me of honors I had not earned" [syn: attribution] - assigning to a cause or source; "the attribution of lightning to an expression of God's wrath"; "he questioned the attribution of the painting to Picasso" [syn: attribution] - Ascription \As*crip"tion\, n. [L. ascriptio, fr. ascribere.] The act of ascribing, imputing, or affirming to belong; also, that which is ascribed. Wikipedia Meaning and Definition on 'Ascription' In traditional grammar, a predicate is one of the two main parts of a sentence, the other being the subject. The predicate is said to modify the subject. For the simple sentence "John is yellow," John acts as the subject, and is yellow acts as the predicate. The predicate is much like a verb phrase. In many current theories of linguistic semantics (notably truth-conditional semantics), a predicate is an expression that can be true of something. Thus, the expressions "is yellow" or "is like broccoli" are true of those things that are yellow or like broccoli, respectively. This notion is closely related to the notion of a predicate in formal logic, which includes more expressions than the former one, such as nouns and some kinds of adjectives. A predicate is one of the two main parts of a sentence (the other being the subject, which the predicate modifies). The predicate must contain a verb, and the verb requires, permits, or precludes other sentence elements to complete the predicate. These elements are: objects (direct, indirect, prepositional), predicatives, and adverbs. [See more about Ascription at Dictionary 3.0 Encyclopedia]
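The truth-conditional idea above, that a predicate is "an expression that can be true of something," can be modeled directly as a boolean-valued function that picks out a subset of a domain. The names below are purely illustrative:

```python
def is_yellow(thing):
    """A predicate in the truth-conditional sense: it is true of
    exactly the things in its extension (here, the set of yellow things)."""
    return thing in {"banana", "lemon", "sunflower"}

# "John is yellow": applying the sentence's predicate part to its
# subject yields a truth value.
sentence_is_true = is_yellow("lemon")

# As in formal logic, a predicate carves out a subset of the domain.
domain = ["lemon", "broccoli", "sunflower", "rock"]
extension = [x for x in domain if is_yellow(x)]
```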
<urn:uuid:5ee8abc7-a892-4833-8619-4a0f3de2686a>
CC-MAIN-2013-20
http://www.dictionary30.com/meaning/Ascription
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.902485
417
3.234375
3
Birth Control Behavioral Methods (cont.)

Francisco Talavera, PharmD, PhD

Other Methods of Periodic Abstinence

Several other methods of periodic abstinence exist.
- Rhythm method: Couples who practice the rhythm method, also called the calendar method, decide when to abstain from intercourse based on calendar calculations of the past 6 menstrual cycles. However, allowances are not made for the normal variations in the menstrual cycle that many women experience. This method is not as reliable as the symptothermal method of NFP or FAM.
- Cervical mucus method: Also called the ovulation method, the cervical mucus method involves monitoring cervical mucus only, without also recording basal body temperature or menstrual history. The safe period is considered to be any dry mucus days just after menstruation and the 10 or 11 days at the end of the cycle. Days of menstrual bleeding are deemed infertile; however, pregnancy can occur during menstruation. Vaginal infections, sexual excitement, lubricants, and certain medications can significantly affect the accuracy of cervical mucus assessment.
- Basal body temperature method: This method involves monitoring basal body temperature only, without also recording cervical mucus or other signs. Sex is avoided from the end of the menstrual period until 3 days after the increase in temperature.
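The "calendar calculations" the rhythm method relies on are not spelled out in this article. The classic textbook rule (an addition here, not from the source) estimates the fertile window by subtracting 18 from the shortest and 11 from the longest of the recorded cycle lengths:

```python
def fertile_window(cycle_lengths):
    """Calendar/rhythm-method estimate of the fertile window.

    Days are counted from the first day of menstruation:
    first fertile day = shortest recorded cycle - 18,
    last fertile day  = longest recorded cycle - 11.
    """
    first_day = min(cycle_lengths) - 18
    last_day = max(cycle_lengths) - 11
    return first_day, last_day

# Six recorded cycles, matching the article's six-cycle history:
window = fertile_window([28, 27, 29, 31, 26, 30])  # days 8 through 20
```

Even with the formula in hand, the article's caveat stands: because the window is derived only from past cycle lengths, normal cycle variation makes this less reliable than the symptothermal method.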
<urn:uuid:ce68a471-dda9-4111-afd3-f2d8b7f387b6>
CC-MAIN-2013-20
http://www.emedicinehealth.com/birth_control_behavioral_methods/page6_em.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901819
355
2.921875
3
Anyone can become a victim of violent behaviour in their home, regardless of their gender, age or family situation. If you're a victim or worried about someone else, you should know where to go for help.

What is domestic violence?

Domestic violence covers any incidents of violent behaviour in a family or a relationship. This covers abuse by one family member to another or between two people in a relationship. Another form of domestic violence is child abuse. This is when a child or young person is harmed, neglected or bullied by an older adult. You don't have to be physically hurt to be a victim of child abuse. If you're constantly being sworn at or told that you're unwanted, this may also be classed as emotional abuse.

If you're being hurt

If you've been physically or mentally harmed by a parent, carer, older relative or someone you're in a relationship with, you should remember that you are not to blame. Many victims of domestic violence believe that they have created or caused the problems that led to the violence. This is not the case. The only person to blame is the one who is committing the violent acts. You should call the police if you feel confident enough. They take crimes like this very seriously and will be able to act quickly. If you don't want to call the police, talk to a friend or a teacher that you can trust about your feelings.

If you know someone else is being hurt

If you're worried that one of your friends, parents or carers is a victim of violence in their own home, tell them about your concerns. It's best to help them talk through the situation and support them if they decide to report the matter themselves.

Teenage relationship advice

A recent NSPCC survey showed that a quarter of girls and 18 per cent of boys have experienced physical violence in a relationship. Abuse in teen relationships doesn't just cover physical violence. Other examples of this type of abuse include:
- pressuring a partner into having sex
- controlling behaviour
- unnecessary jealousy or anger

Remember that abuse in a relationship is never okay. Everyone deserves to be treated with respect from their partner. The 'This is Abuse' website has more information about:
- what behaviour counts as abuse
- how to recognise the signs of abuse
- the organisations who can help you if you're being abused by your partner

Organisations with further information

If you're suffering from domestic violence, or you're worried that someone you know may be suffering, there are a number of organisations you can contact for helpful advice. If you're a victim of domestic abuse and you're worried about what will happen if you report it to the police, you should call ChildLine on 0800 1111. They'll be able to let you know what will happen if you tell someone about your situation and help you work out what to do next. ChildLine is open 24 hours a day and seven days a week. Calls to ChildLine are free and they'll never appear on your phone bill. You may also be able to find useful information on their website.

The National Society for the Prevention of Cruelty to Children (NSPCC) operates a helpline that offers confidential advice for people who are worried about cases of possible child abuse. The NSPCC cannot investigate suspected child abuse cases, but they can report your concerns to the police or your local children's services team. The number is 0808 800 5000 and it's open 24 hours a day.

This content is subject to Crown Copyright
<urn:uuid:3af50957-d94f-4e11-b5ba-46d5d388eed7>
CC-MAIN-2013-20
http://www.findlaw.co.uk/law/criminal/crimes_a_z/10234.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966413
733
3.03125
3
Industrial production (IP) indices measure the current output of the specified manufacturing, energy, or mining industry as a ratio to the output of the base year (which is set to be equal to 100). The total index is most heavily influenced by manufacturing, reflecting the large share of manufacturing in the economy. In 1999, the latest year for which data are available, manufacturing accounted for 88.8 percent of the total value added of the three industries, with 4.8 percent for mining and 6.4 percent for utilities. Over the last 10 years, manufacturing's output grew at twice the rate of utilities', while mining's output stayed around its base-year level.

Changes in the output levels of the manufacturing, mining, and utility industries have a direct impact on the demand for transportation, because their outputs have higher weight/value ratios than those of other sectors in the economy, and hence more transportation service is needed to produce a unit of output in these three industries. According to the U.S. Transportation Satellite Accounts published by the Bureau of Transportation Statistics, producing $1 worth of output requires 3.5 cents of transportation service as input in the manufacturing industry, 4.3 cents in the mining industry, and 2 cents in the utility industry. In terms of modal distribution, more than three-fifths of the manufacturing industry's transportation demand is for trucking service, while the mining industry and the utility industry rely more on railroad service.

| Industrial Production Index (Jan-92=100) | Jun-02 | Jul-02 |
|---|---|---|
| Percent change from previous month | 0.61 | 0.07 |
| Percent change from previous month | 0.68 | 0.17 |
| Percent change from previous month | 1.23 | 2.31 |
| Percent change from previous month | 1.20 | -1.23 |

NOTES: The three Major Industry Groups are manufacturing, utilities, and mining.
Currently, industries are classified using the Standard Industrial Classification (SIC) groups, but this will change to the North American Industry Classification System (NAICS) with the 2002 revision. There is more information at the Federal Reserve Board's web site: http://www.federalreserve.gov/Releases/G17/sdtab1.pdf. Data from April to July 2002 are preliminary. The base period of the original index is the 1992 annual index. The month of January 1992 is set to be the new reference point (=100) by dividing the values of the original index by the value of January 1992 in the original index. It is important to point out that this process changes only the reference point, and not the base period of the index, because the weight structure of the index did not change. SOURCE: Federal Reserve, "Industrial Production and Capacity Utilization" Statistical Release; August 15, 2002; available at: http://www.federalreserve.gov/releases/g17/download.htm.
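The re-referencing described in the note (dividing every value of the original index by its January 1992 value) and the month-over-month percent changes shown in the table can be sketched as follows; the series values here are illustrative, not actual Federal Reserve data:

```python
def rebase(series, ref_period):
    """Re-reference an index so that ref_period equals 100.

    Only the reference point changes, not the base period:
    every value is divided by the same constant, so relative
    movements (and the index's weight structure) are untouched.
    """
    ref = series[ref_period]
    return {period: 100.0 * value / ref for period, value in series.items()}

def pct_change(previous, current):
    """Percent change from the previous month."""
    return 100.0 * (current - previous) / previous

# Illustrative values only, not actual IP data.
original = {"1992-01": 80.0, "2002-06": 120.0, "2002-07": 120.8}
rebased = rebase(original, "1992-01")  # 1992-01 becomes 100.0
```

Because every entry is scaled by the same factor, `pct_change` gives identical growth rates for the original and the re-referenced series, which is why the re-referencing does not affect the percent changes reported in the table.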
<urn:uuid:4f4426f5-e649-467a-a951-8a58f72bcff8>
CC-MAIN-2013-20
http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/transportation_indicators/august_2002/Economy/html/Industrial_Production_Indices_Mining_Utilities_and_Manufacturing.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.889358
608
2.796875
3
May 10, 2012 Forests in the Amazon Basin are expected to be less vulnerable to wildfires this year, according to the first forecast from a new fire severity model developed by university and NASA researchers. Fire season across most of the Amazon rain forest typically begins in May, peaks in September and ends in January. The new model, which forecasts the fire season's severity from three to nine months in advance, calls for an average or below-average fire season this year within 10 regions spanning three countries: Bolivia, Brazil and Peru. "Tests of the model suggested that predictions should be possible before fire activity begins in earnest," said Doug Morton, a co-investigator on the project at NASA's Goddard Space Flight Center in Greenbelt, Md. "This is the first year to stand behind the model and make an experimental forecast, taking a step from the scientific arena to share this information with forest managers, policy makers, and the public alike." The model was first described last year in the journal Science. Comparing nine years of fire data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite, with a record of sea surface temperatures from NOAA, scientists established a connection between sea surface temperatures in the Pacific and Atlantic oceans and fire activity in South America. "There will be fires in the Amazon Basin, but our model predictions suggest that they won't be as likely in 2012 as in some previous years," said Jim Randerson of the University of California, Irvine, and principal investigator on the research project. Specifically, sea surface temperatures in the Central Pacific and North Atlantic are currently cooler than normal. Cool sea surface temperatures change patterns of atmospheric circulation and increase rainfall across the southern Amazon in the months leading up to the fire season. 
"We believe the precipitation pattern during the end of the wet season is very important because this is when soils are replenished with water," said Yang Chen of UC Irvine. "If sea surface temperatures are higher, there is reduced precipitation across most of the region, leaving soils with less water to start the dry season." Without sufficient water to be transported from the soil to the atmosphere by trees, humidity decreases and vegetation is more likely to burn. Such was the case in 2010, when above-average sea surface temperatures and drought led to a severe fire season. In 2011, conditions shifted and cooler sea surface temperatures and sufficient rainfall resulted in fewer fires, similar to the forecast for 2012. Building on previous research, the researchers said there is potential to adapt and apply the model to other locations where large-scale climate conditions are a good indicator of the impending fire season, such as Indonesia and the United States. Amazon forests, however, are particularly relevant because of their high biodiversity and vulnerability to fires. Amazon forests also store large amounts of carbon, and deforestation and wildfires release that carbon back to the atmosphere. Predictions of fire season severity may aid initiatives -- such as the United Nation's Reducing Emissions from Deforestation and forest Degradation program -- to reduce the emissions of greenhouse gases from fires in tropical forests. "The hope is that our experimental fire forecasting information will be useful to a broad range of communities to better understand the science, how these forests burn, and what predisposes forests to burning in some years and not others," Morton said. "We now have the capability to make predictions, and the interest to share this information with groups who can factor it into their preparation for high fire seasons and management of the associated risks to forests and human health." 
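The model's core idea, using ocean temperatures months earlier as a leading indicator of fire-season severity, can be illustrated with a toy least-squares fit. Everything below is a hypothetical sketch: the numbers are synthetic, and the actual model described in Science uses gridded NOAA sea surface temperatures and MODIS fire detections for each of the ten regions, not a single regression line.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys = a * xs + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Synthetic pre-season sea-surface-temperature anomalies (deg C)
# paired with fire counts observed in the following fire season.
sst_anomaly = [-0.4, -0.1, 0.2, 0.5, 0.8, -0.3, 0.1, 0.6, -0.2]
fire_counts = [80, 95, 110, 130, 150, 85, 100, 140, 90]

a, b = fit_line(sst_anomaly, fire_counts)

# A cooler-than-normal year (negative anomaly, as in 2012)
# yields a below-average forecast.
forecast = a * (-0.3) + b
```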
<urn:uuid:aac3c537-bc39-448a-be9a-a2a0109d4c00>
CC-MAIN-2013-20
http://www.sciencedaily.com/releases/2012/05/120510225006.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945415
736
3.15625
3
ancient city-state in the NE Peloponnesus: it dominated the Peloponnesus from the 7th cent. until the rise of Sparta

See Argos in American Heritage Dictionary 4:
A city of ancient Greece in the northeast Peloponnesus near the head of the Gulf of Argolis. Inhabited from the early Bronze Age, it was one of the most powerful cities of ancient Greece until the rise of Sparta.
<urn:uuid:3a4707b8-ccf7-4b0e-8d4e-330118fb043c>
CC-MAIN-2013-20
http://www.yourdictionary.com/argos
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00019-ip-10-60-113-184.ec2.internal.warc.gz
en
0.914613
94
2.96875
3
In contracted braille

Author David Adler writes biographies that appeal to children, scrupulously using facts about a person's life to tell an engaging story. Louis Braille was born in 1809 in a small village outside of Paris. At the age of three, he accidentally injured his eye while playing with his father's tools and soon after lost all the sight in both of his eyes. Encouraged by a local priest and his schoolteacher, Louis went on to live and study at the National Institute of the Blind in Paris in 1819, where, at the age of 11, he began to experiment with a new raised code for letters based on a 'night writing' code used by the French army. Wrote Louis in his diary, 'If my eyes will not tell me about men and events, ideas and doctrines, I must find another way.'

Although Louis introduced his revolutionary braille code in 1824, the French government did not officially approve his dot system, simply called 'braille,' until 1854, two years after Louis's death. A memorial plaque in his village reads, 'He opened the door to knowledge for all those who cannot see.' Eventually, braille became the standard system used throughout the world. Louis Braille's alphabet, along with numbers, appears in both print and braille at the back of the book.
<urn:uuid:93662d0c-07e2-45c8-a269-b08f2596d03c>
CC-MAIN-2013-20
http://www.nbp.org/ic/nbp/LOUIS.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982138
293
3.40625
3
2007, 283 pages, Hardcover

A demographic sea change is moving across the American workplace, as unprecedented in scope as it was unforeseen. As recently as 1990, the U.S. Census believed that Hispanics would not overtake African Americans to become the nation's largest minority until 2020. It was an arresting development, therefore, when, in 2002, the U.S. Census reported that there were more Hispanics than blacks, and that the United States was the fastest-growing Spanish-speaking nation in the world. "Hispanics have edged past blacks as the nation's largest minority group, new figures released today by the Census Bureau showed. The Hispanic population in the United States is now roughly 37 million, while blacks number about 36.2 million," Lynette Clemetson declared in the New York Times in January 2003, documenting the federal government's official acceptance of the historic demographic developments in the United States in the first decade of the twenty-first century. Every year since then the Census Bureau, along with other federal agencies, has continued to document the structural changes in the American population, changes that herald the increasing proportion of Hispanics (and thus of Hispanic employees) in ways that, a mere generation ago, were unimaginable.

Consider a few tantalizing facts:
- On average, Hispanics are almost a decade younger than the general population.
- More than a third of Hispanics are less than eighteen years old.
- Fertility rates of Hispanics are higher than the natural replacement level.
- More than 34 million Mexicans have a legal claim of some kind to immigrate to the United States.
- Hispanics who attain graduate degrees earn more than 15 percent more than their non-Hispanic counterparts.

These changes have not unfolded without comment.
"It is a turning point in the nation's history, a symbolic benchmark of some significance," Roberto Suro, director of the Pew Hispanic Center, said of the emergence of Hispanics as the largest minority, displacing African Americans from their historic position. "If you consider how much of this nation's history is wrapped up in the interplay between black and white, this serves as an official announcement that we as Americans cannot think of race in that way any more."2 Other voices have been raised in alarm. "The persistent inflow of Hispanic immigrants threatens to divide the United States into two peoples, two cultures, and two languages. Unlike past immigrant groups, Mexicans and other Hispanics have not assimilated into mainstream U.S. culture, forming instead their own political and linguistic enclaves—from Los Angeles to Miami—and rejecting the Anglo-Protestant values that built the American dream," Samuel Huntington of Harvard University wrote in the pages of Foreign Affairs.3 That sweeping demographic changes are transforming America's workforce is undeniable. Consider two facts: - Hispanics in 2005 were 14 percent of the nation's population-but 22 percent of workers. - If things continue on their present course, Hispanics in 2050 will represent 32 percent of the nation's population, but 55 percent of workers. These demographic changes, with Hispanics fast expanding throughout the American workforce, come at a time when there is an emerging literature documenting how human resource management (HRM) has a direct impact on an organization's performance. Empirical economic analysis for more than a decade has continued to demonstrate that companies with HRM strategies that involve paying higher wages relative to their competitors reap significant benefits in reduced absenteeism and employee turnover and increased productivity. 
One of the most compelling analyses about the strategic role HRM plays in the success of an organization was conducted by James Baron, Michael Hannan, and M. Diane Burton, who examined how the philosophical approaches of start-up founders in Silicon Valley affected the success of their enterprises.6 In firms where the founders espoused and embraced a commitment-based employment model, which was defined, in part, by explicit commitment by management to the employees and endeavored to complement the firm's and employees' cultural worldview, productivity was higher and HRM conflicts less common.7 The compelling lesson of this large-scale study demonstrates that an articulated, proactive use of HRM as a management tool is a successful strategy for managing employee relations, streamlining problem resolution, and nurturing a culture of mutual commitment and reciprocal loyalty. In essence, when general management clearly articulated and defined the role of HRM, making human resource (HR) professionals an integral part of the organization's strategy for success, both profit and shareholder value were enhanced. Furthermore, a recognition that HRM consists of economic, social, and cultural components is an integral aspect of creating the proper framework for the successful administration of an organization's "human capital." This important distinction has gained currency only recently. "Human assets are hard to evaluate quantitatively, so they don't show up on the balance sheet," James Baron and David Kreps write. "But whether or not the bean counters can assign dollar values to them, assets they are, and general managers must think of human resources as a form of capital." In this present decade, building on the foundation of a growing body of empirical studies and data, HR managers have matured as partners in the strategic management of their organizations, and HRM has become more fine-tuned as a science that employs an increasingly skilled corps of professionals. 
To a significant degree the newfound importance of HRM arises from management's belief that HRM has become an integrated structural system, not a random collection of disparate functions centered on payroll and labor law compliance functions. This approach, often referred to as complementarities, has required a seismic shift in HRM, which stands in contrast to the way HR professionals have historically seen themselves.

"There are many reasons the HR department has been slow to change, not the least of which is the widespread belief that human resources is simply a fact of organizational life that has little or no effect on business performance," argues Edward Lawler of the University of Southern California. The time has come, however, to see beyond this limited view. Technology is driving a revolution in the way HR administration can be managed, giving HR executives new data-collection and analysis tools with which they can more easily demonstrate the importance of effective human capital management to strategy and the bottom line. Furthermore, large administrative cost savings can also be realized by outsourcing activities that don't contribute to shareholder value. Companies that hone HR's contributions in both the human capital strategy and administrative realms can build a significant competitive advantage.

It is in this pursuit of competitive advantage that this book explores the Hispanization of the nation's workforce and the emergence of HRM as a strategic management tool.
<urn:uuid:01aad4b1-9c4e-47e9-a883-87f48d10a644>
CC-MAIN-2013-20
http://www.shrm.org/PUBLICATIONS/BOOKS/HISPANIC/Pages/excerpt.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952702
1,368
2.875
3
Literacy expert Pam Allyn shares her model for interpreting and implementing the Common Core.

In this special 15-minute interview, you'll discover why visual learning is a proven method for developing social / emotional skills and critical thinking in the classroom.

The award-winning science author shares his thinking on the Common Core and the increased focus the new standards place on nonfiction.

Discover what the standards mean for the short- and the long-term when it comes to assessment and using student data.

How will the Common Core impact the gap between the highest- and lowest-performing readers and writers? What can teachers do to help struggling students reach grade level?

Inspire students to learn about our world's population and how it impacts food, wildlife habitats, and the global status of women and girls.

There are new resources being developed every day to support the new Common Core State Standards. But as an educator, how can you sort through these materials and find the ver...

Tips and ideas on how games can make your lessons fun and improve student learning. Resources and lesson tips for your classroom.

As schools everywhere make the transition to new common core standards, teachers everywhere are trying to wrap their minds around the why, the how and the what now.

How do you make sure your curriculum helps each of your students to learn essential math skills? Take a listen to get tips, ideas and suggestions...

Financial education is an essential skill for students to learn, yet fitting it into your already jam-packed curriculum seems nearly impossible. That's where we come in...
<urn:uuid:baffe065-ef04-41d2-aa86-32b794fa54cc>
CC-MAIN-2013-20
http://www.weareteachers.com/hot-topics/watcasts
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940289
356
3.59375
4
Images: None on this site.

Northern hard-leaf (English)

Description: Much-branched shrub or small tree. Branchlets covered in grey woolly hairs. Leaves densely spirally arranged, lanceolate-elliptic, up to 15 × 5 mm, dark green above, densely white-woolly below; margin entire, rolled under. Flowers small and inconspicuous, greenish or whitish, in axillary or terminal heads or panicles. Fruit an obovoid capsule, 6-7 mm long, hairless, crowned with the base of the calyx, dehiscent.

Derivation of specific name: paniculata: with a branched racemose or cymose inflorescence (panicle)

Habitat: Along forest margins and among rocks in stunted Brachystegia woodland.

Altitude range (metres): Above 1670 m

Flowering time: Dec - Jun

Worldwide distribution: South Africa and the Chimanimani and "Himalaya" Mts, Mozambique and Zimbabwe

Red data list status:

Insects (whose larvae eat this species):

Other sources of information about Phylica paniculata:
- Flora of Mozambique: Phylica paniculata
- African Plant Database: Phylica paniculata
- Biodiversity Explorer (Biodiversity of southern Africa): Phylica paniculata
- EOL (Encyclopedia of Life): Phylica paniculata
- ePIC (electronic Plant Information Center): Phylica paniculata
- Flora Zambesiaca: Phylica paniculata
- Google: Phylica paniculata
- Germplasm Resources Information Network: taxonomy for plants report for Phylica paniculata
- IPNI (International Plant Names Index): Phylica paniculata
- JSTOR Plant Science: Phylica paniculata
- Kew Herbarium catalogue: Phylica paniculata
- Tropicos: Phylica paniculata
- West African Plants database: Phylica paniculata

Copyright: Mark Hyde, Bart Wursten and Petra Ballings, 2002-13

Hyde, M.A., Wursten, B.T. & Ballings, P. Flora of Zimbabwe: Species information: Phylica paniculata. http://www.zimbabweflora.co.zw/speciesdata/species.php?species_id=137760, retrieved 18 May 2013
<urn:uuid:14225cb4-32ed-492c-9a56-858da0585bb6>
CC-MAIN-2013-20
http://www.zimbabweflora.co.zw/speciesdata/species.php?species_id=137760
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.676885
584
3.40625
3
1911 Encyclopædia Britannica, Volume 22: Rebecca Riots

REBECCA RIOTS, the name given to some disturbances which occurred in 1843 in the counties of Pembroke, Carmarthen, Glamorgan, Cardigan and Radnor, after a slight outbreak of the same nature four years previously. During a period of exceptional distress the rioting was caused mainly by the heavy charges at the toll-gates on the public roads in South Wales, and the rioters took as their motto the words in Genesis xxiv. 60, "And they blessed Rebekah, and said unto her, Thou art our sister, be thou the mother of thousands of millions, and let thy seed possess the gate of those which hate them." Many of the rioters were disguised as women and were on horseback; each band was led by a captain called "Rebecca," his followers being known as "her daughters." They destroyed not only the gates but also the toll-houses, and the work was carried out suddenly and at night, but usually without violence to the toll-keepers, who were allowed to depart with their belongings. Emboldened by success, a large band of rioters marched into the town of Carmarthen on the 10th of June and attacked the workhouse, but on this occasion they were dispersed by a troop of cavalry which had hurried from Cardiff. The Rebeccaites soon became more violent and dangerous. They turned their attention to other grievances, real or fancied, connected with the system of land-holding, the administration of justice and other matters, and a state of terrorism quickly prevailed in the district. Under these circumstances the government despatched a large number of soldiers and a strong body of London police to South Wales, and the disorder was soon at an end. In October a commission was sent down to inquire into the causes of the riots.
It was found that the grievances had a genuine basis; measures of relief were introduced, and South Wales was relieved from the burden of toll-gates, while the few rioters who were captured were only lightly punished.
<urn:uuid:12528996-bb26-4e0e-bc70-45dcb54c0996>
CC-MAIN-2013-20
http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Rebecca_Riots
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.986674
479
3.109375
3
- For the braconid wasp genus, see Agathis (wasp).
- For details on New Zealand Kauri, see Agathis australis
- The type of gastropod marine mollusc is spelled cowrie, while the New Zealand land mollusc is spelled Kauri.

The genus Agathis, commonly known as kauri or dammar, is a relatively small genus of 21 species of evergreen tree. The genus is part of the ancient Araucariaceae family of conifers, a group once widespread during the Jurassic period, but now largely restricted to the Southern Hemisphere except for a number of extant Malesian Agathis.

Mature kauri trees have characteristically large trunks, forming a bole with little or no branching below the crown. In contrast, young trees are normally conical in shape, forming a more rounded or irregularly shaped crown as they achieve maturity.

The bark is smooth and light grey to grey-brown, usually peeling into irregular flakes that become thicker on more mature trees. The branch structure is often horizontal or, when larger, ascending. The lowest branches often leave circular branch scars when they detach from the lower trunk.

The juvenile leaves in all species are larger than the adult, more or less acute, varying among the species from ovate to lanceolate. Adult leaves are opposite, elliptical to linear, very leathery and quite thick. Young leaves are often a coppery-red, contrasting markedly with the usually green or glaucous-green foliage of the previous season.

The male pollen cones appear usually only on larger trees after seed cones have appeared. The female seed cones usually develop on short lateral branchlets, maturing after two years. They are normally oval or globe shaped. Various species of kauri give diverse resins such as kauri copal, Manilla copal and Dammar gum.
The timber is generally straight-grained and of fine quality with an exceptional strength-to-weight ratio and rot resistance, making it ideal for yacht hull construction. The wood is commonly used in the manufacture of low priced guitars due to its good resonating properties, yet relatively low price of production. It is also used for some Go boards (goban). The uses of the New Zealand species (A. australis) included shipbuilding, house construction, wood panelling, furniture making, mine braces, and railway sleepers. - Agathis atropurpurea—Black Kauri, Blue Kauri (Queensland, Australia) - Agathis australis—Kauri, New Zealand Kauri (North Island, New Zealand) - Agathis borneensis (western Malesia, Borneo) - Agathis corbassonii—Red Kauri (New Caledonia) - Agathis dammara (syn. A. celebica)—Bindang (eastern Malesia) - Agathis endertii (Borneo) - Agathis flavescens (Borneo) - Agathis kinabaluensis (Borneo) - Agathis labillardieri (New Guinea) - Agathis lanceolata (New Caledonia) - Agathis lenticula (Borneo) - Agathis macrophylla (syn. A. vitiensis)—Pacific Kauri, Dakua (Fiji, Vanuatu, Solomon Islands) - Agathis microstachya—Bull Kauri (Queensland, Australia) - Agathis montana (New Caledonia) - Agathis moorei—White Kauri (New Caledonia) - Agathis orbicula (Borneo) - Agathis ovata (New Caledonia) - Agathis philippinensis (Philippines, Sulawesi) - Agathis robusta—Queensland Kauri (Queensland, Australia; New Guinea) - Agathis silbae (Vanuatu) - Agathis spathulata—New Guinea Kauri (Papua New Guinea)
<urn:uuid:3a4eb359-17be-4969-8663-8e63f65a3f1c>
CC-MAIN-2013-20
http://eol.org/data_objects/17583680
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.854045
909
2.984375
3
Crime scene photography
The dead body should be carefully lifted and placed on to a bed sheet, length of plastic or body bag and wrapped for transport. With heavy bodies, or decomposed or fragile remains, the body should be rolled on to its side and the plastic tucked underneath; the body can then be lifted using the plastic to hold the remains intact. In cases of suicide by firearm, paper bags are placed securely over the hands.
Photographs should show: (1) the general relation of the scene and body to their surroundings; (2) spatial relationships between the deceased and the weapon, blood stains, overturned furniture, etc.; (3) means of possible entrance to and exit from the scene; and (4) the position and posture of the victim.
Take a scaled crime scene photograph as the scene was first viewed. A second scaled photograph should be taken, locating with suitable markers any small objects in the camera's field, such as a fired cartridge case, bullet holes in walls, etc. A scaled drawing of the room or the area of interest in the investigation should be made.
The dead body should be photographed from different angles. Photographs of all injuries, major and minor, are essential. The skin should be cleaned of blood, dirt or foreign material. A ruler and the case number or other identifying information must be present in the photograph of the injury. The ruler should be placed on the skin surface adjacent to the injury at the same height as the injury. The photograph must be taken with the camera perpendicular to the skin surface. The injury should fill most of the picture area. To indicate important features, markers or pointers can be inserted. If powder residues are on the victim's skin, a scaled photograph should be made, including the entire area over which the powder residues exist.
Photographs help the investigating officer and the doctor to refresh their memories when giving evidence in Court. They also convey essential facts to the Court.
<urn:uuid:5fb87635-85aa-4505-9ea3-c0e732e43775>
CC-MAIN-2013-20
http://healthdrip.com/crime-scene-photography/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941505
397
2.53125
3
Definition of hang-glider
an unpowered flying apparatus for a single person, consisting of a frame with a fabric aerofoil stretched over it. The operator is suspended from a harness below and controls flight by body movement.
a person flying a hang-glider.
<urn:uuid:ce8fcb5b-cfc4-437e-a323-a06a4e1f73dd>
CC-MAIN-2013-20
http://oxforddictionaries.com/definition/english/hang-glider
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.929879
71
2.625
3
EPA Names Partners with Most On-Site Power
Walmart tops the US Environmental Protection Agency's list of organizations within its Green Power Partnership generating the most on-site renewable electricity. Walmart is head and shoulders above other partners, generating nearly 175 million kWh of electricity annually, while the second-place spot is taken by BMW, generating about 71 million kWh annually.
EPA's Green Power Partnership works with more than 1,400 organizations to voluntarily purchase green power. Overall, EPA's Green Power Partners are using more than 24 billion kWh of green power annually.
Of the top five Partners with on-site renewable power, Coca-Cola Refreshments comes in at number 3, generating about 47 million kWh each year; followed by the US Air Force (37 million kWh) and Kohl's Department Stores (36 million kWh).
Usage amounts reflect US operations only and are sourced from US-based green power resources. The usage figures are based on annualized Partner contract amounts (kilowatt-hours), not calendar-year totals.
Organizations can meet EPA Partnership requirements using any combination of three different product options: (1) Renewable Energy Certificates, (2) On-site generation, and (3) Utility green power products.
<urn:uuid:a826bb10-7949-4e0d-bf34-fd316d987350>
CC-MAIN-2013-20
http://www.energymanagertoday.com/epa-names-partners-with-most-on-site-power-088626/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.883252
365
2.734375
3
b. 1509 St.-Avit, France, d. 1590 Paris ceramicist; designer; glassworker; painter A man of many interests and talents though with no formal training, Bernard Palissy became a scientist, land-surveyor, religious reformer, garden designer, glassblower, painter, chemist, geologist, philosopher, and writer, as well as a ceramist. A devout and outspoken Huguenot, he was imprisoned for his religious beliefs and for his involvement in the Protestant riots of the first of the Wars of Religion. It was only with the help of his influential Catholic patron, Anne de Montmorency, that he obtained amnesty. Catherine de'Medici, the French queen, later acted as his protector, commissioning Palissy to build a private grotto for her at the garden of the Tuileries palace. Palissy produced his designs by attaching casts of dead lizards, snakes, and shellfish to traditional ceramic forms such as basins, ewers, and plates. He then painted these wares in blue, green, purple, and brown, and glazed them with runny lead-based glaze to increase their watery realism. Beginning in 1575, Palissy gave public lectures in Paris on natural history which, when published as Discours admirables (Admirable Discourses), became extremely popular and revealed him as both a writer and experimental pioneer. In 1588, as the struggle against the Protestants grew, Palissy was again imprisoned. He died two years later of "starvation and maltreatment." French, about 1550
<urn:uuid:57d17224-da53-46d6-8fdb-37bca53b416c>
CC-MAIN-2013-20
http://www.getty.edu/art/gettyguide/artMakerDetails?maker=867
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982286
332
3.421875
3
Following a minor earthquake in Donegal last week, experts are saying that Ireland is also now susceptible to tsunamis.
According to Journal.ie, the Irish National Seismic Network (INSN) has said that earthquakes "happen continually" in Ireland and that tsunamis are "quite possible."
On November 1, 1755, the Great Lisbon Earthquake was felt in Cork. Two and a half hours later a tsunami engulfed the coast of Cork.
There are two areas in Ireland where major faulting occurs - Donegal and Wexford, claims Tom Blake, the director of the organization. Last week's earthquake in Donegal measured 2.2 on the Richter Scale, which Blake advised was "normal" in terms of seismicity in the area.
More powerful earthquakes have been experienced in the Irish Sea, with "bangs" measuring about magnitude 5 being relatively common. On 19 July 1984, an earthquake measuring 5.4 on the Richter Scale struck the Irish Sea, causing some structural damage on the east coast. Blake said this can be expected to happen every 25 to 30 years, but added that no seismologist in the world can accurately predict quakes.
Folklore from before the 1800s shows that earthquakes in Ireland are not a new phenomenon.
"The writings of various peoples from the past show there are indications of people writing down effects of earthquakes. In the Mallow area, there are depictions of the earth shaking and crockery rattling during events of the 1800s. This folklore gives us an insight into the past activity onshore and offshore Ireland," explains Blake.
Earthquakes in Ireland occur not with plate movement but with a buildup of stress and tension on the rocks. The pressure becomes too much and is released, manifesting as an earthquake. The nearest plate boundary to Ireland is the Mid-Atlantic Ridge, about 2,500km off the west coast.
It is unlikely that an earthquake there would cause a huge problem for Ireland – unless, of course, a massive tidal wave occurs. The Lisbon earthquake could well be repeated. "Nobody was killed, thank goodness," exclaims Blake, "but it does raise an interesting topic about monitoring seismicity around the Atlantic and Bay of Biscay." Blake claims that if the event is strong enough and shallow enough, another tsunami is "quite possible."
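The magnitudes quoted in the article sit on a logarithmic scale, so the jump from the Donegal tremor (2.2) to the 1984 Irish Sea quake (5.4) is far larger than the raw numbers suggest. A minimal sketch of the standard magnitude-to-energy relation; the ~10^1.5 energy scaling is the usual textbook approximation, not something stated in the article:

```python
# Hedged sketch: radiated seismic energy scales roughly as 10**(1.5 * M),
# so each whole-number magnitude step is about 31.6x more energy.
# The 1.5 exponent is the standard textbook approximation, assumed here.
def energy_ratio(m_small: float, m_large: float) -> float:
    """Approximate ratio of energy released between two Richter magnitudes."""
    return 10 ** (1.5 * (m_large - m_small))

# Donegal tremor (2.2) vs. the 1984 Irish Sea quake (5.4):
ratio = energy_ratio(2.2, 5.4)
print(f"The 5.4 quake released roughly {ratio:,.0f} times the energy of the 2.2 tremor.")
```

Under this approximation the 5.4 event released on the order of 60,000 times the energy of the 2.2 tremor, which is why one rattled crockery while the other caused structural damage.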
<urn:uuid:8df4f151-cd1b-429d-9aff-8a721fad150f>
CC-MAIN-2013-20
http://www.irishcentral.com/news/Irish-tsunami-quite-possible-warns-experts-after-earthquake-138313984.html?mob-ua=mobile
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956346
530
3.40625
3
Close-up View of Vesta's South Pole Region In this image, obtained by Dawn's framing camera, a peak at Vesta's south pole is seen at the lower right. The grooves in the equatorial region are about six miles wide (10 kilometers). The image was taken on July 24, 2011, from a distance of about 3,200 miles (5,200 kilometers). The Dawn mission to Vesta and Ceres is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington, D.C. It is a project of the Discovery Program, managed by NASA's Marshall Space Flight Center, Huntsville, Ala. UCLA is responsible for overall Dawn mission science. Orbital Sciences Corporation of Dulles, Va., designed and built the Dawn spacecraft. The framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by the German Aerospace Center (DLR) Institute of Planetary Research, Berlin; and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig, Germany. The framing camera project is funded by NASA, the Max Planck Society and DLR. More information about Dawn is online at http://www.nasa.gov/dawn . Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
<urn:uuid:31686b1d-2c09-4bd1-b1ae-83949bf41f55>
CC-MAIN-2013-20
http://www.nasa.gov/mission_pages/dawn/multimedia/pia14322.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.892306
294
3.140625
3
Today is World Alzheimer’s Day, a worldwide event that strives to raise awareness on all the “faces of dementia”—the patients, caregivers, family, and friends who encounter a disease that affects 5.4 million Americans.
An American develops Alzheimer’s every 69 seconds. By 2050, that time will be cut in half. By then, as many as 15 million Americans are projected to suffer from Alzheimer’s, the most common form of dementia.
But Alzheimer’s doesn’t just affect its patients. In 2010, almost 15 million caregivers provided 17 billion hours of unpaid care to those with dementia—valued at $202.6 billion.
Unfortunately, there is no cure for the disease that not only causes memory loss, but behavioral and motor problems, as well. We spoke with Rockville’s Dr. Kevin M. Gil, who has been working in the geriatrics field for 20 years, about Alzheimer’s and some up-and-coming treatments that may hold some hope for patients.
What are some major warning signs of Alzheimer’s that people should be aware of?
Typically, the first things that show up are problems with memory. These are things you commonly do, and then suddenly cannot follow through. The best example I can think of is the recent coach, [Pat Summitt], who was very meticulous and able to come up with the plans, the meetings, and the plays. She realized she had a problem when she couldn’t call the play for the next round.
Another common sign is executive functioning problems. Someone who once could balance a check book will say, “Now I can’t do it anymore, I keep making errors.” Or they’ll say, “Suddenly I find myself driving to the grocery store and I get lost.” Or, the person used to be very engaging and social, but now you notice he or she isn’t joining in on the conversation. He or she becomes angry, irritable, and more withdrawn.
How do you officially diagnose someone with Alzheimer’s if there is no single test?
The diagnosis of Alzheimer’s requires a number of different things.
The first one is just going through one’s history. What has happened to the person over time? A lot of times they might not be aware of any problems, so family members can tell you things have been different. Maybe they’ve been forgetting things, they found the person getting lost with their car, or the bills being paid twice or not at all.
Sometimes, you can find problems on physical exams: results of the neurological exam showing they’ve been having small strokes, or decreased sensation in the feet or hands, perhaps indicating a vitamin deficiency.
Brain imagery can help, too. Doing either a CT scan or MRI can show up a tumor, stroke, or fluid collections on the brain.
Finally, there’s the mini-mental status exam. It’s a 30-point scale that checks for memory impairment, orientation, problems with language, and problems carrying out motor activities.
Alzheimer’s typically affects people 65 years or older, but people in their forties or fifties can be diagnosed with early onset Alzheimer’s. Why does that happen?
Some people have a family gene. The most classic one is called apolipoprotein E. It’s a gene that’s associated with developing Alzheimer’s disease, and it clusters in families.
So since Alzheimer’s can run in the family, is there anything someone can do to decrease the chance of being diagnosed with the disease?
You can do mental engagement activities. In other words, increased social interaction, brain puzzles, sudokus, and crossword puzzles. People who do those things seem to have less incidence of Alzheimer’s. With people who exercise regularly, we also see less chance of developing Alzheimer’s disease. Make sure that you’re socially engaged. Do whatever it is that you’re doing to exercise regularly, and interact with people even more.
Once a patient is diagnosed, which treatment options are available?
We are on the cusp of being able to identify a couple of proteins found in the lumbar puncture.
When you directly sample fluid from the spinal cord, you can test for a couple of proteins associated with Alzheimer’s disease. You can start doing therapy to see if you can interfere with that process or progression.
But right now, treatments consist of medications that help out temporarily. These are called the cholinesterase inhibitor class. The most famous one is Aricept, but there are other ones, like Razadyne and Exelon. I say “help” because the actual process that causes brain cell damage has taken place already. These medications lessen or hold steady the person’s memory loss or brain deterioration for a short period of time.
People with Alzheimer’s will frequently be agitated or depressed. We try to regularize the environment so that it’s concrete, regular, and routine. Big clocks and big calendars on the wall, and the same routine every day. We also try to keep the person sleeping regularly.
In your experience working in geriatrics, what is usually the hardest part for the Alzheimer’s patients and their families?
The most traumatic thing for the person is taking their driving privileges away. Our independence is associated with driving. When you take away that ability, it’s devastating for the person.
What are some of the most common misconceptions about Alzheimer’s?
People with Alzheimer’s can develop paranoid behavior, so family members are accused of manipulating, stealing, or hiding things from them. It can be very upsetting to the family member who tries to reason the person out of that fixed conviction. And of course, it’s not possible to do that.
Sometimes there are good days, where the person is pretty sharp, aware, and alert. And then, not so good days, when they’re more disoriented, combative, out of it, or withdrawn. That kind of back and forth over time is very hard for the family members to deal with.
For more information on Alzheimer’s, visit the Alzheimer’s Association.
<urn:uuid:fab103d1-bc5f-40eb-9373-270ba6b943f9>
CC-MAIN-2013-20
http://www.washingtonian.com/blogs/wellbeing/guides/qa-best-practices-for-alzheimers-and-dementia-patients.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950165
1,321
2.734375
3
In the course of his career Sir Edmund Hillary went places where no human being had gone before, and by making the first successful ascent of Mount Everest he captured the world's imagination. His achievement in successfully scaling Mount Everest for the very first time was one of the Twentieth Century's greatest achievements.
Limited edition bronze of 29 pieces celebrating the esteemed accomplishments of Sir Edmund Hillary, showing his hands holding an Ice Axe. Produced in bronze from life-cast molds created by Raelee Frazier, Highland Studio, Denver, Co.
Sir Edmund Hillary in Chicago with the sculpture "THE SUMMIT" after being shown the finished art for the first time, November 1999.
The art and a brief history of the 1953 British Everest Expedition
Edmund Hillary and Tenzing Norgay about to leave the South Col to establish camp IX below the south summit, May 1953.
History of the 1953 British Everest Expedition
In the first half of the 20th century expeditions to conquer Mount Everest took the form of an international competition between the major nations of the world. At this time in history the summit of Everest represented one of the last of the unconquered frontiers of the planet. Reaching Everest's pinnacle was considered to be a prize of momentous proportion. Many had tried and failed to make the ascent. The deaths of climbers during previous attempts in 1922 and 1924, including the British climbers Mallory and Irvine, had proven that the conquest of Everest was not only extremely difficult but life threatening. In 1952 members of the Swiss contingent had come close, only to be turned back less than a thousand vertical feet from the summit. In the same year England received a permit from the Nation of Nepal to do a reconnaissance of the southern approach to the mountain. The announcement of the letting of this permit was an unprecedented diplomatic success, for up until this time Nepal had closed its borders to visitors from the European Nations.
This breakthrough in diplomacy and the receipt of the permit caused a stir in England and set the stage for the selection of the expeditionary force of 400 souls to conquer Everest. Because of his reputation for mountaineering Edmund Hillary had qualified himself to become a member of the team selected to attempt the ascent. After another party from the British contingent had failed in its attempt to scale Everest, Hillary and the Nepalese Sherpa Tenzing Norgay were selected to make a second attempt, and they finally succeeded in reaching the summit at 11:30 am on May 29, 1953. Edmund Hillary took this photograph of Tenzing Norgay using an Ice Axe as a standard for the British Flag as they became the first human beings to set foot on the summit of Mt. Everest, the highest point on earth. Ironically, Tenzing would be required to use this same axe to save Hillary's life during a mishap in a crevasse field on the party's descent from the summit.
Hillary's successful ascent was truly a Crowning Glory for England
The British Empire, having suffered the losses of India, Palestine and South Africa to the commonwealth during the course of the Second World War, was about to enter a new era in the rule of the Empire. The war and post-war periods had been earmarked with sacrifice and the severe restrictions required for the survival of what remained of the Empire. The most significant event marking the beginning of a new era was the Coronation of Queen Elizabeth II. As fate would have it, the announcement of Hillary's successful ascent of Everest was made by the London Times in England on the day of her Coronation, June 2, 1953. The combination of the coronation and the successful ascent of one of the planet's last frontiers by the British Everest Expeditionary Force helped set a positive and forward-looking atmosphere within the commonwealth.
In history the timing of an event often is as important as the event itself, and Hillary's accomplishment couldn't have come at a more opportune moment to help set a positive tone for the next decade. As a result the importance of Sir Edmund's accomplishment assured his continued status within the British Empire. Edmund was later to be knighted by Queen Elizabeth in recognition of the overall importance of his contributions to the British Commonwealth of Nations.
Sir Edmund Hillary as Humanitarian
Edmund says about himself "In some ways I believe I epitomize the average New Zealander: I have modest abilities, I combine these with a good deal of determination, and I rather like to succeed."
Sir Edmund Surrounded by School Children
Sir Edmund has used his notoriety with wisdom and grace and has continued to be a positive force for the betterment of the Nepalese people, as well as an outspoken advocate and spokesman for environmental causes. He is the founder and leader of several Foundations dedicated to improving the social and physical conditions of the Nepalese people. He has succeeded in building schools, hospitals and infrastructure vital to trade and commerce for the Nepalese. History may eventually record Sir Edmund's humanitarian efforts to be his most lasting and meaningful legacy, despite the monumental proportion and significance to the human psyche of his determination, will and physical abilities on Everest.
Sir Edmund and Lady Hillary at the presentation of the art at "The Hillary Foundation" board meeting, Chicago, Ill., November 1999
Sir Edmund Hillary Signing Certificates of Authenticity for "The Summit" in Chicago, November 1999
The Most Famous Living New Zealander
At 80 years old, Hillary is no longer an active mountaineer, but is still a tireless fundraiser and worker for education and health projects in Nepal. Hillary has been widely honored in New Zealand, England and around the world, and is the only living New Zealander to be featured on a bank note.
Certificate of Authenticity issued by Highland Studio with each limited edition sculpture, signed and authenticated by the artist and Sir Edmund
To inquire about art please E-mail: firstname.lastname@example.org
Books By Sir Edmund Hillary
- High adventure.
- East of Everest.
- No latitude for error.
- High in the thin cold air.
- Schoolhouse in the clouds.
- Nothing venture, nothing win.
- From the ocean to the sky.
- Two generations.
- The view from the
See these web sites for more information on Sir Edmund Hillary
Hillary Biography from the American Academy of Achievement
Follow the storied and often tragic history of climbing on Mt. Everest, from the early years to the present day.
From Salon, writer Don George's "man to match his mountain," in the 'Brilliant Careers' series
<urn:uuid:a76aedc7-c3b5-4810-963f-8228ad1ff1f8>
CC-MAIN-2013-20
http://www.artsales.com/HighlandStudioThe%20Summit.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00020-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943624
1,444
2.828125
3
This Week in the Civil War This Week in The Civil War, for week of Sunday, March 4, 1862: New Madrid besieged, battle of ironclads. This week 150 years ago in the Civil War, Union forces besiege New Madrid, Mo., seeking to gain control at this juncture of the Mississippi River. The attackers march overland, arriving near New Madrid on March 3, 1862. The siege will last for days and only after heavy Union guns are brought in will the Confederate defenders retreat. Union forces will occupy the recently deserted city on March 14, 1862. Now the fight for control of the Mississippi will shift to other areas of the river -- with The Associated Press reporting the Confederates "have a very strong position" on Island No. 10, not far from New Madrid. This week also sees a new era of naval warfare open when ironclad ships -- vessels sheathed in stout armor -- clash near Hampton Roads, Va. On March 8, 1862, the Confederate ironclad CSS Virginia attacks a squadron of Union naval forces at Hampton Roads, destroying two ships and stranding a third, the Minnesota. The Monitor arrives the following day and the battle is on. The two ironclads circle and fire at each other for several hours that morning, neither sinking or seriously damaging the other. At midday, the Monitor attempts to ram the Virginia but a steering malfunction leads the Monitor to miss the Virginia's fantail. As the Monitor passes the stern of the Virginia, the Monitor's pilothouse is hit by a shell and breaks off action. Soon the Virginia retreats to the nearby Elizabeth River, unable to finish off the damaged Minnesota. The outcome is indecisive. Union forces still dominate Hampton Roads and the Confederates still control several rivers and nearby Norfolk, Va. But history has been made. Though French and British fleets had begun building ironclad ships by the time the American conflict opened, the new naval technology hadn't been tried in battle until now. 
This Week in The Civil War, for week of Sunday, March 11, 1862: McClellan's demotion, River shelling. This week in 1862, President Abraham Lincoln relieves Major Gen. George B. McClellan of his title as general-in-chief of all federal armies. McClellan is a great organizer who whipped once-disorganized Union troops into a veritable fighting force. But Lincoln and others in Washington are growing impatient after repeatedly urging McClellan to attack Confederate foes. Despite Lincoln's action, McClellan still commands the Army of the Potomac, a key cog in the federal war machine. Yet Lincoln will have to wait weeks for McClellan to finish preparations to marshal an elaborate campaign against Richmond, capital of the Confederacy, that will later be waged -- unsuccessfully -- from the Virginia coastal peninsula. Elsewhere this week, Union forces occupy New Madrid in Missouri but frequent shelling continues nearby on the Mississippi River. An Associated Press reporter in a dispatch March 16, 1862, reports he is aboard a federal flagship in a flotilla patrolling the river and sporadic artillery firing has erupted near the Confederate stronghold at Island No. 10. "The flotilla got under way at 5:30 a.m. this morning and dropped down slowly till about 7 o'clock where the flag ship, being about 27 miles ahead and six miles above the island, discovered a stern wheel steamer run out from Shelter Point on the Kentucky shore, and started down the river. Four shells were thrown after her, but the distance, however, was too great for the shots to take effect." The AP correspondent reports a day later that Confederate forces at Island No. 10 have formidable encampments, large enough to hold thousands of troops. He notes "46 guns have been counted" and adds that more than tension fills the air: "Firing was heard in the direction of New Madrid all day." This Week in The Civil War, for week of Sunday, March 18, 1862: Confederate Cabinet Shake-up, Stonewall Attacks.
Confederate President Jefferson Davis, beset by recent military setbacks, orders a major Cabinet reshuffle this week 150 years ago in the Civil War. The Confederate leader orders on March 18, 1862, that George W. Randolph -- a Virginia native and grandson of Thomas Jefferson -- take charge as Confederate war secretary. Randolph succeeds Judah P. Benjamin, who was criticized for his handling of the department and now moves to secretary of state. Randolph will go on in the next eight months to reorganize and bolster the Confederate war machinery for the battles ahead. Despite recent reversals for the Confederacy, the war is still young. An Associated Press dispatch in early March speaks of growing federal worries about a vexing Confederate commander, Maj. Gen. Thomas "Stonewall" Jackson, now ranging about the Virginia countryside. AP's correspondent reports: "Intelligence from Winchester leads to the belief that General Jackson is there in full force." Indeed, some 3,400 Confederate troops commanded by Jackson will clash with a far larger Union force of about 8,500 troops on March 23, 1862, not far away at Kernstown, Va. Federal forces stop Jackson's daring drive, but his campaign sounds alarm bells in Washington. President Abraham Lincoln, wary of Jackson's threat to the capital from Virginia's neighboring Shenandoah Valley, redirects defensive forces to protect Washington's back door just when Union Gen. George B. McClellan is pressing for all the troops the federal War Department can spare him. McClellan argues a huge force is needed for an all-out attack on Richmond he is planning for his upcoming Peninsula Campaign. And after his campaign fails later in 1862, McClellan will claim he could have captured the seat of the Confederacy if he had had those extra troops at his command. This Week in The Civil War, for week of Sunday, March 25, 1862: Fighting out West, McClellan's moves. A battle unfolded out West 150 years ago this week during the Civil War.
On March 26, 1862, a Confederate force of about 300 Texas fighters camped near Glorieta Pass in New Mexico Territory -- a strategic location at the southernmost end of the Sangre de Cristo Mountains on the Santa Fe trail. Several hundred approaching Union soldiers led by Maj. John M. Chivington went on the attack, pressing in on the Confederates until artillery fire threw the federal fighters back. Chivington split his force into two groups on each side of the pass and put the Rebels in a crossfire before fighting halted for the day. The next day both sides regrouped, and fighting wouldn't resume again until March 28, 1862, with the Union side swelled by hundreds of reinforcements. Confederates held their ground as the battle surged back and forth in the coming hours. Eventually a wearied Confederate force retreated to Santa Fe -- and eventually back to Texas -- securing a strategic Union victory in a key point of the conflict out West. Elsewhere Union Gen. George B. McClellan has begun a long-awaited step of moving thousands of troops, heavy artillery, armaments and supplies to Fort Monroe off Virginia as he prepares for a major federal assault on Richmond, capital of the Confederacy. For weeks and even months, McClellan had come under criticism for not waging an all-out offensive sooner. But now he was on the move. Nonetheless, a report in The Springfield Daily Republican in Massachusetts indicates McClellan had already lost some element of surprise ahead of what would be his ill-fated Virginia peninsula campaign. A dispatch in the paper reported: "The latest accounts from Richmond show that the rebels are crowding troops down upon the York and James River, showing they know where to expect Gen. McClellan."
More information about this Work

These three panels may have been the top of a polyptych similar to the one by Mariotto di Nardo in the Galleria dell’Accademia in Florence, whose central panel depicts a Virgin and Child. These panels, which were formerly in the Somerwell collection in Scotland, were auctioned at Sotheby’s in 1970 and were acquired for the Thyssen-Bornemisza collection in 1976. On the reverse of the Annunciate Angel is an old label that attributes the paintings to Giottino and states that they were in one of the storerooms in Santa Maria del Fiore in Florence when these buildings were demolished in 1826 to enlarge the square on which they were located. The panels, which were unpublished prior to the auction of 1970, were catalogued as by Bicci di Lorenzo, an attribution that continues to be accepted today.

For the two Annunciation panels the artist followed the traditional iconographic model used in Tuscan art in the 14th and 15th centuries: the Virgin is depicted seated with a book on her knees. She receives the message from the Angel Gabriel who approaches her on clouds, a detail that indicates that the angel arrives from Heaven. The Holy Spirit, sent by God the Father, is located above the Archangel and represented as Christ, who descends on the other panel towards Mary in the form of a dove.

The Crucifixion panel, with Christ still bleeding from his wounds, only includes the essential figures of the Virgin and Saint John. At the foot of the cross and near a pool of blood dripping from the plinth of the cross is Adam’s skull, whose presence, like the blood around it, symbolises the purification of Original Sin. At the top of the cross, from which a bright green branch is sprouting, is a pelican with its four young. According to legend, the pelican feeds its young on the blood that spouts from a wound that the bird makes in its own breast. The bird, combined with the crucifixion, emphasises Christ’s nature as Redeemer.
It has been tentatively suggested that the other elements in this polyptych were two panels with pairs of saints: Thaddeus and John the Baptist, and James the Less and Nicholas of Bari, both in the Pinacoteca Stuard in Parma. According to Boskovits these panels would have flanked a Virgin and Child, in an arrangement similar to the above-mentioned polyptych by Mariotto di Nardo. Bicci di Lorenzo, who inherited a large workshop from his father, continued to use Gothic formulas well into the 15th century, introducing small changes in an effort to modernise his style. This group has been dated around 1430.
Step One: Identify Wildlife Needs

Choose a Target Species. What type of wildlife would you like to attract to your landscape? Most people are interested in attracting songbirds, hummingbirds, or butterflies. Of course, if you plant the right native plants, you can attract all three!

Determine the Basic Needs of the Target Species. All wildlife have the same basic life requirements: food, water, and cover. Find out what your target species requires. Keep in mind that wildlife have different needs in different seasons. Does it need a certain type of food plant? Does it prefer a particular plant for cover? Some butterflies need a specific host plant on which to lay their eggs.

Determine the Limiting Factor. Once you know what your target species needs to survive and thrive, take a look at your existing landscape to see if any of these elements are missing. Any missing element is a limiting factor that you will need to correct. Do you have the host plant? Do you have an adequate source of water? Are there any food plants? Cover plants? Keep in mind that limiting factors may change from one season to the next.
Guide No. 10: U.S. Social Security Death Index (SSDI) and Railroad Retirement Board Records

"Long before the economic blight of the depression descended on the Nation, millions of our people were living in wastelands of want and fear. Men and women too old and infirm to work either depended on those who had but little to share, or spent their remaining years within the walls of a poorhouse . . . The Social Security Act offers to all our citizens a workable and working method of meeting urgent present needs and of forestalling future need . . ." President Franklin Roosevelt; August 14, 1938, radio address on the third anniversary of the Social Security Act

The first Social Security card was issued 1 December 1936, and on 1 January 1937 U.S. workers began acquiring "credits" toward old-age benefits. About 35 million numbers were assigned to workers who qualified at that time.

One of the largest and easiest-to-access databases used for genealogical research is the Social Security Death Index (SSDI). Its information can be utilized to help you learn more about your ancestors, as well as your aunts, uncles and cousins. Clues and facts from the SSDI often can be used to further genealogical research by enabling you to locate a death certificate, find an obituary, discover cemetery records and track down probate records.

As marvelous a finding aid as it is, the SSDI does not include the names of everyone, even if they had a Social Security number (SSN). If relatives or the funeral home did not report the death to the Social Security Administration, or if the individual died before 1962 (when the records were computerized), then the person probably will not appear in this database. The omission of an individual from this index does not indicate the person is still living. It simply means that there was no report of the person's death to the Social Security Administration.
When using the Social Security Death Index, in addition to the date of birth and date of death, there are three possible places included as well:
- State of issuance (where a person then lived and applied, or the state in which the office that issued their Social Security number was located).
- Residence at time of death (this is really the address of record, but not necessarily where they lived or died).
- Death benefit (where the lump sum death benefit [burial allowance] was sent).

According to the Social Security Administration (SSA), the nine-digit SSN is composed of three parts:
- The first set of three digits is called the Area Number
- The second set of two digits is called the Group Number
- The final set of four digits is the Serial Number

The Area Number is assigned by geographical region. Prior to 1972, cards were issued in local Social Security offices around the country and the Area Number represented the state in which the card was issued. This did not necessarily have to be the state where the applicant lived, since a person could apply for their card in any Social Security office. Since 1972, when SSA began assigning SSNs and issuing cards centrally from Baltimore, the Area Number assigned has been based on the ZIP code in the mailing address provided on the application for the original Social Security card. The applicant's mailing address does not have to be the same as their place of residence. Thus, the Area Number does not necessarily represent the state of residence of the applicant, either prior to 1972 or since.

Generally, numbers were assigned beginning in the Northeast and moving westward, so people whose cards were issued in the East Coast states have the lowest numbers and those on the West Coast have the highest numbers. The state of issuance (not necessarily the state of residence at the time of issuance) can be verified by looking at the Social Security number itself.
The next two digits of the number are a code used to track fraudulent numbers. The last four digits are randomly assigned. The Area Number ranges and the states in which they were issued are:

574 (AK)   416-424 (AL)   429-432 (AR)   526-527 (AZ)   545-573 (CA)   521-524 (CO)
040-049 (CT)   577-579 (DC)   221-222 (DE)   261-267 (FL)   252-260 (GA)   575-576 (HI)
478-485 (IA)   518-519 (ID)   318-361 (IL)   303-317 (IN)   509-515 (KS)   400-407 (KY)
433-439 (LA)   010-034 (MA)   212-220 (MD)   004-007 (ME)   362-386 (MI)   468-477 (MN)
486-500 (MO)   425-428 (MS)   587 (MS)   516-517 (MT)   237-246 (NC)   501-502 (ND)
505-508 (NE)   001-003 (NH)   135-158 (NJ)   585 (NM)   530 (NV)   050-134 (NY)
268-302 (OH)   440-448 (OK)   540-544 (OR)   159-211 (PA)   581-584 (PR)   035-039 (RI)
247-251 (SC)   503-504 (SD)   408-415 (TN)   449-467 (TX)   528-529 (UT)   223-231 (VA)
008-009 (VT)   531-539 (WA)   387-399 (WI)   232 (WV, NC)   233-236 (WV)   520 (WY)
580 (Vir. Is.; PR)   586 (Guam, Am. Samoa, Phil. Islands)   700-728 (RR Retirement Board - All States)

Beware of making assumptions about the state of residence at time of death. The one listed as "Last Residence" (more properly, the "address of record") in the SSDI is not necessarily the place of death. This was brought home when researching an individual who died while on vacation in Florida. His "last residence" shows up as New Hampshire, which was his legal residence (or address of record) at the time, but that is not where he died. Consider the possibility that a person might have had two official residences; many "snowbirds" do. Read the search results carefully.

The actual place of death is not shown in the SSDI. Some records show where the "last benefit" was sent, but that is not necessarily the place of death or the deceased's address of record either. "Last Benefit" only refers to the payment of the lump sum death benefit (a burial allowance of about $250 that went to the surviving spouse). Keep in mind that ZIP codes given are those that existed at the time of the reported death and are not necessarily correct or the same as today's.
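The three-part structure and the Area Number ranges listed above lend themselves to a simple lookup. Below is a minimal Python sketch; the function names are illustrative, and the RANGES table reproduces only a few sample entries from the full published list, not the complete assignment table.

```python
# Illustrative sketch only: split a formatted SSN into its three parts and
# look up the issuing location from the Area Number. RANGES is a small
# sample of the published pre-2011 assignments, not the full table.
RANGES = [
    (1, 3, "NH"),
    (40, 49, "CT"),
    (545, 573, "CA"),
    (700, 728, "RR Retirement Board - All States"),
]

def split_ssn(ssn):
    """Split 'AAA-GG-SSSS' into (area, group, serial) as integers."""
    area, group, serial = ssn.split("-")
    return int(area), int(group), int(serial)

def issuing_place(ssn):
    """Return the issuing state/territory for the SSN's Area Number,
    or None if the sample table has no matching range."""
    area, _, _ = split_ssn(ssn)
    for low, high, place in RANGES:
        if low <= area <= high:
            return place
    return None

print(issuing_place("001-23-4567"))  # NH
print(issuing_place("700-11-2233"))  # RR Retirement Board - All States
```

Remember the caveat above: even a correct lookup tells you only where the card was issued, not where the person was born or lived.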
ZIP codes have changed through the years. Do not assume that the state in which the number was issued was the state of birth or even the state of residency at the time (see above).

Abbreviations. The following abbreviations or codes, which appear in some records in the "Last Residence" box, usually in parentheses, are internal codes used by SSA and do not mean anything to researchers. Ignore them. For example, "(VA)" does not stand for Virginia or the Veterans' Administration.

Reasons you might not find someone in the SSDI:
- Social Security officially began in 1937, with some payments being made as early as 1940. However, the Social Security Death Index is the computerized index to deaths reported and/or death benefits paid out starting in 1962. The SSDI includes a few pre-1962 entries, but the great majority of those included in this index are from 1962 through the present time.
- While the limitations of dates may exclude your family member, other reasons that your ancestor may not be included in the SSDI might have to do with his or her occupation or lack thereof.
- Prior to the 1960s, farmers, housewives, government employees, non-employed individuals, and those with a separate retirement plan might not have had a Social Security number. It was not until 1988 that all children had to have Social Security numbers.

The "Application for a Social Security Number" is commonly referred to as the SS-5. In addition to the SSDI, you may find your ancestor's Social Security number in other ways, especially on death certificates. While it may seem like you are reinventing the wheel to request the SS-5 form, there are times when this can be the only proof you will have of an ancestor's birth. For instance, for ancestors born in the 1860s to 1880s who immigrated to the United States, it can be difficult to pinpoint their place of birth. On the SS-5 it was required that the applicant supply complete birth information.
This means more than just the country of birth, as is usually found on census and death records. Moreover, the maiden name of the applicant's mother was requested, often critical information for a family historian.

To request a photocopy of the original Application for a Social Security Card (SS-5), find the particular record of interest and let RootsWeb generate a printer-ready letter addressed to the Social Security Administration for you. Be sure to include the name of the individual, the Social Security number, and the date and place of death. You will need to include a check or money order for these records: currently $27 if you have the Social Security number, $29 without the number. See the chart below. The process of obtaining this information usually takes several months, so be patient.

Fees for Processing Requests for an Individual's Social Security Record (effective July 1, 2001):
- Request for copy of Original Application for Social Security Card (Form SS-5), SSN Provided: $27
- Request for copy of Original Application for Social Security Card (Form SS-5), SSN Not Provided: $29
- Request for Computer Extract of Social Security Number Application, SSN Provided: $16
- Request for Computer Extract of Social Security Number Application, SSN Not Provided: $18
- Search for Information about Death of an Individual, SSN Provided: $16
- Search for Information about Death of an Individual, SSN Not Provided: $18

Initially people relied heavily upon the 1880 and 1900 federal census to obtain proof of their births (that's why these enumerations were Soundexed: given a special index based on sound rather than spelling of a surname). Many delayed birth certificates were filed in order to prove age for Social Security purposes.

- RootsWeb does not create, edit or correct information found in the Social Security Death Index. This database is created by the Social Security Administration (SSA).
If you believe that SSA has incorrectly listed someone as deceased (or has incorrect dates/data in the file), you should contact the local Social Security office. It will be necessary to provide acceptable proof to have the error(s) corrected.
- You can use RootsWeb's Post-em Notes to indicate that your records differ from what SSA has, but this will not change the database.

Railroad workers were enrolled in the same Social Security program, but from 1937 to 1963 they had numbers ranging between 700 and 728 as the first three digits. In 1964 their numbers began to reflect the same geographic location as other workers'. Some railroad workers received Social Security benefits, but some did not. However, it is wise to check the SSDI in any case.

The U.S. Railroad Retirement Board was created in the 1930s and has records dating back to 1937, but they exist only for those whose employers were covered under the Railroad Retirement Act. You can obtain information about deceased individuals for genealogical purposes. The records are arranged by Social Security number. If you do not know the number, provide as much identifying information as you have. Currently there is a $21 nonrefundable fee for a search in these records. Send the request, along with a check or money order, to:

Railroad Retirement Board, 844 North Rush Street, Chicago, IL 60611-2092

Suggested Reading & References
- Allen, Desmond Walls and Carolyn Earle Billingsley. Social Security Applications: A Genealogical Resource. Bryant, Arkansas: Research Associates, 1989, 1991.
- Hinckley, Kathleen W., CGRS. "Locating the Living: Twentieth-Century Research Methodology." National Genealogical Society Quarterly, 77, No. 3 (1989).
- Hinckley, Kathleen W., CGRS. Locating Lost Family Members & Friends: Modern genealogical research techniques for locating the people of your past and present. Cincinnati, Ohio: Betterway Books, 1999.
- Szucs, Loretto Dennis & Sandra Hargreaves Luebking.
The Source: A Guidebook of American Genealogy (revised edition). Salt Lake City, Utah: Ancestry, Inc., 1997.
- Using Social Security Number Application Forms for Genealogy, by George G. Morgan
- Railroad Retirement Board Records, by George G. Morgan
- Social Security Death Master File: A Much Misunderstood Index, by Jake Gehring
- Search SSDI
- Social Security Administration Home Page
- Social Security Administration History Page
- Cyndi's List: Railroad information links
- Cyndi's List: Social Security information links
- Social Security Death Master File: Common Misconceptions
- Social Security Death Index FAQs
What is the difference between space and universe? It is a surprisingly difficult question to answer, or even to frame precisely. Space is usually considered the wider term, while the universe refers to everything that exists within it. Many people have expressed their thoughts on space and the universe.

The universe is believed to have been born in a big bang, expanding from an extremely dense point; its age is almost beyond counting. The universe has been expanding ever since, and in its earliest moments this expansion was faster even than the speed of light, enlarging the universe enormously in an instant. This burst of expansion is called inflation. After inflation the universe grew at a slower rate, and space continued to expand. As a matter of fact, the universe began to cool and matter was formed. The early universe was filled with a number of particles, among them photons, electrons, protons, neutrons and anti-electrons. The universe was very hot, and this heat smashed atoms apart with huge forces. The broken pieces scattered all around the universe and became dense, just like a fog. It took more than 300,000 years for the matter to cool down and atoms to form.

Much later, a new form of energy, known as dark energy, was discovered to characterize the universe and its expansion. Over time, matter clumped together, giving birth to stars, galaxies, and eventually our solar system, along with many other objects in space.

Space and the universe form a very wide and interesting subject of study. Our universe comprises many heavenly bodies, and one can go on and on exploring and learning about them. The National Aeronautics and Space Administration has been exploring such bodies and launching new programs.
Bundesland: Nordrhein-Westfalen (North Rhine-Westphalia)

Aachen is situated at an altitude of 191 m at the foot of the Eifel low mountain area in the southwest of North Rhine-Westphalia, at the borders to Belgium and the Netherlands. Aachen has a population of about 257,200 (2005). As a spa town, the city is entitled to the predicate Bad, but the name Bad Aachen is only rarely used. The origin of the town's name is the old Germanic word 'ahha', which means 'water'. Already during Roman times, this place was known for its hot springs. The Latin name Aquisgranum, which was later used in medieval documents, is derived from the Celtic-Roman god Grannus.

Under the Frankish king Pippin the Younger (Pippin III), Aachen was first mentioned in a document of 765 AD under the name Aquis villa. The strongly sulfurous waters were the reason that Aachen became an important town under the reign of Charlemagne, Pippin's son. One year after the canonization of Charlemagne in 1165, Aachen was chartered as a town and obtained the status of a free Imperial city. In 1656 a large fire destroyed more than 4,600 houses, almost the entire old town. After that, the town was rebuilt as one of the most modern spa towns of the period. Aachen was the site of the conclusion of the First Peace of Aachen, which in 1668 ended the War of Devolution between France and Spain. The Second Peace of Aachen (1748) ended the War of the Austrian Succession between Austria and Prussia, which had also involved Bavaria, France and Spain. In 1794 French troops occupied Aachen (in French: Aix-la-Chapelle), which became the seat of the administration of the Département de la Roer. After the Congress of Vienna (1815), Aachen became part of Prussia as capital of the Province Jülich-Kleve-Berg; in 1824 it was incorporated into the Prussian Rhine province (capital at Koblenz).
After World War I, the western surroundings of Aachen with Eupen became part of Belgium; Aachen itself remained occupied by Belgian troops for eleven years. During World War II about 65% of the town was destroyed. In 1944 the town was evacuated, and on 12 October of that year it was the first German town to be taken by the Allied forces. After the war, Aachen was occupied first by American, then British and Belgian troops. In 1946 it became part of North Rhine-Westphalia. The Karlspreis (International Charlemagne Prize of the city of Aachen) was first awarded in 1950 to people who contributed to the European idea and European peace. The population of the municipality of Aachen doubled with the incorporation of the neighboring communities of Brand, Eilendorf, Haaren, Kornelimünster, Laurensberg, Richterich and Walheim following the Aachen-Gesetz (Aachen Act) of 1972.

The cathedral of Aachen is the popular landmark of the city. It goes back to the Imperial chapel that was founded by Charlemagne. The central 'Oktogon' was completed in 805 AD. For four hundred years, the cupola remained the largest construction of its kind north of the Alps. The famous chandelier was donated by Friedrich I (Barbarossa) in 1165/1170. The impressive Gothic choir (length 25 m, width 13 m, height 32 m, about 1,000 m² of glass windows) was constructed between 1355 and 1414 to house the famous four relics of Aachen. The precious feretory of 1220-1239 contains what was believed to be the diaper and the waistcloth of Jesus, the dress of the Virgin Mary, and the cloth that covered the head of St. John the Baptist after his decapitation. Since 1239 pilgrimages to this place were known as "Heiligtumsfahrt" or "Aachenfahrt". Since 1349 the pilgrimage has been carried out every seven years. The bell tower at the western side of the octogon was added around 1350. Charlemagne and Emperor Otto III are buried in the cathedral. The relics of the former are kept in a precious shrine of 1215.
Although the cathedral was not destroyed in World War II, it suffered severe damage. The greatest loss probably was the historic glass windows. New, modern glass windows were installed after the war. In 1978, the cathedral was the first edifice in Germany to be recognized by UNESCO as a World Heritage site (see also list of other UNESCO heritage sites).

In 813, Charlemagne's son, Ludwig 'the Pious', was crowned here as co-regent. Lothar I, grandson of Charlemagne, crowned himself emperor here. The church also was the site of the coronation of Otto I as East Frankish king (936 AD). Since then, it remained the site of the coronation of the German kings. Thirty-one kings were crowned in the church, the last being Ferdinand I in 1531.

The first bishopric of Aachen was founded in 1802 by Napoléon I. It consisted of those parts of the former archdiocese of Cologne located left of the Rhine, parts of the diocese of Liège and small parts of the dioceses of Utrecht, Roermond and Mainz. In 1821 the diocese was dissolved and re-incorporated into the archdiocese of Cologne. The present diocese of Aachen was founded in 1930.

The Stadttheater (Municipal Theatre) [bottom left picture] was built in 1900-1901 by Heinrich Seeling. It replaced the old municipal theatre that had been built in 1823-1825 by Johann-Peter Cremer and Karl Friedrich Schinkel. The new theatre had a capacity of 1,014 seats. The building was destroyed by a fire after bombings on 14 July 1943. The theatre was restored in 1948-1951 without the two front towers and with a modern auditorium. Today, the theatre has a capacity of 905 seats.

The Elisenbrunnen [bottom right picture] was built in 1825-1827 in Classicist style by Karl Friedrich Schinkel. The pump room houses the 'Kaiserquelle' (emperor's spring), which supplies hot (74°C) sulfurous waters. In World War II the building was almost completely destroyed and was re-opened after its reconstruction in 1953.
The town hall [near left, no. 2181] was built on the site of the Karolingian imperial residence. The first, Gothic, town hall was built in 1330-1349. In the 17th and 18th centuries it was remodeled in Baroque style. In the 19th century many of the Baroque additions were removed and replaced by neo-Gothic details. The main façade was ornamented with 50 sculptures of German kings. Following a large fire, which destroyed the roofs in 1883, the town hall was restored until 1902. The lower parts of the eastern tower of the town hall (Granusturm [not depicted on glass no. 2181]) date from around 770 and are the only remaining part of the Karolingian residence. The Granusturm thus is the oldest building of Aachen. During World War II the building suffered heavy damage but was restored again after the war. Copies of the Imperial Regalia, the originals of which are in Vienna, are exhibited in the town hall as a reminder of the 31 coronations of German kings that took place in Aachen. The town hall is the site of the annual award ceremony of the International Charlemagne Prize of the city of Aachen.