Hearing aids, as those who wear them know, have some flaws. Whistling, echoing and feedback often frustrate even the most intrepid user. Biomedical engineering graduate student Peyton Paulick seeks to give those with hearing loss a better option, and if the first human clinical trial of her research device is any indication, she may well succeed.

The device, a small voice coil actuator placed deep within the ear canal, responds to an electronic signal by moving the eardrum mechanically – just the right amount – so the wearer perceives sound. This eliminates the problems that occur when sound waves are amplified, as in hearing aids.

Currently, options available for the hearing impaired are limited. Cochlear implants require major surgery and can cost upwards of $30,000. Traditional hearing aids have advanced technologically but still present those little annoyances. “Satisfaction rates are pretty low,” Paulick said. “A lot of people with traditional hearing aids don’t use them.” Indeed, of the more than 34 million hearing-impaired people in the United States, only 25 percent use hearing aids, leaving approximately 25 million patients untreated.

Paulick’s device is inserted – by a physician in an outpatient setting – deep enough within the ear canal to touch the eardrum. A digital processing unit converts incoming sound to an electronic signal, which causes the actuator to vibrate at specific frequencies. The vibration moves the eardrum a predetermined distance, which the patient perceives as sound. “The small vibrating unit that couples directly to the eardrum knows exactly how much to make the eardrum move at any range and frequency to interpret that sound,” she says.

The actuator itself is a small cylinder about three millimeters wide and six millimeters long, and the team hopes it can be manufactured for about the same price as a hearing aid. In many people with hearing loss, the eardrum may still move but sensitivity to sound has decreased. “So we make that movement bigger so they can hear it. We make sound louder by driving the eardrum to move more,” she explains.

The device passed bench tests and cadaver tests earlier this year with flying colors. (In cadaver testing, a laser Doppler vibrometer measured how much the bones of the middle ear moved in response to sound.) But the true test was the human trial in late September. The device was inserted into the right ear of volunteer Mark Bachman, assistant professor of electrical engineering and computer science and a member of the research team. Bachman also wore an external earphone on his left ear. Sounds were played through both devices at different decibel and frequency levels until the sounds matched, giving researchers insight into the settings necessary for the device to achieve comparable sound.

The process is subjective, but “that’s how hearing aids work too,” Paulick says. “The device will be tailored to a specific person, depending on how severe his/her hearing loss is. But just making those movements larger will achieve that. This was a proof of concept.”

The trial also tested the device’s capability to transmit complex audio waveforms – in this case, music. By all accounts, the experiment was a success. Bachman said the music sounded clear and pure, as if it originated “inside his head.” It was a big moment for the research team.

“These complex audio waveforms are quite a bit more complicated than just regular sounds,” Paulick said, adding that hearing-aid users and those with hearing loss often miss fine pieces of sound – classical music and conversation in noisy environments, for example. “By playing this type of waveform with very different sound elements we know now we can transmit that information successfully.” Bachman was pleased with the results as well. “It works. It does produce sound and it uses a lot less power than our original measurements were suggesting it might use,” he said. “As a proof of concept, it was fantastic; it was a big success.”

-- Anna Lynn Spitzer
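The signal chain Paulick describes (microphone, digital processing, electronic drive signal, actuator) lends itself to a small illustration. Below is a minimal sketch, assuming a simple per-frequency-band gain: boost the bands where a hypothetical audiogram shows loss, then synthesize the drive signal. The band edges, the 20 dB loss figure and the drive_signal helper are all illustrative assumptions, not details of the actual device.

```python
# A minimal sketch of per-frequency amplification: convert sound to a
# spectrum, boost the bands where the wearer's hearing is weakest, and
# synthesize the signal that would drive the actuator. The audiogram
# values below are hypothetical, not data from Paulick's device.
import numpy as np

def drive_signal(sound, sample_rate, audiogram):
    """audiogram: list of (low_hz, high_hz, loss_db) tuples."""
    spectrum = np.fft.rfft(sound)
    freqs = np.fft.rfftfreq(len(sound), d=1.0 / sample_rate)
    for low, high, loss_db in audiogram:
        band = (freqs >= low) & (freqs < high)
        spectrum[band] *= 10 ** (loss_db / 20.0)  # dB loss -> linear gain
    return np.fft.irfft(spectrum, n=len(sound))

# Hypothetical mild high-frequency loss: boost 2-8 kHz by 20 dB.
rate = 16000
t = np.arange(rate) / rate
tone = 0.1 * np.sin(2 * np.pi * 4000 * t)           # a quiet 4 kHz tone
boosted = drive_signal(tone, rate, [(2000, 8000, 20.0)])
print(round(boosted.max() / tone.max(), 1))          # ~10x larger drive amplitude
```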
Source: http://www.calit2.uci.edu/calit2-newsroom/itemdetail.aspx?cguid=0a3da938-588f-4d16-9bd0-4350047105ab
by Kathy Barbro

Prior to becoming an art teacher, I worked in the field of graphic design. That experience was immensely helpful: I mastered the tools I needed to create my own teaching aids, and it introduced me to a wealth of online visual libraries. A few stock-image sites, Getty Images among them, are the ones I search most often. I love to browse their pages of illustrations to find inspiration for new project ideas. I usually have a subject in mind and scroll through the thumbnail images looking for simple drawings that I think will translate well for young artists.

I found the above image (left) while searching for winter themes. My Water Color Resist Snowflakes project (right) was inspired by that first painting I saw at Getty Images. The overlapping pine trees in this Getty One illustration (left) led to my Abstract Tree Project (right). I just came across this image that I will use for an upcoming project. Students in the 3rd and 4th grade are ready to learn about perspective. Making a snowman with this exaggerated point of view would probably be fun to do while illustrating this principle.

Thank you, Jessica, for letting me sit in and post at The Art of Education today – I hope your readers find my tips helpful. Have a creative day everyone!

Thank you, Kathy, for posting such a neat concept. I know art teachers obtain ideas from a variety of places, and another one to have up our sleeves is fabulous! Be sure to visit Kathy’s blog, Art Projects for Kids! What is your #1 source for inspiration for your art projects?
Source: http://www.theartofed.com/2011/11/18/guest-post-my-inspiration/
3 Weeks Pregnant

When you are 3 weeks pregnant, you are officially pregnant -- an egg has been fertilized and has implanted in your womb (uterus). You may have multiple eggs fertilized at once, or an egg may divide, resulting in twins or triplets. At this stage of pregnancy, it is vital that you get enough calcium, iron, and folic acid to prevent birth defects.

At 3 weeks, however, you do not yet know that you are actually pregnant. Here comes the waiting game! To find out if you are pregnant, you must wait 14 days from ovulation … and sometimes it can be a very long 14 days. Toward the end of "week 3 of pregnancy," you may begin to experience what you believe to be normal midcycle symptoms. These symptoms, however, may also be early signs of pregnancy.
Source: http://pregnancy.emedtv.com/pregnancy-week-by-week/3-weeks-pregnant.html
POLS 123 The United Nations III • 5 Cr.

Researches a country in depth and prepares students for the National Model U.N. Conference in New York. Prerequisite: Permission of instructor.

After completing this class, students should be able to:
- Explain the history, structure, and major operations of the United Nations.
- Identify the geographic locations of all United Nations member states.
- Articulate informed opinions about issues of global importance.
- Write well-researched, well-articulated position papers from the perspective of a country other than the United States.
- Sponsor well-researched, well-articulated resolutions, reports, and/or treaties for college-level Model United Nations (MUN) conferences.
- Successfully employ the United Nations’ rules of procedure at college-level MUN conferences.
- Use diplomatic skills—such as public speaking, problem solving, consensus building, and conflict resolution—at college-level MUN conferences.
- Successfully represent a foreign country’s diplomatic position at college-level MUN conferences.
Source: http://www.bellevuecollege.edu/classes/All/POLS/123
Have you ever wondered what exactly is involved in dividing perennials? This report can give you an insight into everything you’ve ever wanted to know about the subject.

One of the things that makes perennials so attractive to home gardeners is the ability to divide and transplant them. Gardeners can use divisions made from their perennials to create new growth, share their plants with family members and friends, or even sell excess stock to nurseries, garden centers and flower stores.

There are basically two reasons why gardeners choose to divide their perennials. The first is to improve the health of the plants and encourage them to produce more flowers. In many cases, an older planting of perennials will become overgrown, and this can cause the bloom quantity of those perennials to drop considerably. The other reason gardeners divide perennials, of course, is to create new plantings. Perennials can be divided easily, and these new divisions can be used to create plantings in other parts of the garden, or even in another garden patch.

Even though many perennials can be divided easily, not all can. In general, division is most feasible on perennials that grow in clumps and those that have an expanding root mass. Perennials that grow from a single taproot, on the other hand, usually cannot be divided, because any attempt to divide the taproot can cause the plant to die. Perennials that grow from a taproot should instead be propagated using root cuttings or seeds.

The best time to divide spring and early summer perennials that can be divided is generally in the fall of the year. Perennials that bloom in the fall or late summer should be divided in the spring instead. To divide perennials, the ground around the plant should first be gently loosened with a spading fork. The clump should then be sliced with a garden trowel and divided into four parts. Those four sections should then be broken by hand to create sections four inches by four inches. Those small sections should immediately be transferred to a previously prepared plant bed.

It is important for the gardener to thoroughly wet the soil a day or two before the division is to take place. Watering thoroughly will make it easier to dig the clump. In addition, it is important to add compost or other organic material to the soil, for both the original plant and the new divisions. Doing so will give the plants the nutrition they need and help them thrive in their new location. The plants should also be watered thoroughly and fed with a good quality fertilizer once they have been planted.

Article By B. Keith Johnson.
Source: http://www.thegardenglove.com/dividing-your-perennials/
Hot periods in July and August are called the dog days of summer because they do more than make a person sweat. People’s personalities change. They become tired and irritable. Tempers shorten. Many prefer lounging like a dog rather than working.

Fish Have “Dog Days”

It appears that humans aren’t the only animals that can have dog days in hot temperatures. Research has found that fish can have them too. Scientists have been investigating fish personality for quite some time. Some fish are shy and reserved, not venturing far from their protective habitat, and ducking for cover rapidly when threatened by new stimuli. Other fish are bolder, swimming out into the open more and acting aggressively toward newly introduced objects and fish.

Water Temperature And Fish Personalities

One would assume that a shy fish is always shy, but research has found this isn’t always the case. Scientists working with two species of damselfish on Australia’s Great Barrier Reef have discovered that when water temperatures increase by as little as three degrees, fish personalities change. They became an average of six times more active, four times more aggressive and four times bolder. Shy fish in cooler water that cowered in shelters for up to ten minutes after being threatened by a stick would immediately emerge from the shelter in warmer water.

What Does This Mean?

So what does this mean in a time when oceans are becoming warmer? Are we going to hear news stories about mysterious fish attacks sinking ships? Scientists don’t think so. While many fish become bolder in warmer water, not all fish do. Scientists believe fish populations will adapt, and it will still be safe to sail the seas, even during the dog days of summer.

Read More: Small within-day increases in temperature affects boldness and alters personality in coral reef fish (The Royal Society)
Source: http://indianapublicmedia.org/amomentofscience/fish-personalities-why-water-temperature-matters/
Available to teachers only as part of the Teaching Lord of the Flies Teacher Pass

Teaching Lord of the Flies Teacher Pass includes:
- Assignments & Activities
- Reading Quizzes
- Current Events & Pop Culture articles
- Discussion & Essay Questions
- Challenges & Opportunities
- Related Readings in Literature & History

Sample of Reading Quizzes
Chapter 1: The Sound of the Shell

Questions
1. Before we learn his real name, what is Ralph called?
2. How did the boys come to be on the island?
3. Why does Piggy think it is unlikely that Ralph's father will come to save him?
4. What does Ralph do that draws the attention of the other boys on the island?
5. Why is the group of boys Piggy and Ralph find wearing black robes?
6. What medical condition does Piggy have?
7. At the end of the chapter, what does Jack contemplate that he can't quite make himself do?

Answers
1. The fair boy.
2. Their plane crashed.
3. Piggy says that the pilot told him an atomic bomb went off and everyone is dead.
4. He blows into a large white conch shell that he and Piggy find on the edge of the water.
5. They were part of a choir.
6. Asthma.
7. Jack thinks about killing the pig they find in the brush, but he can't do it.
Source: http://www.shmoop.com/lord-of-the-flies/reading-quizzes.html
A Newcastle researcher says peer pressure and a drinking culture continue to be the main causes of problem drinking among university students. More than 3,000 students identified as having hazardous drinking behaviour have participated in a web-based education program. The research, published in the Journal of the American Medical Association, found the program had minimal effect.

Lead author Professor Kypros Kypri says that while the findings are disappointing, they are further evidence that external factors have the greatest influence. "Using web-based or other technologies is likely to produce small effects at best," he said. "They may be part of an overall package but they shouldn't be a core element. In relation to health behaviour, I think we've come to a pretty good understanding that the environment in which people operate is a more important determinant of their behaviour than health messaging."

Professor Kypri says there is a misconception that social media or the web is the ideal way to reach young people with health messages. He says the study indicates that it would be misguided for policy makers to spend money on web-based campaigns. "It's unlikely to be effective based on these findings and a growing literature on this," he said. "If people consider the behaviour here, alcohol consumption is shaped by strong forces to do with promotion of products, the way young people see it consumed by older people, a lifetime of learning about how alcohol is consumed in society."
Source: http://mobile.abc.net.au/news/2014-03-27/drinking-culture-main-cause-of-problem-drinking3a-study/5348358?pfm=sm
Journal of Agricultural and Environmental Ethics 1 (4):257-273 (1988)

Scholars and environmentalists in the industrialized nations have repeatedly deplored the destruction of tropical forests as a byproduct of economic development. Their position is based upon scientific, economic, and ethical arguments. Proponents of economic development from the tropical nations recognize that its immediate benefits are enjoyed by their own relatively poor populations while the benefits of habitat preservation are enjoyed by the world as a whole. So far, few institutional mechanisms have been developed that can reconcile the competing perspectives. In addition to reviewing the arguments in favor of and against habitat preservation, this paper proposes some innovative institutions that can both satisfy developmental aspirations and account for the global benefits of habitat preservation.

Similar books and articles
- Praveen Kulshreshtha (2005). Business Ethics Versus Economic Incentives: Contemporary Issues and Dilemmas. Journal of Business Ethics 60 (4):393-410.
- Richard Lowell & Martin L. Greenwald (1992). Some Thoughts on the Preservation of Tropical Forests. Inquiry 9 (1):14-16.
- Eric Katz (1979). Utilitarianism and Preservation. Environmental Ethics 1 (4):357-364.
- Eric Katz & Lauren Oechsli (1993). Moving Beyond Anthropocentrism: Environmental Ethics, Development, and the Amazon. Environmental Ethics 15 (1):49-59.
- K. S. Shrader-Frechette & E. D. McCoy (1994). Biodiversity, Biological Uncertainty, and Setting Conservation Priorities. Biology and Philosophy 9 (2):167-195.
- Nancy Stepan (2001). Picturing Tropical Nature. Cornell University Press.
- Alastair S. Gunn (1994). Environmental Ethics and Tropical Rain Forests. Environmental Ethics 16 (1):21-40.
- Ben A. Minteer & Elizabeth A. Corley (2007). Conservation or Preservation? A Qualitative Study of the Conceptual Foundations of Natural Resource Management. Journal of Agricultural and Environmental Ethics 20 (4):307-333.
Source: http://philpapers.org/rec/KATEIF
Though our long-term memory begins by school age, many of us understand the subjective nature of information recall — including anyone who’s ever been involved in either a traffic accident or romantic relationship. Now, researchers at New York University say they’ve discovered how the brain organizes the sequence of memories, which it deposits into long-term storage as discrete “bits” of memory. In the new issue of Neuron, psychologist Lila Davachi compares these memories to beads on a necklace, offering some insight into the temporal nature of how the brain stores memory.

"Our memories are known to be 'altered' versions of reality, and how time is altered has not been well understood," Davachi, an associate professor, said in a statement. "These findings pinpoint the brain activity that explains why we remember some events as having occurred closer together in time and others further apart."

Though our experience in life is continuous, our memories are stored like “beads on a string,” one after another in chronological sequence — with one important caveat. For some unknown reason, the neurological process fails to record an accurate sense of timing between events, spacing some memories closer in time while others are spaced further apart. "Temporal information is a key organizing principle of memory, so it's important to understand where this organization comes from," Davachi said. Such understanding may lead to, not necessarily improved treatments, but a greater understanding of neurological conditions such as schizophrenia, whose pathology hampers the brain’s ability to record memories in their proper sequential order.

In the study, Davachi conducted brain-imaging scans on participants while they worked through memory exercises. They were shown images of faces and objects along with a scenic photograph. Participants were then asked to imagine those faces and objects in another scene, a technique intended to force the brain to encode new memories in the brain’s hippocampus, a region responsible for memory. To assess how the brain stored such memory, Davachi later showed participants two stimuli, either the object or face from the study’s first phase. They were then asked to rate the temporal distance between the two memories — the stimulus and the scene — as very close, close, far, or very far.

In the end, analysis of the functional magnetic resonance imaging tests showed a link between activity in the hippocampus and the temporal distance at which the memories were spaced. When hippocampal activity was greater during a session, memories were recalled as spaced closer together; when it was lower, they were recalled as further apart. "Clearly, the hippocampus is vital in determining how we recall the temporal distances between the many memories we hold, and similarity in the brain across time results in greater temporal proximity of those memories," Davachi said.

The study was funded by the National Institute of Mental Health.
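To make the shape of that analysis concrete, here is a toy sketch relating per-trial brain activity to judged temporal distance. Every value is fabricated for illustration; the study used fMRI data and its own statistical pipeline, not this toy correlation.

```python
# A toy version of the analysis shape described above: relate per-trial
# hippocampal activity to how far apart in time two items were later
# judged to be. All numbers are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40
hippocampal_activity = rng.normal(size=n_trials)

# Reported finding: higher activity during encoding -> items judged closer.
# Ratings: 1 = "very close" ... 4 = "very far", with noise.
ratings = np.clip(
    np.round(2.5 - hippocampal_activity + rng.normal(scale=0.5, size=n_trials)),
    1, 4,
)

r = np.corrcoef(hippocampal_activity, ratings)[0, 1]
print(f"activity vs. judged distance: r = {r:.2f}")  # negative, as reported
```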
Source: http://www.medicaldaily.com/memory-system-resembles-beads-string-how-brain-sequences-events-over-time-270661
Artists of all Ages?

Tiny imprints in ancient rocks deep underground have revealed an extraordinary unknown story of prehistoric children's art.

Based on the article by Ben Hoyle, Arts Correspondent, The Times, September 30 2011

Typically, children are a footnote at best in studies of early mankind, but research that will be presented this weekend pushes the study of young people's creativity into new territory. Working in the enormous cave complex at Rouffignac, in the Dordogne region of France, Jessica Cooney, an archaeologist at the University of Cambridge, and Leslie Van Gelder of Walden University, Minnesota, have been able to show that girls and boys as young as two were involved in decorating caves at least 13,000 years ago.

Their methodology tells them not only how old the child artists were but in some cases what sex they were and whether they were working alone or with help. They have deduced that some of the art could only have been made by a child of two or three sitting on an adult's shoulders and making marks guided by an adult's hand, information that adds a new dimension to our knowledge of prehistoric man.

The caves have been known about since the 16th century, but it was not until 1956 that experts realised that some of the art on the walls was prehistoric, and not until 2006 that children were shown to have been involved. Now new fieldwork has established how old they were. Although Rouffignac is celebrated for its rock paintings of mammoths, rhinoceroses and horses, Ms Cooney and Dr Van Gelder's work has focused on less spectacular markings known as finger flutings, which make up about 80 per cent of all prehistoric art and which are found in caves in France, Spain, Australia and Papua New Guinea.

'It's like finger painting in clay,' Ms Cooney said yesterday. 'A lot of the walls in caves are malleable, covered in layers of clay or something called moon milk, which is a precipitate of limestone. People would put their fingers into this and move either their fingers or their whole bodies.' The impressions they left behind take the form of lines, circles and in some cases rudimentary animals. The flutings are sometimes only millimetres deep but are loaded with information about the individual who left them.

Dr Van Gelder and her late husband, Dr Kevin Sharpe, developed a methodology, based on the analysis of thousands of living people, that enabled them to determine the age of a child aged seven or younger from the width of the middle three fingers. 'Where there is a clear finger imprint on the rock, the shape of the top edges of the fingers can tell, to 80 per cent accuracy, the sex of the artist. We have found marks by children aged between three and seven years old and we have been able to identify four individual children by matching up their marks,' Ms Cooney said.

They include a girl who is 'like a typical five-year-old who just wants to get her hands dirty. She flutes everywhere and does a whole variety of different lines.' One chamber has so many child flutings that Ms Cooney thinks it may have been 'a playpen of sorts'. Why the children made the marks is less clear: was it a form of play, or did it have a ritual significance? Ms Cooney suspects it was both, with different meanings attached to the marks in different caves. She is presenting her findings on Sunday at the Society for the Study of Childhood in the Past conference in Cambridge.
Bradshaw Foundation comments: Scientists know that the caves were not inhabited by our Palaeolithic ancestors, but they were frequented by them for a number of reasons. It is logical to assume that children would have been included in many, perhaps all, of the visits.

In Chauvet there are footprints of a child in the Gallery of the Crosshatching leading to the Skull Chamber. The prints are those of a pre-adolescent about 1.3m tall. The low length/width ratio suggests it is a boy. The child regularly wiped his torch - to maximise the burning efficiency and brightness - on the walls and ground, and the charcoal marks have been dated to 26,000 years ago. A cast was made of one of the footprints, revealing that the child's foot was completely imprinted as it moved, which shows he was walking slowly and carefully on a homogenous and soft floor. Was the child on his own?

Niaux Cave also has evidence of children. In 1949 a series of footprints left by two children of about 9 to 12 years old was discovered in a rock cavity. In 1970, a number of footprints were discovered in the section of the cave since named Reseau Clastres. Studied by Dr Jean Clottes and Robert Simonnet, they calculated that the prints belonged to three children between the ages of 8 and 12; the children were holding hands, and the central figure - older than the outer two - was leading them.

In Cosquer, handprints of children have been observed in the mondmilch - the white, altered, soft surface of the limestone wall - on relatively high walls, at more than eight feet from the ground. This means that children not only had access to the deepest parts of the cave, but also that they were held up at arm's length or on the shoulders of adults so that they could imprint their hands high up on the surface of the walls. This cannot be construed as a random gesture but as a very deliberate action.

Given that Palaeolithic art represented an overall belief system which persisted with little change for over twenty millennia, ending only when the Ice Age finally drew to a close, the passing on of artistic knowledge and skill - to accurately and appropriately convey this belief system - would have been taken very seriously. Apprenticeship would have involved a 'hands on' approach.
Source: http://www.bradshawfoundation.com/news/cave_art_paintings.php?id=Artists-of-all-Ages-
The Origins of Life
June 7, 2011

How did life get here? When someone states that they do not believe in God, often one of the first questions in response is, “Then how did life get here?” Of course, “God did it” is not a good explanation for, well, much of anything, because it does not actually provide any details about the process it claims to explain.[1] Regardless of this, however, it is still a valid question to ask: Without invoking a God, is there a reasonable explanation for how life arose from non-life? This is where the field of abiogenesis comes in. My goal with this article is to provide a general overview, in simple terms, of the theories and models that scientists have created to explain the development of life from non-life.[2] I will try to use a minimum of jargon and technical terminology, although some, of course, cannot be avoided. First, the current biological system will be briefly described, and then two major competing models in the field will be presented.

The Current Creature

As you are no doubt aware, all present-day life on earth is comprised of cells, which store information in DNA (deoxyribonucleic acid). But this is far from the whole picture. DNA is made up of long strands of nucleotides known as “base pairs”, which could be compared to letters in an alphabet. In order to use this information in the DNA molecule, it must be transcribed and translated. This is done with the help of enzymes (proteins that help to facilitate chemical reactions). These enzymes break apart the double helix of the DNA, and then go through each base pair, matching it up to a corresponding base pair which is attached to an RNA (ribonucleic acid) molecule. The major difference between DNA and RNA is that DNA is double-stranded (like a ladder), while RNA is single-stranded (like a broken ladder, with one of the vertical supports missing).

After the base pairs are copied to RNA, this RNA then moves to another area of the cell, where more enzymes “read” the different base pairs and translate them into amino acids.[3] Certain sets of base pairs correspond to certain amino acids, so the enzymes assemble these amino acids into a long chain. These chains (if formed correctly) are known as proteins. And proteins are the compounds that actually do the work within the cell. Some of them are even responsible for transcribing and translating the DNA in the process that I just described!

The Dividing Line

This outlines the major question to be resolved. DNA/RNA molecules contain the information needed to assemble proteins, but proteins are needed to read that information. Like the chicken or the egg, it becomes a question of which came first: DNA/RNA or proteins? And that is essentially where researchers are divided. Some believe that RNA came first (the RNA world hypothesis), whereas others believe that proteins arrived first (the metabolism-first model). But with either model, it is important to note that both RNA and proteins are still present and play vital roles in today’s more complex systems. Evolution has had about four billion years to improve on the systems with which early life started off.

Because of these two major models, there is no consensus in the field as of yet. Abiogenesis is a fairly young field of study, which started around the time of the Miller–Urey experiments in 1952. In comparison to, say, physics, which had its origins in Aristotle (300s BCE) and was developed by others such as Galileo (early 1600s) and Newton (late 1600s), abiogenesis is virtually in its infancy. So it is not unexpected that biologists have yet to come to a firm conclusion on the processes that drove the beginning of life. It is also common for relatively new fields of study to “cast the nets wide” and explore many possibilities. In other words, a lack of consensus is not detrimental to the field. On the contrary, it means the field is healthy and active.
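Before turning to the two models, here is a toy illustration of the translation step described under "The Current Creature": triplets of RNA bases (codons) map to amino acids. The codon assignments below are real, but the function is a cartoon of the ribosome's job reduced to a table lookup, not a model of the chemistry.

```python
# The "reading" step sketched as a lookup: codons (base triplets) map to
# amino acids. Only a handful of real codon assignments are included.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UGG": "Trp", "UAA": "STOP",
}

def translate(rna: str) -> list[str]:
    protein = []
    for i in range(0, len(rna) - 2, 3):          # step through codons
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCGCUUAA"))  # ['Met', 'Phe', 'Gly', 'Ala']
```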
Much Ado about Metabolism

The first model that will be discussed is the model which posits that proteins and metabolic processes came first. Metabolism deals with the usage of energy, and so researchers who advocate the “metabolism first” model argue that metabolic processes must have been present in order to produce organic molecules such as RNA. Central to this model is the reverse Krebs cycle, a set of chemical reactions which are used by some bacteria today to produce organic molecules from carbon dioxide and water. These chemicals were present on the early earth, primarily near deep-sea hydrothermal vents.

However, the biggest obstacle for this model is that the Krebs cycle has 10 steps to it, and it is difficult to see how such a process could come about without some sort of genetic system to store information and provide the enzymes necessary to keep the cycle going. Zhang and Martin (2006) have found that three of these steps can be driven by zinc sulfide particles, which were present in early Earth waters.[4] They suggest the possibility that a more complex mineral compound could drive the remainder of the reactions. If this is the case, given an environment where these minerals are plentiful, this could drive the production of complex carbon molecules such as amino acids and nucleotides, which could then be used to create proteins and RNA as the processes became more organized. Some research has demonstrated the ability of some single amino acids to catalyze reactions, which opens the possibility that the reverse Krebs cycle could have been driven to create amino acids for the purposes of facilitating better efficiencies in metabolic processes. In this case, the development of RNA might have been a beneficial by-product of the process.

Despite the work of many researchers on this model, from what I gather (and I am, admittedly, not an expert), it seems that this model is less widely accepted than the next one, which I am about to share with you. There is some evidence that metabolic processes could have arisen without any genetic input, but it seems more likely to be the case that small chains of reactions were eventually connected into larger cycles with the help of genetic information.

The Run-Down on RNA

This brings us to the second major model of abiogenesis: the RNA world model. This suggests that RNA was the first to form, and only later did metabolic cycles, proteins, and enzymes appear. The evidence for this is, in my view, more substantial. To begin with, the work of Miller and Urey demonstrated that, given conditions believed to be present on the early earth (methane, hydrogen, ammonia, and water), many amino acids could form spontaneously. They were able to produce 13 of the 22 amino acids that are used to make proteins in living cells, although a more recent analysis of the sealed vials from the original experiments has found that well over 20 were actually produced. Although this is an impressive result, there is some debate over whether the early atmosphere on Earth differed somewhat from how Miller and Urey believed it to be.
Some scientists argue that there would have been large amounts of oxygen (which essentially prevents the creation of amino acids), while others argue that the atmosphere had large quantities of hydrogen (which would facilitate the reactions). However, these are highly technical discussions that are beyond the scope of this article. Either way, Miller and Urey demonstrated that there are conditions under which virtually all the amino acids necessary for life can arise.

In addition to this evidence, the unique properties of RNA make it an excellent candidate for the precursor to life. Although its primary function today is to carry genetic information, much like DNA does, it can also operate as a catalyst for reactions, much like protein enzymes do. Indeed, even in modern cells, ribozymes (catalytic strands of RNA) play a role in synthesizing proteins. Because of this dual property, it is at least possible that RNA could be self-replicating, i.e. catalyze its own replication. And indeed, RNA polymerase is an example of a modern ribozyme that is capable of replicating parts of its own strand.

Of course, the big question is how such an RNA molecule could form in the first place. For many years, scientists could not figure out a chemical reaction that would fuse the nucleobases (the “rungs” of the ladder) to the chain of sugars that make up the base. However, Powner, Gerland, and Sutherland (2009) recently discovered a set of reactions that would allow these components to fuse, which works for two of the four nucleobases used in RNA.[5] It is possible that in the next few years, similar methods will uncover a way that works for the other two.

[Figure: Synthesis of pyrimidine nucleotides. The previously assumed reactions follow the blue arrows, but they fail to fuse together where the red X indicates; the new successful synthesis follows the green arrows.]

Let’s presume for the moment that a self-replicating RNA strand was at one point able to be produced. If this was the case, the build-up of complexity from that point forward is simple in comparison. Due to their properties, fatty acids (the main component of the modern cell membrane) can form “bubbles” spontaneously. Thus, it is plausible that a primitive fatty acid membrane could have surrounded the first self-replicating RNA strands. This would protect them somewhat from destruction, and also keep them in close proximity to each other to continue replication.

These conditions would allow natural selection to occur. In order to function, natural selection needs at least four conditions to be met: a (1) population of organisms capable of (2) self-replication, with (3) variation in that population that leads to (4) differential survival. If these conditions are met, natural selection can and will occur. Although it is a stretch to call RNA molecules “organisms”, they do fulfill the conditions. This group of early, self-replicating RNA strands would fulfill the first two conditions to start off. In addition, with no checks or balances to the replication process such as are found in the modern cell, mutations were bound to occur with relative frequency, which would lead to variation in the population. And inevitably, these mutations would lead to differential survival, in the sense that some mutations would lead to an inability to continue self-replication. Thus, natural selection would kick in and begin to select for RNA strands that could replicate with high fidelity, quickly, and more efficiently.
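Those four conditions are easy to demonstrate in miniature. The toy simulation below applies them to a population of random "strands": copying is imperfect, and strands closer to an arbitrary target sequence leave more copies. It is purely illustrative; real prebiotic fitness landscapes were nothing this tidy.

```python
# A toy demonstration of the four conditions: a population (1) of strands
# that copy themselves (2) with occasional mutations (3), where similarity
# to an arbitrary "replicates well" target decides who leaves copies (4).
import random

random.seed(1)
TARGET = "GGGGGGGGGG"  # stands in for whatever sequence replicates best

def replicate(strand, error_rate=0.05):
    # Imperfect copying: each base has a small chance of mutating.
    return "".join(
        c if random.random() > error_rate else random.choice("ACGU")
        for c in strand
    )

def fitness(strand):
    # Similarity to TARGET stands in for "replicates quickly and accurately".
    return sum(a == b for a, b in zip(strand, TARGET))

population = ["".join(random.choice("ACGU") for _ in range(10)) for _ in range(50)]
for _ in range(100):
    survivors = sorted(population, key=fitness, reverse=True)[:25]  # differential survival
    population = [replicate(p) for p in survivors for _ in range(2)]  # replication + variation

best = max(population, key=fitness)
print(best, fitness(best))  # typically at or near a perfect match to TARGET
```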
Mistakes in the replicating process might often be “fatal”, but many mutations might simply be neutral, or even add extra benefit. And from there, complexity could develop. Protein synthesis may have developed over time, as it would lead to increased efficiencies in further replication. (Proteins are much better catalysts than RNA.) Wolf and Koonin (2007) have outlined a stepwise model for the origin of the protein translation system (which reads RNA and assembles proteins), each step of which would create a distinct advantage for that organism.[6] And eventually, down the road, when enough complexity had been achieved, similar benefits would lead to the usage of DNA instead of RNA, since the double-stranded structure of DNA is stronger than the single strand of RNA.

What I have written above is just the briefest of summaries regarding the major theories that scientists have developed to describe abiogenesis. These theories rely on much more detailed explanations than I could possibly convey in this article. But the message to take away is that, although there is still certainly much left to discover about the nature and development of early life on Earth, the explanations provided are plausible. Like a detective solving a murder mystery, these scientists are trying to piece together, bit by bit, a coherent narrative based on the clues. We see the yet-unfinished product of four billion years of evolutionary pressures, but the challenge is to reduce life to its most basic elements. And from what scientists have discovered so far, it seems that RNA has the characteristics necessary to be the most fundamental unit of life. Hopefully further research will uncover the details about how it was first formed and the way in which the complexity developed into the beautiful engine that drives us all today.

Notes
1. See ‘God Did It’ is a Terrible Explanation for more details.
2. As a non-expert in the field myself, I have drawn much of this information from summaries of the research. Readers who would like more information are highly encouraged to comb through the well-referenced source material.
3. I use quotation marks around “read” because these enzymes are anything but literate. They are not reading information like you are reading this article. They simply have certain chemical properties which react to certain sets of base pairs. It is all very much dependent on the physical and chemical properties of the RNA strand and the enzyme.
4. Zhang, X.V., & Martin, S.T. (2006). Driving parts of Krebs cycle in reverse through mineral photochemistry. Journal of the American Chemical Society, 128(5), 16032-16033.
5. Powner, M.W., Gerland, B., & Sutherland, J.D. (2009). Synthesis of activated pyrimidine ribonucleotides in prebiotically plausible conditions. Nature, 459(7244), 239-242.
6. Wolf, Y.I., & Koonin, E.V. (2007). On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization. Biology Direct, 2(14).

About the Author: Jeff Hughes is a recent graduate of Honours Psychology at the University of Waterloo (UW). He is the current Vice President of the Atheists, Agnostics, and Freethinkers of Waterloo student group. In the fall, he will be starting graduate studies in Social Psychology at UW.
Source: http://www.centerforinquiry.net/oncampus/blog/entry/Jeff_Hughes_the_origins_of_life/
Thomas Reid (April 26, 1710 – October 7, 1796), Scottish philosopher, and a contemporary of David Hume, was the founder of the Scottish School of Common Sense, and played an integral role in the Scottish Enlightenment. The early part of his life was spent in Aberdeen, Scotland, where he created the "Wise Club" (a literary-philosophical association) and graduated from the University of Aberdeen. He was given a professorship at King's College Aberdeen in 1752, where he wrote An Inquiry Into the Human Mind on the Principles of Common Sense (published in 1764). Shortly afterward he was given a prestigious professorship at the University of Glasgow when he was called to replace Adam Smith. He resigned from this position in 1781.

Reid believed that common sense (in a special philosophical sense) is, or at least should be, at the foundation of all philosophical inquiry. He disagreed with Hume and George Berkeley, who asserted that humans do not experience matter or mind as either sensations or ideas. Reid claimed that common sense tells us that there is matter and mind. This common sense is the result of the way that we were made by God. In his day and for some years into the 19th century, he was regarded as more important than David Hume. He advocated direct realism, or common sense realism, and argued strongly against the Theory of Ideas advocated by John Locke, René Descartes, and (in varying forms) nearly all Early Modern philosophers who came after them. He had a great admiration for Hume, and asked him to correct the first manuscript of his (Reid's) Inquiry.

His theory of knowledge had a strong influence on his theory of morals. He thought epistemology was an introductory part of practical ethics: When we are confirmed in our common beliefs by philosophy, all we have to do is to act according to them, because we know what is right. His moral philosophy is reminiscent of the Latin Stoicism mediated by the Scholastics, St. Thomas Aquinas and the Christian way of life. He often quotes Cicero, from whom he adopted the term "sensus communis."

His reputation waned after attacks on the Scottish School of Common Sense by Immanuel Kant and John Stuart Mill, but his was the philosophy taught in the colleges of North America during the 19th century, and it was championed by Victor Cousin, a French philosopher. Justus Buchler showed that Reid was an important influence on the American philosopher C.S. Peirce, who shared Reid's concern to revalue common sense and whose work links Reid to pragmatism. To Peirce, the closest we can get to truth in this world is a consensus of millions that something is so. Common sense is socially constructed truth, open to verification much like scientific method, and constantly evolving as evidence, perception, and practice warrant. Reid's reputation has revived in the wake of the advocacy of common sense as a philosophical method or criterion by G. E. Moore early in the 20th century, and more recently due to the attention given to Reid by contemporary philosophers such as William Alston and Alvin Plantinga.
He wrote a number of important philosophical works, including Inquiry into the Human Mind on the Principles of Common Sense (1764, Glasgow & London), Essays on the Intellectual Powers of Man (1785) and Essays on the Active Powers of Man (1788).

See also
- Philosophy of perception

Further reading
- Stephen Barker & Tom Beauchamp, eds., Thomas Reid: Critical Interpretations (1976).

External links
- The Aberdeen University Reid Project
- Stanford Encyclopedia of Philosophy entry on Reid
- Inquiry into the Human Mind, Essays on the Active Powers of Man (1, 2 and 4), and Essays on the Intellectual Powers of Man
- Reid @ Google Books
Source: http://psychology.wikia.com/wiki/Thomas_Reid
Mushrooms are not vegetables, even though they are often used as such and tossed into salads. Instead, mushrooms are classified as fungi. Many types of wild mushrooms are not safe to eat due to toxins or hallucinogens, but about a half dozen varieties commonly found at Asian grocery stores are safe, tasty and nutritious. However, raw mushrooms are difficult to fully digest, especially by some people with gastrointestinal conditions, so cook them to make digestion easier and reduce the chances of any side effects.

Composition of Mushrooms

Instead of the cellulose-rich walls found in plants and vegetables, the cell walls of mushrooms are made of mycochitin -- a dense, fibrous compound that’s difficult for most people to properly digest. The digestibility of mushrooms greatly improves with cooking because the heat breaks down the fungal cell walls. In addition to better digestion, cooked mushrooms are more nutritious because more nutrients are able to be absorbed by your body. Mushrooms are good sources of protein, dietary fiber, B vitamins, vitamin D, phosphorus, potassium, copper and selenium.

Methods of Cooking

Mushrooms can be cooked in a variety of ways to improve their digestibility. Steaming is probably the best method because it preserves heat-sensitive nutrients such as vitamin C. Other efficient cooking methods include sautéing, panfrying, baking and deep-frying. Larger types of culinary mushrooms, such as maitake and shiitake, are quite meaty; you can easily cook them on a grill. Thoroughly cooking mushrooms will also destroy any compounds that could lead to mild stomach irritation and reduce the bitterness of some varieties.

Some Medicinal Properties

Digesting mushrooms more effectively also allows you to better absorb compounds that have medicinal properties. For example, maitake are delicious Japanese mushrooms containing compounds that have antiviral and immune-boosting properties. They may also help control hypertension and high blood sugar levels. Slender, white mushrooms called enoki require only brief cooking and have significant anticancer and immune-boosting effects. Cordyceps are mild-tasting Chinese mushrooms used in many Asian countries to restore health, increase energy levels and enhance endurance.

Never eat mushrooms that you come across in the wild, because some species contain toxic compounds that trigger serious gastrointestinal symptoms or even death. Wild mushrooms are sometimes sold at farmers markets or used at gourmet restaurants, but ensure that the seller is familiar with their effects. If in doubt, stick to common varieties sold at grocery stores. Some mushrooms have a long history of medicinal use, such as reishi, but are not meant for culinary purposes due to their disagreeable taste. Instead, reishi and some types of cordyceps are best consumed as supplemental capsules, tinctures or herbal teas.
Source: http://woman.thenest.com/mushrooms-easy-digestion-3932.html
Careers in Math, Philosophy and Science

Math, Philosophy and Science courses make learning math, algebra and theme park engineering more interesting and fun for students. The Everyday Math course provides a comprehensive understanding of everyday math, including calculating interest rates, solving fractions and doing multiplication. After completing the Algebra course, you will be able to work with symbols, variables and set elements. In the Theme Park Engineering course you’ll learn about architecture, ride control, show control, audio, video, acoustics, lighting, mechanics, hydraulics, and figure animation. These courses strengthen students’ ability to apply easy-to-understand mathematical and algebraic techniques to everyday life. The lessons are packed with innovative and creative problem-solving strategies, and the courses help teachers and students develop mathematical patterns and relationships.
Source: https://www.expertrating.com/certifications/InstructorLed/Math-Philosophy-Science/Math-Philosophy-Science.asp
NIAID News Release
National Black HIV/AIDS Awareness Day
February 7, 2003

Today, the National Institute of Allergy and Infectious Diseases (NIAID) is proud to stand with hundreds of national, regional and local HIV/AIDS groups in observing the third annual National Black HIV/AIDS Awareness Day. This critical effort to mobilize community-based efforts to educate African Americans about the devastation of HIV/AIDS, encourage those most at risk of acquiring HIV to be tested, and provide support to those living with the disease will undoubtedly help save lives. It comes amid a welcome and unprecedented movement to address the HIV/AIDS crisis among African Americans. This month alone, virtually every Black media outlet in the United States will run articles on the impact of AIDS on African Americans. Many national organizations have made HIV/AIDS a priority issue. These efforts are necessary and impressive -- and need to be sustained and expanded.

There is no time to waste. AIDS is now the leading cause of death of African-American men aged 25 to 44 in the United States. Among African-American women in the same age group, AIDS claims more lives than diabetes or cancer. Although African Americans make up only 13 percent of the U.S. population, they account for more than half, or more than 20,000, of all new HIV infections each year. Around the world, more than 14,000 people are infected with HIV every day, more than two-thirds of them in sub-Saharan Africa.

Despite these grim statistics, there is hope and progress in our fight against the HIV/AIDS pandemic, in the United States and abroad. President Bush recently announced his Emergency Plan for AIDS Relief. His plan commits $15 billion over 5 years, starting with $2 billion in fiscal year 2004, for the prevention, treatment and care of HIV/AIDS in 14 of the hardest-hit countries in sub-Saharan Africa and the Caribbean. Meanwhile, to improve the care of the approximately 900,000 Americans infected with HIV, and to stop the spread of HIV in this country, the President also has proposed $16 billion in fiscal year 2004 for domestic HIV treatment, prevention and care, a 7 percent increase over last year. This figure boosts AIDS research funding by $93 million, and support for AIDS drug assistance programs by $100 million.

Much progress has been made against HIV/AIDS. In this country, prevention efforts have reduced the number of new HIV infections from approximately 150,000 per year to around 40,000 per year. In recent years, we have seen the positive impact of advances in HIV therapies for so many living with HIV/AIDS, and the promise these medicines offer for those in the developing world. But prevention and treatment are only part of the answer. We must do all we can to find a vaccine to prevent HIV infection. With NIAID funding -- and in partnership with industry, academia and the community -- more than 20 promising HIV vaccine candidates are in clinical trials.

It will be through efforts such as National Black HIV/AIDS Awareness Day that we will be able to educate the community about the advances and opportunities for progress in vaccine research, as well as in HIV prevention and treatments. National Black HIV/AIDS Awareness Day, and the work of the organizations and individuals engaged in activities to support it, is a critical effort in addressing the AIDS crisis among African Americans in this country.
I commend these efforts and offer our support to all those involved, and thank them for their leadership in promoting a healthier America. Dr. Fauci is the director of the National Institute of Allergy and Infectious Diseases at the National Institutes of Health in Bethesda, Maryland. This article was provided by U.S. National Institute of Allergy and Infectious Diseases.
Late Archaic or Classical Period, early 5th century B.C.
Highlights: Classical Art (MFA), p. 142
Overall: 1.2 cm (1/2 in.); overall weight (with 03.765): 4.31 gm (0.01 lb.)
Medium or Technique: gold
On view: Anne and Blake Ireland Gallery (Gallery 210A)

"A baule" or spool earrings consist of a band of sheet metal rounded at one end and rolled into a cylinder. A hook at one end of the earring passed either through the earlobe or through a suspension hoop in the ear. The upper zone of decoration, which covered the earlobe, is shaped like a pediment, with a palmette drawn in beaded wire and terminating in spirals. Granules of many different sizes fill the space at the base of the palmette. A thin strip of sheet gold forms an undulating, ribbon-like border between the two zones.

The main decorative zone is divided into eighteen squares, all separated by a ribbon border. The squares alternate between a hemisphere of plain sheet gold and one covered with granules, creating a pattern of matte/textured and polished/smooth surfaces. The sheet-gold hemispheres are each marked with a pyramid of three granules forming a central bead. An embellished disk closes one end of the spool. The disk is decorated with a border of flat open circles (beaded filigree wire); beaded filigree also embellishes the open area in the center. The earring forms a pair with 03.765.

Provenance: By date unknown: with Edward Perry Warren (according to Warren's records: bought in Rome [with 03.765]); March 24, 1903: purchased by the MFA from Edward Perry Warren.
Credit line: Francis Bartlett Donation of 1900
Soybeans have been an important commodity in Mississippi for more than 50 years, but recent advances have pushed the crop's value above $1 billion. Mississippi soybeans had a value of $267 million in 2006, $1.27 billion in 2012 and $1.17 billion in 2013. Prices have been high for the past several years, but state producers also put more effort into management and increased yields to a record average of 45 bushels an acre in 2012 and 2013.

"Soybean prices have been favorable in recent years, and combined with better management to produce higher yields, the crop has become an even more significant contributor to agricultural value in the state," said Greg Bohach, vice president of the Mississippi State University Division of Agriculture, Forestry and Veterinary Medicine. "In the past decade, yields have steadily increased. The acreage devoted to soybeans has stayed fairly constant at about 2 million acres for the last six years."

In 1961, Mississippi soybean producers harvested 1 million acres for the first time. Their fields yielded 22.5 bushels per acre, and the crop was valued at about $60 million. Over the following decades, the crop gained ground and improved yields, due in part to MSU's research and Extension efforts. In 2006, producers harvested 1.65 million acres of soybeans at an average yield of 26 bushels an acre. Just seven years later, soybean producers set a record average yield of 45 bushels an acre, harvested from 1.99 million acres.

Brian Williams, Extension agricultural economist, said price, yield and acreage drove the increase. "Since 2003, Mississippi producers have broken the soybean yield record three times and tied it a fourth time," Williams said. "Price has also increased significantly over the past 10 years. In 2001, soybean prices were under $5 per bushel, and they are trading at over $14 a bushel today."

Williams said soybean yields of the last two years are almost 20 bushels an acre better than they were in 2006. While that was a particularly bad year for production, yields have improved consistently since the early 2000s. "There is no doubt that Mississippi growers have improved their soybean production," he said.

Soybean is one of the least expensive crops to grow, and in the past it was planted on some of the state's less productive farmland with limited management. The MSU agricultural division began extensive efforts to improve soybean production in the state, and the results have been dramatic. "Both MSU and the Agricultural Research Service of the U.S. Department of Agriculture began substantial support of the soybean enterprise in the 1960s," Bohach said. The MSU Extension Service and the Mississippi Agricultural and Forestry Experiment Station, known as MAFES, began investing significant resources in the crop at that time.

"Dr. Edgar Hartwig at Stoneville was developing new soybean varieties, such as Lee, Bragg, Lamar, Sharkey and Forrest. These proved extremely beneficial for the Southern soybean industry and helped earn Dr. Hartwig a place in the Agricultural Research Service Hall of Fame," Bohach said.

MAFES began work on soybean varieties, fertility, and weed and insect control. In 1972, the Mississippi Legislature founded the Soybean Promotion Board, which has helped fund research and Extension activities that contributed significantly to increasing soybean yields. The Extension Service, which is marking its centennial anniversary in May, continues its mission to deliver research-proven information to the people of the state.

"Extension and research activities such as the SMART program in the past and the PHAUCET irrigation tool have been major factors in helping our producers harvest a crop valued at more than $1 billion for the first time ever in 2012 and again in 2013," Bohach said. SMART stands for Soybean Management by the Application of Research Technology, while PHAUCET stands for the Pipe Hole and Universal Crown Evaluation Tool.

Danny Murphy, owner of Murphy Farms in Madison County and chairman of the executive committee of the American Soybean Association, said soybeans have become a competitive crop today. "You have to go back 20 years or more, to when Alan Blaine was the Extension soybean specialist and Larry Heatherly at Stoneville helped pioneer the early-production system we use now," Murphy said. "Their work elevated the soybean crop to where it was treated as important and not a stepchild, as it may have been over its past history."
Part 3 of this series introduced the challenges of preserving digital file formats and offered practical solutions. This part will guide you in getting started with preserving your digital family history records.

Getting Started with Digital Preservation

Assuming you already have a personal computer, you can get started with digital preservation by purchasing a digitizing device and archival storage media, and by obtaining the software needed to create archival file formats such as PDF/A and JPEG 2000.

The digitizing device can be a scanner or a digital camera; perhaps you will want both. Scanners are easier to use but not as versatile as digital cameras. When using a scanner, always scan at the highest resolution for which you can afford the archival storage capacity. At the very least, scan at 300 dpi (dots per inch) if you never intend to print larger than the original record. A resolution of 1200 dpi or greater is recommended if you think you will ever want to print a larger version. The scanner's software will let you set the desired dpi.

When using a digital camera to digitize a physical record, make sure you have natural, flat, uniform lighting to avoid shadows and reflections. A tripod is recommended, especially to keep the lens parallel to the record being photographed (camera lenses magnify skew if they are not parallel). Most digital cameras allow you to choose a resolution setting, so always choose the highest available. Then, when you load your pictures onto your computer, they will have maximum resolution when serving as source files for the archival versions you create. Although these camera pictures will take up significant space on your computer's hard disc, you can delete them once you have created archival files and written them to archive-grade storage media.

If you have analog audio recordings that you want to preserve digitally, you can purchase an audio digitizer. These USB devices can digitize virtually any type of analog audio signal so the recording can be archived in the WAV format. To digitize your analog family movies, a professional service is recommended to minimize cost and maximize quality. The same applies to 35mm slides, although slide-scanning attachments may be available for your scanner (but they may be pricey); for more information on scanning slides, see the reference list for this series. If using a service, be sure to specify which archival file format you want for the output: either AVI (.avi) or QuickTime (.mov) for digital video, and lossless JPEG 2000 for digitized slides. If JPEG 2000 is unavailable, you might ask for TIFF and then convert the files to JPEG 2000 yourself.

As explained in this series, M-DISCs are recommended for personal archiving. To acquire Millenniata and associated LG products, you can go to the Millenniata website (millenniata.com). These products will also be available from some popular retail outlets in October 2011. Remember that virtually any DVD or Blu-ray drive can read an M-DISC.
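Before moving on, it may help to see the arithmetic behind the 300 and 1200 dpi guidelines above: print size scales linearly with scan resolution. The sketch below is illustrative only; the assumed 300 ppi print quality and the helper names are not from this series.

```python
# Sketch: the relationship between scan resolution and maximum print size.
def max_print_inches(original_inches: float, scan_dpi: int, print_ppi: int = 300) -> float:
    """Largest print dimension (inches) a scan supports at the target print quality."""
    return original_inches * scan_dpi / print_ppi

def required_scan_dpi(original_inches: float, target_print_inches: float,
                      print_ppi: int = 300) -> int:
    """Minimum scan resolution needed to print at the target size."""
    return round(target_print_inches * print_ppi / original_inches)

# A 6-inch-wide photo scanned at 300 dpi holds its quality only up to a
# 6-inch-wide print; enlarging it to 24 inches wide calls for a 1200 dpi scan.
print(max_print_inches(6, 300))    # 6.0
print(required_scan_dpi(6, 24))    # 1200
```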
Adding Descriptive Information to Records

Once you have the necessary software, digitizing equipment, and archival storage in place, you are ready to begin. Before you preserve any records, however, it is important to develop a plan for adding descriptive information (called metadata in the digital preservation industry) to the digital records you intend to preserve.

At a minimum, descriptive information should include both contextual and historical information. Contextual information describes what the record is: for example, a copy of someone's death certificate or a photograph of a named person. It also relates the record to its environment, throwing more light on the person(s) to whom the record applies. The more complete and descriptive the contextual information you add to a digital record, the more valuable, interesting, and endeared the record will become to you, your posterity, and your extended family. Historical information provides the source of the record (for example, the county, city, town, or church archive from which a copy of a birth certificate was obtained). It should also identify the creator of the record, if that can be determined; this matters for copyright reasons, which are discussed below.

Your plan should begin with file names. A file name can contain both contextual and historical information. For example, when the author scanned a photograph of a distant relative, the scanning software gave the output file the generated name:

110237489853.tif

One would never know from this name what the record actually is (other than a TIFF image). But by changing it to:

Photo of Esther Elizabeth Knight on her wedding day 8 May 1917.tif

anyone looking at the file name will immediately know exactly what the record is. When searching the contents of an archival disc, having this much information in every file name will help you zero in on the object of your search very quickly.

A caution is in order here. Current personal computer operating systems limit the file path (the full description of a file's location on disc) to 256 characters. These 256 characters include the file name as well as the names of all folders that must be opened to navigate to the file. Folder names may also be descriptive, so the more deeply you nest folders, the fewer characters remain for descriptive information in the file name. In general, it is best to rename files with descriptive information when you first create or load them; otherwise, you may never get around to it. A small script can check your naming budget, as sketched below.

To create a full set of descriptive information, you should also add reference information (tags, another type of metadata) to files when you create them. Reference information allows search software to help you locate and access records. When the author scanned file 110237489853.tif as explained above, he also added the following tags by clicking the appropriate software option buttons:

Title: Esther Elizabeth Knight on her wedding day 8 May 1917
Subject: Esther Elizabeth Knight wedding photo
Author: in the public domain
Keywords: Esther, Elizabeth, Knight, wedding, 1917, bride, photo, public domain

If tags are to be used effectively, both file-creation software and search software must support them. It has already been pointed out that soft Xpansion Perfect PDF Master (which is free for personal use) does not allow the addition of tags when creating PDF/A files; you must purchase soft Xpansion's business version to get this capability.
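Here is the naming-budget check as a minimal sketch. The 256-character limit is the figure just discussed; the folder path and file name are purely illustrative.

```python
import os

MAX_PATH = 256  # the operating-system path limit discussed above

def filename_budget(folder: str) -> int:
    """Characters left for a descriptive file name inside the given folder."""
    return MAX_PATH - (len(os.path.abspath(folder)) + 1)  # +1 for the separator

folder = r"C:\FamilyHistory\Knight\Photos"  # illustrative folder path
name = "Photo of Esther Elizabeth Knight on her wedding day 8 May 1917.tif"
budget = filename_budget(folder)
if len(name) > budget:
    print(f"Too long by {len(name) - budget} characters; shorten the name or reduce nesting.")
else:
    print(f"OK: {budget - len(name)} characters to spare.")
```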
Any time you deal with records, make sure you adhere to copyright law with regard to copying, printing, and distribution. This applies whether you are working with digital records or physical records. To avoid violations, track down the source or owner of each record (if possible), then apply the applicable copyright law. A wonderfully clear and concise summary of copyright law as it pertains to genealogy has been written by Michael Patrick Goad.8 Please take time to study his short, well-written article. Some key points from it are reproduced here:

- If an original work of authorship was created after 1977, it's copyrighted, and it's going to be for a very long time. The earliest that any such work will lose its copyright will be about 2049, and that assumes the author died right after authoring the work.
- If it was created before 1923, there is no copyright on it anymore, so long as it was published. If it wasn't published, it may still be protected by copyright.
- Works published before March 1, 1989 without proper copyright notice are almost always in the public domain because, under the law that existed before that, a proper copyright notice was required for copyright protection.
- Works published from 1923 to 1963 had to be renewed after an initial copyright term for protection to continue. The U.S. Copyright Office estimates that over 90% of works eligible for renewal were never renewed.

A second article, written by Gary Hoffman, augments Goad's article with further useful insight. Please review this article as well.9

Before writing any records to an archive-grade optical disc, you will want to organize them so the writing is as efficient as possible. An archive-grade optical disc is designed to be permanent, so nothing can be changed after it is written. You can write the entire disc at one time, or write just a portion and add files later; in general, writing one record at a time is not practical. The number of records (files) you can store on an optical disc depends on the disc type and the average size of the records, roughly as follows (MB means megabyte; the counts follow from the disc capacities discussed below):

| Storage Media Type | Number of 2.5 MB records that can be written | Number of 1 MB records that can be written |
| --- | --- | --- |
| CD (650 MB) | about 260 | about 650 |
| M-DISC DVD (4700 MB) | about 1,880 | about 4,700 |

Please note that no archive-grade Blu-ray discs are currently available.

To simplify writing, it is recommended that you first copy the target files to a temporary folder and monitor the size of the folder as you proceed. In Windows, you can do this by hovering the cursor over the folder name; a pop-up will display the folder's total size. In general, you should not exceed a folder size of 650 MB if writing to a CD, or 4700 MB (4.7 GB) if writing to an M-DISC (but only 4200 MB for other types of DVDs, since their outer tracks are easily damaged by physical handling). Once the temporary folder is populated with the target files, you can start the writing (i.e., etching or burning) process. If the folder size exceeds the disc's capacity, writing will stop when the disc is full, leaving all remaining files unwritten. Of course, maximizing the number of files written to each disc minimizes the number of discs required.
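To make the folder-size check concrete, here is a minimal sketch. The capacity figures are the ones cited above; the staging-folder name is illustrative.

```python
import os

CAPACITY_MB = {"CD": 650, "M-DISC DVD": 4700, "other DVD": 4200}  # figures cited above

def folder_size_mb(folder: str) -> float:
    """Total size, in megabytes, of all files under the staging folder."""
    total = 0
    for root, _dirs, files in os.walk(folder):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1_000_000

staging, media = "to_burn", "M-DISC DVD"  # illustrative staging folder and target disc
size = folder_size_mb(staging)
print(f"{size:.0f} MB staged; {media} capacity is {CAPACITY_MB[media]} MB")
if size > CAPACITY_MB[media]:
    print("Too large: writing would stop when the disc is full, leaving files unwritten.")
```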
An important preservation principle developed at Stanford University is LOCKSS (Lots Of Copies Keep Stuff Safe). The basic concept is this: the more copies you archive in different locations, the safer your records will be. To apply LOCKSS to your archive, write a minimum of two discs per set of files and store them in two different locations as far apart as practical. Writing three discs and storing them in three different locations is even better. Perhaps you can exchange archival discs with friends and/or family to enhance the safety of your archived data.

It's a good idea to periodically test your archival storage media by opening files at random and examining the contents to detect errors. This should be done at least annually. If errors are found on a disc, retrieve a copy of the disc (which is why you need to apply LOCKSS!) and determine whether it is error-free. If so, you can replicate the copy and dispose of the flawed disc. If the copy is also flawed and you have no more copies to examine, then you have no choice but to test each file and copy the error-free files to new archival storage media. Files with errors can be recreated if you still have the original physical records and can redigitize them.

Of course, applying LOCKSS to your archive requires that you get organized and develop a process to track (i) the locations of the archival storage media, (ii) the media's age, (iii) when the media should next be tested, and (iv) when a media refresh migration should be performed. Fortunately, there is an abundance of software available to help you do this, such as Microsoft Access or Intuit QuickBase (an online database).
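Opening files at random works, but a checksum manifest makes the annual test mechanical. The following is a minimal sketch, assuming you write a small manifest file alongside each archived set; all names are illustrative.

```python
import hashlib
import json
import os

def sha256(path: str) -> str:
    """Checksum of one file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def make_manifest(folder: str, manifest: str = "manifest.json") -> None:
    """Record a checksum for every file in the staging folder before burning."""
    sums = {name: sha256(os.path.join(folder, name))
            for name in os.listdir(folder)
            if os.path.isfile(os.path.join(folder, name))}
    with open(manifest, "w") as f:
        json.dump(sums, f, indent=2)

def verify(folder: str, manifest: str) -> list[str]:
    """Return the names of files whose contents no longer match the manifest."""
    with open(manifest) as f:
        sums = json.load(f)
    return [name for name, checksum in sums.items()
            if sha256(os.path.join(folder, name)) != checksum]

# Annual check of a disc mounted at D:/ -- an empty list means it reads back intact.
# print(verify("D:/", "D:/manifest.json"))
```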
Sharing Your Digital Records

As mentioned in Part 1 of this series, sharing a digital record with others is fast and easy, as long as you have an Internet connection and email service. The author uses Yahoo email (mail.yahoo.com) because it is free and offers unlimited storage capacity. It also allows you to attach a file as large as 20 MB to an email; whether someone can receive such a large file, however, depends on his or her email capabilities. Should you want to send a file larger than the recipient's email software will accept, you can use a transfer service instead. TransferBigFiles.com allows you to transfer large files over the Internet at no charge; YouSendIt.com will do the same for a fee. Once you upload a file you want to transfer, a link is provided which you can then email to your intended recipient. That person need only click the link to download the file.

Backing Up Your Archived Records

A side benefit of an email service with unlimited storage capacity is that it offers a way to extend the LOCKSS principle for your personal archive. By sending yourself emails with attached preservation files, you can build a collection of such emails stored on the email provider's infrastructure; in effect, you can back up your archive there. You should never rely on this approach as your primary or even secondary archive, however, since the provider could start limiting storage capacity at any time or could even go out of business. Organizing so many emails to function as your primary archive might be difficult, and you may have trouble accessing your inbox just when you urgently need to retrieve a record.

Online (cloud) backup is also becoming a popular way to back up family history records because of its convenience. But newcomers to cloud backup have much to learn and consider. The Library of Congress has published a blog10 that explains these considerations; review it if you are interested in exploring cloud backup. However, you should never rely on cloud backup as your primary or even secondary archive. There is no guarantee that your data will be saved indefinitely. Some cloud backup services (including Amazon Web Services) have already crashed, resulting in lost data for some customers, and information in the cloud can be hacked. Bottom line: you should not count on cloud backup services alone to protect your important family history records!

As Time Goes By…

…it is important that you, your posterity, and your extended family monitor technology changes and take appropriate actions as needed. These actions, which make up the ongoing side of digital preservation, include:

- Transforming file formats that are becoming obsolete into their replacement formats.
- Copying files to newer archival storage media to prevent data loss (unless you are using M-DISCs).
- Migrating files to newer archival storage media so they can continue to be read if the existing storage technology is becoming obsolete.

Clearly, digital preservation is neither a one-time activity nor a single-generation project. Your responsibility in the digital preservation chain is to gather, digitize, and preserve records the very best you can, then pass them on to the next generation of your posterity and/or extended family that has been prepared to carry on the work. In many respects, digital preservation is like a relay race: you carry the baton for a time and then pass it on to the next runner. To prevent the baton from being dropped during the handoff, you and the next runner must work together in perfect synchronization, which means preparing and motivating the next runner to carry on the race without missing a step. As this process continues from one generation to the next, your digital family history records can be preserved in perpetuity. Yes, it takes work, but the payback cannot be measured.

After reading the guidelines above, you should find Part 5 of the series helpful. It provides a step-by-step summary of preserving your family history records digitally.

8. Copyright Fundamentals for Genealogy, by Michael Patrick Goad, 29 July 2003. See also www.pddoc.com/copyright.
9. Who Owns Genealogy? Cousins and Copyright, by Gary B. Hoffman.
10. Personal Archiving in the Cloud, by Mike Ashenfelder.

This article is part of the Preserving Your Family History Records Digitally series by Gary T. Wright, drawn from the white paper of the same name.
New York City and its periphery host a $1.5 trillion economy, central to a world economy of about $100 trillion. Four hundred years ago, New York Harbor, and the Bay beyond it, was in a state of equilibrium with its human population; oysters filled the harbor bottom, and the surrounding hills and wetlands teemed with wildlife. The eight million inhabitants of New York help define successful modern life around the globe, and at the same time the coastal geography of the city puts New York on the front line of climate change and our civilization's sustainability challenge.

Research in the past few years shows that for New York, "sustainability" has become a literal question. This week, the New York Times reported on a new paper projecting sea level rise from potential ice loss in Antarctica: "The long-term effect would likely be to drown the world's coastlines, including many of its great cities." If so, New York would not have another 400 years. The answer comes in how we change, or choose not to change.

Andrew Willner has been a leader of efforts to protect the waterways and land of New York and New Jersey for over twenty-five years. Willner founded New York/New Jersey Baykeeper in 1989 and ran the organization until 2008. In recent years, science and policy have been catching up with his vision for a sustainable harbor. Ana Deustua interviewed him for City Atlas.

Why did you create the New York/New Jersey Baykeeper?

With a friend from South Street Seaport, I started a small boat building and repair yard on Staten Island. My daughter, who was 10 years old, came to the yard, and it infuriated me that she probably shouldn't go swimming in the waters of Staten Island because of pollution. I was angry at the idea that a beautiful body of water could be detrimental to my daughter's health if she went swimming. I swam in it, but I didn't want my child to swim in it.

I found out that there was a Riverkeeper on the Hudson River, a Soundkeeper on Long Island, and Baykeepers in Delaware and San Francisco. I began communicating with them, and they helped me get the Baykeeper program started in the summer of 1989. I worked with Baykeeper for twenty years, until April 2008. It was a great opportunity for me; I really treasured it, and it was the biggest challenge I have ever had.

What was your biggest accomplishment while running Baykeeper?

When I was appointed baykeeper, people didn't see New York Harbor as a natural resource. If I did anything in the 20 years that I was the baykeeper, it's that we converted hundreds of thousands of people to thinking of the lower Hudson, the East River, New York Bay, Jamaica Bay, and Raritan Bay as their watery homes: places where they can go for recreation and fishing, and where they identify with the waterfront within their community.

The waterfront has become one of the most appealing places to live. How is this trend changing the New York/New Jersey harbor estuary?

This race to the coast has several negative implications. More people and property are in harm's way in storm-surge and flood-prone areas, the "centers" of older waterfront communities are being eroded in favor of the water's edge, and the loss of "working waterfront" is a detriment to the region as a whole. Some people of means, and the developers of water's-edge buildings, will get an exclusive view and make a short-term profit, while ultimately the externalities of sea level rise, storm ravages, and lack of planning and foresight are costs which will be borne by the rest of us. The other major problem is that privatization of the waterfront excludes the public from their commonly owned, public trust resources, to the advantage of the privileged few.

Would you let your daughter now swim in the Bay?

My daughter is now a mom, and a physician. I can advise her, but she is probably better equipped than I am to determine whether, or where, she and her children should swim. However, I continue to enjoy swimming in a variety of locations.

Are we prepared to keep New York Bay clean, as a rising sea level reaches inland?

I don't think so. For example, most sewage treatment plants are in the floodway and may become inoperable. Toxic waste sites and garbage landfills will be underwater, petroleum and oil facilities are located on the waterfront and will be adversely affected by sea level rise, and abandoned businesses and residences will pollute the estuary for a very long time unless timely actions, including retreat from the shore, are instituted immediately. I am, however, fairly confident that none of this will be done in a timely way.

Why should New York lead the talk on sea level rise?

New York City, being an island metropolis, is projected to be one of the five U.S. cities hardest hit by climate change and most vulnerable to rising sea levels. Likewise, our metropolis produces little of its own food and little else for its people's basic needs. This puts our city and its surrounding communities in serious jeopardy. New York is a coastal city and region; the very features that made it an important port are now the things that will adversely affect the region's infrastructure and people. On the positive side: "If it can happen here, it can happen anywhere."

How did your work as baykeeper lead you to your current work?

During my work with New York/New Jersey Baykeeper from 1989 to 2008, I met and engaged with thousands of people from all walks of life and from all parts of the harbor. When I retired from Baykeeper, I started a sustainability consulting firm to continue the work I was doing with environmentally conscious businesses, municipalities, and non-profit organizations.

Tell us your three steps to make the New York bioregion more resilient to climate change.

1. Become a leader in sustainability and resilience.
2. The people have to make their elected officials take action.
3. Be aware that real pain is associated with the changes needed to mitigate and avoid the effects of sea level rise and climate change.

Resilient communities are at the core of a "too small to fail" future. If we don't plan for more robust communities and implement solutions for undeniable problems, a catastrophic crash seems inevitable. However, crisis can equal opportunity, as we saw during the Great Depression and during World War II. But unless sensible plans to manage disaster are formulated and put forward now, the opportunity afforded by crisis could be hijacked by a more organized, well-financed minority with an authoritarian agenda.

You're an advocate for the Transition Town concept of a resilient, locally based economy. Is New York City, with a population of 8 million, really a candidate for the Transition idea?

In short, the answer is probably not. However, neighborhoods and coherent sections of the city, where urban agriculture, core community groups, and like-minded people are already intact, may be.

Here is what the "New Economy" for our bioregion might look like: it will prosper through an eclectic amalgam of business, non-profit and government innovation, including rooftop solar warehouses, wind farms, and tidal energy producers; urban and rural farmers, and rooftop apiaries; commercial fishermen, fishmongers, and fish farmers; local farmers markets, shoreline farmers, and seafood markets; a local water-based transportation system to bring goods to market; suburbia converted to interconnected "front yard" farms; a local currency used to pay for local commodities; buying and hiring locally; restored and created wetlands serving as nurseries for fish and wildlife, where blueberries and other produce can be sustainably harvested; sustainable forests that are logged selectively with an eye on future production; public works projects such as sea walls and sea gates as required to protect communities and valuable infrastructure against sea level rise; and an economy of local businesses and micro-industries, including everything from brewers and butchers to cheese makers and toolmakers, from ship builders to bicycle builders; local wind turbine, solar collector, and tidal generator manufacturers and installers; shoemakers and fix-it shops; composters and oil recyclers.

If we become a locally focused region, what happens with the foods and products that can't be grown or produced here?

The New York City bioregion is [already] connected tenuously to the rest of the world by literally thousands of lifelines, including an aging and increasingly failure-prone power grid; an aging and leaky water system; and a vast network of roads, rails, shipping and air routes that rely exclusively on increasingly costly fossil fuels. Like a patient on intravenous life support, any major interruption in the flow of natural resources, energy, water or food to the metropolitan area could hamstring or permanently harm its economy and people. With global oil, gas and coal production predicted to decline irreversibly in the next 10 to 20 years, this collapse becomes not a question of if, but when.

Most of the products we consume in New York City come from Asia or Europe, or by truck from California and the Midwest. New York is tied to these lifelines that extend around the world for fuel, and when petroleum becomes too expensive to transport, it's going to be a crisis if we don't find alternative sources. So food, energy and water are critical in the New York City region.

What would happen if the Transition Town approach worked in New York City?

It would demonstrate that it can work anywhere else.

What are the advantages of the "Main Street economy" versus a "Wall Street economy"?

My grandfather started a lumber company with a friend who owned a pushcart. They scavenged construction sites, pulled nails out of and squared up any lumber they could find, and sold it for what it was: a recycled product. Later they built their company into a large wholesale/retail lumberyard, which eventually became a self-serve regional hardware and lumber company. But what my grandfather and my uncles, who eventually took over the business, never forgot was that they had an obligation to their employees, many of whom worked at the company for their entire careers. They sold a good product, treated their customers with respect, supported their community, and made a living for their families. After my uncles retired, their partner sold the company to a Fortune 500 company, and within a few years it no longer existed.

I tell this story because this Main Street business was locally owned, locally rooted, and privately held. It was innovative and successful, and it sold tools, materials, and services to people who became repeat customers because of the quality and customer service they received. As soon as the company became the property of Wall Street, all those values were lost and destroyed. Until then, it had been too small to fail.

Growing evidence suggests that every dollar spent at a "too small to fail" locally owned business generates two to four times more economic benefit – measured in income, wealth, jobs, and tax revenue – than a dollar spent at a globally owned business. That is because locally owned businesses spend much more of their money locally and thereby pump up the economic multiplier. Under our present system, no local businesses receive any of our pension savings, or investments in mutual funds, or investment from venture capital firms or hedge funds. The result is that we who invest do so in Fortune 500 companies we distrust, and under-invest in the local businesses we know are essential for local vitality.

We need new mechanisms to enable investment in local, place-based, "too small to fail" Main Street businesses. Main Street investing is how the local economy once functioned. It was in the interest of well-off farmers, merchants, and small town banks to lend money to, and invest in, businesses that would hire local people and make something that had value and created real wealth. Perhaps "buy local/hire local" campaigns, "locavesting," a resurgence of local currencies, and new public and community banks and credit unions will reinvigorate our region's Main Street economy.

How do social justice and environmental sustainability intersect?

Any plan for a resilient bioregional economy must ensure, as a non-negotiable condition, that everyone has fundamental needs met for nutritious food, shelter, healthcare, education, and ecosystem services. This means such things as converting urban brownfields to greenfields, ensuring affordable housing, improving work opportunities for disadvantaged groups, and allowing seniors and children to play useful civic roles.

You posted a letter and proposal in 2013 called "A Call to Action." In it you describe the risks of climate change in New York City and the benefits of the Transition movement. In the three years since, what has changed?

Everything I wrote about in 2013 is coming true more quickly than I could have imagined, except for the response to the dire problems facing the region.
In 1961 a cholera pandemic broke out in Indonesia. Within five years the disease had spread to India, the Soviet Union, Iran, and Iraq; within ten, to West Africa. By 1991 it had struck Latin America. In 1995 alone, the disease killed more than 5,000 people worldwide and sickened hundreds of thousands more with debilitating diarrhea.

Just one strain of the bacterial species Vibrio cholerae wreaked almost all the death and misery. This strain is known as O1, and it produces a toxin that binds to cells of the small intestine, setting off a cascade of reactions in which the cells pump out vast amounts of chloride ions and water--some five gallons a day. If salts and water are not quickly replaced, the patient dies.

Surprisingly, most strains of V. cholerae are harmless organisms that live and multiply in rivers and the open sea. But at some time in its evolutionary history, the O1 strain turned lethal. What caused this deadly transformation? A virus, according to microbiologist Matthew Waldor.

Waldor, who works at the New England Medical Center in Boston, and his colleague John Mekalanos at Harvard discovered the virus while studying the stretch of bacterial DNA known to include the gene, called ctx, that codes for the cholera toxin. They suspected that a virus might have infected the bacteria with the gene, since viruses often insert their own genetic material into bacteria. Another possibility was that different strains of bacteria were swapping genes, a routine occurrence among wild strains.

Waldor and Mekalanos found that a virus was indeed the culprit after taking O1 bacteria and replacing the toxin gene with one coding for resistance to an antibiotic. They then cultured these bacteria with an antibiotic-susceptible strain and found that some of the second bacteria had become resistant. Next they passed the bacteria through a very fine filter. The bacteria-free fluid left behind could still transfer antibiotic resistance, even when treated with an enzyme that attacks naked DNA--that is, DNA not sheltered by the protein coat that protects a virus. Waldor and Mekalanos concluded that a virus was ferrying the gene from one strain to another; using an electron microscope, they succeeded in photographing it.

The researchers suspect the long, stringy virus enters bacteria via their pili--hairlike structures the bacteria use to stick to the gut. The pili are suspect because bacteria that lack them resist infection. By invading bacteria, the ctx virus gains a home; in addition, its genes are copied every time the bacteria divide.

Is Vibrio, then, merely the victim of merciless parasitism? Apparently not. The watery, salty diarrhea induced by the toxin, says Waldor, gives the bacteria a perfect medium in which to grow. "For the cholera bacterium, it's like Miami Beach," says Waldor. "It's just a fine place to be." So fine, in fact, that each thousandth of a quart of diarrhea, of some 20 quarts produced daily, contains about 100 million bacteria. If the cholera victim defecates in a river, he helps the bacteria--and the virus--spread.

Although ctx is not the first virus known to induce disease in this way--diphtheria and botulism are both caused by virally hijacked bacteria--its discovery could lead to a safer live cholera vaccine. One recently developed vaccine contains V. cholerae with a partially deleted toxin gene. It has been tested in 6,500 people around the world and appears to be safe and highly protective. Nevertheless, there is always the danger that a vaccine bacterium inside the body could be reinfected by a ctx virus. Waldor and Mekalanos think their vaccine will avoid that risk. "In our vaccine, we've deleted the attachment site on the bacteria for the virus, so the virus can't enter the chromosome," says Waldor. Deprived of its home, the virus cannot survive for long.

Waldor and Mekalanos's vaccine has been tested in about 100 Army volunteers with no resulting illness. Trials over the next two years should determine how effective it really is.
HOME OF BETSY BAYNARD AS A TYPE OF EASTERN SHORE SLAVE HOLDER

Looking backward to the days when forests stretched for miles over an acreage now covered by fertile farms, we see, about five miles northeast of Greensboro and some distance from the eastern bank of the Choptank River, a small clearing appear. Soon arose a small, unpretentious building typical of its day. Tall pines overshadowed it. At dawn the song of the woodland bird awakened the sleeper, while in the hush of eventide the call of prowling wild animals sent a thrill of fear through the listener. Such, in the 17th century, was the beginning of the Baynard plantation, the largest in the Greensboro section, extending over an area of more than six hundred acres.

Time was, in the early slave days, when tobacco flourished there, and Negroes, singing their weird, melancholy songs, "toted" the tobacco to the storeroom. From there it was carried over the woodland road and delivered at the warehouse of William Hughlett, for even in the 1700s the dense green of Maryland pines had given way to the paler green of cultivated fields. First the Baynards planted tobacco, but later cereals formed the base of income, while in the last days of the plantation the returns from tanbark and railroad ties were added to these.

In 1812 "Old Massa Baynard" died and Mistress Betsy, then sixteen years old, became, under her mother, the autocrat of the plantation. The home, with its rambling Negro quarters, had been enlarged and, while never ostentatious, held old china, colonial furniture, a grandfather's clock, and other antiques such as delight the eye. There, after her mother's death, Betsy Baynard lived alone save for her house servant, Myna, and two powerful dogs who stood guard day and night. Completing this plantation community were her slaves, who filled their huts to overflowing, at times numbering more than two hundred.

Although not given to slave dealing, at the time of enlarging her house Betsy sold a servant "South into Georgia" to obtain the needed money. "They say" the cartwhip was daily used as a ruling power among her colored people, but the blows must have fallen lightly, for many of her slaves remained contentedly on her plantation until old and infirm, and when she died, ten years after the Emancipation, some half dozen of her former slaves were still with her.

An amusing anecdote of the Baynard slaves relates that a young Negro, returning from a dance in the cold, gray dawn, went to the well for a drink of water. As his eye followed the bucket on its descent he saw something white. True to race superstition, he believed it a spirit and ran to tell Miss Betsy of "de hand in de well." She returned with him and found a sheep had fallen in and all but drowned.

A tragedy of the plantation was the death of Miss Mary Reid, a cousin of Miss Betsy's who at times made her home there. A slave girl, on being reprimanded for some delinquency, took offense and attempted revenge on Betsy by way of Paris green. The poison miscarried, resulting in Miss Reid's death almost immediately.

As a memorial to the Baynard generosity stands Irving Chapel. While the name is that of its first minister, the plat of land on which Irving Chapel stands was donated from the Baynard plantation, and the lumber for the building was added on condition that the church members cut it from the forest. Miss Baynard also gave a sum of money, large in those days and sufficient for the church's erection.

Betsy Baynard died without direct lineal descendant. The land was sold in small sections and is owned principally by Rosanna Richards, G. W. Richards, A. K. Brown and J. A. Meredith. All that remains to recall the story of other days is a portion of the old home, still in use by J. A. Meredith, and a small family burying ground with three markers:

William Baynard, born 1769, died 1812
Litia Baynard, born 1773, died 1843
Elizabeth Baynard, born 1796, died 1873

—Written from material collected by Paul Meredith.
Do you need a bone density test? This X-ray scan of your hip, spine, and wrist will tell you how strong your bones are, or whether you're at risk for osteoporosis, the bone-thinning disease that can lead to painful and disabling fractures. But do you need the exam? You do if you're over age 65, or over 50 with a previous bone fracture or risk factors for osteoporosis. A bone density test, called a DEXA scan, is a painless procedure that measures the mineral content of your bones. You'll get a result that indicates whether your bones are normal, have low bone mass (a condition known as osteopenia), or are significantly below normal, which is osteoporosis.

The importance of strong bones

Your bones are constantly changing as new bone replaces old bone. A problem arises when you lose bone more quickly than it is replaced, weakening your bones and making them susceptible to fractures. When you have osteoporosis, even a minor fall can break a bone. There are 1.5 million fractures each year in the United States caused by osteoporosis, including fractures of the spine, hip, and wrist, the areas most commonly affected by the condition. And they can be devastating: for example, half the people who break a hip are never again able to walk without some type of assistance. Many people with osteoporosis need long-term care.

Because osteoporosis usually has no symptoms until you unexpectedly break a bone, the bone density test can spot thinning bone (osteopenia) before it develops into osteoporosis. Although osteoporosis can affect men and women, it is primarily a disease of women, especially after menopause, when women lose estrogen, the hormone that helps the body maintain bone density. Other risk factors include: having had a fracture as an adult; being thin; having a family history of osteoporosis; being a long-term smoker; having an inactive lifestyle; not getting enough calcium; going through menopause before age 45; being Caucasian or Asian; or having used steroids for a long time.

You and your doctor will use the information from the bone density test to decide what treatment, if any, you may need and what preventive steps you can take. According to the National Osteoporosis Foundation, you can help prevent osteoporosis by getting enough calcium and vitamin D and eating a well-balanced diet; exercising regularly; eating more fruits and vegetables; not smoking; and limiting alcohol to a couple of drinks a day.

If you have osteoporosis, your doctor may recommend an osteoporosis medication, such as a bisphosphonate, teriparatide, denosumab, or a selective estrogen receptor modulator (SERM), that can either slow bone loss or help form new bone. Like all drugs, these medications have risks, and not every one of them is necessarily right for you. So be sure you and your doctor discuss the pros and cons of taking any medication and weigh those risks against the serious risk of leaving osteoporosis untreated.

Rebecca M. Shepherd, M.D., specializes in the treatment of osteoarthritis, osteoporosis, rheumatoid arthritis, and autoimmune disease. She is a graduate of the University of Texas and Vanderbilt University's medical school and served her residency and fellowship at Washington University.
MCT Gold is 100% medium-chain triglyceride (MCT) oil. MCTs have been called a "carbo-lipid" for good reason: although they are fats, not carbohydrates, they provide energy much as carbohydrates do while helping to prevent muscle loss and increase lean body mass. That makes MCTs a nutrient of particular value for endurance training and the "body culture."

Composed of the medium-chain fatty acids known as caprylic and capric acid, MCTs function differently from conventional fats in that less fat is absorbed by the body. During intense exercise, MCTs help prevent the breakdown of muscle tissue because they produce ketones, which muscles can use directly to generate energy, reducing muscle loss. Conventional fats do not produce many ketones. In addition, MCTs are quickly absorbed into the bloodstream, and they do not produce the fatigue that is common with the consumption of simple sugars.

This product is now bottled in opaque high-density polyethylene (HDPE) made from non-leachable, food-grade materials. HDPE bottles are particularly suited to the product because HDPE molecules have fewer branches and side chains, which leads to higher density and smaller pores. This makes the bottle an effective barrier for containing medium-chain triglycerides.

MCTs are unique in that, in the presence of carbohydrates, they can be turned into energy inside the mitochondria, the powerhouses of energy production in the cell. Conventional fats can be burned only after the cell's carbohydrate reservoir has been depleted. This has important implications: burning MCTs in the presence of carbohydrates spares glycogen while still producing ketones, and both effects prolong endurance and stamina in training.

Medium-chain triglycerides also improve the absorption of amino acids, which are critical for muscle tissue repair. Furthermore, MCTs improve the absorption of calcium and magnesium, minerals needed for carbohydrate and amino acid metabolism and for improving muscle contraction response time.

MCTs begin to be digested immediately after ingestion, as they are hydrolyzed by the enzyme lipase, which is present in saliva. Additionally, some of the energy released by the digestion and metabolism of MCTs is converted into heat by a process called thermogenesis, which favorably affects the basal metabolic rate. When the body converts energy into heat, metabolism increases, which in turn promotes fat loss.
At stake in this election:
- 598 seats in the Federal Diet (Bundestag)

Description of government structure:
- Chief of State: Federal President Joachim GAUCK *
- Head of Government: Chancellor Angela MERKEL **
- Assembly: Germany has a bicameral Parliament (Parlament) consisting of the Federal Council (Bundesrat) with 69 seats and the Federal Diet (Bundestag) with 622 seats (the statutory 598 seats plus 24 overhang mandates).

* The Federal President is elected by a Federal Convention to serve a 5-year term. The Federal Convention includes all members of the Federal Diet and an equal number of delegates elected by the state parliaments. A candidate must secure a majority of votes in the Federal Convention to be elected. If this does not happen in two rounds of voting, a third round is held in which only a plurality is needed to win.
** The Chancellor must win an absolute majority of votes in the Federal Diet.

Description of electoral system:
- The Federal President is elected by indirect vote to serve a 5-year term.
- The Chancellor is elected by parliament to serve a 4-year term.
- The 69 seats of the Federal Council (Bundesrat) are filled by delegations of the state governments.* In the Federal Diet (Bundestag), 299 members are elected by plurality vote in single-member constituencies to serve 4-year terms, and 299 members are allocated by popular vote through a mixed-member proportional system to serve 4-year terms (this tier can grow, as explained below).**

* There are 16 multi-member districts (magnitude ranging from 3 to 6), each of which corresponds to one of the 16 Länder (states). Each state elects a regional assembly, which elects a regional government, a delegation of which represents the Land in the Federal Council.
** Each voter has two votes. In each single-member district, the first vote counts toward the election of a plurality winner. Each Land also serves as a multi-member district, and the second vote determines outcomes there. Within each Land, a party is entitled to a share of seats proportional to its share of second votes. Seats at the compensatory level are allocated to parties from closed lists so that each party's overall seat total (compensatory seats plus single-member-district seats) is proportional to its Land-wide share of second votes. To qualify for compensatory seats, a party must win either 5 percent of second votes nationwide or at least 3 single-member-district seats. Because a party can win more single-member-district seats than its share of second votes would otherwise allow ("overhang mandates"), the size of the compensatory tier can change, and with it the size of the Bundestag. There are currently 24 overhang mandates.
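To illustrate the two-vote mechanics, the sketch below allocates seats for a single hypothetical Land. It assumes a Sainte-Laguë-style divisor method and deliberately omits the 5-percent threshold and other statutory details; all party names and vote counts are invented.

```python
# Simplified sketch of mixed-member proportional (MMP) allocation in one Land.
# Assumes a Sainte-Lague-style divisor method; thresholds and federal-level
# balancing steps of the statutory procedure are deliberately omitted.

def sainte_lague(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Allocate seats proportionally to second votes using divisors 1, 3, 5, ..."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (2 * alloc[p] + 1))
        alloc[best] += 1
    return alloc

def land_seats(votes: dict[str, int], smd_wins: dict[str, int],
               land_size: int) -> dict[str, int]:
    """Seats per party: the proportional entitlement, except that a party keeps
    all its district wins even when they exceed that entitlement (overhang
    mandates, which enlarge the delegation)."""
    entitlement = sainte_lague(votes, land_size)
    return {p: max(entitlement[p], smd_wins.get(p, 0)) for p in votes}

second_votes = {"A": 480_000, "B": 360_000, "C": 160_000}  # invented figures
district_wins = {"A": 6, "B": 2}                           # invented figures

print(land_seats(second_votes, district_wins, land_size=10))
# {'A': 6, 'B': 3, 'C': 2}: A's 6 district wins exceed its 5-seat entitlement,
# so the delegation grows from 10 to 11 seats, one overhang mandate.
```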
Main parties in the electoral race:

- Alliance: Christian Democratic Union of Germany / Christian Social Union of Bavaria / Christlich Demokratische Union Deutschlands / Christlich-Soziale Union in Bayern (CDU/CSU)*
  Leader: Angela MERKEL and Horst SEEHOFER
  Seats won in last Federal Diet election: 239
- Party: Social Democratic Party of Germany / Sozialdemokratische Partei Deutschlands (SPD)
  Leader: Peer STEINBRÜCK
  Seats won in last Federal Diet election: 146
- Party: Free Democratic Party / Freie Demokratische Partei (FDP)
  Leader: Rainer BRÜDERLE
  Seats won in last Federal Diet election: 93
- Party: The Left / Die Linke
  Leader: Katja KIPPING and Bernd RIEXINGER
  Seats won in last Federal Diet election: 76
- Party: Alliance '90/The Greens / Bündnis 90/Die Grünen
  Leader: Katrin GÖRING-ECKARDT and Jürgen TRITTIN
  Seats won in last Federal Diet election: 68
- Party: Pirate Party / Piratenpartei
  Leader: Bernd SCHLÖMER
  Seats won in last Federal Diet election: 0
- Party: Alternative for Germany / Alternative für Deutschland (AfD)
  Leader: Bernd LUCKE
  Seats won in last Federal Diet election: N/A

* The CDU and CSU, also known as the Union parties, are Germany's two main conservative parties. The CSU contests elections only in Bavaria, whereas the CDU contests elections in all other German states.

Population and number of registered voters:
- Population: 81,147,265 (July 2013 est.)
- Registered Voters: 62,168,489 (September 2009)
Print version ISSN 0042-9686 Bull World Health Organ vol.79 n.10 Genebra Jan. 2001 Quest for malaria vaccine revs up, but much work remains After upwards of 75 human trials ¾ mostly small-scale, but a few full-scale, field trials ¾ and with dozens more being planned, malaria vaccines are coming of age. Money has begun to pour into vaccine trials from both public and private sources, and new ideas and approaches are proliferating. Researchers are upbeat, but how long will it be before a good malaria vaccine becomes availabe. Robert Walgate investigates. With the influx of new money ¾ probably ten times as much as was available over the last decade ¾ malaria vaccine research and vaccine development are booming. And with the new knowledge of the human genome, and the almost complete sequencing of the malaria genome, hundreds of new components of the malaria parasite are being discovered that might stimulate the immune system, to add to the mere dozen or so that have been worked on by vaccine researchers to date. So far, the most promising results, in the view of the majority of experts interviewed for this article, have been seen with a vaccine that has emerged from a 17-year collaborative effort involving several teams. The vaccine, code-named "RTS,S", is a combination of molecules from the malaria parasite and others from the hepatitis B virus, combined into a single particle that incorporates a specially designed immune-stimulating, or adjuvant, substance. It was devel- oped by the drug firm GlaxoSmithKline (and its predecessors), in partnership with, among others, the Walter Reed Army Institute of Research. In a preliminary trial, the vaccine protected 70% of adult volunteers in the Gambia against infection with Plasmodium falciparum, the most lethal of the four malaria parasites that infect humans, but the protective effect waned rapidly after two months. Dr Joe Cohen, head of vaccines for emerging diseases at GlaxoSmithKline told the Bulletin that the vaccine is "targeted to infants and children living in malaria-endemic regions". An early-stage (Phase 1) trial is currently under way in children in the Gambia, primarily to test the vaccine for safety. "If all goes well, an efficacy, or Phase 2b, trial will take place in an African country in 2002 to tell us to what extent the vaccine reduces malaria disease." Much larger (Phase 3) trials would then be started "to evaluate the impact of the vaccine on morbidity and mortality in children and infants." The initial series of Phase 1/2b trials, which will cost nearly US$ 7 million, is being conducted in partnership with the US$ 50 million Malaria Vaccine Initiative (MVI) lauched 18 months ago by Microsoft entrepreneur Bill Gates and based at the Seattle headquarters of the Program for Appropriate Technology in Health (PATH). The second most promising vaccine candidate, according to many experts, is an Australian "combination B", vaccine, that combines immune-stimulating molecular structures (antigens) from the asexual blood stages of the malaria parasite's life cycle (see Box). It has been tested in children in Papua New Guinea by a team from the Papua New Guinea Institute of Medical Research and their collaborators at the Swiss Tropical Institute in Basel, Switzerland. In this trial, still to be published, the vaccine induced a substantial reduction in parasite densities in the bloodstream over a period of at least four months, and could, say the researchers, be potentially life-saving, especially in children. 
The duration of protection, however, has not yet been studied.

To create a useful vaccine against such a complex organism as the malaria parasite, with its different life cycle stages, each involving a distinct set of immunologically operational antigens (see Box), is far from easy. Dr Tom Richie, head of clinical trials at the Naval Medical Research Center's malaria program in Silver Spring, Maryland, which hosts one of the world's largest malaria vaccine research programmes, points out that malaria is a chronic infection, whereas successful human vaccines so far "are almost all against acute infections or the acute stage of an infection. In other words if you get smallpox naturally or chickenpox, you develop a sterilizing immune response naturally". After surviving one infection you will usually not be infected again. "But in malaria that doesn't happen. So a basic problem we face with a malaria vaccine is that we are harnessing as our weapon the immune system, but the parasite already knows how to avoid it." Scientists, he says, "have to understand the mechanisms of immune evasion in malaria, and we have to short-circuit those mechanisms." A good vaccine able to do this, he believes, is ten years away.

There is, however, strong evidence that a malaria vaccine really could work. For one thing, it is clear that in malarious areas a degree of natural immunity builds up after many infections. For another, research which began in the early 1970s and was expanded during the 1990s has shown that 90–95% of people exposed over several months to irradiated mosquitoes carrying malaria parasites develop protection against the infection, and that protection can last for ten months. This approach, though, is too crude and impractical to produce a vaccine for wide application.

Scientists are divided, however, as to how to move forward to make a better vaccine. Some believe that enough malaria parasite antigens have been identified and that all that is needed now is to find the right way of presenting them to the immune system in order to elicit a protective immune reaction. Among the many research programmes working with different sets of antigens is that of the Institut Pasteur in Paris, where Dr Pierre Druilhe is focusing on blood-stage vaccines. One candidate vaccine is now in an early-stage (Phase 1) trial in Lausanne, Switzerland, with support from the three-year-old European Malaria Vaccine Initiative, and 11 other candidates are in the pipeline. Then, at Walter Reed, Dr Christian Ockenhouse is working on vaccines against P. vivax malaria (see Box), as is Dr Chetan Chitnis of the malaria research group at New Delhi's International Centre for Genetic Engineering and Biotechnology. At the National Institute of Allergy & Infectious Diseases (NIAID), part of the US National Institutes of Health, a new malaria vaccine development unit headed by Dr Louis Miller has started work on several blood-stage and sexual-stage antigens for a vaccine against malaria disease and a transmission-blocking vaccine, respectively. And in Bogota, Dr Manuel Patarroyo, at the Colombian Institute of Immunology Foundation, is still full of enthusiasm despite the disappointing results in human trials with his famous "SPf66" vaccine. He is reportedly working on a second-generation vaccine made up of a ring-shaped molecular structure that mimics a merozoite, or blood stage, surface protein (see Box). With the considerable funding now available, some of these efforts are likely to reach the human trial stage.
Some scientists are working not with antigens but directly with the genes, the DNA, that code for the antigens. At the University of Oxford in the UK, Dr Adrian Hill is using so-called "naked DNA", which, injected into the host's own cells, can make the protein components of the vaccine. A harmless virus package engineered to carry the same genes more efficiently into the body's cells is administered some time later. This so-called "DNA prime–virus boost" technique was tested in adults in the Gambia and the results are "encouraging," Hill says. At the US Naval Center's malaria lab, the leading candidate, called MUSTDO-9, is also a DNA vaccine, containing nine antigens from the sporozoite and liver stages of the parasite.

Some scientists say a large array of antigens should be used in these DNA vaccines. After all, the sporozoite has about 5000 genes, while current candidate vaccines work with the antigens produced by fewer than a handful of genes. Others point out that adding too many elements can be counterproductive. Walter Reed researchers, for example, discovered that when a pre-erythrocytic antigen (see Box), called TRAP, was added to RTS,S, it did not add to, but diminished, the vaccine's efficacy. Yet others stress the importance of developing vaccine delivery systems that induce specific immune responses. The US Naval Center group has reported that a DNA vaccine they are working on has induced in human subjects the interferon-gamma responses that are thought to be critical to the protection against malaria produced by the irradiated sporozoite vaccine.

Optimism also stems from the advent of new tools, like genomics and proteomics. Dr Stephen Hoffman, former director of the Naval Medical Research Center's malaria program and now with Celera Genomics, is enthusiastic about his work with proteomics, a systematic, exhaustive approach to the analysis and identification of parasite proteins. "Now, for the first time, we have the possibility to begin to identify the real targets of irradiated sporozoite immunity or naturally acquired immunity to malaria." And using the P. falciparum genome sequences published so far, Richie at the US Naval Medical Research Center says his team has already identified "hundreds of new [antigen] candidates".

Dr Michael Hollingdale of the London School of Hygiene and Tropical Medicine sounds a more cautious note: "I think [genomics] is a very exciting approach, but while you may dramatically increase the number of vaccine candidates they've still got to be turned into vaccine products, manufactured and tested. Many of the candidates we're using now were identified 10–15 years ago, so there's a big role for finding new ones. But you still come down to the engineering job."

For some researchers, there are just too many ideas around. Dr Stephanie James, at the NIAID's parasitology and international programs branch, says: "We already have so many different antigens, expression systems and delivery systems under examination, all of which carry some degree of ego investment by both the investigators and those who have funded the research so far, that it has been very difficult to sort out. One may well assume that this will be magnified by the expedited discovery of more candidates through genomics and post-genomics research. It will be increasingly important for the field to come together to agree on some fundamental selection criteria for putting candidates on the path to clinical trials."

Another testing question is how effective a usable malaria vaccine must be.
Dr Marcel Tanner of the Swiss Tropical Institute says: "Maybe we need to rethink what we mean by an effective vaccine. We can aim at something which is 99% efficacious. But if we see the vaccine as part of an integrated strategy, a vaccine which reduces morbidity and mortality by just 50% could be a tremendous addition. Maybe the way forward is to rethink the concept of vaccines, and to look at packages of measures that can really be implemented, not just theoretical ways of reducing malaria." A recent study in the United Republic of Tanzania, for example, showed that antimalarials plus iron supplements administered preventively to infants through routine immunization programmes can reduce the incidence of malaria by nearly 60% in these children (see Bulletin of the World Health Organization, 2001, 79: 688).

At the bottom line, though, comes the price tag. To develop and produce a usable vaccine may cost about US$ 500 million, says MVI director Dr Regina Rabinovich. "There is certainly a lot more money now than in the past, but malaria remains a neglected disease with only a fraction of the funding HIV/AIDS gets." So where's the money for malaria going to come from? "We are going to have to seek other sources, because you don't get that level of funding directly from government. There's going to have to be a clear package development initiative, and there may be potential from philanthropy, and a package [from different sources] that will make it feasible."

At the end of June, the birth of such an initiative hit the news. Three donor agencies – the European Malaria Vaccine Initiative, the United States Agency for International Development (USAID) and the MVI – announced that they had "joined forces in facilitating malaria vaccine development, from testing and manufacturing vaccine candidates to ensuring their accessibility and affordability in developing countries". In a press comment, Rabinovich said: "Current global resources are not sufficient to defeat [malaria], making concerted action imperative...Today's agreements extend our efforts to replace competition with strategic collaboration."

Robert Walgate, London, UK

A brief backgrounder to malaria vaccine research

Of the four main species of Plasmodium parasites that cause human malaria, two are particularly troublesome and are the targets of vaccine research: Plasmodium falciparum, which causes the most lethal form of malaria (anywhere between 700 000 and 2.7 million deaths a year, 75% in African children), and P. vivax, which causes comparatively few deaths but is extremely debilitating.

When an infected female Anopheles mosquito bites, it injects the malaria parasite into its victim's bloodstream in the form of rod-shaped sporozoites, which quickly invade the liver. This is the pre-erythrocytic stage of the parasite's life cycle, and includes the liver or hepatic stage. Over the next few days the parasites multiply in the liver and turn into roundish merozoites, which burst out from the liver and enter red blood cells in the bloodstream: during this blood or erythrocytic stage, the merozoites undergo several cycles of multiplication, eruption and reinvasion, causing the cyclic fevers characteristic of malaria. Finally, if the patient remains alive, some of the merozoites go on to develop into gametocytes, or sexual forms of the parasite (all other forms or stages of the parasite's life cycle are, therefore, often referred to as asexual stages).
During this sexual stage, the gametocytes can be picked up by another biting mosquito, and the deadly cycle continues.

At each of these stages of the life cycle, the parasite presents on its outer coat a distinct set of molecules, or antigens, capable of stimulating the host's immune system. The different candidate vaccines under research or in development contain these or parts of these antigens. Some, like the RTS,S vaccine (see main text), use sporozoite antigens, and aim to stop infection in its infancy. Others, like the Australian vaccine, may use merozoite antigens, aiming to reduce disease by limiting the development of the asexual blood stages of the parasite, which cause the symptoms of malaria. Yet others, called "altruistic" or transmission-blocking vaccines, because they would be of no direct use to the individual patient, would act against the gametocytes and stop transmission of the infection, thereby putting a crimp on an epidemic or lowering the level of malaria infection in a community.
February 2, 2009

Why Positive Interactions May Sometimes be Negative

History abounds with examples of dramatic social change occurring when a disadvantaged group finally stands up and says "Enough!" By recognizing their inequalities, members of disadvantaged groups can mobilize and attempt to bring about change. Traditional methods of improving relations between different racial and ethnic groups have focused on creating harmony between those groups. For example, "contact theory" proposes that bringing members of opposing groups together by emphasizing the things they have in common can achieve harmony by increasing positive feelings towards the other group. However, research has shown that positive contact not only changes attitudes, but can also make disadvantaged group members less aware of the inequality in power and resources between the groups. Is it possible that there can be too much of a good thing?

Psychologist Tamar Saguy from Yale University, along with her colleagues Nicole Tausch (Cardiff University), John Dovidio (Yale University) and Felicia Pratto (University of Connecticut), examined the negative effects of positive contact between groups, first in the laboratory and then in the real world. In the first experiment, students were divided into either advantaged or disadvantaged groups, with the advantaged groups in charge of distributing course credits at the end of the experiment. Before the course credits were doled out, members of the groups interacted, with instructions to focus on either the similarities or differences between the two groups.

The results, described in Psychological Science, a journal of the Association for Psychological Science, revealed that following the similarity-focused interactions, members of the disadvantaged group had increased expectations that the advantaged group members would fairly distribute the course credits. These expectations were the result of overall improved attitudes towards the advantaged group and reduced attention of the disadvantaged group members to the inequalities between the groups. However, these expectations proved to be unrealistic: the advantaged group discriminated against the disadvantaged group when handing out course credits, regardless of the type of conversations they had engaged in at the start of the experiment.

The psychologists next wanted to see if this effect occurs in the real world. They surveyed Israeli-Arabs (a disadvantaged minority group) about their attitudes towards Jews. As in the previous experiment, more positive contact (assessed by the number of Jewish friends the Israeli-Arabs had) resulted in improved attitudes towards Jews and increased perceptions of Jews as fair towards Arabs. In addition, although in general Israeli-Arabs are strongly motivated towards social change and greater equality, positive contact with Jews was related to decreased support for change.

The results of the two studies suggest that positive contact with majority groups may result in disadvantaged groups being less likely to support social change: with improved attitudes towards the advantaged groups and reduced attention to social inequality, the disadvantaged groups may become less motivated to promote change. These findings have important implications, not just for global diplomacy, but also in our everyday encounters. The authors note that positive contact between groups does not necessarily have to undermine efforts towards equality.
Rather, they suggest that "encounters that emphasize both common connections and the problem of unjust group inequalities may promote intergroup understanding as well as recognition of the need for change." The authors conclude that such mixed-content encounters can bring members of all groups together and "perhaps motivate them to eliminate social inequalities."
Regulations & Policy

Official Designation of Invasive Species

The HISC is in the process of drafting and adopting administrative rules to formally define species that are invasive in Hawaii. While no official designation currently exists, the species listed on this site as “invasives” are a sample of high-profile species that are considered to be invasive due to their ability or potential to cause harm to Hawaii’s environment, economy, or way of life. At this time, the description of these species as “invasive” is for educational purposes and is not related to regulatory restrictions.

While no official State designation of “invasive species” currently exists, the State of Hawaii has existing regulations relating to species that may harm Hawaii’s environment, economy, health, or lifestyle.

Hawaii Department of Agriculture

The Hawaii Department of Agriculture (HDOA) regulates a number of activities related to pests. Below are a few key examples of HDOA policies that are relevant to pest species. For a full description of HDOA’s mandates, administrative rules, and programs, visit the HDOA website.

- Importation of species into Hawaii - The Hawaii Department of Agriculture’s Plant Quarantine Branch regulates the importation of goods into the State. Chapter 150A, Hawaii Revised Statutes (HRS), describes HDOA’s plant and non-domestic animal quarantine mandate. Hawaii Administrative Rules (HAR) Chapter 71 describes the process for importation and lists which species are conditionally approved, restricted, or prohibited from importation. For more information on importation, visit HDOA’s import program page.
- Intrastate Quarantine - The HDOA also regulates the movement of agricultural pests within the State by restricting the movement of certain goods between islands and inspecting interisland shipments. The process by which HDOA regulates these shipments is described in Hawaii Administrative Rules Chapter 72. You can learn more by visiting the Plant Quarantine Branch’s page on their Interisland Inspection Program.
- Noxious Weeds - The HDOA defines “noxious weeds” in Chapter 152, HRS as “any plant species which is, or which may be likely to become, injurious, harmful, or deleterious to the agricultural, horticultural, aquacultural, or livestock industry of the State and to forest and recreational areas and conservation districts of the State, as determined and designated by the department from time to time.” The criteria for designating noxious weeds and the list of species currently designated as such are available in Hawaii Administrative Rules Chapter 68.
- Pests for Control - The HDOA Plant Pest Control Branch eradicates, contains, or controls pests of plants which could cause significant economic damage to agriculture, our environment, and quality of life. This branch of the HDOA includes the Biological Control Section, which researches and develops biological control agents to mitigate the impacts of certain pests.

Hawaii Department of Land and Natural Resources

The Department of Land and Natural Resources (DLNR) regulates the transport and release of wildlife, manages aquatic and terrestrial resources, and is the administrative home of the Hawaii Invasive Species Council. To see a full description of the department’s mandates, administrative rules, and programs, visit the DLNR website.
- Injurious Wildlife - Under statutory authorities provided by Chapter 183D, Hawaii Revised Statutes, DLNR’s Division of Forestry and Wildlife (DOFAW) maintains Hawaii Administrative Rules Chapter 124, which defines “injurious wildlife” as “any species or subspecies of animal except game birds and game mammals which is known to be harmful to agriculture, aquaculture, indigenous wildlife or plants, or constitute a nuisance or health hazard and is listed in the exhibit entitled “Exhibit 5, Chapter 13-124, List of Species of Injurious Wildlife in Hawaii…” Unless permitted by DLNR, it is prohibited to release, transport, or export injurious wildlife. Permits may be applied for per the instructions at DOFAW’s Injurious Wildlife page.
- Alien Aquatic Organisms - Under statutory authority provided by Chapter 187A-31, Hawaii Revised Statutes, DLNR’s Division of Aquatic Resources (DAR) is designated as the lead State agency for preventing the introduction and carrying out the destruction of alien aquatic organisms through the regulation of ballast water discharges and hull fouling organisms.
- Hawaii Invasive Species Council - Chapter 194, Hawaii Revised Statutes, identifies the DLNR as the administrative host of the interagency Hawaii Invasive Species Council and designates the chairperson of DLNR as a voting member of the HISC. By convention, the chairperson of DLNR acts as a co-chair of the HISC, along with the chairperson of the Hawaii Department of Agriculture. It is under this statute that the Council has the authority to adopt administrative rules that will allow the Council to provide an official State designation of invasive species.

Hawaii Department of Health

The Department of Health regulates programs that impact human and environmental health. Though the Department’s Vector Control Branch was eliminated in 2009, the Department is still mandated to control vectors of human disease. Remaining Vector Control workers are currently employed under the Department’s Sanitation Branch. You can learn more at the DOH webpage for Vector & Disease Control.

- Vector Control - Chapter 321-11, Hawaii Revised Statutes, describes subjects of health under the Department’s purview, including the management of mosquito breeding habitat and the deinsectization of aircraft to prevent the introduction, transmission, or spread of disease or the introduction or spread of any insect or other vector of significance to health. The Department’s programs relating to vector control are further described in Hawaii Administrative Rules Chapter 26.
Oversize rod cylinders

Cylinders with oversize rods can end up with dangerously high intensification pressures under certain conditions. The cutaway in Figure 15-17 shows a cylinder mounted vertically with its rod down. The rod area is approximately half the piston area, so the annulus area around the rod is also approximately half the area of the piston. The left circuit shows the cylinder extending with a meter-out flow control circuit. For the 5-in. bore cylinder with a 3 1/2-in. rod and the pump compensator set at 3000 psi, pressure in the rod end would be approximately 5880 psi as the cylinder approaches the work. If this is a hydraulic press (middle circuit) with 5000 lb of platen and tooling, a load-induced pressure of 499 psi would bring the rod-end pressure to 6379 psi. This much pressure could damage the cylinder seals, over-pressure the flow control, and exceed the rating of pipe connections.

The circuit on the right eliminates all of the above problems but still allows speed control of the cylinder. A counterbalance valve on the cylinder rod end, set at 100 to 150 psi above load-induced pressure, would keep the platen from falling while at rest or while it is approaching the work. A meter-in flow control sets cylinder speed, and as fluid enters, pressure in the cap end only rises to approximately half the counterbalance valve setting. The cylinder starts extending when pressure in the rod end reaches approximately 574 psi, which is well below the rating of all components. Each of the circuits in Figure 15-17 would control cylinder speed, but the counterbalance circuit is the best choice for the reasons given, plus it has a lot less energy loss.

Another reason for using an oversize rod is for regeneration circuits like the ones in Figure 15-18. Any single-rod cylinder will at least attempt to extend with equal pressure at both ports, and this is called regeneration. Whether it actually extends depends on the load it must overcome, the maximum system pressure available, and the rod diameter. This is because the maximum force during regeneration is pressure times the area of the rod. The piston is in balance during regeneration and serves no function during the process.

The standard-rod cylinder in Figure 15-18 would extend at the rate of 12.25 in./sec at a maximum force of 3142 lb. Even if this is ample force to extend the present load, the amount of flow in regeneration is excessive. As the cylinder regenerates forward with 10 gpm of flow from the pump, there would be 52 gpm coming from the head end, for a total of 62 gpm entering the cap port. This high flow would cause excessive back pressure and keep cylinder speed slow because the circuit relief valve would be bypassing at system pressure. Another reason using a standard-rod cylinder is not good practice is that its retract speed would only be 2.33 in./sec, so overall cycle time would not decrease as much as first thought.

The above scenario is the prime reason for using 2:1 rod-area ratios for regeneration. The lower cylinder in Figure 15-18 is the same bore but has a 3 1/2-in. oversize rod. This rod is not exactly a 2:1 area ratio because it uses NFPA standard sizes for interchangeability. All NFPA cylinders have the largest standard rod that is up to but not over a 2:1 area ratio. The figures for the 2:1-ratio rod now show a net force on extend of 9621 lb at a speed of 4 in./sec. During regeneration, flow from the head end is 10.4 gpm, with a total flow to the cylinder of 20.4 gpm.
This is a good measure of force and a reasonable flow rate that usually overcomes work resistance at easy-to-handle flow rates. Retract speed would be 3.8 in./sec, making extend and retract speeds almost equal. When the area ratio is exactly 2:1, extend and retract speed and force are identical -- the same as a double rod-end cylinder. However, getting an exactly 2:1 area ratio requires an odd-size bore or rod that may require special seals.

The circuits in Figure 15-19 show some standard regeneration setups used for particular needs. The full-time regeneration circuit is a replacement for a double rod-end cylinder circuit. With a 2:1 area-ratio rod, it will have identical speed and force in both directions. Even with standard rod-diameter cylinders, force and speed are within 10% to 12% of each other, which is often satisfactory. The full-force-at-pressure-buildup example uses a sequence valve to indicate work resistance and direct head-end oil to tank. A check valve in the regeneration flow line allows regeneration flow to the cap end and prevents pump flow to tank during the full-force portion of the cycle. This circuit extends fast until it contacts the work, no matter the size of the part. The full-force-at-limit-switch circuit uses a normally closed two-way directional control valve to send head-end flow to tank when a limit switch is made. This cuts cylinder speed in half so part contact is less abrupt. This circuit protects tooling, allows more time for visual inspection of alignment, and can give an operator more time to respond to unsafe conditions.

The circuits in Figure 15-19 may need a counterbalance valve to retard running-away conditions when the cylinders are mounted vertically. When this is necessary, the counterbalance valve must be externally drained to eliminate backpressure in the pressure-adjustment chamber. Adding a bleed-off flow control to the line between the head-end port and the check valve, after a counterbalance valve, allows cylinder speed reduction when required. For complete coverage of regeneration circuits see the author’s upcoming e-book Fluid Power Circuits Explained.
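All of the pressure, speed, and flow figures above follow from the same area arithmetic. As a quick check, here is a short Python sketch (not part of the original article) that reproduces them. The bore, rod sizes, compensator pressure, platen weight, and the 231-in.³/gal conversion come from the examples above; the 2-in. standard rod is an assumption inferred from the quoted 12.25-in./sec figure.

```python
import math

GPM_TO_IN3_PER_SEC = 231.0 / 60.0   # one US gallon is 231 cubic inches

def circle_area(diameter):
    """Area of a circle (in^2) from its diameter (in.)."""
    return math.pi * diameter ** 2 / 4.0

# Rod-end intensification, Figure 15-17 (5-in. bore, 3 1/2-in. rod)
piston = circle_area(5.0)                 # ~19.63 in^2
annulus = piston - circle_area(3.5)       # ~10.01 in^2
intensified = 3000.0 * piston / annulus   # ~5884 psi with a 3000-psi compensator
load_induced = 5000.0 / annulus           # ~499 psi from 5000 lb of platen/tooling
print(f"rod end: {intensified:.0f} psi, plus {load_induced:.0f} psi "
      f"load-induced = {intensified + load_induced:.0f} psi")

# Regeneration, Figure 15-18 (10-gpm pump)
q = 10.0 * GPM_TO_IN3_PER_SEC             # ~38.5 in^3/sec
for rod in (2.0, 3.5):                    # assumed standard rod vs. oversize rod
    rod_area = circle_area(rod)
    ann_area = piston - rod_area
    extend = q / rod_area                 # piston is balanced; rod area governs speed
    head_flow = extend * ann_area / GPM_TO_IN3_PER_SEC  # oil leaving the head end, gpm
    retract = q / ann_area
    print(f"{rod}-in. rod: extend {extend:.2f} in./sec, "
          f"head-end flow {head_flow:.1f} gpm, retract {retract:.2f} in./sec")
```

Run as written, this reproduces the 5880/6379-psi intensification numbers and the 12.25 vs. 4 in./sec extend speeds, 52 vs. 10.4 gpm head-end flows, and 2.33 vs. 3.8 in./sec retract speeds quoted above.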
In any application (on my Windows 7 PC), when I try to choose a font, the common dialog box pops up. There, I see that some font names start with @, while most others don't. What is the @ symbol used for? Are these fonts different? How?

As Raymond Chen explains in his blog post titled Why do some font names begin with an at-sign?, the @-prefixed entries are vertical variants of East Asian fonts: their glyphs are rotated so that text can be laid out top to bottom, and each is otherwise the same font as its non-@ sibling. Also see the Vertical Writing and Printing MSDN article, as well as Michael Kaplan's informative posts on the same subject.
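If you want to list the vertical variants on your own machine, here is a minimal Python sketch using only the standard library (my own illustration, not from the linked posts); on a system with East Asian fonts installed it should show pairs like MS Gothic / @MS Gothic:

```python
# List @-prefixed (vertical) font families and pair each with its
# horizontal sibling. Tk's font list mirrors what the font dialog shows.
import tkinter
import tkinter.font as tkfont

root = tkinter.Tk()
root.withdraw()  # no window needed; Tk just has to be initialized

families = set(tkfont.families())
for name in sorted(f for f in families if f.startswith("@")):
    sibling = name[1:]  # "@MS Gothic" -> "MS Gothic"
    status = "pairs with" if sibling in families else "has no sibling"
    print(f"{name} {status} {sibling}")
```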
Scientists at Konkuk University, Seoul National University and the University of Edinburgh’s School of Chemistry have developed an energy-efficient device capable of storing the memory needed for smart phones, mp3 players and digital cameras. The new device rethinks the way memory currently works by introducing a self-propelled floating cantilever that reacts to electrical currents within the device, converting this electrical information into binary code. The floating cantilever not only works much more quickly than current electronically powered conversion devices, but also uses less energy than conventional devices. The new development promises more efficient and faster gadgets in the future.

Current memory devices — NAND- and NOR-type flash memories — are widely used because of their ease of manufacturing and advanced technology. Other attempts at carbon nanotube (CNT) field-effect transistors have been successful but result in low operation speed and short retention times. This recent study built upon that CNT research and added the moving cantilever to improve upon the problems previously encountered.

“This is a novel approach to designing memory storage devices. Using a mechanical method combined with the benefits of nanotechnology enables a system with superior speed and energy efficiency compared with existing devices,” said Eleanor Campbell of the University of Edinburgh’s School of Chemistry, a professor who worked on the study.

The researchers set out to tackle the high energy cost of storing data in small devices. In a conventional device, memory exists as an electrical current that must be converted and stored as binary code, but the new moving cantilever is reactive to that electrical current, pulling nearer or farther depending on its frequency and intensity. These movements are converted into binary code, which is then stored. Because the cantilever is actually powered by the electrical current, it greatly reduces the amount of power needed to run the device.

Via Science Daily
Plastic Water Bottles a Health Threat

Response to Editorial, Paul Goettlich / Post-Tribune (Indiana), 2jun01

Also see: Get Plastic Out Of Your Life by Paul Goettlich / Living Nutrition magazine, 1may2004

Differences between bottled water and tap water
Jim Gordon, Editorial, Post-Tribune, 24may01

Bottled water has grown into a $22 billion-a-year industry. Bottled water is now so popular there are more than 700 brands of water produced worldwide. The phenomenon is attributable to the belief by health-conscious consumers that bottled water has some advantage over the stuff that comes from the kitchen faucet. Believing that, they stock supplies of bottled water and even take it with them when they have occasion to leave home for any prolonged period - guarding against the eventuality that they'll become thirsty and have only a tap to turn to. But now comes a report from the world's largest environmental organization, the World Wide Fund for Nature, which concludes: "Bottled water may be no safer or healthier than tap water in many countries while it sells for up to 1,000 times the price." The Swiss-based environmental group, which is known in the United States as World Wildlife Fund, based its report, released this month, on a study made at the University of Geneva. Among other things, the study points out that tap water standards in Europe and the United States are higher than those governing bottled water. Those who have become heavy users of bottled water may be surprised to learn that, in the view of the World Wildlife Fund, their habit is actually harming the environment. They say 1.5 million tons of plastic are used each year to bottle water. And toxic chemicals released during the manufacture and disposal of bottles can release gases that contribute to climate change. What's more, says Dr. Biksham Gujja, head of World Wildlife Fund International's Fresh Water Program, some bottled water is nothing but tap water in a fancy vessel. The bottom line, it appears, is that those who insist on purchasing the bottled water are possibly paying not so much for the product as for the attractive nature of that which contains it. Much like Cub fans and Wrigley Field.

Paul Goettlich, 2jun01

Thank you for writing about this issue. We consumers need all the help we can get when it comes to translating slick advertising on containers. There is another difference between bottled and tap water. Last year, Consumers Union tested 5-gallon polycarbonate jugs and found that "eight of the ten 5-gallon polycarbonate jugs we checked leached bisphenol-A into water--from 0.5 ppb to 11 ppb. Any health effects would be most likely to occur in developing fetuses, judging from animal research." Bisphenol-A (BPA) mimics the hormone estrogen in animals. Just in case nobody has noticed lately, we are still animals. Estrogen is active in the human body at concentrations measured in parts-per-trillion. To get an idea of what 1 part-per-trillion looks like, imagine one drop of water in 660 rail tank cars. That's a train about 6 miles long! Our federal regulations for testing such chemicals as BPA do not come anywhere near the parts-per-trillion range. As a group, the plastics and chemical industries are waging a multi-million dollar disinformation campaign to debunk the fact of hormone mimics via the media and lobbying. My choice between bottled and tap would be bottled if I could verify the quality and if it was available in glass bottles.
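The rail-car illustration checks out with a little arithmetic. Here is a quick Python sketch of the calculation; the drop volume and tank-car capacity are common rough figures I am assuming, not numbers from the letter:

```python
# Rough check of the parts-per-trillion illustration: how much water makes
# a single drop a 1-ppt concentration, and how many rail tank cars is that?
DROP_ML = 0.05            # ~20 drops per mL (assumed)
PPT = 1e-12               # one part per trillion
TANK_CAR_GAL = 20_000     # typical large rail tank car (assumed)
ML_PER_GAL = 3785.4

total_ml = DROP_ML / PPT                        # water needed so the drop is 1 ppt
cars = total_ml / (TANK_CAR_GAL * ML_PER_GAL)
print(f"{total_ml / 1e6:,.0f} cubic meters of water -> about {cars:.0f} tank cars")
```

With these assumptions the script prints roughly 660 tank cars, matching the letter's figure.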
Guest Author - Nikki Phipps

Crinums are little-known bulbs with a lot of potential in the garden. Crinum is a member of the Amaryllis family, and there are somewhere between 60 and 100 species of crinum worldwide, with most species found in Africa. There are also two main groups: those with symmetrical flowers and water-loving characteristics, and those with a wide, funnel-shaped form occurring in a range of habitats from riverbanks and dry regions to slopes and woodland areas. The name Crinum originates from the Greek word ‘Krinon’, meaning white lily, as most crinum species have white flowers. These bulbs may also be referred to as Cemetery lilies, with many found growing in old cemeteries.

Crinums have lush, tropical-looking foliage, and the majority of crinum species can regenerate leaves during the growing season from dormant leaf bases. Crinums can consist of either one or many trumpet- or bell-shaped flowers, and their lily-like blooms range from white to light pink and red. Some varieties of crinum have sweet aromas, while others produce a foul odor. Crinum also has tuberous species as well as those with rhizomes. Crinum is one of the late-flowering summer bulbs; however, in mild regions these beautiful bulbs will bloom from spring throughout fall.

Crinums occur in a wide range of habitats from aquatic to desert, making them some of the hardiest bulbs around. Crinum prefers moist, humus-rich soil; however, most species will grow well in any soil. Adding compost will yield better results. These plants also grow best in full sun but will tolerate light shade, thriving in a moist area of the garden. Most crinums are suitable as landscape plants in or near water features or along pond edges. Crinum is a large solitary plant, reaching about 2 feet tall once mature, and is oftentimes grown in containers. These lively plants work well planted alone or in groups within borders. They also like to be fed regularly during the growing season. Although they are quite adaptable, crinums should always be well protected from frost or cold weather, so bring them indoors for overwintering.

The most widespread and abundant crinum within the United States is the Swamp lily (C. americanum). The swamp lily’s fragrant flowers will bloom all summer provided the soil is kept moist, and it is most suitable for bogs and water gardens. It varies greatly with regards to the size of both the floral parts and the width of leaves. However, the strap-like leaves generally reach up to 2 inches wide and 3 feet long, and its bulbs can weigh over 40 pounds. South African native C. campanulatum, also called Marsh lily, needs to be permanently placed in water. Its flowers fluctuate from light to dark pink. This species is fairly easy to germinate and grow. Another South African native, the Cape lily (C. bulbispermum), does not need wetlands to perform well in the garden. In the wild, it has been found growing in deep, dry soils. This particular species is also naturalized in the southern parts of the U.S. Its flower color varies from white to pink, rose, or burgundy-red. C. moorei does well under trees, and unlike most other crinums, this species does best in shade. The foliage is more abundant, and the flowers last longer, when these plants are grown in shade. The blooms can be white or pink. Propagation is by offsets and seed. Offsets can be taken off and planted after flowering or during the plant's dormant period.
Its only real pest is the black-and-yellow-striped amaryllis caterpillar, but occasionally snails and slugs can affect crinum species as well. Crinums are easygoing, carefree garden plants that will add an exotic presence to nearly any garden.
Greenhouse Gases Blamed for Higher Arctic Temperatures in the Last Century

A recent study claims that average Eastern Canadian Arctic temperatures have been higher in the past century than at any time in the last 120,000 years. The study was conducted by the University of Colorado Boulder and led by Gifford Miller, a geological sciences professor. The researchers found that in the early Holocene period, the Sun's energy showered over the region was about 9 percent greater than it is today. Present temperatures prevailing over the Eastern Canadian Arctic nonetheless surpass the peak warmth that existed over the region in that early geological period. The Holocene epoch commenced after Earth's last glacial period ended, approximately 11,700 years ago, and is still continuing.

"The key piece here is just how unprecedented the warming of Arctic Canada is," Prof. Miller said in a press release. "This study really says the warming we are seeing is outside any kind of known natural variability, and it has to be due to increased greenhouse gases in the atmosphere."

The researchers estimated the Arctic temperatures by examining clumps of dead mosses and trapped gas bubbles in the ice caps on Baffin Island. Radioactive dating, also called radiometric dating, was used on the mosses, and it showed that they had been entrapped in the ice for 44,000 to 51,000 years. Radiometric dating refers to a method of dating a material based on the decay of the radioactive atoms present in it; as applied here, the process can date samples up to around 50,000 years old with reasonable precision. These findings pointed toward present temperatures being higher than they were 120,000 years ago.

"Although the Arctic has been warming since about 1900, the most significant warming in the Baffin Island region didn't really start until the 1970s," said Miller. "And it is really in the past 20 years that the warming signal from that region has been just stunning. All of Baffin Island is melting, and we expect all of the ice caps to eventually disappear, even if there is no additional warming," Miller concluded.
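The ~50,000-year limit mentioned above is characteristic of radiocarbon dating: after that long, almost no carbon-14 survives to measure. The short Python sketch below illustrates the standard decay relation using the textbook 5730-year carbon-14 half-life; it is an illustration of the method, not the study's actual workup.

```python
import math

T_HALF_C14 = 5730.0  # carbon-14 half-life, years

def age_years(fraction_remaining):
    """Age of a sample from the surviving C-14 fraction:
    t = (t_half / ln 2) * ln(N0 / N)."""
    return (T_HALF_C14 / math.log(2)) * math.log(1.0 / fraction_remaining)

for f in (0.5, 0.1, 0.01, 0.003):
    print(f"{f:>6.3f} of original C-14 remaining -> {age_years(f):>8.0f} years")
```

By about 48,000 years, only roughly 0.3 percent of the original carbon-14 remains, which is why ages near 50,000 years sit at the practical edge of the technique and why the 44,000-to-51,000-year moss dates are close to that limit.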
March 10, 2010

Low Strengthens Into Hubert, Making Landfall In Madagascar

The low that forecasters were watching for development yesterday, March 9, strengthened into Tropical Storm Hubert, which is already making landfall in eastern Madagascar. The Atmospheric Infrared Sounder (AIRS) instrument on NASA's Aqua satellite captured Tropical Storm Hubert's cold thunderstorm cloud tops on March 10 at 5:11 a.m. ET as the western edge of the storm was already raining on eastern Madagascar. The infrared imagery showed two areas where convection was strong in Hubert: the northeastern and southern quadrants of the storm. It is in those two areas that the highest, coldest thunderstorm tops were revealed by AIRS infrared imagery. Those thunderstorm cloud tops were as cold as -63 degrees Fahrenheit.

Hubert has maximum sustained winds near 39 mph (35 knots) and is moving west-southwest near 6 mph (5 knots). At 10 a.m. ET (1500 UTC) on March 10, Hubert was located about 160 nautical miles southeast of the capital city of Antananarivo, Madagascar, near 20.9 South and 48.8 East. As Hubert continues moving inland over the next two days, the capital city and other areas in south-central Madagascar are forecast to continue experiencing periods of moderate to heavy rainfall and gusty winds.

Animated multispectral satellite imagery showed a loss of central convection as Hubert's center moved closer to landfall. Once Hubert's center is over land, forecasters expect Hubert will quickly fall below tropical storm strength.

Image Caption: NASA's Aqua satellite captured cold thunderstorm cloud tops of Hubert in this infrared image of March 10 at 5:11 a.m. ET. Hubert's western edge is already raining on Madagascar. Credit: NASA JPL, Ed Olsen
For centuries, a select group of alternative healthcare practitioners has known that colors can dramatically affect health, inner harmony and emotions. Although those trained within the conventional medical model may doubt the efficacy of color therapy, or chromotherapy, a surprising number of success stories have surfaced touting the ability of color to impact human health. As the science behind chromotherapy is uncovered, it is easy to recognize its parallel with polarity therapy. Since polarity therapy and chromotherapy are both deeply rooted in the basic laws of vibrational physics, these two modalities make a logical union.

Based on the premise that different bands of the light spectrum produce different effects in the human body, chromotherapy is known as a vibrational healing modality. When color and light strike an individual, they influence that same vibration present in the body. The set of frequencies related to musical notes demonstrates how the vibration of color can influence the human body. If two properly tuned guitars are in the same room and the G string is plucked on one guitar, the G string on the second guitar will also ring. This phenomenon occurs because the sound frequency of the G note travels across the room, causing the resonant frequency of the G string on the second guitar to sound. Likewise, the body's organs have their own resonant frequencies associated with each chakra and meridian. Well known to physicists, the electrically charged molecules composing living tissue are always vibrating. Thus, chromotherapy practitioners can tune their clients for optimal wellness by exposing chakras and meridians to the color needed.

Some of the properties of color that render it a potential healing tool include:

- A property of light, color is electromagnetic energy.
- Different colors of light have different wavelengths.
- The shorter the wavelength, like violet, the faster it vibrates; the longer the wavelength, like red, the slower it vibrates.

Creating resonance between the body's vibrating electromagnetic particles and the desired color's vibration helps chromotherapy recipients achieve a more healthful state.

Chromotherapy in Practice

Applying the principles of chromotherapy, a therapist can utilize light and color in various forms. Some of its more common applications include projecting colored light onto certain areas of the body, suggesting colored visualizations and incorporating various colored materials into a session. Each basic color used in chromotherapy is associated with a different chakra and relates to different physical and emotional issues:

- Red – Red stimulates brain wave activity, increases heart rate, respiration and blood pressure and excites the sexual glands. It energizes the first chakra, located at the coccyx. Warming and energizing, red is appropriate for someone who is tired, cold and has poor circulation.
- Orange – The color of joy and wisdom, orange energizes the second chakra, located at the sacrum. Thought to stimulate the appetite, orange is beneficial for illnesses of the colon and digestion.
- Yellow – Related to the solar plexus chakra, yellow energizes, lifts the mood, improves memory and can improve digestion.
- Green – Affecting the heart chakra, green is calming to the central nervous system. A good color for cardiac conditions, high blood pressure and ulcers, green also benefits those suffering from depression and anxiety.
- Blue – The color of the throat chakra, blue is a good color choice to influence respiratory or throat difficulties. Calming and cooling, blue may help counteract hypertension.
- Indigo – Related to the brow chakra, indigo can improve problems with the sinuses and face. It has also been used to help heal burns and reduce pain.
- Violet – Associated with the crown chakra, violet is cleansing, strengthening and peaceful. Affecting the skeletal system, it is often used therapeutically to improve immunity, ease arthritis and relieve headaches.

Polarity therapy is a natural health care system that is also based on the human energy field. Relying on the constant motion of molecules, polarity therapy is aimed at balancing the constant pulsation of energy between positive and negative poles. These poles create fields and energetic lines of force throughout the body. Dr. Randolph Stone, the founder of polarity therapy, explains that a disturbance in this energetic system causes a departure from good health. By incorporating energy mapping of the five natural elements (Ether, Air, Fire, Water and Earth) and the seven primary energy centers or chakras, polarity therapy encourages each energetic field to achieve unrestricted, optimal vibration levels. A polarity practitioner adds their own energy to a disordered field to create vibration in unison. Creating vibratory unison, known in physics as a Bose-Einstein condensate, allows a dysfunctional organ to work more effectively. Drawing an analogy with entropy in quantum physics, proponents of polarity therapy hold that healing occurs as energetic order is restored to systems that had previously been disordered.
<urn:uuid:20a25de3-3abf-4de7-8d23-5311234be079>
CC-MAIN-2016-26
http://www.integrativehealthcare.org/mt/archives/2008/04/chromotherapy_a.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907645
1,292
2.84375
3
Imagine trying to carry a very heavy backpack (say 20 pounds) on the front of your body. You would probably change your posture to balance the load. You would plant your feet further apart, tilt your hips forward, and arch your back with your belly pushed forward. Well, even though your baby isn't quite 20 pounds, you still might change your posture to carry the extra weight up front. This change strains the back muscles and causes backache. As your pregnancy progresses, the ache might worsen because, not only is the weight getting heavier, but also near the end of pregnancy the baby's head might be in a position that pushes against the lower spine. To prevent backache and reduce back pain, give these strategies a try:

Excerpted from The Complete Idiot's Guide to Pregnancy and Childbirth © 2004 by Michele Isaac Gliksman, M.D. and Theresa Foy DiGeronimo. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
Patient's Guide to Heart Transplant Surgery

Dental Care and Infections

Because so many bacteria and fungi may be present there, another source of major infection after transplant is your mouth. This is why we insist on a dental evaluation before you have surgery. After transplant, it is important that you have regular checkups and maintain good dental hygiene. The routine dental care provided by your dentist will help to prevent infections and to decrease the amount of gum overgrowth from the Cyclosporine. If you plan to have any major dental work done, please notify your dentist in advance that you have had a transplant and will require antibiotics to be given prior to the procedure. The reason for this is that bacteria may get into your blood while you are having dental work done, and the bacteria could cause an infection. The American Heart Association standard antibiotic protocol is advised for the prevention of infection during major dental work. If your dentist has any questions, please have him/her call the transplant office. The transplant doctors or coordinators will always be willing to give him/her any information needed.
This web page was made as an assignment for an undergraduate course at Davidson College.

My Favorite Protein:

Figure 1. Structure of EPO complexed with the extracellular domains of the EPO receptor, shown in ribbons. Image taken from the PDB. Permission pending.

What is Erythropoietin?

Erythropoietin, also known as EPO, is an acidic glycoprotein growth factor that triggers erythrocyte, or red blood cell, production (Erslev 1991). The 5 exons of the EPO gene encode 193 amino acids, 27 of which are later cleaved off to produce a 166-amino-acid-long peptide, although the circulating peptide contains 165 amino acids. The mechanism for this cleavage is unknown (OMIM 2003). EPO is produced by the kidney or liver of adult mammals and also produced by the liver of fetal and neonatal mammals (Genecards 2003).

How does Erythropoietin control erythrocyte concentration?

Erythropoietin triggers the production of erythrocytes, which make up the majority of the cells within blood. The purpose of red blood cells is to transport respiratory gases. A low level of oxygen in the body, known as hypoxia, triggers the pathway leading to EPO production and, consequently, erythrocyte production. This process works through the transcription factor HIF-1, which many tissues give off in reduced-oxygen conditions. HIF-1 tells the kidney (or liver) to produce EPO. EPO then binds to two receptors (EPO-R) found on stem cells in the bone marrow of the ribs, breastbone, pelvis and vertebrae. This leads to maturation into functional red blood cells and, ultimately, an increase of the oxygen supply in tissues (Purves et al. 2001). Thus, when EPO is present, red blood cells mature. When EPO is unavailable, they die (Erslev 1991).

Figure 2. Figure from Life: The Science of Biology, Sixth Edition (Purves et al. 2001). This figure demonstrates how low oxygen levels cause the growth factor HIF-1 to trigger the kidney to make erythropoietin. EPO then causes stem cells to synthesize red blood cells, which causes the oxygen supply within tissues to become greater. Permission pending.

Structure of Erythropoietin

Figure 3. Chime image of erythropoietin, from the PDB.

Erythropoietin is composed of an "up-up-down-down four helical bundle topology" and has "two small antiparallel beta strands typical of the short-chain class" (Syed et al. 1998). A disulphide bridge connects one of the pairs of antiparallel long helices from Cys 7 to Cys 161, while another antiparallel long helix (between the alpha B and alpha C regions) is connected by a short loop. An irregularity at Gly 151 results in a kink in the alpha D helix. A beta sheet also results from amino acids of the AB and CD crossover loops. The A, B, and C helices, combined with many aromatic and hydrophobic regions, form the interior of EPO. In addition, short helices exist near both the alpha B and alpha C regions (Syed et al. 1998). EPO binds to two receptor proteins (EPObp2 and EPObp1), and thus has two binding sites (Syed et al. 1998).

Figure 4. Figure from Syed et al. 1998. The figure shows the crystal structure of erythropoietin complexed with its two receptors, EPObp2 and EPObp1. Alpha helices are shown as cylinders, while beta sheets are shown as ribbons. Permission pending.
Mutants of Erythropoietin

Since Gly 151 in the D helix of erythropoietin connects the side chain of Lys 152 into hydrophobic contact with Val 63, Trp 51, and Phe 148 within the interior of the protein, replacement with alanine at either position 151 or 152 causes a loss of activity. Mutations to acidic amino acids also cause a considerable loss of reactivity, although substitutions at the basic positions Lys 20 and Lys 45 result in no loss of bioactivity. Two positions that naturally contain arginine are very susceptible to mutations that result in loss of bioactivity: Arg 103 (giving mutant R103A) and Arg 14 (giving mutant R14Q). Both Arg 103 and Arg 14 are involved in site 2 binding, but a mutation at Arg 103 results only in loss of site 2 binding, whereas a mutation at Arg 14 results in an overall fivefold loss in affinity (for both binding sites 1 and 2) (Syed et al. 1998).

Table 1: Table from Nature (Syed et al. 1998). This table shows the amino acid residues within the functional epitope of erythropoietin. Mutations that cause the greatest change in bioactivity are shown, and the degree to which they cause loss of in vitro bioactivity is marked (bold and underlined, >50 times; bold, >5 times; underlined, 2-5 times; unhighlighted, no effect).

Erythropoietin and Disease

Underproduction of EPO is linked to a condition known as anemia, the exhaustion of red blood cells (Purves et al. 2001). Among the diseases associated with or coinciding with underproduction of EPO are cancer, rheumatoid arthritis, HIV infection, sickle cell anemia, and anemia of prematurity. In some of these cases, such as anemia of prematurity, a problem in the translation of the erythropoietin-coding gene into its protein is the cause of low EPO levels (Faruki and Kiss 1995). Anemia of prematurity seems to be caused by this underproduction of EPO. It is believed that the switch from synthesis of erythropoietin in the liver to synthesis within the kidney, which happens at birth in many mammals, may be the cause of underproduction of EPO in premature infants: there is believed to be a delay in the switch to renal EPO synthesis, so less erythropoietin is produced in premature babies. In other diseases, such as chronic renal disease, the decrease in EPO production is due to the fact that the kidney's function is impaired; because erythropoietin is produced mostly in the kidneys, its production is likewise impaired (Erslev 1991). However, in cases of anemia associated with cancer and other chronic diseases, the cause of decreased EPO levels is the inhibition of EPO by inflammatory cytokines such as IL-1 and TNF that are generated in the presence of these diseases (Faruki and Kiss 1995).

Treatment of Anemia

Anemia caused by low levels of EPO can be treated through the use of recombinant EPO, or rhu-EPO. The gene encoding EPO was extracted, spliced into an expression vector and multiplied through the use of bacteria. Because people who have kidney failure undergo dialysis, which removes toxins and, in the process, EPO from their body, they lack whatever EPO their body did make. Recombinant EPO is thus given to patients undergoing dialysis to restore their EPO levels (Purves et al. 2001).

Human Erythropoietin Amino Acid Sequence and Orthologs

For Homo sapiens
For Mus musculus (house mouse)
For Equus caballus (horse)
For Bos taurus (cow)

References

Erslev AJ. 1991. Erythropoietin. New England Journal of Medicine 324: 1339-1344.
Faruki H, Kiss JE. 1995 July. Erythropoietin. The Institute for Transfusion Medicine. <http://path.upmc.edu/consult/rla/july1995.html> Accessed 2003 Mar 10.

GeneCards. 1997-2001. GeneCard for gene EPO GC07P098853. Weizmann Institute of Science. <http://bioinfo.weizmann.ac.il/cards-bin/carddisp?EPO&search=erythropoietin&suff=txt> Accessed 2003 Mar 11.

McKusick VA. 1986 June 4. *133170 Erythropoietin, EPO. OMIM. <http://www.ncbi.nlm.nih.gov/entrez/dispomim.cgi?id=133170> Accessed 2003 Mar 10.

NCBI. National Center for Biotechnology Information. Individual links found with information.

Purves WK, Sadava D, Orians GH, Heller HC. 2001. Life: The Science of Biology, Sixth Edition. Sunderland, Massachusetts: Sinauer Associates, Inc., pp. 324-325 and 879-881.

Syed RS, Reid SW, Li C, Cheetham JC, Aoki KH, Liu B, Zhan H, Osslund TD, Chirino AJ, Zhang J, Finer-Moore J, Elliott S, Sitney K, Katz BA, Matthews DJ, Wendoloski JJ, Egrie J, Stroud RM. 1998. Efficiency of signalling through cytokine receptors depends critically on receptor orientation. Nature 395: 511-516.
Details about Atlas of Great Lakes Indian History:

The Indian history of the Great Lakes region of the United States and Canada, and particularly of the Ohio Valley, is so complex that it can be properly clarified only with the visual aid of maps. The Atlas of Great Lakes Indian History, in a sequence of thirty-three newly researched maps printed in as many as five colors, graphically displays the movement of Indian communities from 1640 to about 1871, when treaty making between Indian tribes and the United States government came to an end.

History was shaped in this part of North America by intertribal warfare, refugee movements, epidemics of European-introduced diseases, French and English wars and trade rivalry, white population advances, Indian resistance, Indian treaties deeding land to state and national governments, and imperfect arrangements for reservations, removal, and allotment of land. The changing pattern of Indian village locations as a result of all these factors is shown on the maps. Each map is highlighted by accompanying text, written as if the author were pointing out specific places on the map. Eighty-one illustrations convey a realistic impression of the land and its people.

Rent Atlas of Great Lakes Indian History 1st edition today, or search our site for other textbooks by Helen H. Tanner. Every textbook comes with a 21-day "Any Reason" guarantee. Published by University of Oklahoma Press.
June 30, 2011

We've been closely observing the rock/ice fall coming down the Nisqually Glacier the past week. The word from the experts is that it is not volcanic or seismic in origin. The probable cause is natural erosion of the volcano, at a spot that has weakened significantly in the exposed layers of volcanic strata high up on the Nisqually Cleaver (Ridge). Rock fall from the steep exposed part of the ridge occurred at least three times, entraining large amounts of snow and ice as it fell. So far, the furthest extent of the flow of this material down the glacier is to an elevation of approximately 7,600 feet. Below that the glacier flattens out significantly.

Our groups are taking a conservative crossing point on the lower glacier right now, at about 6,000 feet in elevation and approximately one mile in distance from the lowest activity. We will continue to observe activity on the glacier and have an alternate route available, if necessary, to avoid the Nisqually Glacier completely. The Nisqually Glacier is a contained drainage and all activity is confined to this area. It does not affect our ascent of the Muir Snowfield to Camp Muir, or the Kautz Route or Fuhrer Finger Route once we have gained the other side of the lower Nisqually Glacier.
Climate models generally do a poor job of capturing how rising temperatures in the Arctic are affecting sea ice. Most underestimate the rapid pace at which sea ice is diminishing. Why is that?

Scientists at the massive science conference hosted by the American Geophysical Union (AGU), taking place now in San Francisco, have been discussing why it is so difficult to capture what's happening to Arctic sea ice in climate models, and how we can make the most reliable forecasts possible with the tools available.

A policy-making tool

Arctic sea ice extent has been declining by about four per cent per decade, with the seasonal low at the end of summer shrinking particularly quickly.

[Figure: Decadal trend in Arctic sea ice extent since 1979 (left); map of changes in sea ice concentration across the Arctic (right). Source: IPCC 5th Assessment Report (Sep 2013)]

Reliable forecasts of how warming will affect sea ice are important for decision making, Professor Julienne Stroeve from the US National Snow and Ice Data Center told the AGU conference. This includes questions like when the Arctic is likely to be sea ice free in summer. But only a quarter of models simulate a rate of sea ice loss comparable with that observed by satellites since 1979, according to the Intergovernmental Panel on Climate Change (IPCC).

So where are they going wrong?

One major limitation of most climate models is that they don't give a complete picture of the physics that governs interactions between sea ice, the ocean and the atmosphere, Professor Danny Feltham from the University of Reading explained in the session. Some physical processes that affect sea ice take place on smaller scales than models can simulate. Scientists instead have to simplify the effect those processes have on sea ice, using mathematical equations to represent the complex physics. This is known as parameterisation, Feltham told us: "When I say it's important to improve the physics in climate models, I mean that it's important to include parameterisations of processes that aren't currently in climate models or to make those that are included already more realistic."

Melt ponds are a good example, Feltham explains. These pools of meltwater on the surface of the sea ice affect how much of the sun's energy is reflected by sea ice, and how much is absorbed. Ridges that develop when ice floes collide make the surface rougher and can also affect the interactions between the ocean, sea ice and atmosphere.

Why is it so hard to get these physical processes right? One reason is that the processes are new areas of research and nobody has studied them in detail yet, says Feltham. But the real challenge is that scientists need to test models against real-world observations, and at the moment those observations are very thin on the ground. Scientists need data about physical processes occurring across the whole Arctic, increasing the scale of the task even further, Feltham says: "Fieldwork is crucially important, but if you're only making observations in a few points, it's not clear how you can scale that up to the whole Arctic."

[Photo: Scientists sampling melt ponds as part of NASA's now-completed ICESCAPE project. Credit: NASA/Kathryn Hansen]

When it comes to making forecasts of sea ice loss, the fact that most models don't match up well with recent observations is an issue, says Stroeve. The IPCC uses a subset of about five models that appear to be doing a better job than the rest.
But it could be that they're getting it right for the wrong reasons, Feltham explains: "All models have an incomplete representation of physics, and it could be that the way you've balanced the physical processes means the model does a reasonably good job against current observations."

But the best models now might not necessarily be the best ones in ten years' time, he adds: "As the sea ice cover and climate in general is evolving, different physical processes assume more or less importance. And so unless you capture the physics behind those processes, you can't predict how they're going to change in future."

Selecting a subset of models based on a comparison with a particular time period is no guarantee of a reliable forecast, says Stroeve: "You have to ask yourself the question, if the models perform well in one time period, does that mean they will perform better in the future? From our analysis that's not the case."

And the time period chosen for the comparison may influence how well the models appear to be performing. Only a quarter of models resemble the trend in observations up to 2012, but if you include data for 2013 and 2014, you get a better match to reality, Stroeve points out.

A weighted approach

Stroeve has an interesting solution. Rather than using a subset of models, her approach uses all available climate models but attaches a different level of importance to each one depending on how well they match observations. Models that don't match well don't contribute much to the forecast, while ones that perform best contribute the most. The mix is constantly checked against each year's data to make sure it's the best combination, she told the AGU crowd.

Another thing to consider is that a single model might not realistically capture how natural variability changes sea ice on a year-to-year basis, Stroeve explains: "If you're trying to forecast when the Arctic will be sea ice free, that will depend on the natural variability in the particular model or range of models you select ... Taking all the models together might be one way to sample the whole range of natural variability that you could expect."

Applying this approach suggests that in a scenario where emissions peak around 2040 and start to decline, there will still be just over one million square kilometres of sea ice left in summer by 2080. This is the threshold considered to define a "sea ice-free" Arctic summer.

Stroeve's team has submitted the new research for publication in Nature. The next job for the team is to look at doing the same analysis for a higher, business-as-usual emissions scenario, which could well tell a different and more dramatic story, she tells us.
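Stroeve's weighted-ensemble idea lends itself to a short illustration. The sketch below is a minimal, hypothetical version of skill-weighted averaging: the weighting rule (inverse squared error against observations), the toy numbers, and the model names are all assumptions made for this example, not details of the submitted study.

```python
# Minimal sketch of skill-weighted ensemble averaging, assuming a simple
# inverse-squared-error weighting rule; the numbers are invented for illustration.

observed_trend = -4.0  # e.g. per cent sea ice loss per decade, per the satellite record

# Hypothetical trends simulated by different climate models over the observed period.
model_trends = {"model_a": -3.8, "model_b": -1.5, "model_c": -4.3, "model_d": -0.9}

# Hypothetical projections from the same models for a future date.
model_projections = {"model_a": 1.1, "model_b": 2.6, "model_c": 0.9, "model_d": 3.0}

# Weight each model by how closely its simulated trend matches observations:
# models far from the observed trend contribute little, close ones contribute most.
weights = {name: 1.0 / ((trend - observed_trend) ** 2 + 1e-6)
           for name, trend in model_trends.items()}
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

forecast = sum(weights[name] * proj for name, proj in model_projections.items())
print(f"weighted forecast: {forecast:.2f} million sq km of summer sea ice")
```

In the approach described above, the weights would be re-checked as each new year of observations arrives, so a model's influence can grow or shrink over time.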
This is a masterly theoretical treatment of one of the central problems in evolutionary biology, the evolution of social cooperation and conflict. Steven Frank tackles the problem with a highly original combination of approaches: game theory, classical models of natural selection, quantitative genetics, and kin selection. He unites these with the best of economic thought: a clear theory of model formation and comparative statics, the development of simple methods for analyzing complex problems, and notions of information and rationality. Using this unique, multidisciplinary approach, Frank makes major advances in understanding the foundations of social evolution.

Frank begins by developing the three measures of value used in biology--marginal value, reproductive value, and kin selection. He then combines these measures into a coherent framework, providing the first unified analysis of social evolution in its full ecological and demographic context. Frank also extends the theory of kin selection by showing that relatedness has two distinct meanings. The first is a measure of information about social partners, with close affinity to theories of correlated equilibrium and Bayesian rationality in economic game theory. The second is a measure of the fidelity by which characters are transmitted to future generations--an extended notion of heritability.

Throughout, Frank illustrates his methods with many examples, including a complete reformulation of the theory of sex allocation. The book also provides a unique "how-to" guide for constructing models of social behavior. It is essential reading for evolutionary biologists and for economists, mathematicians, and others interested in natural selection.
With special commentary by Gregory Tino, MD, Assistant Professor of Medicine, Pulmonary-Critical Care, and Director of Pulmonary Outpatient Practices at the Hospital of the University of Pennsylvania.

The problem of "second-hand smoke," which is also called environmental tobacco smoke (ETS), is an emotionally charged personal and public health issue. Non-smokers have many negative comments about being forced to breathe toxin-filled air. Smokers, on the other hand, feel that their "rights" are infringed upon by non-smokers seeking regulations to inhibit their smoking habits. The fact remains that scientists estimate that every year more than 3,000 deaths from lung cancer in non-smokers are caused by second-hand smoke.

Second-hand smoke is made up of about 80 percent "sidestream smoke" (the smoke which comes from the lit end of the cigarette and does not pass through the filter) and 20 percent "mainstream smoke" (the smoke which is exhaled by the smoker). Laboratory analysis has shown that sidestream and mainstream smoke contain different amounts of toxic substances. Sidestream smoke is actually the more dangerous of the two, as it contains higher concentrations of toxins and cancer-causing chemicals. This smoke is not inhaled by the smoker, only by those around him or her.

Second-hand smoke is a major source of dangerous indoor air pollution in the United States. It contains almost 5,000 chemicals. The Environmental Protection Agency (EPA) and the Department of Health and Human Services have compiled a list of the Top 20 Hazardous Substances. Seven of the twenty toxins listed are found in cigarette smoke. Remarkably, about 75 percent of a cigarette's nicotine (another noxious and addictive substance) goes into the air, for others to passively inhale via sidestream smoke.

There is no safe exposure level for the cancer-causing agents found in second-hand smoke. The EPA classifies second-hand smoke as a known -- not just a "probable" or "possible" -- human Class A carcinogen. This distinction has been used by the EPA for only 15 other pollutants, including asbestos, radon, and benzene. Studies of non-smoking spouses living with smokers conclude that long-term exposure to second-hand smoke increases the risk of lung cancer in the spouse who has never used tobacco. According to Dr. Tino, "Advice to avoid second-hand smoke should take its place beside the usual advice to avoid the use of all tobacco products."

It is not easy to avoid second-hand smoke, because an estimated one in four people in the U.S. smoke. Federal, state, and local governments have already begun to enact laws that attempt to limit exposure to second-hand smoke. The federal government has banned smoking on the sites of federally assisted programs for children and on domestic airline flights. Forty-eight states and the District of Columbia have enacted laws that, in some way, restrict smoking in public places. As more people become aware of the dangers of second-hand smoke, they put pressure on their government officials to enact tougher legislation. A recent example of this response is the suggestion for stricter regulations on second-hand smoke made by the Presidential Task Force on Environmental Tobacco Smoke. Non-smokers are now beginning to voice their opinions.
The Department of Labor ruled that the widower of a former Veterans Administration hospital nurse should receive compensation for her second-hand-smoke-related cancer death. And the first class action lawsuit has recently been filed against the tobacco industry by 60,000 flight attendants. They allege that their illnesses, including lung cancer, have been caused by second-hand smoke.

The fact remains that most Americans, smokers and nonsmokers alike, are wary of tighter governmental regulations on any issue. But most will agree that we do have a "right" to breathe clean air. It seems, however, that we cannot agree on exactly what constitutes air pollution. This is illustrated by an experience I had several years ago. I attended a town meeting in my area to express my opinion about a proposal to build a trash-to-steam plant in the middle of our neighborhood. When I entered the church hall filled with other "concerned citizens" I was shocked. The room was filled with cigarette smoke. When will we learn?
Vice President Hubert Humphrey announces his candidacy for the Democratic presidential nomination. In an interview, he said he supported the current U.S. policy of sending troops "where required by our own national security."

On March 31, 1968, President Lyndon B. Johnson, frustrated with his inability to reach a solution in Vietnam, announced that he would neither seek nor accept the nomination of his party for re-election. This set up a contest for the Democratic nomination. Humphrey's main competition was Senator Eugene McCarthy (D-Minnesota), who had come within a few hundred votes of beating Lyndon Johnson in the New Hampshire primary. Robert Kennedy had entered the race and won most of the Democratic primaries until he was assassinated in June. When the Democratic National Convention opened in Chicago in August, a conflict immediately erupted over the party's Vietnam platform. While demonstrations against the war took place in the streets outside the convention hall, Humphrey won the party nomination. He was ultimately defeated in the general election by Republican Richard Nixon, who criticized Johnson's handling of the war and ran on a platform of achieving "peace with honor" in Vietnam.
June 5, 2013

Keep Wrinkles Away With Regular Sunscreen Use

Brett Smith for redOrbit.com - Your Universe Online

While sunscreen can be effective in preventing painful burns or even skin cancer, a new study in the Annals of Internal Medicine shows that it can also prevent photoaging, or skin wrinkling and aging as a result of exposure to ultraviolet radiation.

"We now have the scientific evidence to back the long-held assumption about the cosmetic value of sunscreen," co-author Adele Green, an epidemiologist with the University of Manchester, told CNN. "Regular sunscreen use by young and mid-aged adults under 55 brings cosmetic benefits and also decreases the risk of skin cancer."

The study researchers recruited just over 900 participants who were followed for four years. About half of the participants were told to properly use sunscreen daily, re-applying it after a few hours outside, after going in the water, or after heavy sweating. The other participants were not given directions with regard to sunscreen use. The researchers noted that ethical concerns prevented them from asking this control group not to use sunscreen.

To measure the signs of photoaging among the volunteers, the scientists used a technique called microtopography, which involved taking silicone impressions of the back of each volunteer's hand. Damage found in the impressions was measured on a scale from one to six, with six signifying skin with severe aging. Skin aging levels were recorded at the start and the end of the four-year period.

"Skin surface patterns reflect the severity of the sun's damage to the deeper skin, especially to the elastic fibers and collagen," Green said.

Researchers found that participants in the daily sunscreen group were 24 percent less likely to show increased signs of aging. The study also found that volunteers over age 55, who naturally experience more age-related skin changes, didn't see as much of a benefit as younger participants. The average age in the study was 39.

Previous research has shown that UV radiation damages collagen and other fibers responsible for keeping skin smooth and firm. Broad-spectrum sunscreens, like the SPF 15 used in the new study, provide protection against these damaging rays.

While the dangers of UV exposure have been well known and widely reported for some time, some experts said the new study on photoaging may get more attention and change more habits.

"It has been a source of frustration for us that for some sections of the community, the sun-safe message does not seem to be getting through," Green told USA Today's Kim Painter. "We now know that protecting yourself from skin cancer by using sunscreen has the added bonus of keeping you looking young."

"Maybe sheer vanity will encourage young people to be proactive and use their sunscreen, because the cancer fear doesn't seem to be getting through to them," Deborah Sarnoff, a New York City dermatologist and a senior vice president at the Skin Cancer Foundation, told the national daily paper.

She added that while the new study was "very well done," it may underestimate the effect of sunscreen because it used SPF 15, a minimum by dermatological standards, and was conducted in the 1990s, when the formulas involved were less refined than they are today.
The Iowa Blue chicken breed is a good dual-purpose layer of light brown eggs. It also produces sex-linked offspring when used in cross-breeding programs.

Iowa Blue Facts:

Size: Male: 7 lbs. Female: 6 lbs.

Comb, Wattles & Earlobes: They have medium to moderately large single combs with six well-defined points that stand upright. Their wattles and earlobes are medium to moderately large as well, and they are all bright red.

Color: The beak is horn and the eyes are dark brown. The shanks and toes are slate. In spite of the name, the bird does not hold to true blue coloring. The head is silvery white, and the neck and upper breast have white feathers with a slender black stripe down the middle, transitioning to black feathers with white lacing. The lower breast, body, legs, wings and tail are bluish black to gray with penciling. Male: The back and saddle are similar to the neck. Female: The back is bluish to gray with penciling.

Place of Origin: United States

Conservation Status: Study

Special Qualities: They are a good layer of lightly tinted brown eggs.

These birds were developed around Decorah, Iowa, in the first half of the 1900s. The breed is very rare and has not been recognized by the APA or ABA. The almost folktale-like origin story of these birds is that a White Leghorn hen went broody and hid under a building to brood her chicks. When she finally came out she had a group of chicks unlike any in the area. Some were colored chestnut, but others looked like pheasant chicks, with light yellow horizontal stripes on their cheeks, a triangle of yellow under their chins, and black stripes down their backs. Some of the old-timers familiar with the breed would tell you that the breed was sired by a pheasant.

In the 1960s some hatcheries within Iowa carried the breed readily. As time passed and these hatcheries went out of business, the breed was almost lost. Ken Whealy, of a Decorah-based non-profit organization dedicated to the preservation of heirloom plants, discovered a few struggling flocks of these birds in the 1980s. Since then he has been trying to distribute birds to any interested parties in an effort to restore the breed.

The breed is a dual-purpose bird and is known as a good forager. Hens lay light brown eggs and, as the story suggests, they do tend to go broody. When Iowa Blue roosters are used in cross-breeding they produce sex-linked chicks, such as gray cockerels and black pullets when crossed with a White Plymouth Rock hen, or a reddish gray cockerel and blackish gray pullet when crossed with a New Hampshire.
Paul Cézanne worked primarily in Aix-en-Provence, in the South of France. He became a painter only after much disagreement with his father, who encouraged him to study law and banking. Although he regularly spent short periods in Paris, he spent most of the rest of his life in Aix and nearby L’Estaque, where he painted scenes from the surrounding countryside. Key piece to look for: House in the Country, 1877-79. Cézanne is best known for his landscapes of the countryside around Aix-en-Provence, which he painted by slowly building up broad, thick strokes of color, giving his paintings a richness of color and lack of outlines. He adopted the impressionist style of working outdoors, often picking almost inaccessible vantage points to work from. He approached his subjects analytically. “Deal with nature as cylinders, spheres, and cones,” he said. He even went as far as to study the geology of a landscape in order to know “its geological structure…how [its] roots work, the colors of the geological soils.” Image credit: Paul Cézanne, House in the Country, about 1877-79. Oil on canvas; 23-1/2 x 28-7/8 in. Wadsworth Atheneum; Anonymous gift.
13 December 2011

Secretary-General Ban Ki-moon stressed today that sustainable development cannot be achieved without addressing social inequalities, and called for fresh ideas and international commitment to fairly sharing global resources.

"One billion hungry people – one in five people without access to electricity – nearly 80 million young people out of work – 67 million children not in primary school – pervasive poverty – egregious disparities in access to sanitation and adequate health care," said Mr. Ban in an address to the Fifth Meeting of his High-Level Panel on Global Sustainability.

"This is not equitable. It is not sustainable. Nor can we live with deteriorating ecosystems. Science tells us that we are approaching, and increasingly over-stepping, certain planetary boundaries. This, too, is not sustainable," he said.

The 21-member panel, which is co-chaired by President Tarja Halonen of Finland and her South African counterpart Jacob Zuma, is expected to provide its final report next month after 16 months of work. It is tasked with finding ways to lift people out of poverty while tackling climate change and ensuring that economic development is environmentally friendly.

"Your recommendations will help shape the UN system's policies on sustainable development for years to come," said the Secretary-General. "I look for your guidance on governance issues in particular. How can the UN system work more effectively – and with other institutions – to make sustainable development a reality?"

He said the panel's report will also be a major contribution to the UN Conference on Sustainable Development (Rio+20) in Brazil in June next year. He described the conference as "a once-in-a-generation opportunity that we cannot afford to waste."

"All the issues that will be on the table in Rio – climate change, demographics, water, food, energy, global health, women's empowerment – are intertwined. And all the pillars that underpin the Rio process – stabilizing the global economy, safeguarding the environment, and ensuring social equity – are parts of a single agenda.

"We cannot make progress in one without progress in the others. At Rio the world should act on this fundamental understanding," he stressed, adding that the current economic crisis should not be an excuse for inaction.
KNOXVILLE, Tenn., Aug. 14 (UPI) -- Stopping bullying has an evolutionary basis, a U.S. researcher says: preventing bullies from monopolizing resources helps groups increase their odds of survival.

The urge to band together against strong aggressors is a key to humanity's success as a species, biomathematician Sergey Gavrilets of the University of Tennessee said. He used mathematical models to investigate why humans exhibit strong egalitarian, or socially equal, behaviors.

Gavrilets used the models to compare the prosperity of groups that allowed stronger members to consume the best resources with that of groups in which "helpers" aided a weak individual by standing up to, or fighting off, the stronger bully. Extrapolated over thousands of generations, the models showed that groups with helpers prospered.

The findings suggest people evolved a genetic drive to help weak individuals fight back, ultimately leading to widespread cooperation among humans as well as empathy and compassion.

"Based on the results, helping the victim then is the evolutionary 'right' thing to do, not only from a victim's point of view or a societal point of view, but also the helper's point of view," Gavrilets told the Los Angeles Times. "I'd speculate that this is also a psychologically rewarding thing to do in spite of the risks potentially involved."
The pharaohs of the first Egyptian royal Dynasty (c. 3000-2800 B.C.) chose to be buried at Abydos in Upper Egypt. Their courtiers, however, started a cemetery of massive rectangular tombs (known as mastabas) on the northern tip of the Saqqara plateau. From the Second Dynasty onwards, royal tombs were constructed at Saqqara too. Not much is known of these simple subterranean galleries. Although the last king of this line, Khasekhemuy, was again buried at Abydos, he also constructed a huge rectangular enclosure at Saqqara. This so-called Gisr el-Mudir set the example for his successor Djoser of the Third Dynasty (c. 2630-2611 B.C.), whose funerary complex comprises a similar enclosure of 545 x 277 m. Its centre is occupied by a dazzling architectural innovation: a 63 m high Step Pyramid, made by piling up six mastabas on top of each other. The rest of Djoser's enclosure contains a number of temples and dummy buildings, all built in bright white Tura limestone.

After Djoser, step pyramids soon developed into true pyramids. At the same time, the enclosures became much smaller and merely enveloped a pyramid temple, joined to a valley temple at the edge of the cultivation by means of a sloping causeway. Most kings selected other Memphite cemeteries for their pyramids: Giza and Abusir north of Saqqara, or Dahshur and Meidum to the south. Still, Saqqara boasts the remains of the step pyramid of Sekhemkhet (Third Dynasty), the mastaba tomb of Shepseskaf (Fourth Dynasty, 2472-2467 B.C.), the true pyramids of Userkaf, Djedkarê, and Unas (Fifth Dynasty, 2465-2323 B.C.), and those of all the kings of the Sixth Dynasty (Teti, Pepi I, Merenrê, Pepi II, 2323-2150 B.C.). The last five monuments contain copies of the oldest religious texts from Ancient Egypt, the so-called Pyramid Texts.

By the time of Pepi II, many areas of the Saqqara plateau were already lined with mastaba tombs of Memphite courtiers and officials. Usually these rectangular structures comprise a number of offering chapels with wall decoration in limestone reliefs. Thus, Saqqara still forms a large open-air museum of Old Kingdom art. During the Middle Kingdom (2040-1640 B.C.) and New Kingdom (1550-1070 B.C.) both the capital and the major cemeteries moved further south, and only two more mudbrick pyramids were built at Saqqara, in the Thirteenth Dynasty. One of them was constructed by Khendjer; the other's owner is unknown.

Large-scale construction at Saqqara was not resumed until the middle of the Eighteenth Dynasty (from about 1400 B.C. onwards), when the pharaohs again devoted more attention to Memphis. Numerous high officials, priests, and artisans built their tombs in several clusters dispersed all over the plateau. These tombs were of a new type: a free-standing offering chapel or funerary temple, sometimes with an open courtyard and pylon gateway, with rock-cut burial chambers deep underground. There were also some completely rock-cut tombs along the edge of the Saqqara escarpment. This period of use lasted for about two centuries, when the attention shifted again to the new capitals in the Nile Delta.

During the last millennium B.C. a great number of shaft tombs were cut, until the whole substructure of the desert was honeycombed. The same period witnessed great religious activity on the Saqqara plateau. The site developed into a place of pilgrimage, centred around the burial place of the sacred Apis bulls of Memphis (the Serapeum). The latter consists of vast underground galleries lined with the burial chambers for the individual bulls.
Similar galleries were cut for other animal cults (cows, baboons, cats, dogs, ibises and hawks). This upsurge of the traditional Egyptian cults was followed by Christianity, which brought several monastic communities to the desert of Saqqara. After about 850 A.D., the plateau became utterly deserted and most of its monuments were gradually covered by drift sand.
Sealaska is seeking congressional help in obtaining the last 85,000 acres promised to it in the 1971 Alaska Native Claims Settlement Act (ANCSA). The legislation, which amends ANCSA, will be reintroduced to Congress this year. It honors a promise to restore just four-tenths of 1 percent of Southeast Alaska's 23 million acres to Sealaska and its tribal member shareholders. The return of this land has far-reaching significance for the economy of Southeast Alaska and the development of sustainable businesses for the future.

Last week the public radio stations of Southeast Alaska broadcast a series about what it termed Sealaska's controversial lands legislation. It failed to say that opinion polls show that the majority in Southeast Alaska support the legislation. So do Sen. Lisa Murkowski, Sen. Mark Begich and Rep. Don Young. Alaska's congressional delegation has heard the concerns of people throughout Southeast Alaska in an unprecedented public process. Sealaska itself held more than 200 community stakeholder meetings that helped to shape the legislation.

Of the lands Sealaska is seeking, nearly half have already been logged or already have logging roads. This is unlike the land originally identified in ANCSA, which includes important watersheds for local communities. Still, some argue that Sealaska should select land inside the box prescribed by ANCSA. But for most of us, environmental values have changed since 1971. Hardly anyone now thinks that logging should happen within the Tongass' high conservation areas. But that is where most of Sealaska's remaining entitlements are.

Sealaska's timber operation is an important economic engine in Southeast Alaska's economy, providing more than 400 jobs that would be lost by 2013 if the legislation fails. Sealaska's impact on the regional economy extends beyond jobs; it has also provided wood to small mill operators and to other businesses through its micro sale program. Beyond Southeast, the continued viability of this timber harvest matters statewide. It also matters to over 80,000 Alaska Native tribal shareholders, for whom Sealaska's timber earnings are a leading source of shared revenue.

Other lands identified in the legislation will be designated for the future. Sealaska will use these lands for future diversification into cultural tourism and green energy.

The public radio series did give us some hopeful insights. No one doubted the ancestral importance of the forest to Sealaska's tribal member shareholders, who call it Haa Aaní. No one has objected to the protection of sacred sites. This respect has not always been accorded to the Native peoples of Southeast Alaska.

So what values does this legislation hold for residents of Southeast Alaska?

• Over 270,000 acres of designated roadless areas and 112,000 acres of productive old growth set aside for Sealaska selection will be released back as public lands.

• It reduces Sealaska's harvest of old-growth timber by over 41,000 acres, and creates 150,000 acres of newly designated conservation areas.

• When managed with the rest of Sealaska's lands, it will provide a sustainable forest with fish and wildlife habitat and timber operations that will annually generate 40 million to 50 million board feet.

• It protects 400 jobs and creates new employment in Southeast Alaska, which has higher unemployment than the rest of the state.

A larger conversation still has to happen about the future uses of the entire Tongass National Forest, which touches everyone in Southeast.
But right now, as Murkowski has said, the U.S. needs to fulfill its 1971 promise to Sealaska. Supporting Sealaska’s land legislation is a good idea for anyone who is concerned about the economy and the environment in Southeast Alaska. Sealaska is proud to provide jobs and support businesses throughout the region, and with the lands promised it can be an economic engine for generations to come. This legislation is good for Southeast’s economy, and it is good for the country to keep its promises. • McNeil is the president and CEO of Sealaska Corporation.
One of the best-exposed geological cross sections of the Antarctic continent occurs in the northern sector of the Transantarctic Mountains along the west coast of the Ross Sea in North Victoria Land. Geological investigations of the region reveal a cross section of a late Precambrian-Palaeozoic mobile orogenic belt, segmented into distinctive 'terranes' and further disrupted by ... Cretaceous-Cenozoic break-up tectonics of Gondwanaland in the Ross Sea region.

Field sampling for K-Ar and Rb-Sr geochronological studies in the Priestley and Mariner Glacier areas was undertaken during the first half of the 1990/91 Italiantartide Expedition at Terra Nova Bay Station, Antarctica, as part of a regional 1:250,000 geological mapping programme in North Victoria Land. Some 320 samples were collected.

A sequence of samples from the Priestley Formation in the upper Priestley Glacier was collected for K-Ar and Rb-Sr dating of early regional metamorphism (late Precambrian and/or early Palaeozoic) in the Wilson Terrane basement. Possible correlatives at higher metamorphic grade were collected for similar work in the upper Boomerang and lower Priestley Glaciers (Priestley Schists) and on the coast around Terra Nova Bay. Samples were also collected from the Bowers/Wilson and Bowers/Robertson Bay Terrane boundaries in the Mariner Glacier region, especially in the Mountaineer and Millen Ranges. Rocks from the Bowers and Robertson Bay Groups occur in the intervening Millen Schists and may help to identify the timing of fault movement at the terrane boundaries, and hence their amalgamation.
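As background to the K-Ar and Rb-Sr methods named above, the underlying age relations can be written down compactly. The formulas below are the generic textbook equations, not values or calibrations specific to this dataset, and they assume the isotopic clock was reset at the metamorphic or faulting event being dated.

```latex
% Generic radiometric age: parent P decays to daughter D with decay constant \lambda.
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right)

% Rb-Sr work is usually expressed as an isochron: cogenetic samples define a line
% whose slope (e^{\lambda t} - 1) gives the age t and whose intercept is the
% initial strontium ratio.
\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\text{now}}
  = \left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{0}
  + \left(\frac{^{87}\mathrm{Rb}}{^{86}\mathrm{Sr}}\right)\left(e^{\lambda t} - 1\right)
```

Dating minerals that crystallized or recrystallized during metamorphism is what allows samples like these to constrain when fault movement at the terrane boundaries occurred.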
"We Cannot Walk Alone": Images and History of the African-American Community, Lafayette County, Mississippi. An "Open Doors Exhibition," April through August.

Businesses & Occupations, continued (page three)

"Many women served as midwives for both the white and African-American communities. Women such as Rose Taylor, Hittie Mae Toles, Gertie Mae Carter, Judy Taylor, Roxie Bramblett, Mattie McEwen, Lou Thompson, and others helped new children into the world. Each woman had her own tools, including a sharp pair of scissors to cut the umbilical cord, Lysol for disinfectant, and a sterilized belly band. Often such women also knew of herbs and home remedies that they passed on to their patients. Lou Thompson handed down her skills to her grand-daughter Rosie Lou Mitchell, who received additional training and now works as a midwife at a Jackson hospital."
Officially called the High-Repetition-Rate Advanced Petawatt Laser System (HAPLS), this ultra-powerful laser will emit 100,000 times more power than all of Earth's power stations combined. It has received the nickname "Death Star laser" for its similarity to Darth Vader's laser-wielding base in Star Wars. Continue reading for a video and more information.

The Daily Mail reports: "The system combines technologies from across Europe and around the world. It relies on a scheme referred to as 'double-chirped pulse amplification,' enabling high signal-to-noise in the output pulses which will seed HAPLS. 'HAPLS's high repetition rate will make possible new scientific discoveries. While scientists have long performed experiments with powerful single-shot lasers, they have never had an opportunity to repeat experiments at 10 times per second,' said Livermore physicist and HAPLS project manager Constantin Haefner."
IBM has launched a program that it claims allows mainframe users to monitor their systems' energy consumption in real time. IBM said it would start publishing typical energy consumption data for its System z9 mainframe, derived from field measurements of about 1,000 live production machines. The measurements determine the average watts consumed, which can be used to calculate watts per unit of work, similar to kilometres-per-litre estimates and appliance kilowatt-hours-per-year ratings.

The metering system works by monitoring a mainframe's energy and cooling statistics, as collected by internal sensors, and presenting them in real time on the System Activity Display, said IBM. Users can then correlate the energy consumed with work actually performed and, when the machine reports its maintenance health on a weekly basis, its power statistics can be used. These statistics can be observed in real time or summarized for project or trend analysis, said IBM. The company reckoned that energy consumption statistics are useful for demonstrating cost savings toward electric rebates and programs to reduce data center energy consumption.

Big Blue said that it has a power estimator tool available to enable future planning. It calculates how changes in system configurations and workloads can affect the entire energy envelope, including the power needed both to run and to cool the machines. For example, a user adding a single mainframe processor for Linux applications could project the amount of additional energy required before and when the feature is turned on. Normally less than approximately 20 watts are added when an Integrated Facility for Linux (IFL) feature is turned on, reckoned the company.

Big Blue said that a mainframe processor with z/VM virtualization can typically perform the work of multiple x86 processors because a mainframe is designed to run many mixed workloads at high utilization rates. It claimed that a single processing chip executing hundreds of workloads efficiently is key to consuming less energy than multiple x86 servers, and that this translates into a simplified infrastructure and cost savings.

IBM said that it collected data for August and September 2007 which showed that typical energy use is normally about 60 per cent of the "label" or maximum rating for the model of mainframe measured. The company said that this allowed it to claim to be the first organization to embrace recommendations from a recent EPA report that encourages server vendors to publish typical energy consumption figures for servers.

IBM said that the metering system was being launched in tandem with a new program to publish consolidated real-world consumption figures by model for the System z9. Typical-use figures will assist data center planning, as they give data center designers an idea of how much energy a particular mainframe consumes.

IBM said that it has summarized the field population data for each month since 2 August 2007, when the US EPA published its report to the US Congress on Data Center and Server Energy Efficiency. The EPA encouraged server vendors to publish typical energy usage numbers to enable purchasers of servers to make informed decisions based on energy efficiency.

"The mainframe's high utilization rates and extreme virtualization capability may help make it a more energy-efficient choice for large enterprises," said David Anderson PE, IBM green consultant.
"A single mainframe running Linux may be able to perform the same amount of work as approximately 250 x86 processors while using as little as two to ten percent of the amount of energy. Customers can now measure the energy advantages of IBM System z."
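The consolidation claim in that closing quote is easy to turn into back-of-the-envelope arithmetic. The sketch below is illustrative only: the per-server wattage is an assumed round number, not an IBM figure, while the two-to-ten-percent range is taken straight from the quote.

```python
# Back-of-the-envelope check of the consolidation claim, with assumed inputs.
HOURS_PER_YEAR = 24 * 365

x86_count = 250          # servers replaced, per the quote
x86_watts_each = 400.0   # assumed average draw per x86 server (illustrative)

x86_total_watts = x86_count * x86_watts_each
x86_kwh_per_year = x86_total_watts * HOURS_PER_YEAR / 1000.0

# The quote claims the mainframe needs 2-10% of that energy for the same work.
for fraction in (0.02, 0.10):
    mainframe_kwh = x86_kwh_per_year * fraction
    saved = x86_kwh_per_year - mainframe_kwh
    print(f"at {fraction:.0%}: mainframe ~{mainframe_kwh:,.0f} kWh/yr, "
          f"saving ~{saved:,.0f} kWh/yr")
```

Any real comparison would of course depend on actual utilization, which is the article's central point: the mainframe's advantage comes from running many mixed workloads at high utilization rates.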
Avoid phishing scams

The number and sophistication of phishing scams sent out to consumers is continuing to increase dramatically. While online banking and e-commerce are very safe, as a general rule you should be careful about giving out your personal financial information over the Internet. The Anti-Phishing Working Group has compiled the list of recommendations below that you can use to avoid becoming a victim of these scams.

- Be suspicious of any email with urgent requests for personal financial information
  - unless the email is digitally signed, you can't be sure it wasn't forged or "spoofed"
  - phishers typically include upsetting or exciting (but false) statements in their emails to get people to react immediately
  - they typically ask for information such as usernames, passwords, credit card numbers, social security numbers, etc.
  - phishing emails are typically NOT personalized, while valid messages from your bank or e-commerce company generally are
- Don't use the links in an email to get to any web page if you suspect the message might not be authentic
  - instead, call the company on the telephone, or log onto the website directly by typing the Web address into your browser
- Avoid filling out forms in email messages that ask for personal financial information
  - you should only communicate information such as credit card numbers or account information via a secure website or the telephone
- Always ensure that you're using a secure website when submitting credit card or other sensitive information via your Web browser
  - to make sure you're on a secure Web server, check the beginning of the Web address in your browser's address bar - it should be "https://" rather than just "http://"
- Consider installing a Web browser toolbar to help protect you from known phishing fraud websites
  - EarthLink ScamBlocker is part of a free browser toolbar that alerts you before you visit a page that's on EarthLink's list of known fraudulent phisher websites
  - it's free to all Internet users - download at http://www.earthlink.net/earthlinktoolbar
- Regularly log into your online accounts
  - don't leave it for as long as a month before you check each account
- Regularly check your bank, credit and debit card statements to ensure that all transactions are legitimate
  - if anything is suspicious, contact your bank and all card issuers
- Ensure that your browser is up to date and security patches applied
  - in particular, people who use the Microsoft Internet Explorer browser should immediately go to the Microsoft Security home page to download a special patch relating to certain phishing schemes
  - also, people who use Microsoft Windows should regularly visit Microsoft Windows Update to make sure their system is protected against the latest reported threats
- Always report "phishing" or "spoofed" emails to the following groups:
  - forward the email to firstname.lastname@example.org
  - forward the email to the Federal Trade Commission at email@example.com
  - forward the email to the "abuse" email address at the company that is being spoofed (e.g. firstname.lastname@example.org)
  - when forwarding spoofed messages, always include the entire original email with its original header information intact
  - notify the Internet Fraud Complaint Center of the FBI by filing a complaint on their website: http://www.ic3.gov/
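The "check the address before you trust it" advice above can be automated in a few lines. The following sketch is a simplified illustration, not a complete anti-phishing tool; the sample URLs and the trusted-domain list are made up for this example.

```python
# Simplified checks in the spirit of the advice above: insist on HTTPS and be
# suspicious of look-alike hosts. A real filter would need far more than this.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mybank.example.com"}  # hypothetical list you maintain yourself

def looks_suspicious(url: str) -> list[str]:
    reasons = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        reasons.append("not served over https")
    host = parsed.hostname or ""
    if host not in TRUSTED_DOMAINS:
        reasons.append(f"host {host!r} is not on your trusted list")
    if "@" in parsed.netloc:
        reasons.append("contains '@', an old trick to disguise the real host")
    return reasons

for url in ("https://mybank.example.com/login",
            "http://mybank.example.com.evil.test/login"):
    problems = looks_suspicious(url)
    print(url, "->", "; ".join(problems) if problems else "passes these basic checks")
```

Note how the second sample URL embeds the real bank's name as a subdomain of an unrelated host, which is exactly the kind of spoofed link the recommendations warn about.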
Oh, all right.. No delta because x is extending without bound towards negative infinity, but there is an epsilon because the function approaches a finite value of L, correct? I understand that. But I still don't know how I'd go about constructing a proof.

You need to show that for any ε > 0 we can choose an N such that if x < N, and x is in the domain of the function, then |f(x) - L| < ε. For example, take f(x) = 1/x with L = 0. We need |1/x - 0| < ε; for x < 0 this says -1/x < ε. Since both sides are positive we can take the reciprocal of both sides, but then we have to flip the inequality, obtaining an equivalent statement: -x > 1/ε. That means x < -1/ε. We also note that any such x is in the domain of f. So this tells us, given any ε > 0, we need to choose N = -1/ε.

wow, major headache.. I understand all your examples. Thing is, the only thing that's killing me on mine is that there is no specific function... I'm just given f(x). In your example, you wrote |1/x - 0| < ε. That isn't too bad to do because f(x) is defined as a specific function. I don't know how I'd write it in the form x < something when I don't have any specific function, so I only have an f(x), not an x.
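For reference, the definition being discussed in this thread can be stated compactly. The LaTeX below is the standard textbook formulation of a finite limit at negative infinity, added here as context rather than quoted from the original posts.

```latex
% Definition: \lim_{x \to -\infty} f(x) = L means
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{R} :\quad
x < N \implies |f(x) - L| < \varepsilon .

% Worked instance with f(x) = 1/x and L = 0: for x < 0,
\left|\frac{1}{x} - 0\right| < \varepsilon
\iff -\frac{1}{x} < \varepsilon
\iff -x > \frac{1}{\varepsilon}
\iff x < -\frac{1}{\varepsilon},
\qquad \text{so } N = -\frac{1}{\varepsilon} \text{ works.}
```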
Last week, Nature published another strong statement addressing the political/economic attack on climate science, in an editorial titled "Into Ignorance". It specifically criticized the right-wing element of the U.S. Congress that is attempting to initiate legislation that would strip the US EPA of its powers to regulate greenhouse gases as pollutants. In so doing, it cited as an example the charade of a hearing conducted recently, including the Republicans' disrespectful and ignorant attitude toward the science and scientists. Among many low points, this may have reached its nadir when a House member from Nebraska asked, smirkingly and out of the blue, whether nitrogen should be banned, presumably to make the point that atmospheric gases are all either harmless or outright beneficial and hence should not be regulated. Aside from the obvious difference that humans are not altering the nitrogen concentration of the atmosphere, as they are with (several) greenhouse gases, such a question boggles the mind in terms of the mindset that must exist to ask it in a public congressional hearing in the first place. But rarely are the ignorant and ideological bashful about showing it, regardless of who might be listening. In fact an increasing number seem to take it as a badge of honor.

There have been even more strongly worded editorials in the scientific literature recently as well. Trevors and Saier (2011)*, in a journal with a strong tradition of stating exactly where it stands with respect to public policy decisions and their effect on the environment, pull no punches in a recent editorial, describing the numerous societal problems caused when those with the limited perspective and biases born of a narrow economic outlook on the world get control. These include the losses of critical thinking skills, social/community ethics, and the subsequent wise decision making and planning skills that lead a society to long-term health and stability.

Meanwhile, scientific bodies charged with understanding how the world actually works, instead of how they would imagine and proclaim it to, continue to issue official statements endorsing the consensus view that humans are strongly warming the planet in recent decades, primarily by greenhouse gas emissions to the atmosphere. Three years ago, we wondered whether geologists in general have a different view on climate change to the climate research community. A recent statement from the U.K. Geological Society, however, suggests that our impressions perhaps were not well-founded.

Notwithstanding these choices of ignorance, many other organizations continue apace with many worthwhile and diverse goals of how to deal with the problem. Here are a few links that we have run across in the last week or two that may be of interest to those interested in sustainability and adaptation. Please note the imminent deadlines on some of these.

The Center for Sustainable Development's online courses related to community-level adaptation to climate change

The CDKN International Research Call on Climate Compatible Development

The Climate Frontlines call for abstracts for a July conference in Mexico City on the theme "Indigenous Peoples, Marginalized Populations and Climate Change" [Apologies: the official deadline for abstracts has apparently passed; view this as a conference announcement]

George Mason University's call for votes on the Climate Change Communicator of the Year

*Trevors, J.T. & Saier, M.H., Jr. 2011. A vaccine against ignorance? Water, Air and Soil Pollution, DOI 10.1007/s11270-011-0773-1.
Dam Removal Projects Funded by the Coastal Conservancy

Dams and reservoirs provide several benefits, including water storage for future use, flood protection, and hydroelectric power generation. As dams age and are no longer able to serve their intended purpose, these structures are considered for removal. The gradual accumulation of sediment behind a dam can lead to structural safety and flooding issues, which can spur consideration of removing the dam. Additionally, dams can prevent anadromous fish from accessing upstream historic spawning habitat, and modification or removal of a dam helps fish get past these barriers.

The California Coastal Conservancy is providing financial support for several dam removal projects with the goal of restoring native fish and helping re-establish natural processes in important coastal watersheds. The Conservancy has helped these technically complex and contentious projects move forward by leading dam removal planning processes in collaboration with various stakeholders, and by funding technical studies to inform feasibility assessments and designs. The following dam removal projects have benefited from the Conservancy's financial assistance.

Dam Removal Projects Led by the Coastal Conservancy

San Clemente Dam

The San Clemente Dam Removal Project will remove an antiquated, unsafe dam on the Carmel River in Monterey County and provide unimpaired access to 25 miles of spawning and rearing habitat for threatened steelhead trout. California American Water, the dam's owner, resolved to undertake the project after the Coastal Conservancy agreed to assist with funding the cost of removing the dam, which exceeds the cost of strengthening the dam in place. The 106-foot-high dam was built in 1921 to create a water storage reservoir, which is now largely filled with sediment and no longer functions as a water storage facility. In the early 1990s, the California Department of Water Resources determined the dam could potentially fail in the event of either a severe earthquake or flood.

The project has been identified as a high priority by several state and federal agencies as well as conservation organizations, many of which have contributed funding to the project. Currently, the Coastal Conservancy and CalAm are jointly funding the design and permitting phase of the project, estimated to cost approximately $6 million. The total project cost is estimated at $83 million, with up to $35 million in estimated contributions from state and federal agencies and private funders. Construction is expected to start in 2012. The dam removal project presents a unique opportunity for public and private interests to work together to realize significant environmental benefits. More information can be found on the San Clemente Dam Removal & Carmel River Reroute Project website, and the Carmel River Reroute and San Clemente Dam Removal Project page.

Matilija Dam

[Photo credit: BEACON]

The Matilija Dam Ecosystem Restoration Project will remove an obsolete dam on Matilija Creek, a tributary to the Ventura River, to restore the transport of sand and sediment to downstream beaches, and to allow passage for endangered steelhead trout to ancestral spawning habitat in the upper reaches of the Ventura River watershed. Constructed in 1947, Matilija Dam was intended to provide a local water supply while offering flood protection for downstream communities. Over time, the build-up of sediment behind the dam has undermined both of those original functions.
The Conservancy is engaged in this multi-stakeholder effort, led by the US Army Corps of Engineers and the Ventura River Watershed Protection District. The Conservancy was an active participant in development of the consensus-driven Feasibility Study, completed in 2004. Currently, the Conservancy is supporting pre-construction elements of the dam removal, including engineering designs and acquisition of downstream properties expected to be impacted during construction activities. The Coastal Conservancy has contributed more than $8.77 million in state bond funds to the Matilija project since its inception, and combined state agency funding of $15 million represents more than 63 percent of the total funding for the project since the beginning of the Feasibility Study. More information can be found on the Matilija Dam Ecosystem Restoration Project website.

Dam Removal Projects with Assistance from the Coastal Conservancy

Klamath River Dams

[Photo credit: U.S. Fish & Wildlife Service, Klamath Basin Ecoregion Collection]

Four dams in the Klamath River basin are planned for removal in an effort to advance restoration of salmonid fisheries of the Klamath basin. Once home to the third-largest salmon run on the West Coast, the Klamath basin is now the subject of multiple management disputes regarding reliable water and power supplies and declining salmon populations. While the Klamath River Hydroelectric Project was considered for relicensing by the Federal Energy Regulatory Commission in 2006, a separate Klamath Settlement Group formed to explore future management alternatives, including removal of Iron Gate, J.C. Boyle, Copco 1 and Copco 2 dams.

The Conservancy provided approximately $1 million to fund studies to evaluate the feasibility and potential cost of removing these four dams. These studies characterized reservoir sediment, modeled sediment transport following dam removal, and assessed impacts of removal on biological resources and water quality. This information contributed to the development of the Klamath Hydroelectric Settlement Agreement (KHSA) and the Klamath Basin Restoration Agreement (KBRA), signed on February 18, 2010. The KHSA calls for a determination by the Secretary of the Interior on the Klamath River dams by November 2011, following preparation of additional scientific studies through an open and transparent process. The Conservancy is currently providing matching funds for a water quality study under preparation as part of the KHSA. More information can be found on KlamathRestoration.gov.

Rindge Dam

The Coastal Conservancy is currently assisting California State Parks, the Army Corps of Engineers, the Santa Monica Bay Restoration Commission and other partners in evaluating the restoration of aquatic and riparian habitat along Malibu Creek in the Santa Monica Mountains of Los Angeles County. The primary purpose of the Malibu Creek Ecosystem Restoration Feasibility Study is to address the impacts and possible removal of Rindge Dam, an obsolete, 100-foot-high water storage facility that was built in the 1920s but quickly filled in with sediment from the upstream watershed. Located about four miles above Malibu Lagoon, the dam blocks habitat connectivity along the creek for endangered steelhead trout and other species, and prevents sediment transport to downstream beaches. The Feasibility Study is also analyzing fish passage opportunities at other migration barriers along Malibu Creek and its tributaries.
The Conservancy is a member of the project's technical advisory committee and over the past 10 years has provided State Parks with over $1 million in grant funding to support the study. The Corps has completed an administrative draft of the feasibility study, with a goal of completing a public draft in late 2012 or early 2013. Project documents and other information about these dam removals can be found in the Clearinghouse for Dam Removal Information, a project of the Water Resources Collections and Archives.
Holistic Health Connection - Busting the Animal Protein Myth
By Katie Skinner

It is a common myth that dietary protein is only obtained through animal sources like chicken, beef, salmon and cheese. Surprisingly to most Americans, vegetables, nuts, seeds, legumes and grains are considerable sources of dietary protein, especially dark green leafy vegetables. It is obvious in the animal kingdom that plants pack a powerful protein punch when some of the biggest animals on Earth (elephants, gorillas, rhinoceroses, hippopotamuses and giraffes) eat predominantly plant-based diets.

Protein plays an important role in numerous functions in the body and is an important part of a balanced diet. Luckily for non-meat eaters, people trying to reduce their consumption of animal products, or those trying to reduce their carbon footprint, protein is present from the top of the food chain to the bottom! In Prescription for Nutritional Healing by Phyllis Balch, CNC, protein is described as one of the four basic nutrients. When protein is consumed, it is broken down in the body into amino acids; the essential amino acids must be obtained through the diet because the body cannot manufacture them on its own.

The benefits of consuming plant-based protein are illustrated by Dr. Joel Fuhrman in his book Eat to Live. Dr. Fuhrman explains that when compared calorie-to-calorie, plant sources provide the same amount of protein as animal sources, and in many cases even more. For example, Dr. Fuhrman shows that 100 calories of broccoli contains 11 grams of protein while 100 calories of steak contains only 6 grams of protein. Not only does broccoli contain protein, but it also provides significant amounts of fiber, phytochemicals and antioxidants. Steak provides none of those nutrients and contains cholesterol and saturated fat.

Not only does our body thank us for supplying it with more plant-based sources of protein, the environment does too! Producing billions of pounds of meat and dairy requires large amounts of pesticides, chemical fertilizer, fuel, feed and water. Meat production also generates greenhouse gases, toxic manure and other pollutants that contaminate the air and water. The Environmental Working Group partnered with ClimateMetrics, an environmental analysis firm, to create the Meat Eater's Guide to Climate Change + Health. This guide helps Americans better understand the health, environmental and climate impacts of their food choices. Among many other findings, the Environmental Working Group found that if everyone in the U.S. ate no conventionally produced meat or cheese one day a week for one year, it would be like taking 7.6 million cars off the road!

One way to add more plant-based protein to the diet is through protein powder supplements. Plant-based protein powders are most commonly derived from pea, hemp, rice and soy. At The Mustard Seed Natural and Organic Food Store we sell a large variety of plant-based protein powders, including the top brands Sun Warrior, Plant Fusion and Vega. All three of these brands feature a complete amino acid profile and branched-chain amino acids, and are gluten free, dairy free and soy free. Protein powders are easily mixed with water or non-dairy milk, or put into a smoothie. They can be consumed throughout the day, with meals or post-workout. Click here for a coupon redeemable on the purchase of one of these plant-based protein powders at The Mustard Seed Natural and Organic Food Store, located at 969 Arsenal Street in Watertown.
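Fuhrman's comparison is a calorie-matched one: protein is divided by calories rather than by weight. The short sketch below is not from the original article; the per-100-gram figures in it are rough illustrative assumptions, so its outputs will not exactly match the book's numbers.

    # Illustrative only: calorie-matched protein comparison.
    # The composition figures below are rough assumptions, not sourced values.
    foods = {
        # name: (kcal per 100 g, grams of protein per 100 g)
        "broccoli": (34, 2.8),
        "steak": (271, 17.0),
    }

    for name, (kcal, protein) in foods.items():
        grams_per_100_kcal = protein / kcal * 100
        print(f"{name}: {grams_per_100_kcal:.1f} g protein per 100 kcal")

    # Broccoli comes out around 8 g and steak around 6 g per 100 kcal here;
    # published figures (such as Fuhrman's 11 g vs. 6 g) vary with the data used.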
Sources:
- Prescription for Nutritional Healing by Phyllis A. Balch, CNC
- Eat to Live by Joel Fuhrman, MD

*Disclaimer: Nothing in this article is intended as, or should be construed as, medical advice. This article is for educational purposes only. These statements have not been evaluated by the Food and Drug Administration. The products mentioned in this article are not intended to diagnose, treat, cure or prevent disease.
A Climate Vision for Russia: From Rhetoric to Action

Despite the many benefits available to Russia from adopting a more practical approach to climate mitigation, the country remains on the outskirts of the international climate policy debate—an important element of foreign policy in this decade. Russian leaders tend to point to the post-Soviet decline of Russia's greenhouse gas emissions as a major contribution to global climate mitigation efforts. Yet, because the country's carbon intensity remains very high, that stance undermines Russia's role as a serious global climate actor.

Recognizing its limited progress with climate mitigation policies and its responsibility to contribute more would create a better foundation for Russia's strategic role. A number of "no-regrets" policy steps are available:

- Domestically adopting the mitigation pledge announced at the Copenhagen climate conference
- Implementing a domestic offsetting or emissions trading scheme that could act as a bridge to international carbon trading activities
- Further developing the "Russian Proposal," which seeks to encourage a wider group of countries to make climate commitments

Russia's stance on the Kyoto Protocol and allocating the potential burdens in climate mitigation is similar to many other industrialized countries' approaches. This provides Moscow a good platform to create a cooperative role for itself in global climate diplomacy. Moreover, Russia's current mitigation policies—regardless of the delays in their implementation—are slowly changing the country's previous image of being just a potential seller of carbon credits to that of a more serious player in mitigation. However, making the most of its opportunity to develop a strategic role requires Moscow to take climate policy much more seriously. The Kremlin's climate change path boils down to political will—and whether climate change is considered important enough—as well as its ability to engage in serious strategic thinking and policy preparation.

Why Russia's Climate Policy Matters

Global temperatures have to be kept from rising beyond 2 degrees Celsius above pre-industrial levels, a potentially dangerous level of warming according to international consensus among climate scientists. Achieving that target requires taking action to cut greenhouse gas emissions worldwide. Understandably, China and the United States, the world's two largest emitters, have attracted most of the attention in international climate negotiations. Yet Russia, the world's fourth-largest greenhouse gas emitter, following India, has a vastly important role to play. In 2010, it emitted 2,202 million tons of carbon dioxide equivalent, a figure that does not take into account the amount of carbon dioxide taken out of the atmosphere by Russia's carbon sinks. Its emissions from fuel combustion alone were greater than all of the emissions by Central and South America.

Despite the heavy decline during the country's economic restructuring phase, Russia's recent carbon emissions have been on an upward trend. In 1998–2010, Russia's total greenhouse gas emissions went up by 10.7 percent.1 The International Energy Agency predicts 11.2 percent growth in Russia's energy-related carbon dioxide emissions between 2009 and 2020. In comparison, carbon dioxide emissions in China and India are projected to grow by 41.4 percent and 47.7 percent, respectively.
By contrast, emissions in the United States and the European Union are expected to decline by 0.2 percent and 4.5 percent, respectively.2

Russia also possesses the largest carbon sequestration capacity in the world. Its boreal forests, the largest forested region on earth, store large amounts of carbon. Additionally, about half of the Northern Hemisphere's terrestrial carbon is locked in Russia, predominantly in its permafrost regions.3 Deforestation and the melting of permafrost, as well as a growing amount of black carbon in snow-covered territories, could have considerable implications for global efforts to effectively mitigate climate change.4

Since 1990, the world's total emissions have gone up by 43 percent and OECD member countries' emissions by 10 percent. By comparison, in 2010, Russia's carbon emissions stood at 34.2 percent below their 1990 level5—a notable track record.6 Russian officials have presented this as strong evidence of Russia's leading role as a contributor to climate change mitigation efforts.7 The international climate community, however, has generally remained unimpressed by Russia's performance.

First, Russia's reductions were not the outcome of focused policies to cut emissions. The decrease was principally the result of the economic decline that followed Russia's transition to a market economy after the collapse of the Soviet system. By 1998, when the Russian economy hit bottom, energy use was about a third lower than it was in 1990, resulting in a major decline in emissions.8 (Figure 1 shows Russia's greenhouse gas emissions and GDP since 1990.) As Russia has recovered, economic growth—which was rapid in the 2000s—has driven the country's emissions up, though at a significantly slower pace than in developing countries. A number of factors have allowed the Russian economy to grow quickly while greenhouse gas emissions have increased at a relatively slow rate: economic restructuring that has favored services instead of heavy industry, improved capacity utilization at facilities that were largely left idle during the 1990s, and, most of all, high oil prices during this period. The European Bank for Reconstruction and Development has described this as relative decoupling of emissions from GDP growth.9

Second, the "historical responsibility" argument that calls for countries to be held accountable for their cumulative emissions is not in Russia's favor. According to that argument, Russia would have a significant global responsibility. The USSR was the second-largest carbon emitter not only during its last days but almost throughout its entire history. When the USSR collapsed, the Russian Federation was already locked in with an economy that had a level of carbon intensity that could no longer be justified in an increasingly carbon-constrained world.10 Today, in terms of its cumulative carbon dioxide emissions, Russia stands behind the United States and China.11 However, Russia's official position does not recognize this "historical responsibility" argument, because Russia claims that the damaging nature of greenhouse gas emissions was unknown for much of the twentieth century. Russia's counterargument—shared by most industrialized and some developing countries—is that emissions cannot be cut sufficiently without the participation of the major emerging economies due to their increasing share of global emissions.
Third, even though Russia's emissions are still below their 1990 level and are growing relatively slowly, the country's carbon intensity remains high—it was 81 percent above the global average in 2010 (see figure 2).12 Per capita carbon emissions, at about twelve tons of carbon dioxide per person, are nearly three times the world average (see figure 3).13 Likewise, the Russian economy remains the most energy intensive among the G20 countries, with an intensity level about three times higher than the European Union average.14
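Carbon intensity here follows the footnoted definition: carbon dioxide emitted per unit of GDP. As a minimal sketch (not part of the original essay), the cited figures can be plugged into that definition; the global-average value below is back-calculated from the essay's own numbers rather than independently sourced.

    # Carbon intensity = CO2 emitted per unit of GDP (see footnotes 10 and 12).
    # Russia 2010: ~0.43 kg CO2 per dollar of GDP (PPP), per footnote 12.
    # The global average is an assumption inferred from the "81 percent above
    # the global average" claim, not a sourced figure.
    russia_intensity = 0.43   # kg CO2 per $ of GDP (PPP), 2010
    global_intensity = 0.24   # assumed global average, kg CO2 per $

    excess_pct = (russia_intensity / global_intensity - 1) * 100
    print(f"Russia's 2010 carbon intensity: {excess_pct:.0f}% above the global average")
    # Prints roughly 79%, consistent with the ~81 percent figure cited above.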
The Impact of Climate Change

Despite Russia's contribution to climate change, Russian policymakers feel little need to take steps domestically to mitigate it, and public pressure for tackling the issue is nearly absent. Skepticism persists about the anthropogenic causes of climate change. The potential benefits of climate change are also widely present in public discourse, which has further prevented Russia from taking a more proactive stance.

Climate change could in fact benefit the Russian Federation in various ways. Higher temperatures in the winter could reduce heating costs. Increased precipitation could potentially expand agricultural output in some parts of the country. The melting of sea ice in the Arctic could benefit oil and gas exploration and create new opportunities for navigation. Shipping activity has already increased in recent years in the Russian Arctic.

Climate change could have some dire consequences as well. The speed at which temperatures are changing matters a great deal. Russia's average temperature is rising particularly fast—almost twice as fast as the global average and nearly three times as fast in parts of Siberia, according to the Federal Service for Hydrometeorology and Environmental Monitoring.15 This presents Russia with greater weather unpredictability and shorter time horizons in which to adapt. While they may benefit some areas of the country, rising summer temperatures are also expected to increase droughts, particularly in the areas that currently constitute the core of Russia's agriculture.16 Rising floods and increased river runoff could cause additional damage to agriculture. Forest fires, as witnessed near Moscow during the summer heat waves of 2010, could be a growing cause of deforestation and health hazards. Melting permafrost is weakening the bearing capacity of the ground. This has consequences for settlements and infrastructure in Russia's north, and it could have a huge impact on the Russian economy, as it may complicate energy development projects in the region. There are already reports about an increase in accidents related to pipeline networks in permafrost regions.17

The Kyoto Protocol

The Kyoto Protocol to the United Nations Framework Convention on Climate Change, adopted in 1997, was the first worldwide attempt to set quantitative, legally binding emission commitments for developed countries and several economies in transition, including Russia. By the end of the protocol's first commitment period, which began in 2008 and will expire at the end of 2012, Russia's initial responsibility was to maintain its emissions at the 1990 level. Agreeing to comply with the protocol's target posed no challenge for Russia, since its emissions were well below the 1990 level at the time. Even so, it was not until November 2004 that Russian leaders decided to ratify it. Many explanations for Russia's delayed ratification have been provided. They can be summed up in four arguments.

First is the fact that climate change has never been high on Russia's policy agenda, for a number of societal and scientific reasons already discussed. Second, some worried about the limits the Kyoto Protocol put on economic growth. Despite the protocol's loose target that allowed for some emissions growth, the voices of the doubtful were quite loud in the Russian debate in the early 2000s, not least due to Putin's goal to double the country's GDP within a decade. Third, Russia was concerned about the equity of the agreement. The Kyoto Protocol required no emission reduction commitments from developing countries, while the then largest emitter, the United States, opted not to join. Many in Russia considered these issues significant shortcomings in the global effort to effectively avert climate change. And fourth, Russia hoped to secure diplomatic gains by delaying the ratification of the Kyoto Protocol. In order for the protocol to enter into force, at least 55 parties had to ratify the treaty, accounting for at least 55 percent of global emissions. When the United States rejected the protocol in 2001, Moscow was left in a decisive position to reach the threshold. As part of its negotiations with the European Union on the Kyoto Protocol, Russia's consent was linked to progress on its bid to join the World Trade Organization.

Eventually, Moscow did sign on, and it expected to be able to benefit financially from the agreement. As the Kyoto Protocol took effect in 2005, each signatory had an emission target based on 1990 levels expressed in assigned amount units (AAUs), with each unit equal to one ton of carbon dioxide. Due to the collapse of emissions in the 1990s, Russia received the largest surplus of AAUs, with the right to trade them in international carbon markets. This potential benefit preoccupied the climate debate in Russia. The United States had been expected to account for the majority of the demand for the Russian surplus. Its withdrawal from the protocol removed that demand, and Russia thus had to turn to the more complicated Joint Implementation mechanism to benefit from the international carbon market.18

In the absence of pressure from a stringent international climate commitment, the implementation of Russia's climate mitigation policy efforts lags behind most other countries', though many key mitigation policies, mostly driven by economic interests, have been successfully developed and adopted. Establishing a functional legislative and administrative framework to approve Joint Implementation projects took many years. At the end of 2010, the Russian government set a target to reduce the energy intensity of the Russian economy by 40 percent by 2020. The major legislative package to improve energy efficiency that followed that announcement is perhaps the most substantial effort to date to promote Russia's low-carbon future. However, it remains largely unimplemented. Further, a Climate Doctrine was adopted in 2009 and established the official basis for policies and measures to mitigate climate change and adapt to it. Yet the action plan that followed provided no new concrete measures to do so. It has remained a political declaration rather than a practical policy document.
Finally, a legal limit on gas flaring—set at 5 percent of associated petroleum gas produced, effective from 2012—has a large potential to cut emissions; implementation is under way but estimated to be delayed by two to three years.19

Negotiating With Russia

Beyond the episode of the ratification of the Kyoto Protocol, Russia's role in international climate diplomacy is best described as peripheral. Moscow has continued to expect credit for the substantial decline in its emissions compared to the 1990 Kyoto baseline. International negotiators have been well aware that this decline was not the result of focused emission reduction policies and measures. Moscow's stance has been seen as unfair by many countries, particularly given Russia's continuing waste of energy resources—and hence unnecessary greenhouse gas emissions. Further complicating Russia's role in climate negotiations has been its strong insistence on the full accounting of its forest carbon sinks—a factor key to the national pride of the country—without politically set caps.20 As is typical of the negotiation positions of forested countries, Russia's interpretation of the accounting rules would boost its own carbon sink.

In addition to the delays in the implementation of some of Russia's mitigation policies, their timing has given rise to the impression that Russia is principally after diplomatic gains instead of a constructive solution to climate problems. For instance, the goal of a 40 percent reduction in the energy intensity of the economy was announced just one month prior to the climate-focused G8 meeting in Japan in 2008, and the Climate Doctrine was adopted shortly before the Copenhagen climate conference in 2009. In the case of the energy-intensity target, the government has developed a legislative framework but implementation has remained slow. The action plan to implement the Climate Doctrine mainly consists of existing rather than new policies.

Moreover, it has become obvious that the Kyoto Protocol was never part of the Russian climate vision, though Russia is not alone in this. Due to the limited international participation in the protocol and the low impact it will have on global atmospheric conditions, Russian leaders considered Kyoto deficient during the initial debate leading to its ratification. Delays in establishing a framework for Joint Implementation projects have forestalled Russia's ability to reap economic benefits, further weakening its interest in extending its participation in the Kyoto Protocol. Thus, at the end of 2011, Russia gave notice it would not enter into the second commitment period of the Kyoto Protocol. Moscow's preference, shared by many other countries, is for a new global agreement that obliges all major emitters to participate. That preference could be interpreted as yet another illustration of rhetoric that is not backed up with action: it provides Russia a convenient way to postpone future climate commitments—maybe indefinitely. However, presenting a clear vision of how Russia aims to contribute to global climate action would ease such interpretations.

Why Should Russia Reconsider?

There are many reasons, both domestic and international, why Russia should reevaluate its climate policy.
In order to live up to its aspiration to be a leading and contributing player on key international issues, as envisaged by the leadership in Moscow, Russia cannot afford to be seen as ignoring the common task of tackling climate change—a key issue on the international policy agenda. The current starting point of claiming that Russia has already overwhelmingly contributed to the objective of meeting global climate targets is simply no longer credible in the eyes of the G8 and G20. This is further underlined by the agreement made at the 2011 United Nations Climate Change Conference in Durban to negotiate a new global climate pact by 2015. That pact will have to contain more ambitious emission reduction commitments for all countries if it is to work.

Russia also has compelling reasons to take the threat of climate change seriously. Temperatures in Russia are rising relatively quickly, with some potentially negative consequences. This is hardly surprising, as temperatures in the Arctic, where a large part of Russian territory lies, have been rising faster than in the rest of the world. This underlines the risk Russia runs if it continues to treat climate change as somebody else's problem—or even worse, a Western conspiracy to force Russia to buy foreign green technologies. Other industrialized countries, for instance those in the European Union, are acting on climate because they recognize the economic and human risks involved.

Furthermore, low-carbon policies could provide incentives for policy implementation. As part of a wider package of policies, the price of carbon could support existing policies that are facing difficulties with implementation, for instance to improve energy efficiency or reduce associated gas flaring. Charging for methane emissions as part of a wider policy package could push some associated petroleum gas utilization projects over the threshold of economic viability. Further, the promotion of a domestic green-technology market and the production of such technologies could be a potential path to diversify the economy, which is also recognized in Russia's modernization program. Growing emphasis on climate policy worldwide provides future international markets for such technologies and renewable energy.

Russia can also strengthen its climate policy without much trouble or cost. In Copenhagen, then-president Dmitri Medvedev announced Moscow's willingness to commit to limiting emissions to 25 percent below the 1990 level by 2020.21 This commitment is widely recognized as free of economic risks for Russia, since present emissions are 34 percent below the 1990 level.22 In addition, Russia has already set up a package of policies that has the potential to start turning the country toward a low(er)-carbon path. Even though the problems with policy implementation that plague the political system seriously threaten these policies, the measures are a good starting point for a climate mitigation portfolio. Based on all this, Moscow has the opportunity to gain significant benefits by shifting toward more active and genuine participation in global efforts to address climate change.

A Climate Vision for Russia

In order to truly tackle the problem of climate change, gain influence in building the international climate regime, and reap economic benefits domestically, the Russian climate vision must be more comprehensive.
Moscow should move from rhetoric to action in terms of climate commitments. Even though Russia has stated that it will stay outside Kyoto's second commitment period, Russia has the opportunity to demonstrate its role as a serious climate protection partner by legally adopting a domestic emission limitation target—as proposed at the Copenhagen conference in 2009. This would signal to the world community that Moscow has moved on from its legacy of post-Soviet emission decline and would add credibility to Russia's focus on negotiating a new global climate agreement instead of joining Kyoto's second phase.

Russia could also aim to become a genuinely substantive contributor to the negotiation process. The Russian Proposal made a first step at the Durban Climate Change Conference by officially proposing the establishment of a periodic review of country groups under the United Nations Framework Convention on Climate Change. A point of reference for the Kyoto Protocol, these groups divide countries in terms of the climate commitments required from them, based on development levels as of 1990. Their revision would provide an update of who needs to commit, and to what type of emission reduction or limitation, based on the current level of development. If also used as a point of reference by the future climate pact, this could oblige better-off developing countries to accept emission reduction targets based on the level of their economic development. The Russian Proposal was welcomed by many parties at Durban who also believe the current system to be inequitable. For instance, many countries seem to agree that allowing countries like South Korea, Singapore, and Qatar to escape mitigation commitments is unfair while, for instance, Ukraine and Belarus, with significantly lower standards of living, are making commitments.

The major shortcoming of the Russian initiative is that it is substantively hollow. To identify a solution to the challenge of future burden sharing of climate commitments, the issue requires much more attention, regardless of the opposition by the developing-country group G77 to undertaking mitigation commitments. To make a contribution and influence the design of the future climate regime, as also outlined in Russia's foreign policy doctrine, Moscow must have a more substantial suggestion to offer. For instance, the proposal could be amended to include indicators for judging which countries are developed enough to make emission limitation and reduction commitments under the next climate pact. In the absence of substance, this very reasonable initiative is easy for a powerful developing-country lobby to discredit and ignore.

An Emissions Trading Scheme?

Russia's decision to drop out of the Kyoto Protocol's second commitment period has also stirred domestic discussions about the future of carbon market mechanisms in the country. Immediate concerns have been raised about the future of Joint Implementation projects. Even though the process to set up the domestic approval system for Joint Implementation was prolonged, the mechanism is delivering results quickly.
In May 2012, the Russian government had officially approved 108 projects that cumulatively account for 311.6 million tons of emission reductions; half of these projects were approved in the spring of 2012.23 Russian stakeholders in Joint Implementation projects are eager to see their country join Kyoto's second commitment period in order to tap into the hundreds of tradable megatons of emissions allowances waiting in the Russian pipeline. However, the potential benefits related to Joint Implementation alone are unlikely to prompt the Russian leaders to change their minds, for a number of reasons. First, the opposition of the Russian leadership and many experts is linked to the fundamental question of the protocol's insignificant contribution to limiting climate change. Second, the demand for credits generated by Joint Implementation is likely to dry up after the so-called true-up period of the Kyoto first commitment phase, in 2013–2014, due to the loose emission reduction targets set by a limited number of participants in Kyoto's second phase. The European Union's focus on Clean Development Mechanism projects in the least-developed countries to satisfy its limited demand for external credits is a cause for additional concern in Russia, since this reduces demand for credits generated by Joint Implementation.24 Even if Russia were to reconsider joining Kyoto's second commitment period, the absence of other major players—such as the United States, Japan, and Canada—means that benefits would be limited to extending investment flows through Joint Implementation a bit longer.

Establishing a domestic emissions trading scheme (ETS) has recently become part of Russia's climate policy discourse, pushed mostly by the carbon market experts currently engaged in Joint Implementation projects. Some industries, represented by the union Delovaya Rossiya (which does not act for Russia's main emitters), have supported the idea of having a domestic ETS. A working group, with backing from the Ministry of Economic Development, has been formed to discuss carbon regulation issues that could also involve an ETS. Yet there is no indication that the Russian leadership will support setting up a domestic ETS. It is difficult to see the top leadership imposing carbon emission caps on industries upon which the economy heavily depends. Furthermore, setting up a full-scale ETS is likely to be problematic, not least because the Russian actors are used to selling emission quotas instead of buying them. The risk of failure with this complicated task is high, given the limited administrative capacity available in Russia and the opportunities it can provide for corruption on various levels. Here, the lessons from setting up both the Joint Implementation approval scheme within Russia and the European Union's ETS should be kept in mind: the political struggles such institutional arrangements can cause between ministries and agencies in Russia, and the opposition that industrial actors expressed to allocating limited emission rights in the European Union. Thus, a carbon tax could be a less complicated instrument with fewer such stumbling blocks but a similar impact on emissions.

Even if an ETS is not a feasible solution, some kind of carbon-pricing tools may be useful options for Russia. A domestic offsetting scheme, for instance one based on the existing Joint Implementation mechanism, may be a less risky option to maintain capacity to participate in the international carbon market in the future.
Even though it may not be obvious in the absence of domestic emission caps in Russia, some limited domestic demand can be identified. For instance, some Russians have raised objections about the requirement that foreign aviation participate in the European Union's ETS. That requirement has been labeled "green protectionism" by many in Russia. Domestic offsets could provide a more acceptable alternative, so that companies do not have to purchase emission permits from the EU. Other Russian industries could also use domestic carbon allowances to offset their emissions in order to market their products as carbon neutral.25 Likewise, the Sochi Olympic Games have been labeled a zero-emission games and may need domestic credits.

Russia will remain on the outskirts of the international climate policy debate—an important element of foreign policy in this decade—unless the Kremlin decides to change its attitude on climate change diplomacy as outlined in the Russian Federation's foreign policy doctrine. A domestically adopted emission limitation target would be a good start—perhaps with an extension of the country's participation in the international carbon market through a domestic offsetting scheme or ETS. At the same time, it must be recognized that policy implementation—not just on climate but in general—tends to run into systemic difficulties in Russia. So even the announced mitigation policies cannot be taken for granted.

The Russian Proposal at the Durban climate conference contains an important and widely recognized idea of establishing criteria for developing countries to graduate step-by-step toward emission mitigation targets based on their level of economic development. To maximize the foreign policy as well as the global mitigation benefits of this initiative, Russia should develop a more substantial proposal as a contribution to the Durban platform. Adopting a domestic mitigation target would show developing countries that Moscow practices what it preaches. Even though Russia's role is less decisive in the current climate negotiations than it was during the Kyoto ratification process, the fact that Russia's approach to future burden sharing is in line with other industrialized countries' provides Moscow a better platform to create a cooperative role for itself. Given Russia's transition economy status, the expectations for the country's mitigation target are probably limited, and thus fairly easy to fulfill. Despite its systemic problems with implementation, Russia has already gained credibility in terms of launching mitigation policies. This is slowly changing Russia's previous image of being just a potential seller of assigned amount units in international carbon markets. None of the steps suggested would compromise Russia's principles when it comes to the participation of all major emitters and staying outside of the Kyoto second phase; rather, they would put those principles into practice. However, making the most of this opportunity to develop a strategic role in the design of the new regime requires Moscow to take climate policy much more seriously.
In order to enhance its role and credibility in the global efforts to avert climate change, Moscow should depart from its traditional starting point: instead of presenting itself as a global leader in emission reductions, it should recognize its limited progress with climate mitigation policies and its responsibility to contribute more. The Kremlin's choice boils down to political will—and whether climate change is considered important enough—and to its ability to engage in serious strategic thinking and policy preparation. That would be something new from Russia in the field of climate policy.

Notes

1. The referenced greenhouse gas emissions exclude carbon sinks (LULUCF). National Inventory Submission 2010 by the Russian Federation to the UNFCCC. Available at http://unfccc.int/national_reports/annex_i_ghg_inventories/national_inventories_submissions/items/6598.php.
2. This is based on the IEA's Current Policies Scenario. Under its New Policies Scenario, energy-related carbon emissions will grow by 11 percent. See World Energy Outlook 2011.
3. Goodale, Christine, Michael Apps, Richard Birdsey, Christopher Field, Linda Heath, Richard Houghton, Jennifer Jenkins, Gundolf Kohlmaier, Werner Kurz, Shirong Liu, Gert-Jan Nabuurs, Sten Nilsson, Anatoly Shvidenko, "Forest carbon sinks in the Northern Hemisphere," Ecological Applications, 12 (3), 2002, 891–99.
4. Black carbon is a pollutant produced through incomplete combustion of biomass and fossil fuels. When it settles on ice or snow, it increases their heat absorption capacity, accelerating thawing. In Russia's snow-covered regions, it is brought mainly with air currents from Russia's nearby regions.
5. National Inventory Submission 2010 by the Russian Federation to the UNFCCC. Available at http://unfccc.int/national_reports/annex_i_ghg_inventories/national_inventories_submissions/items/6598.php.
6. See the Enerdata energy database at www.enerdata.net.
7. See the Enerdata energy database at www.enerdata.net; Alexander Bedritsky's statement at the Cancun climate conference, December 9, 2010.
8. Total primary energy supply dropped from 868 million tons of oil equivalent to 581 million tons of oil equivalent. (Russian Energy Survey, International Energy Agency, 2002.)
9. European Bank for Reconstruction and Development, "The Low Carbon Transition," 2011.
10. Carbon intensity is the amount of carbon dioxide emitted to generate one unit of GDP. In 1990, the USSR's carbon intensity was twice the global average—it stood at 1.17 kilograms of carbon dioxide per dollar of GDP (measured in purchasing power parity). See Enerdata.
11. See the World Resources Institute database at www.wri.org.
12. Russia's carbon intensity in 2010 stood at 0.43 kg of carbon dioxide per unit of GDP. See Enerdata.
13. See the World Bank database at data.worldbank.org.
14. See the Enerdata energy database at www.enerdata.net.
15. "Doklad ob Osobennostiakh Klimata na territorii Rossiiskoi Federatsii za 2008 God," Federal Service for Hydrometeorology and Environmental Monitoring, Moscow, 2009.
16. Elena Liubimtseva, "Global Food Security and Grain Production Trends in Central Eurasia: Do Models Predict a New Window of Opportunity?" National Social Science Journal, 41 (1), 2010, 154–65.
17. O. Anisimov, A. Velichko, P. Demchenko, A. Eliseev, I. Mokhov, V. Nechaev, "Effect of Climate Change on Permafrost in the Past, Present and Future," Atmospheric and Oceanic Physics, 38 (1), 2002, s25–s39.
18. At the moment, countries are divided into industrialized Annex I countries, which are obliged to take quantitative emission limitation or reduction targets, and developing non-Annex I countries without such commitments. Set forth in Article 6 of the Kyoto Protocol, Joint Implementation projects are mechanisms by which countries that have binding emission targets, the so-called Annex I countries, can meet their obligations not domestically but in other Annex I countries.
19. Vast amounts of associated gas released during production of crude oil continue to be flared instead of utilized productively. The exact amount of gas flared in Russia is unclear. Estimates for 2010 vary between 16 and 35 billion cubic meters. (World Energy Outlook, 311.)
20. Russia has insisted on an accounting method that would allow it to factor in a significant increase of harvesting forest before accounting for losses due to declining forest sinks. In addition, the uncertainty of data and the broad interpretation of "managed" forests (that is, forests that are subject to active policy measures) add headroom to the accounting. See, for instance, Anna Korppoo and Thomas Spencer, "The Dead Souls: How to Deal with the Russia Surplus?" FIIA Briefing Paper 39, Finnish Institute of International Affairs, September 4, 2009.
21. With the condition of including forest carbon sinks in the accounting.
22. Most Russian and foreign analysts agree that Russia's emissions are likely to stay somewhere between 15 and 20 percent below the 1990 emission level. This is sufficient, as the Russian pledge contains 10 percentage points from the forest carbon sinks. On emission levels, see for instance: Igor Bashmakov, Nizkouglerodnaya Rossiya: until 2050, Center for Energy Efficiency (in Russian, 2009); Vladimir Malakhov, "Economic Perspectives on Low-Carbon Development in Russia," International Journal of Low-Carbon Development, 5 (4), 298–302; McKinsey & Company (2009), "Pathways to an Energy and Carbon Efficient Russia."
23. Projects approved with a decree of the Ministry of Economic Development, as listed by the carbon unit operator Sberbank, May 29, 2012, www.sbrf.ru/moscow/ru/legal/cfinans/sozip.
24. The Clean Development Mechanism was established by the Kyoto Protocol for Annex I countries to meet their emission reduction commitments by investing in developing countries. Russia is not eligible to participate in the Clean Development Mechanism.
25. Anton Galenovich, "Carbon Protectionism vs. Carbon Leakage: Issues and Solutions," presentation at the Ministry of Economic Development, March 15, 2012. Available at www.vavt.ru/main/site/LSP806C80/$file/10_Anton_Galenovich.pdf.
Wren, Ph.D. © BalancedReading.com, 2002

Decoding text is half of the game of reading. To be able to read, children must be able to comprehend language, and they must be able to decode text (see the Simple View discussion). Simply decoding text, however, is not sufficient -- children must be fluent and highly accurate at decoding text. Decoding should be as automatic as possible. In the beginning, decoding is laborious, and a great deal of concentration must be devoted to sounding out words. Once a child learns how to correctly sound out words, what that child needs more than anything is practice. Time spent on task at this crucial stage is critically important, and the task that children spend the most time with is reading actual text. Once a child has figured out most of the basics for sounding out words -- once the child has developed good decoding strategies -- the child needs to practice those strategies with real words in real text.

To learn good decoding strategies, children rely on more basic, more fundamental skills. First, children must develop an understanding that words have meaning and that there is a structure to text. Children develop healthy "concepts about print" when they spend time lap-reading and reading interactively with their parents, caregivers and teachers. Next, children must become familiar with the letters of the alphabet -- they must learn to easily identify and distinguish the letters. They don't have to learn the letter names, necessarily, but they must be able to easily and individually identify them somehow. Children must also develop an understanding that spoken words are made up of phonemes -- phoneme awareness is one of the biggest stumbling blocks that children face, and teachers must make sure that all children have phoneme awareness as soon as they can. And children must put their knowledge of the letters together with their awareness of phonemes -- they must learn that the letters in printed text represent the phonemes in spoken language. In other words, they must learn the alphabetic principle.

These are the fundamentals that give rise to good decoding skills. Children who have these skills (concepts about print, letter knowledge, phoneme awareness, and knowledge of the alphabetic principle) in kindergarten usually go on to become healthy readers. Children who are still learning these skills at the end of the first grade usually do not go on to become healthy readers (see "M is for Matthew...").

These basic skills are necessary, but not sufficient, for reading success. Children must practice applying these skills with real words and real text. They must be given many, many opportunities to write and read real connected text, and they should get many opportunities for feedback and instruction from teachers. The primary goal is to teach children the patterns that exist in the English spelling system, but this is not usually accomplished by teaching abstract rules about spelling-sound relationships -- children are good at finding patterns, but they are lousy at applying rules. To emphasize the patterns that exist, instructional strategies like those advocated by Pat Cunningham (Making Big Words, and Phonics They Use) are extremely effective.
I've written a document that outlines the essential knowledge domains that underlie healthy decoding skills called "The Cognitive Foundations of Learning to Read," and I've also written a short document illustrating the difference between decoding and reading called "Decoding and the Jabberwocky's Song." This article by Connie Juel and Cecilia Minden-Cupp is also quite informative.

In English, there is a good deal of regularity between the letters and the sounds (phonemes), but there are also quite a few exceptions. There are very few letters in English that always correspond to a single sound, and there is no one sound that always corresponds to a single letter (see "P is for Phonics"). English, it is said, has a "deep orthography," which basically just means that there are a lot of words that are not spelled the way they sound (e.g., "colonel" or "choir"). This is illustrated by the following table, which shows the one-to-many relationship that exists between letters and sounds (phonemes). Each row lists words in which the same letter (or letter group) represents different sounds:

- A: APPLE, AUTHOR, AUTHORITY, ANY, SAID, SAY, ALGAE
- C: CITY, COUNTRY, CHAIR
- E: BED, BEAD, STEAK, EUREKA, THE, SEW
- G: GIANT, GRUNT, RING, REIGN, SIGN, ENOUGH
- H: HOLE, PHONE, SHINE, CHORE, CHOIR, HOUR, EXHIBIT
- I: FINE, LID, CEILING, WEIRD, GOITER
- O: BOY, BOOT, FOOT, BLOOD, COYOTE, OUNCE, ONCE
- P: PAT, PHONE, PSYCH, PNEUMATIC
- S: SAND, SUGAR, EASY, AISLE
- T: TAN, THAN, THIN, LATCH, OFTEN
- U: UNDER, POUND, UNIQUE, TULIP, POUR, AUTHOR, AUTHORITY, CHURCH, BUSY, DIALOGUE
- W: WON, WREN, COW, LOW, AWFUL, FEW, WHICH, WHOLE
- X: RELAX, LUXURY, EXECUTIVE, XENON
- Y: YES, PSYCH, THEY, SAYS, VERY, PYGMY
- Z: ZOO, WALTZ, RENDEZVOUS
- AU: AUTHOR, AUTHORITY, LAUGH, BUREAU, RESTAURANT, DINOSAUR, BEAUTY, GAUGE
- EA: EAT, CREATE, GREAT, IDEA, DEAF, HEAR, HEARD, HEART, BEAR, BUREAU, BEAUTY
- OU: OUT, YOU, YOUR, COULD, YOUNG, JOURNEY, ENOUGH (see OUGH for more)
- TH: MOTH, MOTHER, FATHEAD
- IE: PIECE, PIE, QUIET, FRIEND, SOLDIER
- OO: FOOD, FOOT, BLOOD, FLOOR
- OA: TOAD, BOARD, BROAD
- AI: TRAIN, SAID, AISLE, AGAIN, AIR
- OUGH: COUGH, THOUGH, THROUGH, THOROUGH, THOUGHT, ENOUGH

A decoding question from a reader on the Discussion Forum: "Can you give some examples of how I can teach kids to use chunking in word identification?"

"Chunking" is a more efficient strategy for word identification that kids should be adopting in 2nd grade and beyond. There are certain letters in the English writing system that tend to go together. It is more efficient for students to process those chunks of letters as a group than to process them individually. Common chunks like "ING" or "THA" or "EAT" should be recognized very quickly and automatically.

Some of Pat Cunningham's "making words" activities are great for teaching kids to chunk letters in word identification. Start by giving each student letter cards or letter tiles with the following letters: A E T L K S N. Tell the students to arrange the letters to make the word "TAKE." Then ask them what letters they need to change to make the word "LAKE." Then tell them to make the word "SAKE." Then tell them to make the word "SNAKE." Then change it to "STAKE." Point out to them that the letters "AKE" are common letters in English. They are used in a lot of different words. You can demonstrate some more using other letters (SHAKE, FAKE, MAKE, BAKE, etc.).

You can do the same thing with initial letters or medial letters. Go from STRING to STRONG to STRAW to STREET -- tell them that the letters "STR" are common in English. Go from RING to BRING to STRING to THING to KING -- tell them that "ING" is a common chunk of letters.
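For teachers preparing these activities, a short script can do the grouping ahead of time. The sketch below is a minimal illustration and is not from the original article; the word list and chunk inventory are made-up examples.

    # Minimal sketch: group a word list by the letter chunks it contains,
    # to prepare "making words" style activities. Words and chunks here
    # are illustrative examples, not a curriculum list.
    from collections import defaultdict

    WORDS = ["take", "lake", "sake", "snake", "stake", "ring", "bring",
             "string", "thing", "king", "book", "look", "took", "shook"]
    CHUNKS = ["ake", "ing", "ook"]

    def group_by_chunk(words, chunks):
        groups = defaultdict(list)
        for word in words:
            for chunk in chunks:
                if chunk in word:
                    groups[chunk].append(word)
        return groups

    for chunk, members in sorted(group_by_chunk(WORDS, CHUNKS).items()):
        print(chunk.upper(), "->", ", ".join(w.upper() for w in members))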
To expand, have them look for common chunks of letters in their book -- letters that often go together. They might come up with examples like "THE" (THE, THEM, THEY, THEME, THEIR, ANOTHER, etc.), "OOK" (BOOK, LOOK, TOOK, SHOOK, BROOK, COOK, HOOK, etc.), "PLA" (PLAY, PLATE, PLAN, PLASTER, PLACE, etc.) and "AME" (SAME, LAME, CAMEL, NAME, BLAME, etc.). Have students create words for pocket charts that contain letter chunks. Next to "OOK" they would have LOOK, BOOK, TOOK, SHOOK, etc. When a student comes up with a new one, they can add it to the pocket chart.
The Mighty Miss Malone
One Remarkable Family Is on the Journey of Their Lives! From Newbery Award-winning author Christopher Paul Curtis comes a heart-wrenching and suspenseful novel about an unforgettable struggle to survive the Great Depression. Twelve-year-old Deza Malone has a close and loving family, and she's the smartest girl in her class in Gary, Indiana. But times are tough, and it's hard for black men like Deza's father to find work. Desperate to help his family, Deza's father leaves town to look for work, and soon Deza, her mom, and her older brother, Jimmie, are setting off in search of him. Along the way, they experience many Depression-era hardships, including living in a shantytown and riding the rails, all the while never giving up the hope of being together again.
Interest Level: Grades 3-5
Genre: Historical Fiction, Multicultural
In the days following the late-August earthquake in Napa, a high-tech manufacturer called Jawbone analyzed sleep data from thousands of users of its UP wristband, a device that monitors physical activity. The company used the data to produce a map showing exactly what time people were jolted awake on the night of the quake, based on their distance from the epicenter — information of great potential importance to public safety and emergency response agencies. The wake-up map is a recent example of the promise of massive amounts of personal data for the expanding field of mobile health (mHealth).

Mobile devices collect personal data
Dubbed the quantified self (QS) movement, the rapid rise of self-monitoring technologies offers insights into a person's daily life via wearable and mobile devices that collect data about exercise, movement, heart rate and other measures. But self-monitoring also has implications of significant value for health researchers and clinicians, said Donna Spruijt-Metz, director of the mHealth Collaboratory, a program of the USC Dornsife Center for Economic and Social Research that was recently established to drive advances in mHealth. That's why the mHealth Collaboratory hosted the Quantified Self Los Angeles (QSLA) Show & Tell Meetup in August. QSLA is one of 112 QS meetup groups around the world, loosely organized by California-based Quantified Self.

"The tools that self-monitoring enthusiasts and companies are developing will be a key part of mHealth progress at USC and other research institutions," said Spruijt-Metz, adjunct associate professor of preventive medicine at the Keck School of Medicine of USC and of psychology at the USC Dornsife College of Letters, Arts and Sciences. "The mHealth Collaboratory is here to get scientists, business people and the innovative-user community talking together and inspiring each other to develop disruptive new mobile health solutions."

It's just the beginning
Existing self-monitoring products, such as sensor-packed wristbands, typically work in sync with computer or smartphone apps. They track and analyze objective data such as exercise and activity, geographical movement, calories and sleep patterns. But that's just the beginning, said QSLA meetup organizer Ernesto Ramirez, a Ph.D. candidate in a joint program on public health at the University of California, San Diego and San Diego State University.

"The quantified self technologies present endless possibilities from a public health standpoint for collecting data and insights for public health research and interventions," Ramirez said. "To collect data on this scale through traditional methods would be prohibitively expensive or impossible."

The technology will empower clinicians, who will be able to more accurately track and manage the health of individual patients with a range of conditions, including chronic issues such as diabetes and heart disease, Spruijt-Metz said. At the same time, researchers will be able to easily gather high-quality data from hundreds or thousands of participants, which could speed the development of new drugs or other health interventions.

Approximately 25 self-trackers attended the QS meetup, including researchers and self-tracking application developers. Such early adopters are important to USC researcher Gillian O'Reilly, a Ph.D. candidate in the division of health behavior research in the Department of Preventive Medicine at the Keck School of Medicine.
O’Reilly is currently conducting research to identify personality characteristics associated with enthusiastic and diligent self-monitoring. “Not everyone likes the idea of self-tracking, and some people are suspicious of sharing that data with researchers,” O’Reilly said. “My goal is to understand the barriers and develop interventions that could motivate people to stick with it.” Support for the mHealth Collaboratory is provided by a grant from the USC Research Collaboration Fund offered through the USC Office of Research, with additional support from the USC Dornsife Center for Economic and Social Research, the Institute for Creative Technologies and the Southern California Clinical and Translational Science Institute.
Definitions of subjunction
n. - Act of subjoining, or state of being subjoined.
n. - Something subjoined; as, a subjunction to a sentence.
The word "subjunction" uses 11 letters: B C I J N N O S T U U. It has no direct anagrams, and adding an s forms the only word made by adding one letter: subjunctions.
Although reporting may be one of the last tasks in an evaluation, explicitly discuss the content, sharing, and use of reports during the initial planning of the evaluation, and return to that discussion throughout. Most importantly, identify who your primary intended users are. Use of the evaluation often depends on how well the report meets the needs and learning gaps of the primary intended users.

Besides the primary intended users (identified as part of framing the evaluation), your findings can be communicated to others for different reasons. For example, lessons learned from the evaluation can be helpful to other evaluators or project staff working in the same field; or it may be worthwhile remolding some of the findings into articles or stories to attract wider attention to an organisation's work, or to spread news about a particular situation.

You will share the findings of the evaluation with the primary intended users and also with other evaluation stakeholders. Don't limit yourself to thinking of sharing evaluation findings through a report. Although a final evaluation report is important, it is not the only way to distribute findings. Depending on your audience and budget, it may be important to consider different ways of delivering evaluation findings:
- Presenting findings at staff forums and subject matter conferences
- Developing a short video version of findings
- Sharing findings on the organisation intranet
- Sharing stories, pictures and drawings from the evaluation (depending on what options you have used to gather data)
- Creating large posters or infographics of findings for display
- Producing a series of short memos

Tasks related to this component include:
- Identify the primary intended stakeholders and determine their reporting needs, including their decision-making timelines.
- Develop a communication plan.
- Produce the written, visual, and verbal products that represent the program and its evaluation according to the communication plan. Graphic design and data visualization can be applied to emphasize key pieces of content and increase primary intended user engagement. Review the reporting products to make sure they are accessible for those who are colorblind, low-vision, or reliant on an audio reader.
- If it is part of the evaluation brief, make recommendations, on the basis of the evaluation findings, about how the program can be improved, how the risk of program failure can be reduced, or whether the program should continue.

5. Support Use
Communicate the findings and recommendations, but don't stop there. As primary intended users reflect on the evaluation, facilitate the review to gather their feedback and guide their interpretations. Plan ways and times to check in on progress toward improvement. Look for opportunities to share the unique aspects of the program and its evaluation with external audiences.
What’s the difference between flour and corn tortillas? According to the Tortilla Industry Association, “tortillas are the most popular of all ethnic breads in the U.S. today including bagels, pita bread and English muffins,” so obviously this is a question that needs to be addressed. The following examines the pros and cons of both corn and flour tortillas to figure out once and for all which is healthier.

Calories
Corn tortillas contain 70 calories and flour tortillas have 150. Obviously, corn has the edge here. However, you have to consider that corn tortillas are often smaller than flour ones, so you may eat more. But on a tortilla-to-tortilla calorie comparison, corn wins.

Fat
Corn tortillas have 1 gram of fat, and flour tortillas are higher at 3.5 grams. While not all diets make low fat a priority, and many suggest that incorporating certain fats into your diet can be beneficial, the most concerning factor in this matchup is that 1.5 grams of the fat in flour tortillas is actually saturated fat, commonly known as the “bad fat” that all healthy eaters need to avoid. Corn wins.

Protein & Fiber
Protein and fiber are sometimes overlooked, but they are very important factors to consider when examining a nutrition label. Protein tells you how long the food will stick with you and help prevent hunger. Fiber assists with digestion and helps make sure that your food passes through you rather than being converted and stored as fat. Corn tortillas have 2 grams of protein and zero grams of fiber. Flour tortillas have 4 grams of protein and 1 gram of fiber. Winner here: flour.

Ingredients
Corn tortillas have a very simple ingredient list: corn flour and water. Corn tortillas are both wheat and gluten free, which is great for people with allergies. Flour tortillas have a much longer ingredient list, including additives for flavor such as salt. One flour tortilla can have up to 20% of your daily sodium intake. Another point for corn.

Versatility
Both corn and flour tortillas have a place in the kitchen. Flour tortillas are great for handheld foods, including burritos or wraps. Corn tortillas are great for tacos, enchiladas and tostadas. Winner here? We’ll call it a tie.

Taste
Don’t forget to consider taste, even when on a diet! Corn tortillas have a more earthy, natural taste, while flour is more rich and decadent. Winner? It comes down to personal preference.

After examining the health factors in a side-by-side comparison, both flour and corn tortillas have their pluses and minuses. But based on calories, fat content and the ingredient list, all key components in weight loss, corn tortillas win as the healthier option. So next time you are at a restaurant and your server asks, “Would you like corn or flour tortillas?” make the healthy decision and choose corn.
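For readers who like to see the arithmetic, the per-tortilla numbers quoted above can be compared in a short Python sketch. The figures are the article's own; the scoring rule (prefer fewer calories and less fat, but more protein and fiber) simply makes the article's logic explicit.

```python
# Per-tortilla nutrition figures quoted in the article.
facts = {
    "corn":  {"calories": 70,  "fat_g": 1.0, "protein_g": 2, "fiber_g": 0},
    "flour": {"calories": 150, "fat_g": 3.5, "protein_g": 4, "fiber_g": 1},
}

for nutrient in ["calories", "fat_g", "protein_g", "fiber_g"]:
    corn, flour = facts["corn"][nutrient], facts["flour"][nutrient]
    lower_wins = nutrient in ("calories", "fat_g")  # less is better here
    winner = "corn" if (corn < flour) == lower_wins else "flour"
    print(f"{nutrient}: corn {corn} vs flour {flour} -> {winner} wins")
```

Run as-is, it awards calories and fat to corn, and protein and fiber to flour, matching the verdicts above.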
How to Calculate the Quota Rent on Supply & Demand
Original post by Edriaan Koening of Demand Media

A government may impose a quota on a certain product for various reasons, for example to keep natural resources sustainable or to protect domestic producers. Because of the quota, the quantity of products being traded changes, and the price changes as well. Quota rent refers to the economic benefit captured by the party that gets to sell the restricted quantity at the higher price. By illustrating the situation on a supply and demand graph, you can find the quota rent.

1. Draw a graph with the vertical axis representing price and the horizontal axis representing quantity. Label the price and quantity ranges of the product under quota along both axes. For example, label the price axis with the numbers $0 to $1,000 and the quantity axis with the numbers 0 to 500.

2. Find out the levels of supply and demand at various price points for the product under quota. For example, a coffee table may have a demand quantity of 700 and a supply quantity of 200 at the price of $200. The same product may have a demand quantity of 300 and a supply quantity of 600 at the price of $600. Mark a point on the graph for each price point.

3. Connect all the supply points and all the demand points to get a demand-and-supply graph. Each line will cut across the graph diagonally, and the two lines will meet at one point. This point shows the price and quantity at which the product would be traded if there were only domestic demand and domestic supply.

4. Mark the price level at which the product is being traded in the world market to take into account demand and supply levels from other countries. Draw a horizontal line from the price axis across the graph. For example, the supply and demand lines for coffee tables may intersect at the price of $500 and the quantity of 550. However, importation of coffee tables pushes the price down to $400. Draw a horizontal line along the price point of $400.

5. Draw a vertical line down from the point where the horizontal line from Step 4 meets the demand line and another vertical line where it meets the supply line. Bring these vertical lines down to meet the horizontal axis of the graph. The space between the two vertical lines shows the quantity of products being imported. For example, if the lines intersect the horizontal axis at 400 and 600, then there are 200 coffee tables being imported into the country at the price of $400.

6. Determine the quota quantity imposed by the government. For example, the government may limit the number of coffee tables being imported into the country to 100. Find the horizontal level below the intersection of the supply and demand lines where the distance between the supply and demand lines is 100. Draw a horizontal line along this level between the supply and demand lines.

7. Draw a vertical line down from each of the two ends of the horizontal line you drew in Step 6 until it meets the horizontal axis of the graph. Shade the rectangular area made by these lines and the horizontal line from Step 4. This shaded area represents the quota rent.

8. Multiply the length and the height of the shaded rectangle from the previous step to find the amount of the quota rent. (A short script at the end of this article carries out the same calculation numerically.)

About the Author
Edriaan Koening started professionally writing in 2005 while studying toward her Bachelor of Arts in media and communications at the University of Melbourne. She has since written for several magazines and websites.
Koening also holds a Master of Commerce in funds management and accounting from the University of New South Wales.
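To check the graphical procedure numerically, here is a minimal Python sketch. It assumes linear demand and supply curves fitted to the two price points in the coffee-table example, and an illustrative quota of 50 tables; the variable and function names are invented for this sketch.

```python
def fit_line(p1, q1, p2, q2):
    """Fit quantity = intercept + slope * price through two points."""
    slope = (q2 - q1) / (p2 - p1)
    return q1 - slope * p1, slope

# Coffee-table data from the article: demand of 700 at $200 and 300 at $600;
# supply of 200 at $200 and 600 at $600.
a, b = fit_line(200, 700, 600, 300)   # demand: Qd = a + b*P (b is negative)
c, d = fit_line(200, 200, 600, 600)   # supply: Qs = c + d*P

world_price = 400
quota = 50  # assumed quota quantity, purely for illustration

# Under a binding quota, the domestic price rises until the gap between
# domestic demand and domestic supply equals the quota quantity:
#   (a + b*P) - (c + d*P) = quota  ->  P = (a - c - quota) / (d - b)
domestic_price = (a - c - quota) / (d - b)

# Quota rent: the area of the shaded rectangle from the steps above.
quota_rent = (domestic_price - world_price) * quota
print(f"Domestic price under quota: ${domestic_price:.2f}")
print(f"Quota rent: ${quota_rent:.2f}")
```

With these assumed curves, the domestic price settles at $425, so the rent on a 50-table quota is (425 - 400) x 50 = $1,250.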
Missile guidance systems: that is what many people picture when they hear of the computer language Ada. Today's Ada looks more like Inprise's Delphi than its caveman ancestor, and soon it will produce Java applets that you can drop into your web sites. But how does it stack up against the other languages available for Linux?

C++ is the de facto standard for Linux programming. After all, the kernel itself is written in C. However, C++ is not suitable for all kinds of projects, because different computer languages have different strengths and weaknesses. Ada was designed for team development and embedded systems, leading to advantages over C in development time and debugging. An in-depth 1995 study by Stephen F. Zeigler (http://sw-eng.falls-church.va.us/AdaIC/docs/reports/cada/cada_art.html) showed that development in Ada costs about half that of C++. It also suggests that Ada produces "almost 90% fewer bugs for the final customer."

The test bed compiler for Ada 95 was gnat. Gnat was developed closely with gcc, the native C compiler for Linux. Unlike some compilers, which translate a program into C and then feed the C program into gcc, gcc has built-in support for the Ada language. Like g++, gnat works with gcc, allowing it to produce fast, quality executables without any intermediate steps. This integration gives a lot of flexibility to programmers who want or need to support multiple languages.

Gnat has an extensive set of features for trading variables and function calls between Ada and C/C++. It can import C/C++ items into Ada and export Ada items to C/C++. You can also link Ada functions indirectly into Java, using Java's ability to import C++ functions.

Gnat comes with over 140 standard libraries. These include numeric and string libraries, file operations, hash tables and sorts. If you would rather work directly with Linux C libraries, a variety of "binding" libraries exist, available for download from the Public Ada Library or The Home of the Brave Ada Programmers. These include bindings for POSIX (that is, the kernel), X Windows, Motif, TCL and WWW CGI applications. Although gnat is distributed under the GPL license, gnat and its libraries may be used in commercial applications.

Execution-time benchmarks (lower is better):

| Operation | JDK 1.0.1 | gnat 3.09 | gcc |
| Long Int assignment | 20.88 | 0.35 | 0.22 |
| Long Int multiply | 29.50 | 0.38 | 0.23 |

Gnat performed well against gcc. In spite of gnat's extensive run-time error checks, the test programs ran on average only a third slower than gcc. With these checks disabled, you should get performance comparable with C. As an interpreted language, Java ran several times slower than either Ada or C.

The following table presents a summary of some common features, compared with other languages (the columns are C++, Java, Ada 95 and Delphi):

| Feature | C++ | Java | Ada 95 | Delphi | Notes |
| Objects & Classes | yes | yes | yes | yes | |
| Overloading | yes | some* | yes | yes | *no infix operators |
| Built-in Multithreading | NO | yes | yes* | yes | *has 2 kinds |
| Built-in Distributed Processing | NO | NO+ | yes* | NO | *requires free add-on; +not built-in, uses a class |
| Garbage Collection | NO | yes | yes* | yes | *not implemented by gnat |

As you can see, Ada holds its own against Java.

If you don't have the required version of gcc, you can specify a separate directory where gnat will install itself and its personal copy of gcc. To make gnat available, you have to perform two additional steps. Once you have the binaries installed, the more adventuresome can recompile gnat for their version of gcc.

Similar to Samba, ACT can provide comprehensive commercial support, but it costs several thousand dollars.
The support includes priority bug fixes and the latest version of gnat (compiled against whatever version of gcc you are using). The commercial support is not strictly required for serious projects, for the same reason that Samba's isn't. Technical assistance is available on the Internet, and since the compiler adheres to the international standard and has been well-tested, the public release is as well-built and reliable as gcc itself. The commercial support is aimed at substantial projects that need high-level support, such as projects critical to the success of a business or department.

Gnat comes with extensive documentation in HTML format that you can browse with lynx or Netscape. This includes complete coverage of all of gnat's unique features and options. Unfortunately, no tutorial for Ada is included. ACT also provides a free gnat add-on called "Glade," which enables Ada 95's built-in distributed processing support. Programs using Glade can work with each other transparently over a network.

When testing gnat, I found a minor problem related to the normalize_scalers pragma. This is a compiler directive that helps to detect variables used before they are initialized. The directive worked fine except in a package containing object definitions. All other language features I tested appeared to work properly under Linux, including multitasking.

Gnat's executables can be several times the size of a gcc executable. Gnat provides compiler directives to reduce a program's size, but if a small footprint is an important issue, you may want to avoid gnat. Gnat also lacks an IDE, but this is a common problem for Linux languages.

My biggest complaint isn't with the language: it's ACT's uneven customer support. Although I've always received prompt replies from ACT, they are not always courteous or helpful, and I've often been more frustrated than enlightened. Once I went to my local photocopy shop to make copies of the gnat manual, and they refused because the manuals contain a copyright notice. I emailed ACT, and they were quick to respond that the copyright notice would be changed in the next edition. It's been over a year since they updated their FTP site, and I'm still waiting to see this simple change.

On another occasion, my software company was investigating gnat's suitability as a development platform for Linux. As far as we knew, ACT could have been run out of Robert Dewar's basement. We wanted to know that there would be future releases of gnat before committing ourselves to developing a software base and suffering the stigma of programming in Ada. At first we were told we shouldn't consider developing in gnat unless we had their commercial support. Then we were told that they wouldn't provide commercial support for a fledgling company like ours. If ACT had given us a straight answer on the commercial support, told us that we would still be supported on a non-priority basis, and wished us luck on our future endeavours, we would have been more than happy. Instead, we felt that they had been dishonest with us and had treated us like dirt when we had gone out on a limb to consider gnat in the first place.

No company should be rude or disrespectful to its clients. They deserve straightforward answers, timely fixes, and a "please" and "thank you" now and then to show their interest is appreciated. Robert Dewar, the head of ACT, assured me that "ACT is committed to providing high quality Ada 95 products for Linux.
We have a number of serious Ada users using Linux today, and we intend to continue to serve the Linux market for such users." When speaking of Linux's exponential growth over the last two years, Mr. Dewar was quick to point out that the gnat Linux market is currently small. ACT is not interested in promoting the Ada 95 standard; they would rather spend their time improving their products and selling them to the existing Ada markets.

That the Linux community has largely overlooked gnat is not surprising. The difference between a successful project and a failure can often hinge on choosing the right development tool for the job, and gnat has a lot to offer. Gnat brings a versatile development environment to Linux, an efficient compiler and a rich set of development tools, and that is something the Linux community cannot ignore. Whether or not gnat has a bright future in Linux is anybody's guess, but it would be a shame if a high-quality piece of free software were overlooked. This is a lesson that all Linux enthusiasts have learned very well.

Resources:
Ada Core Technologies (ACT): http://www.gnat.com
Ada Information Clearinghouse (AIC): http://sw-eng.falls-church.va.us/
Gnat FTP Site: ftp://cs.nyu.edu/pub/gnat
Home of the Brave Ada Programmers (HBAP): http://www.adahome.com
Public Ada Archive (PAL): http://www.wustl.edu

References:
Cohen, Shy et al. Professional Java Fundamentals. Wrox Press, 1996.
Barnes, John. Programming in Ada 95. Addison Wesley, 1996.
Blaha, Stephen. C++ for Professional Programmers. Int'l Thomson Computer Press, 1995.
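For readers who have never seen Ada, here is a minimal sketch of the built-in tasking noted in the feature table above. It assumes a working gnat installation; the file name and task name are invented for illustration, and the program would be compiled with gnatmake hello.adb.

```ada
--  hello.adb : a minimal illustration of Ada 95's built-in tasking.
--  The task runs concurrently with the main program, so the two
--  messages may appear in either order.
with Ada.Text_IO; use Ada.Text_IO;

procedure Hello is

   task Greeter;  --  declare a task that starts when Hello is entered

   task body Greeter is
   begin
      Put_Line ("Hello from an Ada task!");
   end Greeter;

begin
   Put_Line ("Hello from the main program!");
end Hello;
```

Nothing here is specific to gnat; the tasking constructs are part of the Ada 95 standard itself, which is one reason the language scores "yes" for built-in multithreading in the table while C++ scores "NO."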
Senator Kennedy's prolific career spanned nearly five decades, during which he authored more than 2,500 bills in the U.S. Senate. Several hundred have become public law. This fall we hope to add yet another bill to that distinguished list: the Matthew Shepard Local Law Enforcement Hate Crimes Prevention Act.

Ted Kennedy was one of the Senate's earliest champions in the fight against hate crime. Since the early 1990s, Senator Kennedy called for better government response to the growing problem of violence motivated by racism, religious intolerance, sexual orientation bias or other similar factors. For example, in one of his most courageous political moments, Senator Kennedy argued in favor of legislation protecting those who face violence because of their sexual orientation or gender identity. He spoke out after realizing that gay, lesbian, bisexual and transgender persons, as well as those who seek to protect their rights, have been threatened by a particularly aggressive wave of bias-motivated violence.

Senator Kennedy later went on to compare hate crimes to "acts of domestic terrorism" and worked tirelessly to pass hate crimes legislation in the Senate. In 2007, he joined Sen. Gordon Smith in a bipartisan effort to pass the Matthew Shepard Local Law Enforcement Hate Crimes Prevention Act. The bill failed to advance in the Senate Judiciary Committee, but that did not deter Senator Kennedy. He continued to fight, and just this year, the Senate adopted this critical measure as part of the Defense Authorization Bill.

Human Rights First is one of many U.S. rights groups supporting the Matthew Shepard Hate Crimes Prevention Act, as it will help to ensure that law enforcement authorities have the tools they need to combat violent hate crime in the United States. This bill could prove to be one of the nation's strongest weapons to date to protect those who are most vulnerable to bias-motivated violence. These crimes -- including assaults on individuals, damage to homes and personal property, and attacks on places of worship, cemeteries, community centers, and schools -- undermine our shared values of equality and nondiscrimination, ideals that Senator Kennedy worked his whole life to promote.

Senator Ted Kennedy was a longtime friend of the human rights movement and a powerful supporter of social justice and democracy at home and throughout the world. He had a keen understanding of the courage and tenacity it takes to overcome adversity and to find the way forward when the odds seem insurmountable. This fall, we sincerely hope that President Obama will follow in his footsteps by signing the Matthew Shepard Hate Crimes Prevention Act into law.

Watch Human Rights First's Tribute to Edward Moore Kennedy.
Bismarck, Prince Otto Edward Leopold von
Sites relating to the "Iron Chancellor" of the Second Reich.

Astrocartography of Otto von Bismarck - Summary of Bismarck's career, and the astrological factors which supposedly influenced it.
Otto von Bismarck: The Iron Chancellor of Germany - During his life, Otto von Bismarck pursued the idea of German unification. As a result, Germany grew into a powerful empire under its iron chancellor.
Prince Bismarck Died Last Night - A contemporary article in the New York Times, covering international and American reactions. [PDF] (July 30, 1898)
Millions of maggots squirm over blackened pieces of fruit and bloody lumps of fetid flesh. A pungent stench of festering decay hovers over giant vats of writhing, feasting larvae. It's more than enough to put most people off their lunch. Yet these juvenile flies could soon be just one step in the food chain away from your dinner plate. Such nausea-inducing scenes are daily occurrences at a test site owned by AgriProtein, a South African company which began building what it says will be the world's largest fly farm a few weeks ago. Others in the US, France, Canada and the Netherlands are also gearing up for large-scale farming of insects to feed chicken, pigs and farmed fish. Hundreds of people attended the Insects to Feed the World conference in Wageningen in the Netherlands earlier this month. Many of them are convinced that bugs can provide a sustainable alternative to more conventional but increasingly expensive cereals, fishmeal and soybeans. While it may be normal for chickens scratching around a farmyard to gobble up grubs and bugs, until now no-one has taken it to an industrial scale or fed insects to animals that eat other foods in the wild. So can consumers stomach the idea of tucking into smoked salmon, chicken burgers and pork chops that come from maggot-fed animals? The UN Food and Agriculture Organisation estimates that population growth and increased demand for meat and fish will require 70% more feed for cattle by 2050. This will put extra strain on arable land, and further pressure on fish stocks; currently, a third of the fish landed gets turned into meal to feed animals. And the cost of feeding livestock has soared. "Ingredients for traditional animal feeds are becoming increasingly expensive, especially fishmeal because of over-exploitation of the oceans," says Arnold van Huis, an entomologist at Wageningen University, and co-author of The Insect Cookbook, an English translation of which was published in March. "Cereals are used but the nutritional profile of plant proteins is not good enough. Soya is high in protein but prices have also risen sharply. We need alternatives." It is hardly surprising that entrepreneurs are investigating new possibilities. AgriProtein announced earlier this month that it had raised $11m to build its first two commercial-scale farms. The first, in Cape Town, will create 20 tonnes of larvae and 20 tonnes of fertiliser per day. It uses three species – the black soldier fly, the blowfly and the common housefly. Each is adapted to feed on different types of waste, and their meals include leftover or spoiled food, manure and abattoir waste. Males and females are bred in giant cages and their eggs are extracted and mixed with its food. One kilogram of eggs turns into around 380kg of larvae in just three days. The larvae are then extracted, dried and milled, leaving behind nitrogen-rich material for compost. AgriProtein’s Magmeal product is approved as a feed for chickens and fish in South Africa. The company is also preparing to apply for approval for an iron-rich product made from larvae fed on blood and guts for use as an additive for breeding sows; piglets aren’t born with enough iron, and in the wild animals usually get what they need from soil. In captivity, they need iron supplements. AgriProtein hope their product will be cheaper. AgriProtein is not alone. The Vancouver-based Enterra Feed Corporation hope to triple production of black soldier fly larvae products for pet food and eventually for aquaculture feed by next summer. 
EnviroFlight, based in Ohio in the US, also produces feed for farmed fish made from black soldier fly larvae. Ynsect hopes to be farming mealworms and black soldier flies near Paris on a large scale by 2016. And Protix Biosystems in the Netherlands is now planning to expand its black soldier fly farming operation, selling larvae lipids for use in animal feed, and protein to pet food manufacturers.

Laws in some regions, however, are currently preventing insect feed from taking off. In the European Union, insects fall under the same rules as traditional livestock once they have been killed, dried or otherwise processed. That means their protein can be fed to pets but not to animals destined for human consumption. Even if this were to be changed, other regulations state that farmed animals cannot be fed on catering waste or manure. These regulations, drawn up in the wake of the 1990s BSE crisis, were never intended to apply to insects. The European Commission's Directorate-General for Health and Consumers in Brussels is working towards allowing insect meal to be fed to farmed fish. But this is likely to take a year.

Different laws apply in different US states. Insect-based ingredients in feed given to animals bred for human consumption are allowed in Ohio, and manufacturers are hopeful that other states will follow their lead. The Canadian Food Inspection Agency is also considering allowing protein and fats from insect larvae to be used in aquaculture and chicken feed.

There are differing views on the speed with which the law should be revised. "Some want to work with all kinds of risky material like manure or human faeces to open up the whole thing at once," says Kees Aarts of Protix Biosystems. "It's a bridge too far. It is also important that the industry does not over-sell itself. Insects can play an important role in providing a new source of nutrients in a world of growing demand, but they cannot solve all the big problems we're facing all at once."

Will consumers accept maggot-fed food? Many may not care – after all, the realities of industrial food production are already hidden from view for most people. And Jason Drew, who runs AgriProtein with his brother David, is convinced that a combination of the environmental benefits and the "back to nature" message will win over squeamish customers.

"Out in the fields, the natural diet of chickens consists of flies, larvae, worms and ants, and wild-caught trout gets its protein from insects," he says. "That's why they jump out of the water. What we're doing is going back to an entirely natural process, and that's something people understand very quickly. Ten to 15 years from now this will be a very big global industry."
Using a Multimeter

A multimeter is used to make various electrical measurements, such as AC and DC voltage, AC and DC current, and resistance. It is called a multimeter because it combines the functions of a voltmeter, ammeter, and ohmmeter. Multimeters may also have other functions, such as diode and continuity tests. The descriptions and pictures that follow are specific to the Fluke 73 Series III Multimeter, but other multimeters are similar.

Important note: The most common mistake when using a multimeter is not switching the test leads when switching between current sensing and any other type of sensing (voltage, resistance). It is critical that the test leads be in the proper jacks for the measurement you are making.

Safety
- Be sure the test leads and rotary switch are in the correct position for the desired measurement.
- Never use the meter if the meter or the test leads look damaged.
- Never measure resistance in a circuit when power is applied.
- Never touch the probes to a voltage source when a test lead is plugged into the 10 A or 300 mA input jack.
- To avoid damage or injury, never use the meter on circuits that exceed 4800 watts.
- Never apply more than the rated voltage between any input jack and earth ground (600 V for the Fluke 73).
- Be careful when working with voltages above 60 V DC or 30 V AC rms. Such voltages pose a shock hazard.
- Keep your fingers behind the finger guards on the test probes when making measurements.
- To avoid false readings, which could lead to possible electric shock or personal injury, replace the battery as soon as the battery indicator appears.

Test leads
The black lead is always plugged into the common terminal. The red lead is plugged into the 10 A jack when measuring currents greater than 300 mA, the 300 mA jack when measuring currents less than 300 mA, and the remaining jack (V-ohms-diode) for all other measurements.

Ranging
The meter defaults to autorange when first turned on. You can choose a manual range in V AC, V DC, A AC, and A DC by pressing the button in the middle of the rotary dial. To return to autorange, press the button for one second.

Automatic Touch Hold Mode
The Touch Hold mode automatically captures and displays stable readings. Press the button in the center of the dial for 2 seconds while turning the meter on. When the meter captures a new input, it beeps and a new reading is displayed. To manually force a new measurement to be held, press the center button. To exit the Touch Hold mode, turn the meter off. Note: stray voltages can produce a new reading. Warning: To avoid electric shock, do not use Touch Hold to determine if a circuit with high voltage is dead. The Touch Hold mode will not capture unstable or noisy readings.

AC and DC Voltage

Resistance
Turn off the power and discharge all capacitors. An external voltage across a component will give invalid resistance readings.

Continuity
This mode is used to check if two points are electrically connected. It is often used to verify connectors. If continuity exists (resistance less than 210 ohms), the beeper sounds continuously. The meter beeps twice if it is in the Touch Hold mode.

Current
Warning: To avoid injury, do not attempt a current measurement if the open-circuit voltage is above the rated voltage of the meter. To avoid blowing an input fuse, use the 10 A jack until you are sure that the current is less than 300 mA. Turn off power to the circuit. Break the circuit. (For circuits of more than 10 amps, use a current clamp.) Put the meter in series with the circuit as shown and turn power on.
The ICZN does not usually deal with the routine descriptions, naming and publishing of new species; this is a practical matter for taxonomists. However, the ICZN does define the rules which create the framework under which this can be undertaken.

Describing new species is a task for a specialist; much research may be needed to be sure a species has not already been described, and to decide if it is sufficiently different from existing species to describe. The characters considered diagnostic by taxonomists are often highly technical and specific to particular groups of animals, so it is best to consult an expert in the group concerned. Taxonomic procedure is described in published works such as Winston, J. 1999. Describing species. Columbia University Press.

It is important for taxonomists to follow the rules set by the ICZN when describing species. These ensure, for example:
- the description is published in a work that is obtainable in numerous identical copies, as a permanent scientific record (criteria of publication, Chapter 3);
- the scientific name must be spelled using the 26 letters of the Latin alphabet; binominal nomenclature must be consistently used; and new names must be used as valid when proposed (criteria of availability, Chapter 4);
- that names are consistently formed following certain rules; that original spellings can be established (formation of names, Chapter 7);
- that names are based on name-bearing types, the objective standard of reference for the application of zoological names (Chapter 16);
- that general recommendations are followed for ethical behaviour (Appendix A);
- and that best practice should be used to give taxa names which are unique, unambiguous and universal (Appendix B).
Translation of photosensitive in Spanish: fotosensible

Example sentences:
- When exposed to ultraviolet light, a photosensitive chemical in the liquid causes the material to harden and encapsulate the cells.
- One is a photosensitive cathode which emits electrons when exposed to light and the other is an anode which is maintained at a positive voltage with respect to the cathode.
- Some polymers, such as most polyimides and polycarbonates, are not photosensitive and are typically processed using photoresist patterning and reactive ion etching.
Swift images its first GRB
The upstart satellite imaged a gamma-ray burst even before it was fully operational.
January 25, 2005

For the first time, a gamma-ray burst (GRB) was imaged during its explosive act. An orbiting satellite named Swift accomplished this feat. Launched November 20, 2004 as the product of a NASA-led international collaboration, Swift is designed to study the mysterious, powerful GRBs — the most explosive events in the universe since the Big Bang.

On January 17, days before even being fully operational, two of Swift's three instruments successfully detected and imaged GRB050117. Less than 200 seconds after Swift's Burst Alert Telescope (BAT) detected gamma rays, the orbiting satellite autonomously turned its X-Ray Telescope (XRT) on the burst location. The XRT then captured an image of the burst — while the BAT was still detecting gamma rays. According to Neil Gehrels, Swift's principal investigator at NASA's Goddard Space Flight Center in Greenbelt, Maryland, the image is the first "prompt X-ray emission from a gamma ray (burst), and Swift's first autonomous slew caught in the act."

Within an hour after the BAT detected the burst, its location was transmitted to ground-based stations for further analysis. Four hours later, four observatories were searching for GRB050117's optical and infrared emissions. For future detections, telescopes in orbit will join ground-based observatories and turn to the burst location to observe the afterglow and its surrounding region — dramatically increasing the amount of data captured.

Penn State University's David Burrows, the XRT team leader, said processing data from GRB050117 will take about 2 weeks, so neither an image of the burst nor conclusive results are available at this time. Project scientists think it likely the burst resulted in a new black hole.

Swift locates and images the source of GRBs faster than any previous instrument. Most bursts last less than 10 seconds, and few last as long as a minute, so immediate response is critical for learning more about these mysterious phenomena. Burrows emphasized the significance of quick detection and analysis: "Getting to the bursts before they fade gives us a chance to gather more clues to the burst's origin. The longer look gives us more light to look at and a bigger spectrum" to analyze. In fact, project scientists wear beepers to alert them, any time of day, when a GRB is detected.

According to Burrows, Swift data provides a time capsule from the first generation of star formation. Swift can peer deeper into the universe — farther back in time — than any previous instrument.

No optical image available
Swift's third instrument, the UltraViolet/Optical Telescope (UVOT), was not yet operational to capture its own image of GRB050117. Burrows says all the instruments will be operational days ahead of the scheduled February 1 goal. On January 24, all three instruments completed gathering data from another burst, GRB050124. Not automated at the time, Swift focused its instruments on the burst 3 hours after its explosion. Each instrument executed its responsibilities successfully.
Find great books for preschool, elementary, and middle school children and teens, along with ideas for ways to teach with them in the classroom across the curriculum.

1. Letting Swift River Go by Jane Yolen and Barbara Cooney (Little Brown, 1992 ISBN 0316968994. Order Online) is my first choice for a science book because it presents not only the mechanics of storing water, but the results of that action on the people who are displaced. The idea that progress is a two-edged sword is well presented and should help students think about the prices paid. More information including activities, related books and links.

2. Everybody Needs a Rock by Byrd Baylor and Peter Parnall (Simon & Schuster, 1974 ISBN 0689710518. Order Online) could have gone into the math list as well as this one. It's an obvious choice when working with rocks and minerals in the science program, but it belongs in any science work where attributes are involved. The added accent on beauty and the senses makes this one golden. More Information.

3. Helen Cowcher's Tigress (Farrar, Straus & Giroux, 1991 ISBN 0374477817. Order Online) builds on the conflict between the needs of a wild animal and those of the populace, surely a pervasive problem in science. More Information.

4. Shark in the Sea by Joanne Ryder and Michael Rothman (Morrow, 1997 ISBN 068814909X. Order Online) involves transformation, as many of Ryder's books do. We experience one day through the eyes of an animal, in this case a shark. The idea of perspective is thus enlarged by an engaging and lyrical book.

5. Science is a search for truth. Besides that search, in The Day Jimmy's Boa Ate the Wash by Trinka Hakes Noble and Steven Kellogg (Dial, 1980 ISBN 0140546235. Order Online) we get varying perspectives and the idea that each action causes a reaction.

6. Virginia Hamilton and Barry Moser's In the Beginning: Creation Stories from Around the World (Harcourt, 1996 ISBN 0152387404. Order Online) belongs in a science collection for contrast, food for thought, and the evidence of the very human need to explain the unknown.

7. The focus is on procedure in Janet Stevens' Cook-a-doodle-doo (Harcourt, 1999 ISBN 0152019243. Order Online) as a descendant of The Little Red Hen tries to make a strawberry shortcake. The misunderstandings and mistakes make for a humorous story, but the added insets give explanations and information about each process and make this a good science book.

8. There are several reasons for using You Can't Take a Balloon into the Metropolitan Museum by Jacqueline Preiss Weitzman and Robin Preiss Glasser (Dial, 1998 ISBN 0803773014. Order Online) in a science program. First of all, of course, there is the balloon, which takes off and lands and takes off again. Finding the science behind each move is just one activity from this book. Another plus is that there are strong parallels between the balloon's activities and those of its owner inside the museum. Sometimes these parallels are obvious, but other times they are subtle. It takes careful observation to find them all.

9. Owl at Home by Arnold Lobel (HarperCollins, 1975 ISBN 006440346. Order Online) is a controlled reader, and you might skip right by the science inside, but it belongs in the science program. In each of these short stories, Owl could do with a little science training. His interpretation of natural events is confusing, to say the least. More Information.
10. The last of the science picture books is not as much of a stretch as some of my other choices, but I can't think about natural science without reaching for Owl Moon by Jane Yolen and John Schoenherr (Philomel, 1987 ISBN 0399214577. Order Online). It's a lyrical and pictorial gift to anyone who cares for the out of doors.

A Seed Is Sleepy by Dianna Hutts Aston. Illustrated by Sylvia Long. (2007, Chronicle. ISBN 9780811855204. Order Info.) Picture Book. 28 pages. Gr 2-8. This is a gorgeous picture book from the same author/illustrator team that brought us An Egg Is Quiet. We start with a beautiful opening page spread showing sunflower seeds nestled within the center of a ripe sunflower head. Beautiful illustrations show seeds next being secretive (lying dormant for a season or for years), fruitful (encased in blueberries and papayas) and so on. Full Review.
Most developers find it daunting to create an iPhone app, let alone people who have no programming experience. If you have never programmed before, you have no idea where to start, and the sheer volume of information can be overwhelming. Moreover, if you pick up a book, most assume you have some previous experience, preferably with an object-oriented programming language. For some reason, people keep thinking you need to learn the C programming language prior to learning Objective-C. Some may not even have heard of these programming languages.

Let's start by dispelling some common myths. No, you do not need to learn the C language. It would certainly help, but it is not required. If you have previous experience with one of the many object-oriented programming languages like Java, Ruby, Python or C#, then that is certainly helpful. But where do you start if you have no previous experience with programming?

First start with the basics: data types, variables, conditionals, loops and functions. After that you should be ready to learn object-oriented programming concepts like classes, objects, encapsulation and inheritance. You can then pick up any beginner's book or tutorial, because you are now equipped to follow along.

It's one of the reasons why we rethought our teaching style here at Treehouse. Seeing how daunting the experience can be for a beginner, we went back to the drawing board and carefully crafted the learning experience. Our new project-based content eases you into iPhone development, especially if you have no programming experience. Why? Because we realize that many of you don't have the time to learn a lot of theory before actually creating an app. Don't get me wrong: the theory gives you the building blocks that make a good developer, and if you have the time and inclination, then please do learn it. But imagine building an app while learning the theory as you go along. That's exactly what we hope to accomplish at Treehouse.

In our first project, we build a fun application called the Crystal Ball. Since we build this app in stages, you go through a step-by-step process in which we build upon the previous lesson until we have a fully functional app. You learn essential concepts like object-oriented basics, randomization, applying a design, using the accelerometer (capturing device motion), handling gestures, and deploying the app to the App Store.

Once you have the first project under your belt, you will have a very good sense of what it is like to create an iPhone app. And if you enjoyed the process, then you can move on to the other, more challenging iPhone projects. And if you didn't, then at the very least you can show off the Crystal Ball app to your friends and family.

Our goal is to make app development fun and easy. So what are you waiting for? Go ahead and create your first iPhone app. After all, expertise comes with experience, and the latter is quite important.
Over time, old shrubs that have not been pruned annually can turn into overgrown yard monsters, detracting from the appearance of your home and yard. Gardeners can trim overgrown bushes and shrubs, gradually replacing the old wood with new. This keeps the shrub healthy and looking attractive. Always prune in the dormant season, late winter to early spring, once frost danger has passed for your region. Since deciduous shrubs or bushes have no foliage then, it will be easier to make cuts at this time.

1. Look for dead or diseased branches on your bush or shrub. Removing these keeps the plant healthy and cuts back on the spread of disease. Dead branches feel brittle and don't move with the wind. Diseased ones may be scarred, wounded or discolored.

2. Cut off dead or diseased branches at their base, using anvil pruners for cuts thinner than 3/4-inch and lopping shears for thicker ones. To prevent disease from spreading, spray your pruning tools with disinfectant between each cut.

3. Clip off up to 1/3 of the old, overgrown branches, bearing in mind the total amount of unhealthy wood you just removed. Remove fewer branches if there was a lot of poor wood. Discard the clippings and wait until the second year.

4. Repeat steps 1 to 3 in the second year. There should be less unhealthy wood, since you removed so much in the prior season. Again clip off up to 1/3 of the overgrown old branches. There should be 1/3 old-growth wood remaining when you finish the second-year pruning.

5. Repeat steps 1 to 3 in the third and final year. Your old overgrown shrub will be replaced with new wood.

6. Trim the shrub annually to maintain a compact shape and prevent it from getting overgrown again. Trim back the tips of branches once or twice in a season using anvil pruners. Always remove dead and diseased wood.
Prior research had shown that, just two to four weeks after contracting HIV-1, the lymphoid tissue layer in the mucous membrane of a patient's gastrointestinal (GI) tract can lose up to 60 percent of its CD4 memory T cells -- immune cells responsible for recognizing invaders and priming other cells for attack. Intrigued, Martin Markowitz, an Aaron Diamond Professor at Rockefeller University and a staff scientist at ADARC, wanted to know whether this loss was reversible, and whether giving patients HAART during the early infection period helped restore these cells to the GI lining the way it restored them to the blood itself.

In a paper published today in PLoS Medicine, Markowitz, Rockefeller researcher and clinical scholar Saurabh Mehandru, and their colleagues report on a trial of 40 HIV-1 positive patients who began treatment with HAART shortly after contracting the virus -- during the acute early infection phase -- and whom they followed for one to seven years. The researchers found that although the blood population of CD4 T cells rebounded to normal levels, a subset of the GI tract population remained depleted in 70 percent of their subjects. "If we sample the blood, it only has two percent of the total volume of these cells. It doesn't give us the whole picture," Markowitz says. "But if we actually go into tissue, we see something different. What we see there is eye-opening." After three years of intensive drug therapy that suppresses HIV replication very effectively, most patients still had only half the normal number of CD4+ effector memory T cells in their GI tracts. "Obviously the first question is, why? What's the mechanism?" Markowitz says.

A second paper, published online in the Journal of Virology, makes some headway toward an answer. By examining the viral burden of DNA and RNA in cells from the GI tract, and comparing that to cells from the peripheral blood, Markowitz, Mehandru and their collaborators determined that the mucosal lining of the GI tract carried a disproportionately heavy viral load. That means that the initial loss of CD4 T cells in that area is partially due to virus activity. But the researchers also found evidence suggesting that there are at least two more ways in which the cells were being killed off. Some of the T cells self-destruct (a process called activation-induced cell death, or apoptosis), while some appear to be killed by other cytotoxic immune cells.

"These papers speak strongly to HIV pathogenesis, to HIV therapy, and to understanding how the host and virus interact," Markowitz says. However, the short- and long-term consequences of the persistence of this depletion remain unknown. In the clinic, if the loss of CD4 T cells in the GI tract translates into increased incidence of colonic polyps or colorectal cancer, routine monitoring practices will have to be re-examined, with HIV-positive patients receiving colonoscopies earlier and perhaps more frequently than current recommendations allow. In the laboratory, these findings should give researchers another angle with which to approach HIV vaccines. "What good is a vaccine going to be if you get immune responses in peripheral blood but there's nothing in tissue?" Markowitz says. "It's pretty clear that a successful vaccine will need to address issues surrounding mucosal immunity, which is an area that -- relatively speaking -- has been previously ignored."
Columbus taxpayers will spend billions of dollars to stop millions of gallons of sewage from spilling into the Scioto River and other waterways during heavy rains. As the city builds new treatment tanks and interceptor pipes, it could save a lot of money by investing in sand, according to an Ohio State University researcher. A recent research project that was funded by the city suggests that a system using sand — and the bacteria that live in it — could effectively treat sewage before it reaches the water. “It looks like a very, very promising technology,” said Karen Mancl, the project’s lead researcher and an expert on sewage and water quality at Ohio State. Mancl has spent more than three years studying the effectiveness of bioreactors, a type of sewage-treatment system that dates to the 1800s. The concept is simple: Sewage flows into the sand where bacteria digest ammonia, phosphorus and other pollutants that can sicken people, kill wildlife and help grow thick mats of toxic blue-green algae. The water that flows from the bioreactor contains only trace amounts of pollutants.
This ad was made for an anti-smoking campaign, mocking Joe Camel, the character used to represent Camel cigarettes. Smoking has been an issue around the world for decades, and recent research has been exposing its negative health effects. Advertisements such as the one below are very blunt and to the point. They don't try to candy-coat the fact that smoking has very harmful side effects. These anti-smoking ads are harsh, but they are made this way to scare people away from smoking, or to scare them into quitting. I feel that it takes a blunt ad such as this to really persuade the target audience, because it shows just how badly someone can end up after smoking. This ad is relevant to what we are learning in class because it proves the power of advertising and the strength that a message can have on people's behavior.
Scotland had the highest poverty levels and worst death rate of the regions

Almost a third of Scottish households are "breadline poor", according to research commissioned by the BBC. Changing UK - a study conducted by Sheffield University - looked at how nations and regions within Britain have altered over the past four decades. It said Scotland had the largest number of poor people in each of the last four decades, as well as the highest death rate of all 14 regions examined. The Child Poverty Action Group (CPAG) described the figures as "a scandal". The data was drawn from official sources and divided into 14 BBC television regions and 45 BBC Radio station areas, with Scotland defined as one of the TV regions. The report said that in each decade since 1970, Scotland had the highest proportion of people in the breadline poor category. The category is defined as a poverty line so low that people are excluded from participating in "the norms of society". In 1970, 27% of the Scottish population was classed as breadline poor, with the figure dropping to 23% in 1980. By 1990, 27% of Scots fell into the category, but this rose to 32% in 2000, according to the report. The proportion of people in Scotland classified as "asset wealthy" also rose, while the middle category of non-poor, non-wealthy was squeezed, indicating that the gap between rich and poor had widened over the 40-year period. John Dickie, head of CPAG in Scotland said: "Across Scotland the number of families living below the poverty line remains a scandal. "There is nothing inevitable about this injustice, an injustice that damages children's health, education and wellbeing in profound ways. "Whilst real progress has been made in the last 10 years in tackling child poverty, that progress has not gone far enough and has recently stalled completely." He called for "substantial extra investment" in child benefit and tax credits from the UK Government. "At times of recession, investing in our poorest families is not just the right thing to do morally, it is the most effective way to boost the economy as our worst off families have no choice but to spend any extra money immediately in the local economy," he added. Meanwhile, Office for National Statistics data revealed that Scotland had the highest mortality ratio of the 14 regions, with people north of the border 17% more likely to die on any given day, week, month and year than the average Briton. The researchers said this statistic took into account that the country's average age was 41.7 and said there were 17% more deaths than would have been expected. But the mortality ratio in Glasgow was the highest of all the nations and regions analysed in the study, at 31%.

Professor Robert Wright said Scotland had become more segregated

Other issues covered in the report included housing, with the report finding that more new homes were built north of the border in 2006 - 20,058 - than any other BBC TV region. Population trends were also analysed, with Scotland's falling by 1% in the period 1981-2006. The only other area to experience a decline was the north west of England. Meanwhile, Glasgow's population has dropped by 12% since 1981. Robert Wright, professor of economics at Strathclyde University's business school, said there had been a drift in population from the west of Scotland to the east. "The younger part of the population in Scotland is concentrating in the east," he said. "There's a big gap in the standard of living between Glasgow and Edinburgh."
The researchers also measured loneliness by looking at four factors: the number of non-married adults and one-person households, the number of people who rent privately and those who moved to their current address within the past year. These factors indicated that people were less likely to be involved in their local community or feel part of it, the researchers said. Loneliness increased in Scotland, according to the measurements, from 18.5% in 1971, to 28.5% in 2001, making it the third loneliest region in the UK. The study found Edinburgh had the largest number of lonely people, with a third of its population falling into the category. Professor Wright said rural areas were usually considered as more community focused than cities and that the loneliness trend may be explained by the city having a high student population. He said: "A lot of cities have a transient population, who are there for a short period of time to study, for example."
FDA OKs New Schizophrenia, Bipolar Drug

Saphris Trumps Placebo at Reducing Symptoms of Schizophrenia, Bipolar Disorder

Aug. 14, 2009 -- The FDA has approved a new drug called Saphris to treat schizophrenia and bipolar I disorder in adults. "Mental illnesses like schizophrenia and bipolar disorder can be devastating to patients and families, requiring lifelong treatment and therapy," Thomas Laughren, MD, director of the division of psychiatry products in the FDA's Center for Drug Evaluation and Research, says in a news release. "Effective medicines can help people with mental illness live more independent lives," Laughren says. The FDA notes that the most common symptoms of schizophrenia include hearing voices or seeing things that are not there, having false beliefs (for example, believing that others are controlling thoughts, reading minds, or plotting harm), and being inappropriately suspicious or paranoid. Bipolar I disorder is a chronic, severe, and recurrent psychiatric disorder that causes alternating periods of depression and high, increased activity and restlessness, racing thoughts, fast talking, impulsive behavior, and a decreased need for sleep. Saphris, which comes in tablets, belongs to a class of drugs called atypical antipsychotics. The FDA approved Saphris based on clinical trials in which the drug trumped a placebo at reducing schizophrenia symptoms in adults and other trials in which Saphris was better than a placebo at treating symptoms of bipolar disorder. In clinical trials, the most common side effects reported by schizophrenia patients being treated with Saphris were the inability to sit still or remain motionless, decreased oral sensitivity, and drowsiness. The most common side effects in clinical trials of patients treated with Saphris for bipolar disorder were drowsiness, dizziness, movement disorders other than the inability to sit still or remain motionless, and weight gain. All atypical antipsychotic drugs carry a "black box" warning, the FDA's sternest warning, alerting prescribers about an increased risk of death associated with off-label use of these drugs to treat behavioral problems in older people with dementia-related psychosis. Saphris isn't approved for those patients. Saphris is made by the drug company Schering-Plough.
How the Tutorial Works

The tutorial is composed of 16 modules, which are grouped into three courses. The tutorial was designed so that each module is free-standing — you can start anywhere and move around as you wish. However, you should be aware that later modules build on previous ones, and a greater level of statistical and epidemiologic knowledge is assumed for later modules. All of the modules in this tutorial start in a full-screen window that shows the major tasks that will be covered. Under each Task are several links to pop-up windows.

- The "Key Concepts" link has more information about the task, including NHANES specific information and caveats to consider.
- The "How to" link provides steps for completing the task and may include demonstrations of the steps.
- The "Download Sample Code and Datasets" link takes you to a page where you can download the code and datasets used in the module.

Screenshot of Module Facepage

It is important to note that the style of some of the tasks differs from the style of other tasks, depending on the type of information that is provided. Some tasks only have a "Key Concepts" link, whereas others have all three links mentioned above. Additionally, some of the early modules have tasks that require you to navigate the NHANES website and are interactive. Others provide SAS and SUDAAN programs, with explanations needed for completing the task. When you are asked to navigate the NHANES website in a "How to" link, you will see the live NCHS website on the right side of the screen and directions for completing the task on the left, as shown in the screenshot below. To print the instructions, use the "Print Text!" button.

Screenshot of "Navigate" type Task

Embedded in the "How to" instructions are links that say "Watch animation." These animations were created for those who need additional help completing a step. Clicking this link will open a pop-up window. In this window, you will see a narrated video demonstration of the step. If you choose not to view the demonstrations, you can bypass the animation and complete the tasks on your own using the instructions provided. For more information on using the demonstrations, see the demonstrations section of this document.

Screenshot of Demonstration Link

Adobe Flash player is required for viewing the demonstrations that show how to complete a task. See the Flash player section of Technical and Software Requirements for information on installing the Flash player. To view a demonstration, click the designated link. The animation will open in a pop-up window and start playing. The slides are timed and will advance automatically. If you wish to skip or repeat a slide, use the playback controls described in the next paragraph.

Screenshot of Playback Controls on Demonstration

Playback controls are at the top of the animation:
The first rule allows for the irradiation of unrefrigerated raw meat. Previously, only refrigerated or frozen meats could be irradiated, but FDA says research on the meat treated at higher temperatures shows that this application poses no health risk. The second rule ups the dose of absorbed ionizing radiation in poultry from 3.0 kilogray (kGy) to 4.5 kGy. While this higher dose is already allowed in meat and molluscan shellfish, the limit had remained at 3.0 kGy for poultry until now. The two rules were issued in response to two petitions filed in 1999 by the U.S. Department of Agriculture’s Food Safety and Inspection Service. FDA says that since that time, it has received many comments from consumer advocacy groups – including Public Citizen and the Center for Food Safety – requesting the denial of both petitions, as well as the denial of another rule permitting irradiation of molluscan shellfish. However, these comments “were of a general nature” and “did not contain any substantive information that could be used in a safety evaluation of irradiated poultry,” said the FDA in its new poultry irradiation rule. The agency reached the same conclusion for the comments urging denial of the new meat temperature rule. Irradiation is considered a food additive because it is a process that “can affect the characteristics of the food,” explains the agency. The treatment therefore falls under the jurisdiction of FDA, which regulates all additives, even though FSIS oversees meat safety. There are three safety issues to be considered when looking at food irradiation, says the agency. These include: – Potential toxicity – Nutritional adequacy – Effects on the microbiological profile of the food Irradiating unrefrigerated meat was not found to increase meat’s toxicity, change the food’s nutritional properties or increase the likelihood of certain bacteria thriving on meat; therefore FDA has determined that this is a safe application for the process. As for a higher radiation dose for poultry, since absorbed doses of 4.5 kGy have already been proven safe when applied to other flesh foods including beef, lamb and shellfish, there is no reason for this dose not to be allowed in poultry. “The Agency determined in the 1997 rule permitting the irradiation of meat, meat byproducts and certain meat food products, that the conclusions regarding the irradiation of specific flesh foods can be used to draw conclusions about the irradiation of flesh foods as a class,” notes FDA in its poultry rule. The two final rules went into effect November 30, 2012 – the day they were published. FDA requires that all meat that has been irradiated must be labeled with a radura symbol on packaging and notes that the same requirement will apply to foods irradiated under these new rules.© Food Safety News
Big-Shouldered River Swamps Indiana Town By ROBYN MEREDITH Published: March 7, 1997 UTICA, Ind., March 6— This town of 411 people is battling against the Ohio River, and so far, the water is winning. About 200 homes around here have already been overcome by chestnut-colored water, and even the two dozen National Guardsmen here with their ever-higher stacks of sandbags are slowly failing to keep the rising river from claiming the only road left into town. The worst of it, perhaps, is that so many of the people here being beaten by the big river were already struggling against another powerful enemy: poverty. In about half the houses here that have flooded, ''the families are very poor,'' said Glenn W. Murphy, a 46-year-old member of the Town Council. Some residents call themselves river rats. ''They can't afford to leave, they can't afford to stay,'' Mr. Murphy said. ''They can't afford flood insurance -- it is devastating.'' As Wednesday's rain adds strength to the already hefty Ohio River, river front cities and towns are being swamped by murky flood waters that show no signs of tiring. Throughout the Ohio River Valley, those made homeless by the floods are wondering when the water will retreat to its banks, or at least when it will tire of toying with them. Already, the flooding has killed 26 people in five states, left tens of thousands homeless and caused more than $400 million in damages from Appalachian hamlets in West Virginia and southeastern Ohio to the downtown streets of Louisville, Ky., Cincinnati and Memphis. There was flooding in West Virginia, Ohio, Kentucky, Indiana and Tennessee. All across the soggy land, radio stations played country music that laid out other people's heartaches, as if to keep flood victims from feeling too sorry for themselves. Between the songs, stations broadcast reports on which roads had been closed by floods. Here in Utica, a half-dozen trailer homes rested safely on high ground along a road at the outskirts of town. Downtown, near where the muddy water engulfed backyard swing sets and licked at the windows of nearby houses, National Guardsmen hurried to build a wall of sandbags to save the last road into town. Muddy water climbed up one side of the sandbags and leaked across the vital road. Firemen hooked a water pump to the puddle it formed and threw it back over the seeping wall, as if bailing out a slowly sinking boat. The Ohio is expected to crest in quiet darkness here on Friday. Up a steep hill, past three dogs sleeping in the sun, a dozen flood victims sought company and a warm meal at the miniature chairs and tables of the town's elementary school. T. J. Tower, a 6-year-old red-headed boy, said his home had been invaded by the water. ''All I can see is my roof,'' he said. The Ohio now carries more than it used to, he remarked. ''I saw a little teddy bear and a little remote-controlled car'' floating along, he said. The toys did not belong to him, he added. After dreary days of gray skies and sometimes torrential rain, the sun was shining across the Ohio River Valley today. Everywhere, it glinted off water -- the water found in ditches turned to creeks, creeks swollen to rivers and rivers transformed into lakes. The river crested on Wednesday in Cincinnati but is slowly and methodically rising along the stretch that separates Indiana and Kentucky. Water filled the downtown section of Aurora, Ind. In Tennessee, past where the Ohio flows into the Mississippi, the Mississippi is still rising and is not expected to crest until early next week. 
The state is further threatened by a storm that is expected to hit the swollen river with more rain on Saturday. Shelters have been prepared. ''We are all holding our breath and anticipating it will get worse before it gets better,'' said Howard Cobbs of the Tennessee Red Cross. In West Point, Ky., a town of 1,200 people 20 miles down river from Louisville, the police ordered the evacuation of 65 residents who have refused to leave. The stubborn ones stay, even though the business district is submerged and another foot of water is expected by Friday morning. ''Some people just don't want to come out,'' said Mililani Chun of the Hardin County Disaster and Emergency Services Division. ''Yesterday we were in a speed boat and we ran across a man who was fishing from his backyard.'' But on the other side of the Ohio River here in Utica, where water reaches the windows of the fire station and where the intersection of Fourth and Mulberry now serves as a boat ramp, residents were looking forward to the river's retreat. Dorothy M. Hall, 75, sat with others chased from their homes by the Ohio River. In anticipation of the flood, she moved her belongings to safer ground on the second floor of her house. ''A friend of mine had to carry me out on his back piggyback,'' she said, noting that while the river is still rising, it has not yet climbed to the top of her stairs. ''I'm getting too old for floods,'' she sighed, lamenting that she allowed her flood insurance to lapse after she retired and began living on a fixed income. ''It's so expensive,'' she said. Photo: National Guardsmen checked a leak in a sandbag wall they built in Utica, Ind., which is being swamped by flooding from the Ohio River. Yesterday, only one road into Utica remained free of flood waters. (Monica Almeida/The New York Times) Map of Indiana highlighting Utica: The people of Utica, Ind., mostly poor, are enduring a familiar trial.
Details about The Columbia Guide to Religion in American History: The first guide to American religious history from colonial times to the present, this anthology features twenty-two leading scholars speaking on major themes and topics in the development of the diverse religious traditions of the United States. These include the growth and spread of evangelical culture, the mutual influence of religion and politics, the rise of fundamentalism, the role of gender and popular culture, and the problems and possibilities of pluralism. Geared toward general readers, students, researchers, and scholars, The Columbia Guide to Religion in American History provides concise yet broad surveys of specific fields, with an extensive glossary and bibliographies listing relevant books, films, articles, music, and media resources for navigating different streams of religious thought and culture. The collection opens with a thematic exploration of American religious history and culture and follows with twenty topical chapters, each of which illuminates the dominant questions and lines of inquiry that have determined scholarship within that chapter's chosen theme. Contributors also outline areas in need of further, more sophisticated study and identify critical resources for additional research. The glossary, "American Religious History, A–Z," lists crucial people, movements, groups, concepts, and historical events, enhanced by extensive statistical data. The first edition is by Paul Harvey and is published by Columbia University Press.
There has long been debate about meat and its role in the consumption of resources. A recent New York Times op-ed column posits the notion that soybeans, often touted as the more eco-friendly protein source, might actually do the planet more harm than traditionally grazed livestock. Author Nicolette Hahn Niman, a livestock rancher herself, discusses the credible arguments against hamburger. But, she points out, the greatest detriment comes from animals housed in Concentrated Animal Feeding Operations (CAFOs). These operations, she writes, "[crowd] animals together in factory farms, [store] their waste in giant lagoons and [cut] down forests to grow crops to feed them [and] cause substantial greenhouse gases." Niman continues to discuss meat and its relation to carbon dioxide, methane, and nitrous oxides. Niman reminds readers that in America most agricultural CO2 comes from fuel burned by farming equipment, but in the rest of the world, deforestation is the big CO2 culprit. Further, much of this international deforestation is done for soybean cultivation. Brazil, for instance, dedicates about "70 percent of areas newly cleared for agriculture ... to grow soybeans." Much of this soy goes into making food for livestock in CAFOs, but it's also used to make the world's tofu, the dietary staple of many vegetarians. Traditional farms don't have much to do with this CO2 cycle, Niman says, because they tend to grow their own soy and keep their animals outside, using less machinery than major agribusiness. As for methane, agriculture's "second-largest greenhouse gas," Niman points out that rice fields are huge generators of this gas, as are the liquid manure concoctions Americans produce and pump all over their corn and soy crops ... but traditional farms with grazing livestock aren't really culprits in this planetary problem. Animals on these farms fertilize the land naturally. Niman points out that individuals can cut back on their carbon and methane footprints just by purchasing meat from grazed animals. The article points to research at Australia's University of New England and the University of Louisiana, which indicates that poor diets contribute even more to methane produced in industrialized animal operations. Niman continues her defense of meat in exploring nitrous oxide, which, again, seems to come mainly from man-made fertilizers on industrial farms. Organic meats (and other crops, for that matter) are free from chemical fertilizers and, thus, are not contributors to the problematic levels of nitrous oxide. She goes on to point out the benefit in grazing land, citing research from Kansas State University and North Dakota State University that indicates grazing animals are key to healthy prairies and increased vegetation. Other benefits include decreased erosion and improved water quality. The piece concludes by reminding us that transportation of food is as much of a problem as its production, and that eating seasonal, local foods (including meat that was raised traditionally) can not only decrease a person's greenhouse gas contribution, but actually help to improve ecosystems. "None of us, whether we are vegan or omnivore, can entirely avoid foods that play a role in global warming," says Niman. But if we avoid processed foods, buy locally and seasonally, and cut back our intake of meat, making sure it's from traditionally raised animals, "It could be, in fact, that a conscientious meat eater may have a more environmentally friendly diet than your average vegetarian."
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2006 September 7 Explanation: No single exposure can easily capture faint stars along with the subtle colors of the Moon. But this dramatic composite view highlights both. The mosaic digitally stitches together fifteen carefully exposed high resolution images of a bright, gibbous Moon and a representative background star field. The fascinating color differences along the lunar surface are real, though highly exaggerated, corresponding to regions with different chemical compositions. And while these color differences are not visible to the eye even with a telescope, moon watchers can still see a dramatic lunar presentation tonight. A partial eclipse of the Moon will be visible from Europe, Africa, Asia, and Australia.
California Fish Species

Scientific Name: Cottus beldingi

Paiute sculpins favor living on rubble or gravel in cold, moderate-gradient streams where water temperatures rarely exceed 20°C. They are also found living in lakes and surviving sustained temperatures in the range of 20-25°C where water flow is ample. Because Paiute sculpins typically live in the riffles of clear streams they are often found in association with trout. In Lake Tahoe Paiute sculpin are most often found in deepwater near aquatic macrophytes. In both stream and lake environments the sculpins feed primarily at night when they can more easily ambush and capture prey. Their diet in a stream may consist of aquatic insect larvae, aquatic beetles, snails, water mites, or algae. Dragonfly larvae are a focal point of feeding in meadow streams. Feeding in Lake Tahoe varies with the depth of the sculpin. Deep water dwellers feed on mostly detritus and algae, with other prey items supplementing their diet. Paiute sculpins in shallower regions eat primarily benthic organisms such as chironomid midge larvae. The feeding habits of Paiute sculpins vary with body size and seasonal changes, as certain prey are more available during specific time periods. They feed year-round with decreased consumption rates in fall and winter. Paiute sculpin reach sexual maturity in their 2nd or 3rd year, with spawning occurring primarily in May and June. Spawning sites are usually found where there is adequate rocky or gravelly substrate to hide nests. Presumably each female deposits her eggs in one nest to be fertilized. One study revealed mean fecundity in Lake Tahoe was 123 eggs per female, similar to egg production in Sagehen Creek, CA. When the fry hatch they remain within the nest for another 1-2 weeks, absorbing the yolk sac. The post-larval sculpins may then be washed away by downstream currents or littoral waves sometime in the following weeks. The success of Paiute sculpin has great variance and populations seem to thrive in the absence of winter flooding, though overall stream conditions may suffer.

Watershed: East Walker Watershed, Honey-Eagle Lakes Watershed, Lake Tahoe Watershed, Truckee Watershed, Upper Carson Watershed, West Walker Watershed

Please note, watersheds are at the USGS 8-digit Hydrologic Unit Code (HUC) scale, so they often include a lot of sub-watersheds. If a species occurs in any sub-watershed within the HUC, the species appears within the HUC. Link to an EPA page that shows HUCs.
In an effort to stay one step ahead of cyberattackers, researchers at the University of Texas at Dallas created a monster. A new kind of malware, named Frankenstein, avoids detection by repurposing trusted host programs and using methods differing from those used by traditional malware. "We wanted to build something that learns as it propagates," said Dr. Kevin Hamlen, associate professor of computer science at UT Dallas who created the software along with his doctoral student Vishwath Mohan, Science Daily reported. "Frankenstein takes from what is already there and reinvents itself. Just as [author Mary] Shelley's monster was stitched from body parts, our Frankenstein also stitches software from original program parts, so no red flags are raised. [The malware] looks completely different, but its code is consistent with something normal." Most so-called "metamorphic malware" attempts to avoid detection by mutating semi-randomly, a method which lends itself to detection once anti-malware software manufacturers determine the mutation algorithm being used. The creators of Frankenstein suggest that using code from known, non-malicious programs could allow malware to not only go undetected, but become white-listed. Hamlen and Mohan's research, which was supported by the National Science Foundation and Air Force Office of Scientific Research, could be used to improve existing anti-malware software and also be used for offensive cyberoperations, according to a research paper published online as part of a recent USENIX Workshop on Offensive Technologies. The next stage of research, the researchers said, will include a more comprehensive system and experiments to verify and extend initial research results.
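The core idea behind Frankenstein is easier to see with a toy analogy. The Python sketch below is not code from the UT Dallas paper and has nothing to do with real malware; it is a hypothetical illustration of why a program stitched from pre-existing, semantically equivalent building blocks defeats naive signature matching: every variant looks different as text, yet each piece comes from "legitimate" code and the behavior never changes.

```python
import random

# Three interchangeable "gadgets," each a benign way to double a number.
# Frankenstein analogously reuses instruction sequences found in trusted
# host programs rather than carrying its own recognizable payload bytes.
def double_add(x):
    return x + x

def double_mul(x):
    return x * 2

def double_shift(x):
    return x << 1

GADGETS = [double_add, double_mul, double_shift]

def build_variant():
    """Stitch a 'new' program by picking among equivalent benign parts."""
    return random.choice(GADGETS)

variant = build_variant()
# A scanner matching fixed byte patterns sees a different artifact each
# run, even though the observable behavior is always identical.
print(variant.__name__, variant(21))  # always computes 42
```

This is why the researchers argue that robust detection has to reason about what code does rather than what its bytes look like.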
I am stuck on the following three problems; any help is appreciated.

Two buildings with flat roofs are 60 feet apart. From the roof of the shorter building, 40 ft in height, the angle of elevation to the edge of the roof of the taller building is 40 degrees. How high is the taller building?

A ladder with its foot in the street makes an angle of 30 degrees with the street when its top rests on a building on one side of the street, and makes an angle of 40 degrees with the street when its top rests on a building on the other side of the street. If the ladder is 50 feet long, how wide is the street?
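One way to set these up, assuming level ground and right-triangle geometry (the numbers below are rounded, so treat this as a sketch rather than an authoritative answer):

```latex
% Buildings: the line of sight rises over the 60 ft gap from the 40 ft roof.
h = 40 + 60\tan 40^{\circ} \approx 40 + 60(0.8391) \approx 90.3\ \text{ft}

% Ladder: with the foot fixed in the street, each position projects the
% 50 ft ladder onto the ground; the street width is the sum of projections.
w = 50\cos 30^{\circ} + 50\cos 40^{\circ} \approx 43.3 + 38.3 \approx 81.6\ \text{ft}
```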
Many of the world's greatest scientists were inspired to go into their fields by reading science fiction books. And it's easy to see why. A lot of the best science fiction features scientists who solve problems and make breakthroughs. Here are 10 great novels that will inspire you with a new love of science. Top image: Painting by Adolf Schaller from Carl Sagan's Cosmos. Note: We tried to keep this list to books that actually feature heroic scientists who make progress — not scientists who meddle in things that people were not meant to yadda yadda. We looked for books where a scientist actually makes a discovery or invents something, and this is viewed as a Good Thing. This is by no means an exhaustive list, and we'd love to hear your choices!

1. Cryptonomicon by Neal Stephenson

Stephenson has been on a mission lately to encourage more positive science fiction in which people solve problems using science, including the upcoming Hieroglyph anthology. His 1999 novel follows a fictionalized set of World War II cryptographers, including Alan Turing, and a group of 1990s hackers trying to create a secret data network for people who are vulnerable to genocide; it shows how progress is carried forward from one generation to the next. Image by CoyoteGirl

2. Contact by Carl Sagan

Perhaps the most famous piece of science fiction about heroic scientists, Sagan's novel follows Ellie Arroway, who has a strong passion for science that leads her to get involved with the search for extraterrestrial intelligence. And spoiler alert: she finds some. This isn't so much the "aliens just show up" kind of story, but the kind where we find them by doing scientific investigation.

3. Bellwether by Connie Willis

Willis often features protagonists who are explorers or discoverers, but this Nebula-nominated book is unusual in that it's fairly close to "realistic" fiction. It follows a group of researchers in a laboratory who are discovering how to create fads, and wind up being part of several fads themselves. And in the end, studying sheep does indeed lead to a breakthrough that reveals something about human nature.

4. 2312 by Kim Stanley Robinson

Robinson's fiction is often concerned with environmental problems — but he doesn't just show people struggling with them, he shows them actually finding solutions. In his Science in the Capital series, which starts with 40 Signs of Rain, he shows the politics and science of mitigating climate change. And in this more recent future-set novel, he shows how environmentalists manage to raise Florida out of the ocean and reintroduce wolves to the wild. Robinson's 2312 is downright superheroic in its treatment of scientists.

5. The Dispossessed by Ursula K. Le Guin

This is one of the all-time great novels about a physicist who discovers a whole new kind of physics, which winds up having a practical application. On one level, The Dispossessed is about a scientist who is caught between two worlds: an anarchist planet and a capitalist planet. But a lot of the most fascinating parts involve the scientist discovering the Simultaneity principle, which in turn leads to the invention of the Ansible, Le Guin's famous device that allows instantaneous communication across spacetime. There is a lot of heroic physics in this novel, and it's wonderful. Art by Christian Pearce.

6. The Lifecycle of Software Objects by Ted Chiang

Here's a book that comes up in conversation all the time — this novella (which you can read for free online) follows a team of A.I.
"trainers" working on developing digital entities (or digients) from infancy. Because you can't just develop A.I., you have to grow it like a child, or nurture it like an animal. Chiang's characters aren't the computer scientists who created the digients in the first place, but theyre still smart people who do tons of problem-solving. And Chiang's fiction has this great thing where the more you explore the surprising ramifications of his big idea, the deeper into the situation the characters get — so the process of discovery is also the progress of the plot. 7. The Practice Effect by David Brin In this 1984 novel, scientists succeed in creating a device that manipulates space and time — and they're able to use it to travel to another planet, which is very similar to Earth. Except on this other planet, the second law of thermodynamics works differently: Objects don't get worn out, and in fact get stronger the longer they're used. It's up to Dennis Nuel to figure out why this aberration is happening. 8. A Natural History of Dragons by Marie Brennan Yup, it's a fantasy novel. But it's a fantasy novel about a naturalist, who studies the science of dragons. It's the first of a trilogy of novels about the heroic Isabella, Lady Trent, who travels around studying supernatural and mythological beasts. She not only makes great discoveries about dragonkind, but she also uses her scientific acumen to get herself and her group out of a series of nasty scrapes — it's the best kind of coming-of-age story. 9. The Sparrow by Mary Doria Russell This isn't quite as happy and upbeat a novel as some of the others on the list — but it's definitely about people making discoveries and doing science. A group of Jesuits discover an alien signal, and just like Ellie in Contact, they head out to meet the alien intelligences. There's a lot of clever problem-solving here — both in the space mission and in the "first contact" stuff, with linguistics turning out to be as important as physics in the end. 10. As She Climbed Across The Table by Jonathan Lethem I wanted to give a shout-out to Richard Powers, whose books about topics like creating artificial intelligence regularly make the list of great "Lab Lit" titles about scientists. But I also wanted to keep this list to 10 titles, and it's definitely worth including Lethem's bizarre, funny story of scientists who create an artificial black hole — and one scientist who falls in love with it. Not just because of the basic premise, but because the arc of the book is about figuring out just what the black hole, called "Lack," is, and why it only swallows up certain objects. There's a lot of great scientific deduction in this novel. What's your favorite novel that's driven by scientists making great discoveries or creating terrific inventions? Thanks to Genevieve Valentine, Alasdair Wilkins, Alyc Helms, Annalee Newitz, Mary Robinette Kowal and everybody else who suggested stuff!
Consistent with the mission of Yale Peabody Museum, we regularly document, collect and study the natural history of our surrounding Connecticut landscape. Over the past 150 years, Peabody staff have developed a remarkable understanding of the world’s biodiversity, and the Yale Natural Lands have figured in that research. The Peabody collections house many specimens from the eight properties that comprise the Yale Natural Lands, including from early Yale researchers and curators (e.g., A. W. Evans, E. B. Harger, G. E. Pickford, J. R. Reeder, C. L. Remington, J. W. Toumey). However, our overall understanding of the flora and fauna of the Yale Natural Lands is not exhaustive. To redress this we are building on the existing specimen base and bolstering our knowledge of changes to the biodiversity of southern Connecticut by conducting a BioBlitz - a 24-hour race to identify as many living organisms as possible in a specific area - of the Yale Peabody Museum Natural Areas in Branford and Guilford, Connecticut, in early May 2016. This will be followed by a full year of surveys of all Yale Natural Lands during which the Peabody Museum’s Divisions will document the biodiversity of each property, with generous financial support from the Yale Natural Lands Committee. The specimens and data collected during the BioBlitz and subsequent surveys will be organized and curated by two Yale undergraduate students. These Yale students will receive training in curatorial practices from representative collections managers and staff from the Divisions of Botany (P. Sweeney), Entomology (L. Gall), and Vertebrate Zoology (G. Watkins-Colwell). The culmination of this collaborative body of work will be a publication on Yale Natural Land biodiversity by curators, collections managers, and Yale students in the peer-reviewed scientific publication, the Bulletin of the Peabody Museum of Natural History. Data collected during the course of our surveys not only bolsters our understanding of southern Connecticut’s natural history but will provide a solid foundation for future research and education on Yale Natural Lands over the long term as well as continued surveys in the future. Between 2007 and 2010, the Yale Peabody Museum teamed up with Connecticut’s Beardsley Zoo and The Connecticut Audubon Society to hold five BioBlitzes — in this case the town of Stratford. One of the main goals of the Stratford BioBlitz was to collect data from a single town in multiple seasons to see how the species diversity of a place changes annually and throughout the year. Habitats available for the Stratford survey included two beaches, a salt marsh, a cranberry bog, rivers, streams, ponds and a mixed hardwood forest. See the results of the first four of these Stratford BioBlitz events: There are several BioBlitz events each year throughout Connecticut. Many are conducted by municipal land trust organizations or conservation groups interested in learning which species occur on properties in their care. Center for Conservation and Biodiversity & Connecticut State Museum of Natural History University of Connecticut
Advanced Functions Logarithms

A security PIN code is four digits long, and each digit can have a value from 0 to 9. If an office building needs 10 security PINs, express the number of codes in logarithmic form with base 10.

The attempt at a solution: I don't know how to do the question, but my best attempt produced what you see above. 4 represents the number of digits, and x = 10 because of the range 0-9.

Re: Advanced Functions Logarithms

If "the number of codes" means the 10 PINs the building needs, then log(10) = 1 when using base-10 logarithms. But the number of distinct possible codes, from which those 10 may be chosen, is 10^4 (four digit positions with 10 choices each), and log(10^4) = 4 when using base-10 logarithms. I wish I knew the actual question.
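For the record, a worked version of the counting, assuming the question is after the total number of possible four-digit PINs:

```latex
% Four digit positions, each with 10 independent choices:
N = 10 \times 10 \times 10 \times 10 = 10^{4} = 10000

% Expressed in logarithmic form with base 10:
\log_{10} N = \log_{10} 10^{4} = 4
```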
Researcher to study use of technology during recent earthquake

A University of Colorado Colorado Springs researcher will join a multidisciplinary team in New Zealand in an effort to study the effects of the earthquake that struck Christchurch on Feb. 22. Jeannette Sutton, senior research associate at the Trauma, Health and Hazards Center, will join a team from the U.S.-based Earthquake Engineering Research Institute. The team will examine the aftermath of the 6.3 magnitude earthquake with the goal of bringing back lessons that can be applied to U.S. building practices and used in academic settings. Sutton will examine the use of social media during the earthquake. Specifically, she is interested in how the technology was used to share information between victims and survivors and how volunteers and first-responders used the tools. "I'm also interested in the collaboration and coordination between volunteer groups and the official response organizations via social media," Sutton said. "And in those areas where social media or communication technology is not available, I want to learn about how they are communicating to vulnerable groups and populations who are at extreme risk." Sutton will join researchers from Auburn University, the University of British Columbia, the Johns Hopkins University, the U.S. Geological Survey, Colorado State University and Duke University. The group will focus on the performance of engineered structures in the earthquake, nonstructural building components, hospitals, and the health-care system, as well as risk communication and societal resiliency in addition to Sutton's interest in communication. The Christchurch earthquake is of particular interest to researchers because it was an aftershock of a September 2010 earthquake. The team is organized by EERI's Learning from Earthquakes Program, which has sent reconnaissance teams to investigate hundreds of earthquakes during the past 40 years. Six team members, including Sutton, are receiving support from the National Science Foundation. Other public and private organizations are contributing travel support. During the trip, the researchers will be contributing to a blog. To view, visit http://www.eqclearinghouse.org/2011-02-22-christchurch.

— Tom Hutton

Professor elected to leadership post with international academic group

Robert von Dassanowsky, University of Colorado Colorado Springs professor of German and film studies, has been elected vice president of the Modern Austrian Literature and Culture Association (MALCA), the international academic organization for Austrian studies. He will assume the office April 7 during the association's annual conference, hosted this year by Washington and Jefferson College in Pennsylvania. Dassanowsky also has been invited to speak at the "Cultures at War: Austria-Hungary 1914-1918" symposium at St Hilda's College, Oxford University, April 13-15.

CU educators take honors for authoring textbook

A textbook authored by two professors with University of Colorado connections, along with two other academicians from Washington state, was awarded the 2011 Textbook Excellence Award (Texty) in the college level Mathematics/Statistics category. The Text and Academic Authors Association (TAA) honored the first edition of "Briggs/Cochran: Calculus" by William Briggs, Lyle Cochran, Bernard Gillett and Eric Schulz. The book is published by Pearson Education/Addison-Wesley.
Briggs was on the mathematics faculty at the University of Colorado Denver for 23 years, teaching throughout the undergraduate and graduate curriculum with a special interest in applied mathematics. He developed the Quantitative Reasoning course for liberal arts students at CU Denver. He is the author of five other textbooks and monographs, and is a University of Colorado President's Teaching Scholar. Gillett is a senior instructor at CU-Boulder. He has earned five teaching awards over the span of a 20-year career. He has been active in the publishing industry since 1993, working at that time as a developmental editor for a software package that accompanied a college mathematics textbook series. He has published a number of books, including several student and instructor manuals for math texts, and four rock climbing guides for the mountains in and around Rocky Mountain National Park. Cochran is a professor of mathematics at Whitworth University in Spokane, Wash., and Schulz has been on the mathematics faculty at Walla Walla Community College in Walla Walla, Wash., since 1989. The Texty Award, created in 1992, recognizes current textbooks and learning materials. Judges are published textbook authors. "Briggs/Cochran: Calculus" was one of seven textbooks to receive the award. For a list of winners, visit http://www.taaonline.net/awards/2011winners.html

The awards will be presented during a luncheon at the 24th annual TAA Conference in Albuquerque on June 25. The TAA is the only nonprofit membership association dedicated solely to assisting textbook and academic authors. TAA's overall mission is to enhance the quality of textbooks and other academic materials, such as journal articles, monographs and scholarly books, in all fields and disciplines, by providing its textbook and academic author members with educational and networking opportunities.

Law professor's book focuses on Supreme Court justices

In her new book, University of Colorado Law Professor Emily Calhoun examines the obligations of Supreme Court justices to losing parties in constitutional rights disputes. "Losing Twice" (Oxford University Press) argues that justices have an obligation to avoid and ameliorate harm to citizens whose arguments about constitutional meaning are rejected. Building on that straightforward proposition, Calhoun shows how the justices' failure to satisfy their obligation inflicts unjust harm on constitutional losers. She moves beyond debates about judicial activism to construct a novel legal framework for evaluating the legitimacy of the work of Supreme Court Justices. The book draws on insights from many academic disciplines, but is directed at a general readership as well as academic audiences. It examines real-world constitutional rights disputes using language and concepts that will help any reader better understand why the Justices' resolutions of abortion, gay rights, and racial discrimination disputes can provoke such outrage. With the book, Calhoun aims to remind readers of the relationship that ought to exist among members of a political community committed to equality and government-by-consent. She questions assertions that justices should be thought of as umpires in an athletic contest or as mere elite, legal technicians.

Want to suggest a colleague — or yourself — for People? Please e-mail information to Jay.Dedrick@cu.edu
Materials:
- Brown supermarket paper bags
- Paint or crayons
- Green construction paper
- Paste or glue

Turn everyone in the class into a turtle. Draw a yellow plastron on the front and a green carapace on the back of the paper bags. Cut a tail from construction paper in the shape of a long "V". Glue the tail onto the bottom of the carapace. Cut holes for the head and arms. Get a group of friends to form a turtle parade. Maybe you could have a turtle race, where the winner is the slowest!
What is Holding Back Natural Gas as the Transportation Fuel of the Future?

The natural gas revolution has brought big changes to the U.S. energy scene. Natural gas prices, which used to move closely with oil prices, have plunged in the last five years, as the following chart shows.

One result has been the rapid displacement of coal by natural gas in electric power generation. According to a recent report from the Union of Concerned Scientists, some 100 gigawatts of coal-fired electric plants, representing more than a quarter of coal capacity and nearly a tenth of total U.S. electric capacity, have either been closed or are likely soon to be closed because they have become uncompetitive with natural gas. Natural gas has also been displacing oil at a rapid rate as a home heating fuel.

In transportation, however, the use of natural gas is spreading more slowly. Transportation ranks second only to electric power generation in total energy use. There are at least three ways to use natural gas to power transportation. One is to generate electricity with natural gas, which can then power electric cars or electrified rail lines. Another is to convert natural gas to liquids like methanol or synthetic gasoline. However, as I discussed in this post two years ago, the biggest potential lies in the direct use of compressed natural gas (CNG) or liquefied natural gas (LNG) as a fuel for natural gas vehicles (NGVs).

As the next chart shows, there is nothing new about NGVs, which are in widespread use in many countries. Some governments have promoted them to reduce urban pollution and others to achieve greater energy independence. In many of these countries, gas is widely used for light vehicles like taxis and private cars. In the United States, on the other hand, the small number of NGVs now in service are mostly city buses, garbage trucks, delivery vans, and other fleet vehicles.

What is holding back the wider use of CNG in the U.S. transportation system? Not technology. Unlike hydrogen, cellulosic ethanol, algae diesel, and other futuristic alternatives, NGVs use simple, off-the-shelf technology. That is part of the reason they are popular in Pakistan, Bangladesh, Armenia, and other developing and emerging-market countries. Instead, the slow spread of CNG as a transportation fuel in the United States is largely attributable to economic and political factors.

The network problem

As James Hamilton notes in a recent post, the spread of natural gas as a motor fuel, especially in its early stages, encounters a network problem. There is no point in owning an NGV if you have no place to refuel it, and no point in building refueling stations if no one owns NGVs. That explains why early adopters of CNG have mostly been fleet operators whose vehicles refuel when they return to base at the end of their shift.

However, two factors are now breaking down the barrier posed by the network problem. First, natural gas is particularly attractive as a replacement for diesel fuel in heavy trucks. Compared with light vehicles like passenger cars and pickups, heavy trucks drive more miles per year and consume more fuel per mile. Long-haul trucks gain further if they use LNG instead of CNG: LNG vehicles have higher initial costs but more compact fuel tanks and greater driving range. All of these factors increase the payoff to the use of natural gas for long-haul vehicles, and they also increase the payoff to investments in fueling stations.
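The arithmetic behind that payoff is easy to sketch. Here is a back-of-the-envelope breakeven calculation in Python; every number in it (the diesel price, the LNG price per diesel-gallon-equivalent, the fuel economy, and the upfront premium for an LNG truck) is an illustrative assumption, not a figure from this post.

```python
# Hypothetical breakeven for an LNG long-haul truck vs. a diesel truck.
# All inputs below are illustrative assumptions.

DIESEL_PRICE = 4.00     # $/gallon of diesel (assumed)
LNG_PRICE_DGE = 2.60    # $/diesel-gallon-equivalent of LNG (assumed)
TRUCK_MPG = 6.0         # miles per diesel-gallon-equivalent (assumed)
LNG_PREMIUM = 50_000    # extra upfront cost of the LNG truck, $ (assumed)

savings_per_mile = (DIESEL_PRICE - LNG_PRICE_DGE) / TRUCK_MPG
breakeven_miles = LNG_PREMIUM / savings_per_mile

print(f"Fuel savings per mile: ${savings_per_mile:.3f}")
print(f"Breakeven mileage:     {breakeven_miles:,.0f} miles")
```

With these assumptions, the truck saves about $0.23 per mile and recovers its premium after roughly 214,000 miles, which a long-haul rig covering 100,000 miles a year would reach in about two years. A light vehicle driving 12,000 miles a year would take decades to break even, which is exactly why heavy trucks are the early adopters.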
Not surprisingly, then, truck stops have been the pioneers in building fueling stations. Pilot Flying J, a leader in the field, already has a network of natural gas filling stations that extends from coast to coast. By the end of 2013, the company's natural gas highway will run from the Canadian to the Mexican border as well. (Click here for a map.) These stations will at first offer LNG for long-haul trucks, but once they are in place, adding CNG will be a natural extension of service. A spokesperson says the company will do so on a station-by-station basis as local and regional demand develops.

Second, it is possible to build dual-fuel vehicles, which can run on either gasoline or CNG at the flip of a switch. In 2012, Ford, GM, and Chrysler all started offering dual-fuel versions of popular pickup models. True, dual-fuel vehicles cost a bit more than pure NGVs, and they require room for two sets of fuel tanks. The current offerings run $40,000 and up. Still, any three-quarter-ton pickup is going to use a lot of fuel, and if used for business, is likely to run up a lot of miles. Dual-fuel pickups will appeal to farmers, building contractors, and others who may sometimes be within range of a CNG filling station and sometimes not. As more such vehicles go on the road, they will create the necessary incentive to add more stations to the network.

Even if we assume that the network problem will gradually take care of itself, certain aspects of energy and environmental policy still retard the spread of NGVs in the United States. NGV advocates urge three kinds of changes.

First, they advocate changes in the taxation of motor fuel. For example, the LNG used by heavy trucks is now taxed at the same rate of $0.243 per gallon as diesel fuel, despite the fact that it takes 1.7 gallons of LNG to supply the same energy as a gallon of diesel. NGVAmerica, an industry group, recommends equalizing the tax on an energy-equivalent basis (a worked version of this comparison appears after the third proposal below). A more radical proposal would be to tax motor fuels on a carbon-equivalent basis. Because natural gas is less carbon-intensive per unit of energy than diesel fuel, a carbon tax would further increase the attractiveness of LNG and CNG fuels. (Note: Some fracking opponents have disputed the premise that natural gas is less carbon-intensive, on a life-cycle basis, than diesel or gasoline. I discussed the complex economics of fracking in this earlier post.)

Second, advocates say that federal regulations should be at least as friendly toward NGVs as they are toward other clean and fuel-efficient vehicles. That has not always been the case. For example, up until 2011, the EPA maintained an onerous set of regulations for approving aftermarket CNG conversion kits for passenger cars and light trucks. Fortunately, those regulations have now been streamlined, which may increase the rate of aftermarket conversions. Even so, in some respects, tax incentives, fuel economy standards, and other regulations are tilted toward more fashionable technologies like hybrid-electric and all-electric vehicles, ignoring CNG.

Third, and more controversially, NGV advocates have pushed for subsidies and tax credits targeted directly at natural gas fuels, expansion of the fueling network, and purchase of NGVs. Legislation known as the NAT GAS Act, introduced in both the House and Senate during 2011, would have boosted NGVs in a number of ways.
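To put numbers on the tax disparity raised in the first proposal, here is a minimal worked example. The $0.243-per-gallon rate and the 1.7 energy-equivalence factor come from the discussion above; everything else is arithmetic.

```python
# Illustration of the LNG/diesel federal excise tax disparity.
# The tax rate and the 1.7 equivalence factor are from the text above.

FEDERAL_TAX = 0.243           # $/gallon, applied to both diesel and LNG
LNG_GAL_PER_DIESEL_GAL = 1.7  # gallons of LNG per diesel gallon of energy

# Tax paid per diesel-gallon-equivalent (DGE) of energy delivered:
diesel_tax_per_dge = FEDERAL_TAX
lng_tax_per_dge = FEDERAL_TAX * LNG_GAL_PER_DIESEL_GAL

print(f"Diesel tax per DGE: ${diesel_tax_per_dge:.3f}")
print(f"LNG tax per DGE:    ${lng_tax_per_dge:.3f}")  # about $0.413

# Equalizing on an energy basis would cut the LNG rate to about
# $0.243 / 1.7, or roughly $0.143 per physical gallon of LNG.
print(f"Equalized LNG rate: ${FEDERAL_TAX / LNG_GAL_PER_DIESEL_GAL:.3f}/gal")
```

In other words, a trucker buying LNG currently pays roughly 70 percent more federal tax per unit of energy than a trucker buying diesel, which is the disparity NGVAmerica wants to eliminate.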
The NAT GAS Act did not pass, however, in part because of opposition from conservative organizations like the Heritage Foundation, which objected to it as a market-distorting subsidy. There is probably some truth to that, in the sense that under a national energy policy that required users of every kind of fuel to pay full costs, including environmental and national security costs, NGVs would be able to make it on their own merits without a need for specially targeted tax credits or subsidies. Adding subsidies for natural gas to a system that already underprices more carbon-intensive fuels seems like the wrong approach.

The bottom line

When all is said and done, CNG is a decidedly unfashionable entry in the fuel-of-the-future sweepstakes, yet it may be the dark horse that wins the race. If your goal is to flaunt your green credentials, then go ahead and trade in your hybrid Prius for an all-electric Leaf. Meanwhile, the contractor down the block will buy a new dual-fuel F-250, or buy an aftermarket conversion kit for the beat-up model already in service. Which vehicle will make the greater contribution to energy independence, national security, and a healthy planet? You guessed it. The NGV, hands down.

Thanks to Šarūnas Merkliopas, who contributed to this post as a research associate.

30 Responses to "What is Holding Back Natural Gas as the Transportation Fuel of the Future?"

As an ex-user of a dual-fuel converted CNG vehicle, I can confirm that the system works. Funnily enough, I had a 1973 Ford Torino V8; the CNG kit gave the motor a new lease of life and economy in my wallet. From an environmental perspective, it is a no-brainer, as you avoid the carbon emissions of a new car (which are built into the manufacture, etc.) and convert a higher-polluting car into a less polluting one. I do agree that infrastructure is key; at that time I was living in Argentina, which does have a reasonable (not perfect) infrastructure to support this fuel. I could make intercity journeys of about 400 miles with very few spots where I ran out of CNG. As the car was dual fuel, the gasoline would cover any stretches where CNG was not available. The only downside is that the tanks can be a bit bulky, and the range as ever depends on the size of the tanks. The tanks were not exactly light, so they added to the vehicle's weight.

You miss the obvious: conversion price. In the UK, every town has a garage that will convert your car to dual fuel for a few hundred pounds. In the US, the equivalent cost is thousands of dollars, driven by onerous legislation (I never heard of a UK dual-fuel car blowing up). With dual fuel, the network issues don't make sense; you only convert if NG is locally available, and you use petrol/diesel on long journeys.

Yes, the regulatory burden on aftermarket conversion has been a big issue in the US. Actually, in the past, the EPA's worry was not so much cars blowing up as its perception that conversion to CNG constituted a form of tampering with the vehicle's emission control system. I talked about that in the earlier post I linked to. More recently, the EPA has streamlined its standards for certifying conversion kits. NGVAmerica, an industry group, seems to think these revisions will help. Here is a link to their discussion of the new standards: http://www.ngvc.org/gov_policy/fed_regs/fed_After…

I am not sure conversions will be the main driver of greater CNG use in the US, though. It would help if more companies offered the CNG vehicles in the US that they offer in other countries.
Now only Honda sells one, and it is CNG only, not dual fuel. Perhaps there are regulatory barriers to getting the new vehicles certified here; I have not read about that. Any way you cut it, though, you are right on the basic point that the regulatory structure in the US has not been especially hospitable to CNG.

Seems it is singing from a different hymnbook than much of the rest of the US energy policy establishment, which is gushing with enthusiasm about natural gas. Last week there was an article in Nature stating that they were seeing 9% leakage to the atmosphere from natural gas fields, which was subsequently confirmed by NASA; methane is 20 times more potent a greenhouse gas than CO2, and 72 times as potent over a 20-year time span…

http://www.nature.com/news/methane-leaks-erode-gr…

http://theenergycollective.com/stephenlacey/16600…

I fully agree that GHG emissions from natural gas extraction are a serious issue. As I have argued repeatedly in this blog and elsewhere, all energy sources should be charged for their full environmental impacts, including GHG emissions. As I wrote in the cited post on the economics of fracking, "Accurate scoring of fossil fuels needs to take a lifecycle approach, including emissions in extraction, storage, processing, and transportation as well as in final use." That, of course, includes methane emissions associated with fracking. With regard to the Nature article you cite, the question is whether the 9 percent is typical or exceptional, and if typical, whether it can be reduced to an acceptable level by following best practices. The Nature article notes that "In April, the EPA issued standards intended to reduce air pollution from hydraulic-fracturing operations — now standard within the oil and gas industry — and advocates say that more can be done, at the state and national levels, to reduce methane emissions." It further quotes Steven Hamburg, chief scientist at the Environmental Defense Fund, as saying "There are clearly opportunities to reduce leakage." Obviously this issue requires continued attention.

In Brazil, tri-fuel vehicles are mandatory (at least in some places): ethanol, gasoline, CNG. Magneti Marelli won an Automotive News PACE Supplier of the Year award in 2008 for inexpensive technology to seamlessly transfer between CNG and gas/ethanol (e.g., under heavy acceleration, not just when the CNG tank runs out); MM had already developed a software-based solution for judging what the gas/ethanol mix in the fuel tank was and adjusting the air intake and so on accordingly. Of course, in Brazil sugarcane bagasse is a cheap source of ethanol, but it varies by time of year and region, so the price at the pump varies as well. The bottom line is that in Brazil a driver can pull into a refueling station and put whatever is cheapest into their tank without worrying about what's in there already. [Mea culpa: I'm a judge for the PACE competition, but other judges got to make the trips to Brazil, so I didn't sit through the engineering presentations, only the final discussion of potential winners.]

Since the adaptation is done at the assembly plant, it is low in cost relative to converting in the aftermarket. For the ethanol/gasoline part, the incremental cost is essentially zero; no hardware is needed, and the seals and lines are designed from the start to be robust to ethanol, perhaps at a slight cost for the different plastics. For the CNG part, there's the tank and fuel line, and a separate set of fuel injectors, plus (I think as hardware) an anti-knock sensor.
The rest is software that goes into the ECU (engine control unit). The engine itself needs no modification. As noted, CNG works with diesel engines, too. So the technical side is well known and handled on a production basis.

So in the US there are infrastructure and regulatory hurdles (in California everything has to be CARB — California Air Resources Board — certified, and the cost of getting certified is a real barrier to aftermarket conversion). Then in Arizona there was a scandal with a subsidy for converting vehicles to CNG. I'll try to locate the details, but as I understand it the legislation was poorly researched — written by lobbyists for a couple of would-be converters — and the subsidy was greater than the cost of conversion; the state was soon out many millions of dollars even though there was no intent to actually use CNG in those vehicles (get the conversion done, then throw away the tank to get back your trunk space). Memories of that linger. Anyway, thanks for highlighting the potential for CNG in transport!

Very useful information, thanks! I agree, it is ironic that one of the main barriers to use of CNG as a clean fuel is the clean-air bureaucracy.

I dug up details: Arizona burned through somewhere between $500 million and $800 million on CNG conversion subsidies, something slipped in during a late-night conference committee at the end of a legislative session. The New York Times reported on it 2 Nov 2000 ("Costly Plan…" by Ross Milloy), but I've heard additional (sordid) details through a friend who worked for the father of one of the principals; investigations stopped when the politician who inserted the program into the legislation died of a heart attack. The numbers, though, were big enough to make politicians there reluctant to touch anything related to CNG. I'm told memories of the corruption angle also linger in AZ politics. Of course, the need for infrastructure means that to be effective, policy must be national. The Arizona case suggests that a bottom-up approach to get to a national policy may not work (and that, independent of ideology, there will be Congressmen from the Southwest leery of having their names associated with CNG).

STOP BUYING OIL FROM OPEC. We cannot bring peace to the Middle East. We cannot force Afghanistan to be a democracy. Let us do something we can do. Stop buying oil from OPEC. We can do it now. Compressed natural gas (CNG) cars. Iran does it. So can we. We still import 4 million barrels per day from OPEC. But now, we have the capability to stop all oil imports from OPEC within 60 months. We have low-cost natural gas and low-cost technology for converting cars to operate on CNG. This program would convert 65 million vehicles (23% of our fleet) to CNG. Cost: $98 billion. The other part of the program is to build 10,000 CNG refueling stations. Cost: $20 billion. Total: $118 billion. All the costs will be money spent on U.S. labor and material. Use of low-cost natural gas will save us about $80 billion per year. The program can start immediately by presidential order to convert the 600,000 federal non-military vehicles to CNG. These are shovel/wrench-ready projects. Total cost: less than $5 billion. This CNG program is not like the Manhattan Project, which involved large technical uncertainties and risk. CNG technology is commercially available in the United States. Iran now has 2.9 million vehicles (23% of its fleet) operating on CNG.
The collateral benefits are manifold: cost savings; reduction in the trade deficit; employment for 100,000 Americans; reduced CO2 emissions; low technical, commercial and environmental risks; progress that can be accurately measured; plus, no political party would find it objectionable.

A "corrupt" hymn book, that is. Big oil had flare-offs for a reason… and it wasn't "energy conservation." The bottom line is these clowns exist to protect their big trading operations on Wall Street and not for the benefit of the American people in time of war, or humanity itself. They're big spenders and big wasters of all capital at every level… just about useless, actually… and were it not for the fact that the totality of banking interests were beholden to their "goo," then I think we'd actually have a rational energy policy, not only in the USA but in the world. The irony that Exxon Mobil is now the world's largest natural gas company should not be lost on anyone… least of all those in the extraction business. Obviously their only goal is to drive the price back up to 15 bucks a bcf "so that the economy is a winner." Love nat gas… I was the first person to bring it up as an "uber bullish call on the US economy going forward"… it has worked BEYOND all expectations since the price crashed to around 2 bucks… but Dr. Evil is back to work screwing over the American war effort yet again, just like they told Congress they would… so I'll hold out hope for ethanol and solar-powered direct-drive engines. The big banks on Wall Street are dead… they still don't know they're never coming back. The days of the Great Regional Players are under way. I'm sure Exxon Mobil will find a way to keep its share price from collapsing… but doing so at the expense of the American people, I think, is a thing of the past. It's not like the status quo works.

Let alone having a heart attack. The bottom line is a ZERO-fuel vehicle is now on the horizon. I will be looking forward to the new Cadillac EEV… which basically runs on a motorcycle engine. If that becomes standardized the way the Chevy 350 V8 did (probably 50 million of those engines made, actually), then "look out." Easy to work on, easy to maintain. Forget a 100k warranty… how about a million miles instead?

Sorry… they made 90 million of those engines. http://en.wikipedia.org/wiki/Chevrolet_small-bloc… You can still get one in a crate from Mexico as well.

I've read NG prices are so low because of overproduction. The profit margins are very low for companies to get the gas to market, so they produced a lot of it to cover their losses. Once supply and price normalize, I think NG prices will rise steadily, which is a good thing because it will mean more CNG cars and refueling stations and jobs for Americans. It's not a perfect technology, but it's a step in the right direction.
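Taking the figures in the $118 billion conversion program quoted a few comments above at face value, a quick payback check is simple arithmetic. The inputs below are the commenter's claims, not independent estimates, so this is a consistency check rather than an endorsement.

```python
# Consistency check of the national CNG program figures quoted above.
# All inputs are the commenter's claims, not independent estimates.

conversion_cost = 98e9   # $ to convert 65 million vehicles (claimed)
station_cost = 20e9      # $ to build 10,000 refueling stations (claimed)
annual_savings = 80e9    # $/year in fuel savings (claimed)

total_cost = conversion_cost + station_cost
payback_years = total_cost / annual_savings

print(f"Total cost:      ${total_cost / 1e9:.0f} billion")
print(f"Simple payback:  {payback_years:.2f} years")        # about 1.5 years
print(f"Per conversion:  ${conversion_cost / 65e6:,.0f}")   # about $1,508
print(f"Per station:     ${station_cost / 10_000 / 1e6:.0f} million")
```

If the claimed savings held up, the program would pay for itself in under two years. The more questionable input is the roughly $1,500 per-vehicle conversion cost, which sits well below the "thousands of dollars" for US conversions cited earlier in this thread.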
I agree that gas prices are currently so low that they barely cover the costs of drilling, if at all. I think it is likely, as you suggest, that they will rebound to an equilibrium that gives a decent profit to gas producers but still leaves the price of CNG attractive relative to gasoline. However, I should probably have listed the uncertainty about where the natural gas price will end up as an additional factor that encourages a wait-and-see attitude among people who are contemplating investments both in vehicles and in fueling infrastructure.

With dual fuel, the network issues don't make sense; you only convert if NG is locally available, and you use petrol/diesel on long journeys.

Back in 2008, our logistics company converted trucks from gasoline to natural gas instead of purchasing traditional diesel or gasoline trucks. It saved us a bit, according to our ledger. In Russia, they are not even close to a paradigm shift for truck usage.

Interesting that you should mention this. I would have agreed with you until I read this article in the NYT a few days ago: http://www.nytimes.com/2013/04/12/business/energy… The gist of it is that Gazprom is panicked about falling gas prices and losing market share to LNG, Ukrainian fracking, and all the rest, so they are pushing CNG as a domestic transport fuel.

What's holding back natural gas is a combination of factors, actually. First, there's the obvious lack of facilities such as refueling stations. Second, I think gas companies are a little reluctant to sacrifice some profits and invest in restructuring their business models to accommodate natural gas. Lastly, car owners will need to spend to have their cars modified to efficiently burn the new fuel source. We certainly cannot ignore the fact that the technology is there and being used. I think it's going to be a slow process, but eventually we will see most of our vehicles running on NG. Not only would business owners like to save money, but advocating green technology will go far with customers.

Nowadays the use of fuel vehicles is very high in large cities and also in rural areas. People want to travel in their personal vehicles, and the number of petrol and diesel vehicles in cities and other areas is increasing day by day. Diesel vehicles in particular are responsible for environmental pollution, because the emission of carbon dioxide is much higher in diesel vehicles. The prices of diesel and petrol are also rising rapidly. So to get out of this situation, we have to use electricity or natural gas as vehicle fuel. Since we already need electricity for other purposes, the use of natural gas will be the better option for the future.
Natural gas is way cheaper, and the car actually gets some extra power, because natural gas has a higher temperature and that produces more power in the engine. Now that the price of gasoline has jumped so high (at least in my country it is really high), natural gas has become very popular, and many people are now using it. Personally, I still stick to gasoline, but if gasoline prices go still higher, I will probably have to use natural gas as well, or try a diesel car.

Natural gas looks cheaper compared to petrol, which is why many countries use this gas for their automobiles. Pakistan is among the top users, which is why its prices rise higher and higher day by day.

Natural gas as a transportation fuel is cheap and pollutes the atmosphere less than other fuels. I think it should be used as the main transportation fuel.

The use of natural gas is really cheap nowadays. I believe something has to change, also considering all the airlines with planes, etc.

If I were the president, I would make natural gas the main transportation fuel starting tomorrow. It is effective in so many ways.

Big governments don't want to lose out on the hefty tax revenues from petrol or diesel. Isn't it a well-bandied-about theory that the larger car manufacturers are easily capable of creating an electric engine to power the biggest of vehicles for hundreds of miles without needing to recharge, but these manufacturers, oil companies and governments are much too unwilling to admit this, as they would lose out on their billions in fuel revenues?

"We still import 4 million barrels per day from OPEC. But now, we have the capability to stop all oil imports from OPEC within 60 months. We have low-cost natural gas and low-cost technology for converting cars to operate on CNG. This program would convert 65 million vehicles (23% of our fleet) to CNG. Cost: $98 billion. The other part of the program is to build 10,000 CNG refueling stations. Cost: $20 billion. Total: $118 billion. All the costs will be money spent on U.S. labor and material. Use of low-cost natural gas will save us about $80 billion per year." – See more at: http://www.economonitor.com/dolanecon/2013/01/07/…

It's all about money and greedy governments. If the price of fuel was the same as water, it would all be solved quicker than 60 months, my friend. It's a sad truth that we will go to war and see bloodshed over the black gold!

A few months ago, I came across a post on nature.com which talked about India's success in curbing smog and pollution by forcing small and medium-size commercial vehicles to switch to CNG. When first introduced 4-5 years ago, it ran into serious opposition from the transportation industry and taxi associations, but the local governments didn't budge, and apparently it has proven to be a big success. Several big metros such as Mumbai and Delhi have already shown this can be done successfully and that the environmental gains are significant. Now the program is being tried out in tier-2 cities. A previous comment talked about how this was done in Brazil too. I don't think we'll ever see anything like this in the US, because our "leaders" lack the political motivation to drive this issue!