Drugs and alcohol are both substances which alter the perceptions of the person who consumes them, and using these substances regularly can cause a lot of trouble for the user. Many find that their occasional use turns into habitual use, which is more often than not just a step away from addiction, the state in which a person depends on these substances just to feel good or to get through the day.

Here are a few of the early warning signs that a person may have a problem with drugs and alcohol:
- relying on drugs and alcohol to have fun, forget problems, or relax
- having blackouts (when a person can't remember what happened while drunk or high)
- taking drugs and alcohol by themselves
- withdrawing or keeping secrets from friends or family
- performing differently in school (such as grades dropping and frequent absences)
- building an increased tolerance to drugs and alcohol, gradually needing more and more of the substance to get the same feeling

There are probably as many definitions of "addiction" and abuse as there are substances to abuse. Misunderstandings occur when we get lost in quibbling over "how much" and "how many times" a person takes drugs and alcohol. In addition, many of us carry cultural, religious, and social baggage about the use of drugs and alcohol. A more useful way to decide whether a person is chemically dependent is to consider whether an "impairment" or "negative consequence" occurs as a result of use. This can happen in their physical, emotional, and/or social functioning. Sometimes they notice the effects of drugs and alcohol on their lives themselves; sometimes others have to point it out to them.
The range of use includes "experimentation" (using a few times to discover the effect), "regular" or "social" use (use without impairment or negative consequences), "problem use" (impairment in one area of functioning), and "addiction" (the inability to stop using, or to stay stopped, despite negative consequences in one or more areas of one's life). Addiction includes compulsive use and the loss of control over use. It is usually hard for people to recognize that they have a problem with drugs and alcohol, which is why friends or family often step in. People who are addicted to drugs or alcohol may promise over and over that they'll stop. However, quitting is hard to do, and many people find they can't do it without help. The best thing for someone who has a problem with drugs and alcohol is to talk to a person they trust, preferably one who can support them emotionally, so that they don't have to deal with the problem alone. There are also many resources for people who have problems with drugs and alcohol.
Source: http://www.drug-rehabs.org/research/drugs-and-alcohol.htm
Back in 1887, Thomas Edison and his crew in West Orange, New Jersey, invented one of the first ways to view a motion picture: the kinetoscope. It was like a super early version of the film projector. A string of photographs would flash across a peephole, where folks could view a moving image. As National Geographic notes, Edison was tipped to the idea by the British photographer Eadweard Muybridge, whose animated sequences of photographs clued Edison in to the fact that motion could be conveyed through a series of photos. After seeing them, Edison — who'd already made recorded history with the phonograph — decided he needed to get into the motion picture game. The inventor wrote: "I am experimenting upon an instrument which does for the Eye what the phonograph does for the Ear, which is the recording and reproduction of things in motion, and in such a form as to be both cheap, practical and convenient." We can learn a couple of things from Edison's analogizing. First, cognitive scientists have confirmed that it's easier to learn new things when you already have an extensive base of knowledge. Second, the bridge that allows us to come up with new ideas is usually an analogy: a way of looking at two things in your memory or in the world and seeing the similarity in their underlying structures. Analogy may be the best way to brainstorm new ideas — and it's a ridiculously old technique. Frequently, the "answer" to a new question exists in a solution somewhere out there in the world. It's just a matter of finding the right fit. Nat Geo tips us to the first recorded invention by analogy. Some 2,000 years ago, the Roman architect-engineer Vitruvius used an analogy to figure out how to build an excellent theatre. "As in the case of the waves formed in the water, so it is in the case of the voice," the architect wrote.
"The first wave, when there is no obstruction to interrupt it, does not break up the second or the following waves, but they all reach the ears of the lowest and highest spectators without an echo." Analogy helped Johannes Kepler untangle the laws of planetary motion. The German astronomer thought that gravity — though it didn't have a name yet — could act like light. Just as light could move from the sun to the planets, a force could keep them in orbit. Modern-day office folk can employ analogies, too.

How to analogize your way to better ideas

In a new paper, University of Pittsburgh researchers Joel Chan and Christian Schunn tracked the brainstorming sessions of a design firm trying to make a handheld printer for kids. The designers made new analogies every five minutes, and those analogies allowed them to incrementally improve on each other's conceptions. In one selection from the transcript, they're trying to figure out how to cover the printer head so it doesn't get destroyed by kids when it's not in use. Notice the progression of the idea, analogy by analogy: incrementally, the idea shifts, recomposes, and evolves. The solution is first like a video tape, then a garage door, and then a rolling garage door. You can get a feeling of what's going on in the designers' minds: they're looking for a new solution to an old problem, how to protect this valuable thing when it's not in use. So they fire off ideas of other types of protectors in a rapid evolution. In 10 seconds, analogies help the idea to evolve. So remember this the next time you're trying to dream up an answer: look for other "solutions" that already exist out there in the world, and see if they might fit your question, just like Edison, Vitruvius, and Kepler.
Source: http://www.neagle.com/article/20140630/BUSINESS/306309982/-1/lifestyle
Nemophila maculata Benth. ex Lindl.
Hydrophyllaceae (Waterleaf Family)
USDA Symbol: NEMA

This hairy, trailing annual reaches 12 in. in length. Delicate, showy, bell-shaped flowers grow in clusters at the tips of its branches. Each of the five petals is white, with a bluish to purple spot at the tip. Leaves are lobed and opposite. Fivespot grows quickly and easily in cultivation, especially if given afternoon shade. It reseeds readily.

Bloom Information
Bloom Color: White, Purple
Bloom Time: Apr, May, Jun, Jul

Distribution
USA: CA, OR, UT
Native Distribution: In CA, w. base of the Sierra Nevada from Plumas Co. to Kern Co.
Native Habitat: Moist slopes & flats below 7,500 ft.

Growing Conditions
Water Use: Medium
Light Requirement: Part Shade
Soil Moisture: Moist
CaCO3 Tolerance: Medium
Soil Description: Mesic to dry soils.
Conditions Comments: Fivespot grows quickly and easily, especially if given afternoon shade. It reseeds readily.

Benefit
Conspicuous Flowers: yes

Value to Beneficial Insects
Special Value to Native Bees. This information was provided by the Pollinator Program at The Xerces Society for Invertebrate Conservation.

Propagation
Description: Where winters are relatively mild, sow seed in fall; otherwise sow in early spring.
Seed Collection: Not Available
Seed Treatment: No treatment may be necessary, but for more uniform results, stratify for 2 months or germinate in cool temperatures (less than 70 degrees) and in darkness for the first 3 days.
Commercially Avail: yes

Find Seed or Plants
Find seed sources for this species at the Native Seed Network.
From the National Organizations Directory
According to the species list provided by Affiliate Organizations, this plant is on display at the following locations: Native Seed Network - Corvallis, OR

Additional resources
USDA: Find Nemophila maculata in USDA Plants
FNA: Find Nemophila maculata in the Flora of North America (if available)
Google: Search Google for Nemophila maculata

Metadata
Record Modified: 2007-01-01
Research By: TWC Staff
Source: http://www.wildflower.org/plants/result.php?id_plant=NEMA
“It’s always been my hunch that the two disciplines go together,” Putnam said recently, speaking in the Visual Arts Center among pieces made by her students for their first show. For a short time, black-and-white sculptures and prints of birds and bird-like images filled the Center’s halls. A mobile of migrating birds would flutter slightly every time a nearby door opened. To illustrate her case, Putnam points to the drawings by 19th-century naturalists as an example of where the two fields once more strongly overlapped. “They trusted their eyes more than they trusted the camera,” she said. And she said scientists today are developing visual means of persuasively presenting their discoveries and arguments. This year the Boston-based artist is doing a residency on campus as Bowdoin’s Coastal Studies Scholar. Her own prints and printed quilts explore fragile transitional ecosystems and environmental issues. As a teacher, Putnam says the best way for students to develop their perception is through practice. Besides her regular teaching job at St. Mark’s School in Southborough, Mass., Putnam has taught drawing to geology students at MIT. There, she helped a class learn observation strategies to sharpen their visual awareness as they studied and learned the differences between rocks. To help her Bowdoin students hone their observations of their surroundings, Putnam has brought her class to the Coastal Studies Center in Harpswell, to the bird collection in Druckenmiller Hall, and to the Arctic Museum on campus. The students draw directly on wood blocks or linoleum sheets, which they then carve into prints. While some students in the class are art majors, others come from academic backgrounds in biology, earth and oceanographic science, and chemistry, Putnam said. The assignment that generated the student show, The Birds of Maine, The Extended Print: An Aviary, in the VAC, was to create a hanging three-dimensional object exploring the concept of flight or migration. 
Using their discarded prints and found objects, the students fashioned new art while considering elements such as skeletal structure, feather construction, anatomy, flight physics, orienteering, time and distance. Putnam encouraged the students to also touch on issues such as global warming, human interference with migratory flight routes, mythology, and other areas where birds and people come together, sometimes in conflict. Putnam asks her students to not just sketch habitats and species, but also to research and read about them, such as looking into a bird’s habitat or migration route. “Say you’re reading about nutrient storage as birds get ready to migrate, then as you’re looking at a bird, you think about aspects of bird flight that wouldn’t be apparent to you if you were just drawing a still-life,” she said. She reminds her students, “We make art about the things we don’t know.” “If something is written on my headstone, it will be, ‘She made them look.’ It’s so important in science; it’s so important in art,” Putnam said. A second show of the students’ work from this course will be hung in the Environmental Studies common room after spring break.
Source: http://community.bowdoin.edu/news/2013/03/student-art-show-draws-on-science-for-inspiration/
The development of speech and language is influenced by many things, including muscle control, health, ability to learn, vision, hearing, and experience of communication. All children develop at their own rate. Some children with Down’s syndrome say their first word at 13 months, others not until 36 months. This may sound a little daunting, but when you look at the tips below you will realise that you are already doing many of the things that will help your child to communicate.

Speech and Language Therapy

Not all children with Down’s syndrome will require regular input from a speech and language therapist. The level of input will depend on the child’s individual need and the availability of services in your area. Ideally, your child should be assessed by a Speech and Language Therapist between 9 months and 1 year old. Your Paediatrician, Health Visitor or GP can make a referral to the local Speech and Language Service for your child.

First steps – Here are some ideas to get you started. Your baby might enjoy:
- Listening: to you talking, to music and musical mobiles, to singing and nursery rhymes, to you copying their sounds
- Looking: at toys (wobble toys and a baby gym are good in the early stages), at baby books, at baby mobiles and lights, at themselves in mirrors, at you pulling faces, making funny noises, singing, smiling and talking
- Games involving their body, such as: Rock-a-bye Baby, Round and Round the Garden, This Little Piggy, Peek-a-boo, and waving bye-bye
- Having a good time: being with family and friends, kissing, cuddling, massage, laughing

We have prepared a factsheet of Speech and Language Practical Activities; please download it and have a go.
Source: https://www.downs-syndrome.org.uk/for-families-and-carers/growing-up/early-communication/
March 12, 2014

“Changing Minds” webinar for special education leaders provides insights that support students and prevent behavior problems

SAN FRANCISCO, February 17, 2014 – Problem behavior in children and teenagers is a common source of stress for educators, parents and families, impacting both academic performance and overall quality of life. To help educators better understand and address the rising number of behavior issues they see in the classroom, PresenceLearning, the leader in online speech therapy and other special education-related services for K-12 students, has partnered with Dr. Barry Prizant, a leading expert on classroom behavior management strategies, to present “Preventing Problem Behavior in Schools: An Emotional Regulation, Relationship-Based Approach” on Wednesday, March 12, 2014 at 1:00 PM Eastern time/10:00 AM Pacific time. This free webinar is part of the “Changing Minds” webinar series. To register, visit http://pages.presencelearning.com/spedahead-changing-minds-barry-prizant.html. During the webinar, Dr. Prizant will discuss problem behaviors that are often symptomatic of autism and other neurodevelopmental disorders. He will also inform educators of his “Bio-Psycho-Social” perspective on handling problem behaviors in practical, respectful and innovative ways. By attending the webinar, participants will:
• Gain a new perspective and understanding of emotional regulatory problem behaviors
• Learn new strategies to prevent and respond to problem behaviors
• Develop an understanding of how these strategies enhance a child’s ability to stay emotionally well-regulated, maximize learning, improve social participation and balance relationships.
With the rise in cases of neurodevelopmental disorders in students, it is more urgent than ever for schools and administrators in special education to be prepared.
The “Changing Minds” webinar series connects leading experts in childhood neurodevelopmental, behavioral and mental disorders with educators. During the webinar series, the experts will share their expertise, tackle common misconceptions and provide practical strategies to better manage challenges and help students succeed. Since 2009, PresenceLearning’s online speech therapy services have provided schools with a practical, affordable new option for service delivery: web-based access to a nationwide network of live, highly qualified, fully licensed speech-language pathologists who are available whenever and wherever they are needed. This past year, PresenceLearning added online occupational therapy (OT), online assessments and online counseling to their service offerings. By partnering with PresenceLearning, school districts can fill staffing gaps related to acute and chronic SLP and OT shortages, reduce high caseloads for onsite personnel, reduce their backlog of assessments, improve student outcomes and become more efficient. PresenceLearning also offers access to technical specialists, as well as culturally and linguistically diverse speech-language pathologists, occupational therapists and counselors. PresenceLearning has delivered more than 250,000 live online therapy sessions in public, charter and virtual school districts of all sizes nationwide. About Dr. Barry Prizant Dr. Barry Prizant has more than 40 years’ experience as a clinical scholar, researcher and international consultant on ASD and managing related emotional/behavioral challenges. He is an Adjunct Professor at Brown University and Director of Childhood Communication Services, a private practice. 
Barry is co-author of The SCERTS Model: A Comprehensive Educational Approach for Children with ASD (Prizant, Wetherby, Rubin, Laurent & Rydell, 2006). He has published more than 120 articles and chapters, serves on the advisory board of five professional journals, and has presented more than 700 seminars nationally and internationally. His forthcoming book is entitled Uniquely Human: Seeing Autism Through a Different Lens (Simon and Schuster). PresenceLearning (www.presencelearning.com) is the leading provider of live online speech therapy services for K-12 students and now offers online occupational therapy as well. The company offers school districts web-based access to a growing, nationwide network of hundreds of highly qualified speech-language pathologists (SLPs), occupational therapists (OTs) and other related services professionals via live videoconferencing combined with the latest in evidence-based practices and powerful progress reporting. Serving thousands of students in public, charter and virtual schools throughout the U.S., PresenceLearning has shown that online speech and language therapy is practical, affordable and highly effective. PresenceLearning is an ASHA-approved continuing education provider for SLPs and a U.S. Department of Education grant-winner, dedicated to bringing the highest clinical standards to online therapy.
Katie Povejsil, Vice President of Marketing, PresenceLearning
Christine Allman, Public Relations for PresenceLearning
Source: https://www.eschoolnews.com/2014/02/17/expert-dr-barry-prizant-shares-classroom-behavior-management-strategies-presencelearning-webinar/
Looking at the shapes of Earth's continents today, it's easy to think of them like pieces of a jigsaw puzzle. They look like they could have fit together once. For instance, the western coastline of Africa looks like it fits perfectly with the eastern coastline of South America. And the reason the continents look this way is because they really did fit together once. Hundreds of millions of years ago, scientists believe, our planet had only one continent, named Pangea. This supercontinent slowly, eventually, broke apart to form the continents as we see them today, due to a process called plate tectonics. It's one thing to look into the past and guess at how this process moved the continents to where they are today, and another thing to look at the way the continents are still moving today and guess at where they might be going. But that's exactly what at least one scientist, Christopher Scotese at the University of Texas at Arlington, has attempted to do. Scotese has created an animation (shown in the video at the top of the page) that predicts where Earth's continents might end up over the course of the next 250 million years, and it turns out they might be on their way to forming another supercontinent, reports the BBC. This future supercontinent, which has been aptly named "Pangea Proxima," could one day make it possible to travel from North America to Antarctica ... on foot. It's a fun concept to imagine, but as Scotese admits, also highly speculative. Just because Earth's continents are moving in certain directions today doesn't mean that a major geological event can't happen and shake everything up. Scotese thinks his model is probably accurate up to about 50 million years out. After that, it's mostly just a guess. "In the plate tectonic world, plates do evolve slow and steady until we have one of these plate tectonic catastrophes like continental collisions," he said. "This fundamentally changes plate tectonic regimes."
It's a reminder, though, that the Earth is a dynamic place, and that our planet will probably be unrecognizable in several million years. The continents are moving, often only inches per year, but they're moving. And maybe, just maybe, this slow ride is gradually carrying us all back together again.
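The scale of that slow ride is easy to check with back-of-the-envelope arithmetic. The sketch below is a rough illustration only: the 2-inches-per-year rate is an assumed round number for typical plate motion, not a figure from Scotese's model.

```python
INCHES_PER_MILE = 63_360  # 12 inches * 5,280 feet

def drift_miles(inches_per_year: float, years: float) -> float:
    """Total distance a plate travels at a constant drift rate."""
    return inches_per_year * years / INCHES_PER_MILE

# An assumed rate of ~2 inches/year sustained for 250 million years:
print(f"{drift_miles(2, 250_000_000):,.0f} miles")  # about 7,891 miles
```

At that pace a plate covers nearly a third of Earth's circumference, which is more than enough motion to reassemble the continents into a supercontinent.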
Source: https://www.mnn.com/earth-matters/climate-weather/stories/earths-continents-might-all-join-together-250-million-years
Welcome to "I Am Woman"...a tribute to all those women who had the courage and perseverance to stand up and fight for their rights. Thanks to those who came before us, we enjoy a freedom unknown to women not too long ago. But, sadly, in many parts of the world, women continue to be repressed. In fact, even in this country there are women living today under the threat of violence...completely controlled by a violent spouse. Some may make it; others won't. Hopefully, one day ALL women will be free. May that day come soon.

Is there a generational cycle of abuse? Examples of the impacts on children of living with domestic violence suggest that children grow up to replicate aspects of the behavior they themselves were subjected to. In other words, research has shown that there is a cycle of abuse whereby patterns of abusive behavior are passed down through the generations. Violence is a learned behavior that often is self-perpetuating. In fact, the single most influential factor in domestic violence in society is the continuation of a generational cycle of abuse and/or a history of abuse in the family of origin. When children bear witness to violence, they learn that the people you love the most may hurt you and that violence is the only way to handle conflict. Fear becomes a normal part of life. This, they conclude, is the way people are supposed to act. Women who saw their mothers abused may grow to believe that if a man doesn't abuse them, he must not love them. And so a generational cycle begins in which children grow up either to be abused or to be the abuser. In addition, the link between domestic violence and child abuse, both emotional and physical, cannot be ignored. Domestic violence and child abuse rarely exist alone. When there is violence in a home it is often multifaceted. Children are frequently involved in episodes of domestic violence, either as witnesses, victims, or participants when they intervene to protect their mothers.
This form of family violence can have a profound influence on the child. Those who were abused in childhood may abuse or neglect their own children, perpetuating an inter-generational cycle of abuse. Violence begets violence. Child abuse, like domestic violence, replicates itself across generations, and a cycle of abuse is rarely broken without outside help. Without effective intervention, domestic violence becomes an inter-generational cycle. Abusers must confront and take responsibility for their verbally and physically abusive patterns of behavior. Both victim and abuser should consider professional counseling as a means to stop the cycle of abuse. If you cannot find or cannot afford professional help, seek out public services to address the abuse in your home before it spirals out of control.
Source: http://iamwoman-mxtodis123.blogspot.com/2011_11_01_archive.html
Sexual contact accounts for over 80% of reported AIDS cases in Chile, yet cultural norms prevent open discussion of sexual behaviors. Misconceptions about HIV transmission modes are therefore entrenched at every level of society. In some instances, health workers have been found to have inaccurate or incomplete knowledge of HIV transmission, while various forms of media have presented contradictory and confusing HIV prevention messages. Until recently, the limited sexual education provided in Chilean schools compounded this issue. Where sexual education was available, it focused almost exclusively on abstinence, perpetuating cultural stigmas related to HIV and enabling misinformation to continue to flourish. Recognizing that a multi-sector approach is critical to reducing the incidence of HIV transmission, the Chilean Government signed the UNESCO Preventing through Education Declaration, by which it has committed to reduce by 75% the number of public schools that have not institutionalized comprehensive sexual education. In partnership with the Chilean Ministry of Education, the Center of Integral Sexual Education (CESI), a private organization dedicated to improving access to sexual education and psychological counseling in schools throughout Chile, has rigorously taken up this challenge. Daniel Seguel, Regional Coordinator of CESI, notes that CESI’s HIV mandate includes providing “deep learning of HIV transmission”, in addition to the more broadly-available awareness education. He reports that CESI has integrated the TeachAIDS Spanish-language materials into the programs it delivers to classrooms across Chile “in order to make available high quality materials and comprehensive education”. Teachers’ limited knowledge and cultural stigmas had previously been major stumbling blocks to providing such comprehensive HIV education. CESI Lead Psychologist Maria Sandoval notes, “We teach the teachers so that they can better teach their students.
Previously, teachers felt uncomfortable and disempowered when it came to presenting HIV education, so it is very good to have a tool that allows them to present this information through a reputable third party. This is a big step, and teachers are very thankful.” Mr. Seguel and Ms. Sandoval proudly cite numerous successes of the CESI-TeachAIDS partnership. They note that the materials were recently shared with teachers in the communes of Doñihue and Peralillo, where talking about the sexual transmission of disease has historically been taboo. With the TeachAIDS materials, hundreds of families in these communities will receive the information they need to protect themselves and their loved ones from HIV. The importance of sexual education in multi-sector approaches to reducing the transmission of HIV cannot be overstated. We applaud the Government of Chile for taking steps to make this education available, and thank CESI for empowering teachers to approach the complex and important subject of sexual education. Along with other nonprofit organizations around the world, including United Way of Hyderabad, Action for the Needy, and Children of Grace, CESI is leading the way in providing life-saving comprehensive HIV education to those who need it most.
Source: http://teachaids.org/blog/challenginghiv_stigma_chile
Fermented foods, and their probiotic properties, have been making their way slowly into the mainstream as the health of our guts takes center stage in staying healthy. Some experts believe that 70 percent of our immune system lies in our gut: in its ability to digest foods properly and maintain a healthy balance of bacteria. For thousands of years, fermenting has been a traditional food practice for a variety of cultures. They did this not only to preserve food in an era without refrigeration, but also to make it more digestible and nutritious. While fermentation might sound like something going bad or rotting, it’s quite the opposite. When properly done with raw foods, the process, called lacto-fermentation, makes food more vital, nutritious and beneficial to our health in many ways, by preserving the good bacteria in it. This means that ingredients like cabbage and cucumbers have been left to sit and steep until their sugars and carbs become bacteria-boosting agents. Stick with me! The French chemist Louis Pasteur was the first known zymologist, or fermentation scientist: in 1856 he connected yeast to fermentation. Pasteur originally defined fermentation as “respiration without air”. Read on for how fermented foods can lead to a healthier you, and how to get them into your diet.

What are the health benefits of fermented foods? Studies suggest these probiotic powerhouses can help treat everything from diarrhea, irritable bowel syndrome and leaky gut to more serious conditions such as heart attack and hypertension. Though more research is needed, current evidence still gives clients good reasons to consider getting a daily dose of probiotics from a fermented food source. Wellness experts are convinced that probiotics can lead to weight loss and better skin. Like bloating, skin reactions are often a sign of an unhappy gut. Food allergies and food intolerances can lead to dark circles under your eyes, blemishes, rashes and a puffy, swollen appearance.
Studies have found that more than half of all acne sufferers have alterations in gut bacteria, and societies that eat a more indigenous diet with little or no processed or sugary foods have virtually no acne and very few gastrointestinal problems. So improved digestion can improve the look of your skin. You can't beat that.

What kind of products do I look for?

Always look for products labeled raw, live, lacto fermented, or not pasteurized, and buy them from the refrigerated section.

Dairy products — For those who eat dairy, fermented or cultured dairy products improve the nutritional value of the milk and make it much more digestible. When raw milk is allowed to naturally sour, or bacteria are added to it, the good bacteria flourish. This pushes some of the bad bacteria out, thereby preserving the milk as well as releasing many vitamins, such as B and C, and minerals like calcium, magnesium and phosphorus. Another benefit is that the lactobacillus (good bacteria) in the fermented milk helps break down the protein casein so humans can digest it, even if they are lactose intolerant. Kefir is by far the easiest dairy product to digest.

Greek yogurt* has recently enjoyed a burst of popularity. It's full of protein, calcium and healthy bacteria that are good for your digestion and immune system. It's a great snack, especially if you're looking to slim down. Not only does this yogurt make you feel full, some studies have shown that diets that include several servings of Greek yogurt a day may aid weight loss and trim waistlines.

*Be aware that not all yogurts are created equal, and some can have a higher sugar content than donuts. Look for yogurts that bear a Live & Active Culture (LAC) seal, which means they contain at least 100 million bacterial cultures per gram at the time of manufacture, and always look for L. Acidophilus and Bifidus on the label.
Sauerkraut — Researchers say that this fermented food has a powerful impact on brain health, including depression and anxiety, and add that there's a tremendous connection between gut and brain health. If you're the DIY type, try making our spicy garlic kraut recipe below. If you're planning on purchasing sauerkraut, be sure to buy the refrigerated type, not canned, and think of using it on sandwiches and in salads, besides on that dog!

Pickles — Not only do they provide a healthy dose of probiotics, they're a familiar food item with a taste that many people already love — including those who may turn up their nose at the idea of eating fermented foods.

Kim Chi — Koreans eat so much of this super-spicy condiment (40 pounds of it per person each year) that it's considered a staple. It's made with Napa cabbage, and if you're a DIY person, you can find many recipes online. I expect this product to become readily available in health food stores.

Kombucha — This fermented black tea is no stranger to New Yorkers. Kombucha gives you a bang for your bacterial buck because of the variety of microorganisms it contains: "When you drink a bottle of kombucha, you're drinking four to seven microorganisms all at once, building a really strong gut."

Miso — This paste made from fermented soybeans and grains is full of essential minerals, like potassium, and contains millions of microorganisms giving us strength and stamina. To make miso soup, just add a dollop to boiling water, along with some favorite vegetables, like onions, bok choy, or mushrooms. I've also found it available in powder form online in 8-16 ounce packages.

Tempeh — Tempeh (fermented soybeans) is a complete protein with all of the amino acids. Many people use it as a yummy substitute for bacon in BLTs. Try flavoring organic tempeh with some tamari (also fermented), then add it to a sandwich with tomato, lettuce, and toast. Or eat it tossed in a bowl of steamed veggies. Tempeh is available in all natural food stores, and possibly in your grocery store — make a request, and you may be surprised how willing local grocers are to help.

So, as you can see, there are many options for getting started on including fermented foods in your diet. Do it today and your gut will thank you. To get started, try this recipe for Hot Pepper Garlic Kraut.
The previous three examples discussed animating the geometry of an object and used the vertex processor to achieve this animation (because the geometry of an object cannot be modified by the fragment processor). The fragment processor can also create animation effects. The main purpose of most fragment shaders is to compute the fragment color, and any of the factors that affect this computation can be varied over time. In this section, we look at a shader that perturbs the texture coordinates in a time-varying way to achieve an oscillating or wobbling effect. With the right texture, this effect can make it very simple to produce an animated effect to simulate a gelatinous surface or a "dancing" logo. This shader was developed to mimic the wobbly 2D effects demonstrated in some of the real-time graphics demos that are available on the Web (see http://www.scene.org for some examples). Its author, Antonio Tejada, wanted to use the OpenGL Shading Language to create a similar effect. The central premise of the shader is that a sine function is used in the fragment shader to perturb the texture coordinates before the texture lookup operation. The amount and frequency of the perturbation can be controlled through uniform variables sent by the application. Because the goal of the shader was to produce an effect that looked good, the accuracy of the sine computation was not critical. For this reason and because the sine function had not been implemented at the time he wrote this shader, Antonio chose to approximate the sine value by using the first two terms of the Taylor series for sine. The fragment shader would have been simpler if the built-in sin function had been used, but this approach demonstrates that numerical methods can be used as needed within a shader. (As to whether using two terms of the Taylor series would result in better performance than using the built-in sin function, it's hard to say. 
It probably varies from one graphics hardware vendor to the next, depending on how the sin function is implemented.) For this shader to work properly, the application must provide the frequency and amplitude of the wobbles, as well as a light position. In addition, the application increments a uniform variable called StartRad at each frame. This value is used as the basis for the perturbation calculation in the fragment shader. By incrementing the value at each frame, we animate the wobble effect. The application must provide the vertex position, the surface normal, and the texture coordinate at each vertex of the object to be rendered. The vertex shader for the wobble effect is responsible for a simple lighting computation based on the surface normal and the light position provided by the application. It passes along the texture coordinate without modification. This is exactly the same as the functionality of the Earth vertex shader described in Section 10.2.2, so we can simply use that vertex shader. The fragment shader to achieve the wobbling effect is shown in Listing 16.8. It receives as input the varying variable LightIntensity as computed by the vertex shader. This variable is used at the very end to apply a lighting effect to the fragment. The uniform variable StartRad provides the starting point for the perturbation computation in radians, and it is incremented by the application at each frame to animate the wobble effect. We can make the wobble effect go faster by using a larger increment value, and we can make it go slower by using a smaller increment amount. We found that an increment value of about 1° gave visually pleasing results. The frequency and amplitude of the wobbles can be adjusted by the application with the uniform variables Freq and Amplitude. These are defined as vec2 variables so that the x and y components can be adjusted independently. 
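Before walking through the details, the shader's overall data flow can be sketched in plain Python. This is a hand-written illustration, not the book's GLSL from Listing 16.8: the particular linear functions of s and t below are invented placeholders (the real shader uses its own, deliberately different, expressions for x and y), and Python's built-in `math.sin` stands in for the shader's Taylor approximation.

```python
import math

def wobble_texcoord(s, t, start_rad, freq=(4.0, 4.0), amplitude=(1.0, 1.0)):
    """Perturb one (s, t) texture coordinate, mimicking the wobble shader.

    start_rad plays the role of the StartRad uniform that the application
    increments each frame; freq and amplitude mirror the Freq and
    Amplitude vec2 uniforms.
    """
    # x perturbation: a linear function of s and t, offset by StartRad
    # and scaled by Freq.x (placeholder coefficients).
    rad_x = freq[0] * (s + 0.5 * t) + start_rad
    perturb_x = 0.05 * amplitude[0] * math.sin(rad_x)

    # y perturbation: a different linear function of s and t, so the two
    # axes do not wobble symmetrically.
    rad_y = freq[1] * (0.3 * s + t) + start_rad
    perturb_y = 0.05 * amplitude[1] * math.sin(rad_y)

    # The perturbed coordinate would be used for the texture lookup.
    return (s + perturb_x, t + perturb_y)
```

Because sine stays within [-1, 1], each coordinate moves by at most 0.05 × Amplitude, so the texture wobbles rather than tearing apart; animating start_rad from frame to frame makes the ripples travel.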
The final uniform variable defined by this fragment shader is WobbleTex, which specifies the texture unit to be used for accessing the 2D texture that is to be wobbled. For the Taylor series approximation for sine to give more precise results, it is necessary to ensure that the value for which sine is computed is in the range [-π/2, π/2]. The constants C_PI (π), C_2PI (2π), C_2PI_I (1/(2π)), and C_PI_2 (π/2) are defined to assist in this process. The first half of the fragment shader computes a perturbation factor for the x direction. We want to end up with a perturbation factor that depends on both the s and the t components of the texture coordinate. To this end, the local variable rad is computed as a linear function of the s and t values of the texture coordinate. (A similar but different expression computes the y perturbation factor in the second half of the shader.) The current value of StartRad is added. Finally, the x component of Freq is used to scale the result. The value for rad increases as the value for StartRad increases. As the scaling factor Freq.x increases, the frequency of the wobbles also increases. The scaling factor should be increased as the size of the texture increases on the screen to keep the apparent frequency of the wobbles the same at different scales. You can think of the Freq uniform variable as the Richter scale for wobbles. A value of 0 results in no wobbles whatsoever. A value of 1.0 results in gentle rocking, a value of 2.0 causes jiggling, a value of 4.0 results in wobbling, and a value of 8.0 results in magnitude 8.0 earthquake-like effects. The next seven lines of the shader bring the value of rad into the range [-π/2, π/2]. When this is accomplished, we can compute sin(rad) by using the first two terms of the Taylor series for sine, which is just x - x³/3! The result of this computation is multiplied by the x component of Amplitude. The computed sine value will be in the range [-1, 1].
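The range reduction and two-term Taylor evaluation described above can be sketched outside of GLSL. The following Python version illustrates the same arithmetic; it is a reconstruction from the description, not the code of Listing 16.8, and the folding steps assume the usual identities sin(π - x) = sin(x) and sin(-π - x) = sin(x).

```python
import math

TWO_PI = 2.0 * math.pi   # C_2PI in the shader
HALF_PI = 0.5 * math.pi  # C_PI_2

def taylor_sin(rad):
    """Approximate sin(rad) with the first two Taylor terms, x - x**3/3!,
    after folding rad into [-pi/2, pi/2]."""
    # Wrap into [-pi, pi); the shader uses C_2PI_I = 1/(2*pi) to avoid
    # a division here.
    rad = rad - TWO_PI * math.floor((rad + math.pi) / TWO_PI)
    # Reflect into [-pi/2, pi/2]; sine is unchanged by these reflections.
    if rad > HALF_PI:
        rad = math.pi - rad
    elif rad < -HALF_PI:
        rad = -math.pi - rad
    # First two terms of the Taylor series for sine: x - x^3/6.
    return rad - (rad ** 3) / 6.0
```

The worst case is at ±π/2, where the two-term series gives about 0.925 instead of 1.0, an error of roughly 7.5 percent, which is invisible in an effect that only perturbs texture coordinates by a few hundredths.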
If we just add this value to the texture coordinate as the perturbation factor, it will really perturb the texture coordinate. We want a wobble, not an explosion! Multiplying the computed sine value by a value of 0.05 results in reasonably sized wobbles. Increasing this scale factor makes the wobbles bigger, and decreasing it makes them smaller. You can think of this as how far the texture coordinate is stretched from its original value. Using a value of 0.05 means that the perturbation alters the original texture coordinate by no more than ±0.05. A value of 0.5 means that the perturbation alters the original texture coordinate by no more than ±0.5. With the x perturbation factor computed, the whole process is repeated to compute the y perturbation factor. This computation is also based on a linear function of the s and t texture coordinate values, but it differs from that used for the x perturbation factor. Computing the y perturbation value differently avoids symmetries between the x and y perturbation factors in the final wobbling effect, which doesn't look as good when animated. With the perturbation factors computed, we can finally do our (perturbed) texture access. The color value that is retrieved from the texture map is multiplied by LightIntensity to compute the final color value for the fragment. Several frames from the animation produced by this shader are shown in Color Plate 29. These frames show the shader applied to a logo to illustrate the perturbation effects more clearly in static images. But the animation effect is also quite striking when the texture used looks like the surface of water, lava, slime, or even animal/monster skin. Listing 16.8. Fragment shader for wobble effect
5 April 2011 The United Nations agency dealing with weather and climate today reported that ozone loss over the Arctic has reached an unprecedented level this spring owing to the continuing presence of ozone-depleting substances and extremely cold temperatures. Data shows that the Arctic region has suffered an ozone column loss of about 40 per cent from the beginning of the winter to late March, according to a news release issued by the World Meteorological Organization (WMO). The highest loss previously recorded was about 30 per cent over the entire winter. “The Arctic stratosphere continues to be vulnerable to ozone destruction caused by ozone-depleting substances linked to human activities,” said WMO Secretary-General Michel Jarraud. “The degree of ozone loss experienced in any particular winter depends on the meteorological conditions. The 2011 ozone loss shows that we have to remain vigilant and keep a close eye on the situation in the Arctic in the coming years,” he said. WMO notes that the record loss is despite the success of the Montreal Protocol on Substances that Deplete the Ozone Layer in cutting production and consumption of ozone-destroying chemicals. Substances such as chlorofluorocarbons (CFCs) and halons, once present in refrigerators, spray cans and fire extinguishers, have been phased out under the protocol. “Without the Montreal Protocol, this year’s ozone destruction would most likely have been worse,” stated WMO. “The slow recovery of the ozone layer is due to the fact that ozone-depleting substances stay in the atmosphere for several decades.” The depletion of the ozone layer – the shield that protects life on Earth from harmful levels of ultraviolet rays – is also due to a very cold winter in the stratosphere, which is the second major layer of the Earth’s atmosphere, just above the troposphere. 
WMO noted that even though this Arctic winter was warmer than average at ground level, it was colder in the stratosphere than for a normal Arctic winter. The agency also pointed out that although the degree of Arctic ozone destruction in 2011 is unprecedented, it is not unexpected. Ozone scientists have foreseen that significant Arctic ozone loss is possible in the case of a cold and stable Arctic stratospheric winter.
Kiev's Maidan, or Independence Square, aflame, with the silhouetted statue of Lybid, sister of the city's legendary founder, Kyi, and Ukrainian flags in the foreground.

THE WORD "MAIDAN"
WHERE IT COMES FROM AND WHAT IT MEANS

Thomas M. Prymak
University of Toronto

Philologists, who chase
A panting syllable through time and space,
Start it at home, and hunt it in the dark,
To Gaul, to Greece, and into Noah's Ark.

William Cowper (1731-1800)

For a short period in 2014, the name of the central square in Kyiv called "the Maidan" became known throughout the civilized world. That was because it was the place where the Ukrainian people gathered to overthrow the unpopular regime of Victor Yanukovych, who appeared to be attempting to set up a new dictatorship in Ukraine with renewed ties to Russia. This pro-Western, pro-EU, democratic movement came to be called by Ukrainians the Revolution of Dignity, or "the Euromaidan." The "Euro" part of this word was clear to all. But for Westerners the "maidan" part required some explanation by visiting journalists, who, however, generally ignored it or, at most, stated simply that it was a Ukrainian word for "town square." Continue reading →

Lviv prisoners who were killed by the NKVD before it retreated from the town, July 1941. Photo: cdvr.org.ua

75 years ago, during June-July 1941, the Soviet NKVD shot around 24 thousand prisoners in western Ukraine. Now the names of many of these victims are being made known thanks to documents published by the Electronic Archive of the Ukrainian liberation movement. Immediately after Nazi Germany attacked the USSR, the Soviet NKVD began shooting prisoners who had been sentenced to death. Plans were made to evacuate the rest to the rear, and to free those who had been arrested for minor crimes. Continue reading →

In July 1910, a teenager named Myron Surmach left his village in Ukraine, boarded the ship Atlanta with a third-class ticket and headed across the ocean to an improbably big city called New York.
For 21 days, Mr. Surmach sucked on a lemon to stave off seasickness until he reached Ellis Island. There, he told an interviewer decades later, he was shocked to find an American guard welcoming him to the United States in perfect Ukrainian. Mr. Surmach began his new life in Wilkes-Barre, Pa., but within a few years, he made it back to New York. Eager to preserve his native culture, he opened a small shop on Avenue A in Manhattan where he sold records, books, clothes and other Continue reading →

Anna of Kyiv, the queen of France, a daughter of Yaroslav the Wise of Kyivan Rus. Monument in Senlis, France (Image: panoramio)

Article by: Anastasiia Chornohorska, Alya Shandra

On 19 May 1051, Anna, the youngest daughter of Kyivan Rus Prince Yaroslav the Wise, ascended to the French throne as the wife of King Henry I Capet in the Cathedral of Reims. As the traditional Days of Anna of Kyiv take place in Senlis, a town about 40 km from Paris, we revisit the story of the earliest dynastic connection between France and Ukraine. Her father, Yaroslav, was nicknamed "the father-in-law of Europe." Yaroslav himself married Ingigerd Olofsdotter, the daughter of the Swedish king; his sister Maria married the Continue reading →

The question of ethnicity, which is closely related to the idea of nationality, and somewhat more loosely related to the idea of "race," is presently of great concern to many people in North America. This generally includes not only those of European, African, or Asian ancestry, but also, more particularly, those of east European and Ukrainian ancestry. However, questions of ethnicity and indeed "racial" mixing are not only of import in contemporary poly-ethnic and multi-racial North America; in the case of the Ukrainians, they also go back quite far into Ukrainian history, and in particular are closely bound up with this traditionally Christian country's relations with its neighbour to the south, Muslim Turkey.
The great Sultan Suleiman the Magnificent and his Ukrainian wife Roxelana.

As is well known to archeologists and linguists, Ukraine most likely formed at least part of the original homeland of the famous and somewhat controversial Continue reading →
A mother and son outing today was a reminder that environmental history can be discovered in unexpected places. Carter and I went to Wellington this morning to check out the RailEx Model Train Show. As soon as he entered the expo room, his eyes lit up, in unison with the eyes of all the other little boys (and girls – but mainly boys) who ran excitedly from one exhibit to another. Mum was pretty enthralled too, but wasn’t expecting the event to produce any intellectual gleanings of any kind. But then we discovered the bush tram running through the bush-clad landscape of Kerosine Creek. According to the man operating the exhibit, this bush tram, used to cart out milled logs from the Kerosine Creek Sawmill in the northern foothills of Tongariro [click here to view approximate location], was operated until the 1960s. Bush trams played an integral role in the sawmilling history of New Zealand; their use dating back to the 1850s. These early trams were powered by teams of up to eight horses: some larger mills stabled up to 40 horses, an expense which eventually led to the introduction of new technology, making the milling operation more cost-efficient. It is estimated that around 1,000 tramways were built, with a total length of around 5,000 kilometres – almost as long as the public railway system at its peak. Click here to view map of sites of bush tram remnants. An 1877 report by an overseas forester stated: “The universal use of the tramway forms a marked feature of the treatment of New Zealand forests. I have seen them of all descriptions and no sawmiller ever dreams of working a forest without one.” By the early 20th century, most forest on accessible flat land had been logged. Bush tramways had to reach into hilly country beyond, and became longer and steeper. New Zealand’s economy was booming, and larger sawmills were built. Horse teams had a top speed of only 6 kilometres per hour. Steam locomotives, steel rails and eventually rail tractors spelled their end. 
The last horse-drawn bush tram stopped operating in 1938. Small steam-powered locomotive engines were first used on bush tramways in 1871. In the early 1900s, the advent of steel rails allowed for geared locomotives; these locomotives were designed for the extremely steep grades, sharp curves, and uneven tracks in the bush. Photos top and third down: “Kerosine Creek” model “layout”, originally created by Raoul Quinn 23 years ago. Now owned by Grant Morrell, who has maintained and updated the model. To the right of the top scene, the bush tram can be seen, laden with logs. The bottom scene shows Doughty’s General Store, owned and operated by Jim Doughty. This was still in operation until 1957, servicing the timber mills. The trees in the bush landscape were incredibly realistic, and obviously took great skill (and patience) to make. When I commented on this, the man operating the exhibit told me that these are made from a weed called yarrow – commonly found along roadsides in New Zealand. Second down, right: Carter enjoying the sights and sounds of one of the exhibits. Bottom centre: Knight’s tram at Raurimu (1917), Logging train owned by the Tamaki Sawmill Co., Raurimu (B Len Knight, manager). Photographed by Albert Percy Godber. Raurimu is about 20 km south-west of Mangatepopo Valley, to the west of Mount Tongariro. Permission of the Alexander Turnbull Library must be obtained before any re-use of this image, APG-1208-1/2.
A History of Gruesome Medical Cures

The desire to take medicine is perhaps the greatest feature which distinguishes man from animals. ~Sir William Osler

This is a short history of just a few of the hideous, weird, puzzling and often disgusting 'cures' used to treat various conditions. Everyone from the poorest serf to royalty was on the receiving end of what were thriving 'medical' businesses. So let's now commence our journey into the incredible world of gruesome medical cures.

The Weird and the Wonderful

It is a wise man's part, rather to avoid sickness, than to wish for medicines. ~Sir Thomas More, Utopia

Herbs and flowers were certainly used in many medieval medicines. This seems palatable enough. However, other preparations were not so wholesome. For example, many contained animal parts, waste products, body fluids and other peculiar substances. If some seem similar to a revolting magician's brew, it has to be remembered that many of these 'cocktails' were based not just on their believed medicinal value but on superstition. Here are a few of these weird recommendations:

- Rheumatism - for any pain suffered through rheumatics, the patient had to wear the skin of a donkey.
- Gout - this extremely painful, crippling condition was treated by applying the following poultice to the affected areas - pigs' marrow, boiled herbs, red hair from a dog and many worms. If this didn't work, you could try a paste made up of rosemary or other herbs, honey and a large dollop of goat's droppings.
- Deafness - the following disgusting paste was placed inside the ear and was said to cure deafness. It was prepared with gall from a hare and grease from a fox.
- Jaundice - if you didn't have jaundice before taking the following potion, you would probably end up feeling so after one mouthful. Here is the procedure - for seven days you must drink some ale each morning that contains nine live lice.
The recipe doesn't clarify whether the lice should be from the head, body or lower regions!

- Thinning hair/baldness - if a crushed garlic bulb rubbed into the scalp did not work, then you could slap on a few handfuls of grease from a fox. But you had to first ensure that you shaved all the hair off the scalp. It was also essential that your scalp was clean before applying any of the 'cures'. Cleanliness was ensured by rubbing the scalp with the juices from crushed beetles.
- Internal bleeding - no matter what the cause, the cure for this potentially lethal condition was to wear a bag around your throat that contained a dried-out toad. I can't figure out this one either!
- Skin diseases and rashes were thought to be relieved by placing a piece of wolf skin on the area.
- Kidney stones were simply cured by placing a hot poultice of honey and pigeon dung on the area.
- For heart disease there was a particularly disgusting medicine that would be given to the patient. Herbs were the first ingredients - parsley and sage being the most common. The herbs would be added to a concoction of ground-down animal skull and juices from a boiled toad. To finish off the cuisine, dead insects would be added.
- Asthma is a terrible and distressing illness for anyone today. But spare a thought for the treatment offered to sufferers in days gone by. The following preparations should be covered in butter to allow them to slide down the throat more easily - either young frogs or live spiders. The fact that either of these animals would hit your stomach and not the lungs suggests that this cure probably did not work. If, on the off chance, you happened to vomit up your medicine, there was another tried and tested remedy. This was a brew made up from crushed human skull and crushed pig's bone marrow, both mixed in with sweat. How the 'sweat' was collected and what amount should be used is not documented. But if anyone has any suggestions, please let me know!
Human Body Parts For Royal Potions

When we think of kings and queens from the past, our images tend to be of glorious costumes and beautiful jewels adorning regal personages. This wonderful dream might well be shattered when we look at some of the items they swallowed, rubbed on or stuffed into their imperial bodies. The following descriptions are just some of the shocking and repulsive ingredients used to cure kings and queens of old. These are not made up from some fantasy book of spells, but are documented historical facts.

- Painful joints - it was not unusual for the dead bodies of murderers or those killed by trauma to be used for medicinal concoctions. One popular remedy was using human fat as an ointment that was rubbed over the joints in order to relieve the pain of rheumatic or arthritic conditions. Not only that, but both royal men and women used human fat to soften skin and ward off wrinkles. Elizabeth I of England is also known to have used 'man's fat' to fill in the pockmarks she was left with when she recovered from smallpox.
- Egyptian mummies - the use of mummified human body parts was widespread in Elizabethan times. John Banister was Elizabeth I's personal physician and advocated this form of treatment for a number of conditions such as ulcers, cuts, wounds and haemorrhage. The thought behind their use was that because mummies were so well preserved, they must contain some form of magical life source within them. The damage done by Elizabethan tomb raiders was immense. Nothing much would have been left of the corpses after various bits were ground down into fine powders. These would then be mixed into liquids for potions or pastes for use with surgical dressings.
- Because Elizabeth had such rotten teeth, they must have caused her a great deal of pain, not to mention general ill-health. She may well have been advised to hold the tooth of a corpse next to her rotten teeth and bleeding gums in order to effect some kind of relief.
Elizabeth would also have had dental cavities. Teeth needing to be filled would be packed using the brain of a partridge. Where the ideas behind these revolting treatments came from is unknown at present, although the partridge brain may have been taken from some of the folklore and legend that surrounds various species of bird.

- When King Charles I was executed, the scenes immediately after his death were ghoulish. The mob rushed forward to dip pieces of cloth and handkerchiefs into his blood. The reason was that royal blood was thought to cure common ailments - in particular the skin disease scrofula.
- In the time of Charles II, a popular remedy for many common ailments was powdered human skulls distilled into liquid form. They were known as Goddard's Drops after the famous chemist Jonathan Goddard. Charles II also used this particular potion as a hangover cure, and it eventually became known as the King's Drops.
- For epilepsy there were numerous weird brews that a person could take - if the King's Drops didn't work. These included 'the dung of an infant pulverised' - rest assured, it is the dung that is pulverised, not the infant - testicles of a bear, maggots or earthworms. There are no clues to follow to identify why these 'cures' were thought to be worth taking.

Old Surgical Procedures

Surgeons must be very careful
When they take the knife!
Underneath their fine incisions
Stirs the Culprit - Life!

There were no qualified surgeons as we know them today. In many cases people went to a specified person or trade because they were known to be handy with a knife or other tools. Quite a few operations were handled by the local barber - this is where the traditional red and white pole sign originated: the red is alleged to signify blood, and the white, bandages or dressings.
Only the wealthier people could hire his services, and many would, for example, go in for a haircut, a shave and to have a tooth pulled at the same time. Poorer people would more than likely have had to rely on other local trades such as blacksmiths, butchers or farmers. People within these occupations are believed to have carried out operations such as cataract removal from the eye and tooth extractions.

Trepanning - for evil spirits in the head

Basically, trepanning was the technique of cutting a hole into the skull while the patient was awake and without anaesthetic. In medieval times, the procedure was performed when it was believed that evil spirits or demons were lurking and trapped within the victim's brain. Trepanning sometimes went as far as removing a section of brain thought to be infected. Of course, what we think of as evil spirits is different from the perception in olden times: many forms of illness were thought to be caused by supernatural forces. Trepanning is thought to have been used for conditions such as epilepsy, insanity and fractured skulls.

For people in the past, developing haemorrhoids must have been something to avoid at all costs, or else to put up with the pain. It is known that some medieval physicians used cautery irons to treat them - in other words, they were burned off. It is also documented that pulling them out with the fingernails was considered the best solution. The 'fingernail treatment' was a method favoured by the Greek physician Hippocrates. This was, remember, without the use of pain relief, and they did not have tubes of ointment such as 'Preparation H'.

Bladder Stones & Blockage

There are many reasons why people can suffer from urine retention and kidney and bladder stones. But one of the main causes in the past was sexually transmitted disease such as syphilis. It was common knowledge what could result from syphilis, so it is a wonder that so many continued to dice with danger.
If obstruction did occur, there was a particularly gruesome method that, although it often worked, was certainly more painful than the retention of urine itself. The cure involved a metal urinary catheter (tube) being inserted through the urethra and into the bladder. Today when a catheter is inserted in hospital a local anaesthetic is always used, and modern catheters are pliable and soft. You can imagine the pain that must have been caused by the insertion of a metal catheter. This particular procedure is thought to have been first used in the 14th century.

A number of trades were believed to have carried out the delicate process of cataract removal. Sources from the time describe the use of sharp instruments - a knife or large needle - being pushed through the cornea of the eye in order to remove the film. It was not until Islamic medicine became more widely known in medieval Europe that a gentler form of removal, involving suction, was used.

Probably the most feared procedure of all was amputation, not only because of the pain but because the survival rate was very poor. Some deaths were certainly caused by blood loss; more frequently, however, it is thought to have been post-operative infection that caused the highest mortality rate. This was at a time when bacteria had never been heard of, and as a consequence hand washing and sterilisation of surgical instruments were not carried out. Amputation 'surgeons' were sought after not for their delicacy but for their speed. Two instruments were mainly used: first a curved knife would cut away the flesh from around the bone; when the bone was reached, a saw had to be used. To stop the bleeding, either hot irons or boiling oil was applied to the end of the stump. Many of these surgeons did not bother with pain relief for the patient. It was a widespread belief that experiencing pain was essential for proper healing to occur.
If pain relief was prescribed, it usually took the form of poisonous plants such as mandrake. In addition, the use of opium and/or alcohol was common. But many of these toxic brews, when taken in combination, not only sedated the patient and killed pain but often led to coma and death. Needless to say, many of the patients who underwent amputation - the most common were soldiers from battle - were scarred psychologically for life, not only from coping with disfigurement and disability but from the mental trauma of the ordeal.

God and the Doctor we alike adore
But only when in danger, not before;
The danger o'er, both are alike requited,
God is forgotten, and the Doctor slighted.

This has been a gruesome journey into the world of medicinal cures from the past. Having said this, there is evidence - usually found by archaeologists - suggesting that historical medicine actually did not do too badly in some areas of treatment, especially in relation to herbal therapies, many of which are still used or making a comeback today. In later times physicians began to study at universities on the European mainland and brought their skills back to Britain. Much of what they learned came from texts written by Arabic doctors. Monks and nuns, too, had wide experience of dealing with all manner of complaints and had an impressive success rate for the times. However, because the monastic remedies were herbal based, the church began to frown on their use. The belief was that monks and nuns might be dabbling in witchery, so they were banned from practising. As a result, much of the skill and knowledge built up over centuries was lost.
Thankfully, due to continued research and learning, medicine improved over the centuries, and today our doctors and surgeons are highly educated and skilled men and women. This, I think, is something we do need to be thankful for in our modern age.
Statement of Anthony S. Fauci, M.D., Director, NIAID, NIH, on National HIV Testing Day, June 27, 2010 June 25, 2010 Routine HIV testing is central to ending the HIV/AIDS pandemic. In the United States, someone becomes infected with HIV every nine and a half minutes. More than 20 percent of the estimated 1.1 million Americans living with HIV infection do not know they are infected. On National HIV Testing Day, the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, urges everyone between the ages of 13 and 64 years to be tested for HIV at least once in their lifetime in keeping with the recommendation of the Centers for Disease Control and Prevention (CDC). People at high risk for HIV infection -- including substance abusers and their sexual partners, gay and bisexual men, female partners of bisexual men, and individuals with multiple sex partners -- should get tested at least once a year. Knowing one's HIV status is vitally important to the individual and for protecting the broader public health. Testing positive for HIV infection is the critical first step linking a person to counseling, medical care and treatment, which help improve quality of health and stave off HIV-related complications and co-infections. People who know they are infected with HIV also are more likely to reduce behaviors that could transmit the virus to others, which benefits the larger community. Additionally, a growing body of evidence suggests that people infected with HIV who consistently take antiretroviral therapy to control the virus not only protect their health but may be less infectious to others -- a theory NIAID is currently examining through clinical research. Later this year, in collaboration with CDC and local health departments, NIAID will launch a feasibility study in several U.S. 
cities, designed to determine whether expanded HIV testing along with better linkages to medical care and treatment can show value as part of a broader campaign to reduce HIV incidence. Although expanded HIV testing initiatives and prevention efforts appear to be having some positive impact, far too many people still are getting infected with HIV. Nearly three decades into the HIV/AIDS epidemic, more than 56,000 new HIV infections occur each year, an unacceptably high rate that has remained relatively stable since the late 1990s. HIV infection may not grab the headlines as it did during the darkest days in the 1980s, but it is still a serious, incurable medical issue that can lead to AIDS -- a disease that claimed nearly 18,000 American lives in 2007. Too many people are diagnosed with HIV late in the course of infection, missing the window of opportunity when antiretroviral therapy can provide the best health outcomes. Sadly, the stigma and fear associated with HIV testing are still very real concerns for many. On this National HIV Testing Day, we all must do our part to eliminate these obstacles and emphasize the important, lifesaving value of getting tested. To find an HIV testing site near you or for more information about HIV testing, visit AIDSinfo and AIDS.gov. Dr. Fauci is director of the National Institute of Allergy and Infectious Diseases at the National Institutes of Health in Bethesda, Maryland. This article was provided by the National Institute of Allergy and Infectious Diseases.
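The two headline figures in the statement (a new infection every nine and a half minutes, and more than 56,000 new infections each year) can be cross-checked with a little arithmetic. This is my own rough consistency sketch, not an official CDC calculation:

```python
# Rough consistency check: does "one new HIV infection every 9.5 minutes"
# line up with "more than 56,000 new infections each year"?
MINUTES_PER_YEAR = 365.25 * 24 * 60   # ~525,960 minutes in an average year

infections_per_year = MINUTES_PER_YEAR / 9.5

print(round(infections_per_year))  # 55364
```

The result, roughly 55,000 per year, agrees with the quoted 56,000 to within rounding, which is about what one would expect from two independently rounded public-health figures.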
President Bush is changing the way the U.S. manages its forests. It is the country's biggest forestry reform in more than 25 years. President Bush says his healthy forests plan is a common sense approach to reducing the threat of destructive wildfires by clearing more underbrush in areas at risk. "Overgrown brush and trees can serve as kindling, turning small fires into large, raging blazes that burn with such intensity that the trees literally explode," the president said. Congress passed the president's plan following two years of catastrophic fires in western states that burned more than four million hectares of land. This year's California fires alone cost $250 million and claimed the lives of 22 people. The president's plan aims to shorten response times to disease and insect infestations as well as restoring degraded forests on private lands to promote endangered species. Signing the bill into law Wednesday, Mr. Bush said the measure will shorten the review process for legal challenges to forestry decisions by ordering courts to consider the long-term environmental impact of not carrying out the decision. "It places reasonable time limits on litigation after the public has had an opportunity to comment and a decision has been made," he said. "No longer will essential forest health projects be delayed by lawsuits that drag on year after year after year." Critics of the plan say shortening that review process weakens their ability to use the courts to challenge federal forestry policy. The plan pays private loggers to clear undergrowth on public lands near houses or water supplies at risk for catastrophic fires. Environmentalists fear the timber industry will use that access to harvest bigger trees as well.
Although coal is expected to be the backbone of energy sources in the future, the country has made no progress in limiting extraction so that the mineral can be used in the years to come. Amid a declining global price, the country's coal output keeps growing. The government failed to meet its commitment to limit coal output to the same level as last year. Early this year, the Energy and Mineral Resources Ministry's mineral and coal directorate general said it would cap total coal production for this year at 421 million tons — similar to the 2013 output — partly by controlling coal diggers' work plans. Until October, the policy went smoothly, as the 10-month output was in line with the full-year plan. In November, however, things changed. It was revealed that during the January-November period, as many as 427 million tons of coal had been extracted, with around 366 million tons sent overseas. Thus, the mineral and coal office adjusted its full-year coal output estimate to 458 million tons, almost a 9 percent increase from last year's figure. "The increase in output is partly caused by better documentation," the mineral and coal director general, R. Sukhyar, said in a recent interview. The government implemented on Oct. 1 a policy requiring coal miners to obtain export licenses. The licenses require them to have a "clean and clear status", a term referring to coal miners' compliance with royalty and tax obligations as well as being free from conflict over overlapping claims of ownership. Moreover, the ministry is also cooperating with the Corruption Eradication Commission (KPK) to crack down on illegal mining activities. The government has been criticized for its poor coal management. There are reports that reported coal exports are lower than the actual volume of coal shipped overseas. The country's 2013 coal output figure was questionable. The official report repeatedly claimed the national output was 421 million tons in 2013.
However, the ministry’s Energy Outlook 2014 report said that coal output in 2013 was 431 million tons. The report also said Indonesia’s coal resources reached 28.97 billion tons. Assuming that coal output is at the current level, the country’s coal production could only last for the next 50 years, the report added. A domestic market obligation (DMO) for coal has been implemented for several years to ensure that domestic buyers have access to Indonesian coal. Also, the DMO is set to increase from year to year, with the government expecting producers to reduce its dependency on the overseas market. Last year, 85 million tons were directed to local buyers. Domestic absorption is expected at 95 million tons this year and 110 million tons next year. However, once again infrastructure hurdles made the policy unrealistic. State-owned electricity firm PT Perusahaan Listrik Negara, the biggest domestic user of coal, said that its planned coal usage was 55 million tons this year, meaning that there was uncertainty on whether the domestic obligation of 95 million tons of coal would be fully absorbed by local users. The slow growth of power-plant development, which is caused mostly by land acquisition issues, has contributed to slower growth in domestic coal absorption compared to the pace of increases in production set by miners, which are trying to balance the weakening price by selling more. Price pressures are expected to continue as global demand slows. The International Energy Agency’s (IEA) World Energy Outlook report reported that global coal demand grew and would increase at an average of 0.5 percent per year between 2012 and 2014, a much lower rate compared to 2.5 percent over the last 30 years. The growth is hampered not necessarily by a weakening economy but by new air pollution and climate policies in the main markets, particularly in the US, China and Europe. 
Amid the bleak outlook and rising environmental concerns, the ministry once again expects next year's production level to be similar to this year's, at no more than 460 million tons. The public will see whether the policy is realized this time, as the government still treats coal as a source of state revenue amid declining prices rather than securing supply for future utilization. See more at: http://www.thejakartapost.com/news/2014/12/29/coal-sector-management-poor-amid-policy-changes.html
Quasars, the brightest objects we're aware of, are powered by the supermassive black holes that are thought to reside at the center of every galaxy. But many galaxies fail to feed their black holes enough matter, leading to a body that's quiet and difficult to detect. Our own galaxy's central black hole, called Sgr A*, falls into the latter category. We can detect it at wavelengths up to the X-ray range, but it's dim enough that we'd have a hard time spotting it if it weren't so close. That may be about to change, however. Astronomers have spotted a cloud of gas with a mass about three times that of Earth that's on a trajectory that will have it pass close to Sgr A* in 2013. When it does, it may feed matter into the black hole's accretion disk, powering a sudden surge in Sgr A*'s output. Since Sgr A* doesn't emit much in the way of radiation, a lot of what we've learned about it comes from tracking the stars that orbit it at close range; many of these have eccentric orbits that take them very close to the black hole. The Very Large Telescope has a program set up to perform periodic observations of the stars in order to track their orbits closely. It was during these observations that the team "discovered an object moving at about 1,700 km/s along a trajectory almost straight towards Sgr A*." Observations of its emissions indicated that the object was a gas cloud that was much more dense than the material that's typically found in the area, and cooler as well. It's not quite heading straight at Sgr A*, but it's on a highly eccentric orbit that will take it extremely close to the body—36 light hours by the summer of 2013 (for comparison, the Voyager spacecraft are over a dozen light hours from the Earth). 
As a result of this plunge, the black hole's gravity has been accelerating the gas within the time we've been observing it; its total velocity (including some motion that's not towards the black hole) has increased from 1,200 km/s to nearly double that speed over the last seven years. In the nearly 20 years we've been observing Sgr A*, only two stars have ever come closer to it. But stars are held together by gravity; this cloud is too diffuse to have that sort of coherence. As a result, the authors expect that it will undergo dramatic changes as it blasts into the neighborhood of the black hole. The shock of hitting the low-density, high-temperature gas will compress the cloud even as the black hole's gravity starts to stretch it out along the direction of its orbit. This could eventually split the cloud into multiple fragments, each of which may take a slightly different path around the black hole. As these fragments reach the point in the orbit closest to the black hole, their temperatures may reach 10^6 K, hot enough for the gas to start emitting X-rays. But that may not be the only fireworks. If the cloud does end up fragmenting, then there's a chance that one of the fragments will end up feeding into the accretion disk surrounding the black hole. "This could in principle release up to around 10^48 erg over the next decade," the authors estimate. We'll have to wait and see, but you can be sure lots of electronic eyes will be watching. In the meantime, when nothing in particular happens in 2012, the cloud's destruction may end up providing a new bit of excitement for doomsday aficionados.
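To put that energy estimate in context, here is a back-of-the-envelope conversion (my own illustrative arithmetic, not from the paper): spreading 10^48 erg evenly over a decade corresponds to an average extra luminosity of a few times 10^39 erg/s, or hundreds of thousands of times the Sun's output.

```python
# Average luminosity if ~1e48 erg is released evenly over a decade.
# All values are assumptions for illustration; the real release would
# be episodic, not smooth.
ENERGY_ERG = 1e48
SECONDS_PER_DECADE = 10 * 365.25 * 24 * 3600   # ~3.16e8 s
L_SUN = 3.846e33                               # solar luminosity, erg/s

avg_luminosity = ENERGY_ERG / SECONDS_PER_DECADE   # ~3.2e39 erg/s
vs_sun = avg_luminosity / L_SUN                    # ~8e5 solar luminosities

print(f"{avg_luminosity:.2e} erg/s, about {vs_sun:.0f} L_sun")
```

Even smeared out over ten years, that is a dramatic brightening for a black hole as quiet as Sgr A* currently is.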
Headaches, blood clots, tumors and strokes can all cause severe head pain, according to Harvard Medical School. Patients should seek medical attention if the pain is exceptionally severe, unremitting or different from any type of headache they've had before. Physicians don't really know what causes most headaches, although there are about 300 categories of them, claims Harvard Medical School. Interestingly, the brain and the skull don't register pain, but the blood vessels, tissues and nerves around the brain do. A headache can also originate from the scalp, the teeth, the muscles and joints in the neck, and the sinuses. Migraine and cluster headaches are notorious for the severity of pain they cause. Aneurysms in the brain are another cause of severe head pain, says MedlinePlus. Aneurysms happen in a weakened part of a blood vessel wall. Some of them resemble tiny berries and have a genetic component. In other types of aneurysms, the blood vessel balloons out. Pain results when even a small amount of blood leaks out of the aneurysm. This leakage is a warning that the aneurysm may rupture, which leads to a stroke. With a brain tumor, headaches become more severe and more frequent over time, notes Mayo Clinic. In addition to headaches, patients may experience nausea or vomiting, difficulty speaking and vision problems.
Milk is nature's perfect food - but only if you are a calf. The evidence of dairy's harmful effects on humans is increasing. Dr. David Ludwig and Dr. Walter Willett recently published an editorial in JAMA Pediatrics in which they argue that milk may increase the risk of cancer and promote weight gain. Recent studies have found that one diet drink (diet milk shake) a week increases your risk of type 2 diabetes by 33 percent, and a large diet drink increases the risk by 66 percent. What about low-fat milk or non-fat milk? These are the healthier options, right? Wrong! Fat is not what makes us fat, despite the commonly held mistaken belief. A low-fat diet is known to increase hunger and slow metabolism. Increased hunger leads to overeating. Overeating starchy and refined carbohydrates leads to obesity and type 2 diabetes. Dr. David Ludwig found that those who ate a low-fat, higher-glycemic diet burned 300 calories less a day than those who ate an identical-calorie diet that was higher in fat and lower in glycemic load. For those who ate the higher-fat, lower-glycemic diet, that's like exercising an extra hour a day without doing anything! Studies have shown that those who consumed low-fat milk products gained more weight than those who ate the full-fat whole milk products. They seemed to increase their overall intake of food because it just wasn't as satisfying as the real thing. In fact, those who drank the most milk overall gained the most weight. It makes logical sense: milk contains over sixty different hormones that boost growth. That's how a little calf quickly grows into a big cow. The sad thing is that many schools and "healthy" beverage guidelines encourage the idea that flavored milk is better than soda and that getting kids to drink more milk by any means is a good idea. This is dangerously misguided. There are 27 grams of sugar in 8 ounces of Coca-Cola and a whopping 30 grams of sugar in 8 ounces of Nestlé Chocolate Milk.
Sugar is sugar, and it drives obesity and diabetes. It is not a good way to get kids to drink milk. There are many problems with milk:
- Dairy and milk products do not promote healthy bones. Studies show that higher calcium intakes are actually associated with higher risk of fracture.
- Milk may not grow strong bones, but it does seem to grow cancer cells. Milk increases the hormone IGF-1, or insulin-like growth factor, and that's like Miracle-Gro for cancer cells.
- Dairy products have been linked to prostate cancer. And cows are milked while pregnant (yes, even organic cows), filling milk with loads of reproductive and potentially cancer-causing hormones.
- Dairy increases the risk of type 1 diabetes.
- Dairy is a well-known cause of acne.
- Dairy causes millions around the world (75 percent of the population) to suffer digestive distress because of lactose intolerance.
- Milk causes intestinal bleeding in 40 percent of infants, leading to iron deficiency.
- Dairy is a known trigger for allergy, asthma, and eczema.
Bottom line: Milk promotes weight gain, cancer and osteoporosis. If you ate only whole foods - fruits, vegetables, beans, nuts, seeds, and whole grains (not whole grain flour) - you might be better off overall.
School Readiness is more than asking if the children know their ABCs and 123s. Determining true readiness requires us to look at all areas of development. Readiness does not solely refer to the academic skills that children may learn in preschool. It refers to the equally important emotional and social skills, language development skills, cognitive skills and fine and gross motor skills that preschoolers develop both at home and in preschool. Each year preschool teachers are asked, "Is my child ready for kindergarten?" There are times when we do not think a child is ready. And they may not be, right now! However, they very well may be in 6 months or 8 months when kindergarten begins. All areas of growth and development need to be considered based on where each preschooler is now. We must also consider the amazing leaps and bounds of growth that happen over the summer months! Gathering information on social promotion vs. retention, what preschoolers need to know by the end of the year and how to help them transition - both in the classroom and at home - is the best way to help parents make an informed decision. In this section, you will find articles which discuss the history of school readiness and how it applies to us in the early childhood education field. The history and the view on each of the items listed below will be discussed. This is a series of articles, meant to be read in order to give you a fuller understanding of the aspects of this topic. The history of the debate regarding each practice has changed throughout the decades; we'll discuss some of those changes on this page. Does holding a child back in preschool an extra year help better prepare them for kindergarten? What is a readiness test? These are a couple of the items that will be discussed on this page. As preschool teachers, the main focus in pre-K tends to be the academics (letters, numbers, etc.). You may be surprised to find what kindergarten teachers expect from preschoolers entering kindergarten. Teachers and parents can help the children prepare for the big move with these 7 ideas!
http://www.flickr.com/photos/slightlynorth/3470300872/in/photostream/
Teaching and Learning with Social Media: The Need for a New Habitus
Image and quote by Dean Shareski http://www.flickr.com/photos/shareski/2655113202/in/pool-858082@N25 (CC BY-NC 2.0)
More than a resource http://www.flickr.com/photos/graciepoo/215649963/
"Any technology tends to create a new human environment... Technological environments are not merely passive containers of people but are active processes that reshape people and other technologies alike." - M. McLuhan, 1962
"Digital literacies defines those who exhibit a critical understanding and capability for living, learning, and working in the digital society." - JISC, 2013
https://sites.google.com/site/dlframework/the5resourcesframework - Juliet Hinrichsen and Antony Coombs, University of Greenwich
http://www.flickr.com/photos/futurestreet/3334257292/ (CC BY 2.0)
Digital Natives? Digital Visitors or Digital Residents - White & Le Cornu
"Learners do not appear 'to see beyond' the immediately obvious functionality of the technology and there is little evidence of transfer" - Clark et al, 2008, p.68
"To possess the machines, [they] only need economic capital; to appropriate them and use them in accordance with their specific purpose [they] must have access to embodied cultural capital, either in person or by proxy" - Pierre Bourdieu, 1986
The beginning of a new habitus (that needs to be nurtured, not ignored). Habitus: how we do things around here - a historical continuum of dispositions. Technology... disrupts the historical continuum: change. A digital habitus? A specific set of values/practices: participation, agency. Social media is a mindset!
Unpacking the Terminology used in Learning within an Internet Environment

After reading a couple of texts online, I became a bit confused by some of the jargon being used. In some texts I would find that the words e-learning, computer-assisted learning, mobile learning and many others are being used instead of online learning. I sought to demystify this jargon by trying to distinguish the differences (if any) between the various terms. Below is a list of defined terms commonly used when referring to learning on the internet or learning using computer devices:

Educational Technology: The use of any technological resources and processes in teaching and learning, aimed at enhancing and developing student learning.

Computer Assisted Learning (CAL): A process or method aimed at enhancing student learning through the use of computer software packages. CAL systems are developed in such a way that they do not rely on an internet connection, allowing them to perform complex functions without the drawbacks that arise from poor internet connections.

Distance Learning: A learning process or system in which the learning facilitator is situated in a significantly distant geographical area from that of the learners. The geographical distance is bridged through the aid of print and/or electronic learning processes and resources.

E-learning: Stands for electronic learning. In essence, it is a type of learning in which electronic devices are used to deliver learning resources or content.

Online Learning: The learning process that occurs within an internet or networked environment.

Mobile Learning: An enhanced learning process that uses mobile electronic devices (usually handheld) which allow access to and delivery of learning resources while the learner travels through different geographical locations.
Learning Management System (LMS): A controlled network-based platform that enables individuals or organizations to design, manage and monitor student teaching and learning activities.

I have only partially touched on these few terms, since some of them are often used synonymously. It is vital to highlight that the terms above have been defined in various ways by authors and institutions. I left some of the definitions purposefully vague so that the consumer can add more breadth to them depending on their target audience. I would be glad to hear how others can refine or define the terms above. Furthermore, there are many more terms in use beyond those above which would add value to the understanding of online learning. Please share any that you might have in mind.
Vincent van Gogh lived with his uncle Johannes "Jan" van Gogh, the director of Amsterdam's naval dockyard in the eastern part of the city. Vincent often got up early in the morning, and he enjoyed watching the thousands of workers trickle into the dockyard; the sound reminded him of the murmur of the sea. The lively surroundings fascinated Vincent, and he imagined there would be plenty for an artist to see on the docks. His uncle had set up a study bedroom for him, and Vincent spent a lot of time there. He was compelled to study hard, starting early every morning and continuing until late at night. Every day he walked to the house of Maurits Benjamin Mendes da Costa, who was teaching him Greek and Latin. At home, he prepared for lessons and wrote essays designed to deepen his knowledge of the Bible; he produced a paper on the history of the Reformation, a map of the apostle Paul's travels and a list of all the biblical parables and miracles. Meanwhile, he was also studying algebra and geometry and trying to keep up his English and French. He read the Bible often, and it inspired the drawing The cave of Machpelah, based on a story in the book of Genesis. Vincent covered his walls with art prints bought from a Jewish book dealer in the city in a bid to give the room a bit of atmosphere. Almost every time he wrote to his brother Theo, Vincent mentioned how difficult he was finding his studies. Over time, Vincent grew increasingly anxious about meeting his self-imposed goal. In February 1878, he wrote of his doubts: he was not sure he could succeed, and in spite of reassurances from his friends and family, he became consumed by fear. That July, he decided to return to Etten and moved out of Uncle Jan's house.
Although it is noncancerous, asbestosis is one of the most severe diseases caused by constant exposure to asbestos. The disease is characterized by severe scarring of the lungs. It is most common in people who work in asbestos factories or in construction, and smokers face an even higher risk of developing it. Alongside the scarring itself, patients develop gradually worsening shortness of breath, as well as chest pain and a persistent cough. If you are worried about this condition, this page has the information you need to protect yourself under specific circumstances.
Speech-language pathologists use the PCC -- percent consonants correct -- to determine which speech sounds a client is unable to produce. This measurement also helps the speech-language pathologist determine the severity of a client's speech disorder. PCC helps decide whether a client is appropriate for speech therapy, as well as determine therapy goals. For a speech-language pathologist to accurately calculate PCC, a client must be able to produce multiple words for a lengthy speech sample. Things You'll Need - Digital voice recorder. Record a communication sample of at least 100 words from the client with a digital voice recorder. Use toys or pictures to elicit language from younger children. For older children or adults, ask open-ended questions or ask the client to describe an event or experience. Transcribe the speech sample phonetically by hand or on a computer. Clearly indicate when the client produces consonants in error by highlighting incorrect pronunciations in a different color or using a specific symbol to denote errors. Add up the total number of consonants and the total number of correct consonants. Divide the number of correct consonants by the total number of consonants, then multiply the answer by 100 to determine the PCC. Use the PCC to determine the severity of the speech disorder. A percentage of 85 to 100 indicates a mild disorder; 65 to 85 percent, mild-moderate; 50 to 65 percent, moderate-severe; and below 50 percent, severe.
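The calculation described above reduces to a couple of arithmetic steps. Here is a minimal sketch in Python; the function names and the handling of scores that fall exactly on a band boundary are my own assumptions, while the severity bands themselves are the ones given above:

```python
def percent_consonants_correct(correct, total):
    """PCC = correct consonants / total consonants, expressed as a percentage."""
    if total <= 0:
        raise ValueError("total consonants must be positive")
    return correct / total * 100

def severity(pcc):
    """Map a PCC score to the severity bands described above.
    Boundary values are assigned to the milder band (an assumption)."""
    if pcc >= 85:
        return "mild"
    if pcc >= 65:
        return "mild-moderate"
    if pcc >= 50:
        return "moderate-severe"
    return "severe"

# e.g. 180 correct consonants out of 240 in a 100+ word sample:
score = percent_consonants_correct(180, 240)   # 75.0
band = severity(score)                         # "mild-moderate"
```

A sample with 180 of 240 consonants correct scores 75 percent, which falls in the mild-moderate band.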
Tel: 01752 660427 Email:firstname.lastname@example.org Bramley Rd, Laira, Plymouth, PL3 6BP Year 2 Forest School – Week 1 – September 2015 With the new Forest School procedures now in place the Year 2 children made their inaugural trips of the term. This would see two groups go on alternating weeks for the first seven weeks. The idea is to build upon the things they learn to help them recall and make use of these skills in years to come, especially as they go on up to KS2 and have more sessions in the woods in Y6. As Mr Blake’s Forest School work is assessed and passed back by his assessors, a handbook about what we do in Forest School will soon be available to view on the website. During the first weeks the groups made adventure sticks! These were not any old sticks though, we talked about which sticks we could use so as not to irreparably harm trees. We used loppers (having a good tool talk beforehand) to cut the branches from some overgrown Sycamore specimens we know of and then used secateurs to trim them carefully. The children then had the opportunity to search their area of woodland to find things to tie onto their stick, be it a pine cone, a fern frond, an interesting leaf or a beech nut husk. This made each and every stick individual and many of the children showed great skill and perseverance to independently attach the items using either string or other means. Some sticks were hidden in the woodland den and others ended up making a den! We finished the first session by showing the children how to put up the large shelter in case of bad weather. We talked about how it was good practice to put it up every session to become familiar with the knots needed and so that if and when we did need it ( and we almost certainly will!), we could get it up fast! Overall it was a great first week for both groups of the Year 2 children, they showed a fantastic level of responsibility and maturity using the tools and long may that continue into the coming weeks.
GUIDELINES FOR USE OF HYDROGEN FUEL IN COMMERCIAL VEHICLES – Final Report Over the next 50 years, hydrogen use is expected to grow dramatically as an automotive and electrical power source fuel. As hydrogen becomes commercially viable, the safety of hydrogen systems, equipment, and operation is of concern to the commercial motor vehicle industry. This report is intended to provide guidelines for use of hydrogen fuel as an alternative fuel by a commercial vehicle fleet operator to ensure long-term safe operation. In this paper by the Federal Motor Carrier Safety Administration, all aspects of hydrogen as a fuel source are explored. Below are extracts related to hydrogen-on-demand fuel systems: Hydrogen Injection Systems A hydrogen injection system for a diesel engine produces small amounts of hydrogen and oxygen on demand by electrolyzing water carried onboard the vehicle. The electricity required is supplied by the engine’s alternator or 12/24-volt electrical system (see Section 1.5 for a description of electrolysis). The hydrogen and oxygen are injected into the engine’s air intake manifold, where they mix with the intake air. In theory, the combustion properties of the hydrogen result in more complete combustion of diesel fuel in the engine, reducing tailpipe emissions and improving fuel economy (CHEC, n.d.). Limited laboratory testing of a hydrogen injection system installed on an older diesel truck engine operated at a series of constant speeds showed a 4 percent reduction in fuel use and a 7 percent reduction in particulate emissions with the system on (ETVC, 2005). A hydrogen injection system for a diesel engine produces and uses significantly less hydrogen than a hydrogen fuel cell or hydrogen ICE, and does not require that compressed or liquid hydrogen be carried on the vehicle. The system is designed to produce hydrogen only when required, in response to driver throttle commands.
When the system is shut off, no hydrogen is present on the vehicle. 1.5 ELECTROLYSIS OF WATER The most abundant source of hydrogen on earth is water—every molecule of water contains one oxygen atom and two hydrogen atoms. It is relatively simple to separate the hydrogen in water from the oxygen using electricity to run an electrolyzer. An electrolyzer is a galvanic cell composed of an anode and a cathode submerged in a water-based electrolyte. In many ways, the operation of an electrolyzer is the opposite of operating a hydrogen fuel cell. In a fuel cell, hydrogen and oxygen are supplied to the anode and the cathode, and they combine to form water while creating an electrical current that can be put to use (see Section 1.2.1 and Appendix A). In an electrolyzer, an electrical current is applied between the anode and the cathode, which causes the water in the electrolyte to break down, releasing oxygen gas at the anode and hydrogen gas at the cathode. 2.1 GASEOUS HYDROGEN Hydrogen gas is colorless, odorless, tasteless, and noncorrosive, and it is nontoxic to humans. It has the second widest flammability range in air of any gas, but leaking hydrogen rises and diffuses to a nonflammable mixture quickly. Hydrogen ignites very easily and burns hot, but tends to burn out quickly. A hydrogen flame burns very cleanly, producing virtually no soot, which means that it is also virtually invisible. 2.1.1 Flammability, Ignition, and Luminosity A mixture of hydrogen and air will burn when there is as little as 4 percent hydrogen or as much as 75 percent hydrogen in the mix (at room temperature and one atmosphere of pressure). This is a very wide flammability range. In comparison, diesel fuel vapors in air will burn over a range of 0.6 percent to 5.5 percent. With less than 0.6 percent diesel in the mixture it is too lean to ignite, and with more than 5.5 percent diesel in the mixture it is too rich. Natural gas will burn over a range of 5 percent to 15 percent.
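The flammability limits quoted in the report can be summarized in a small lookup table. This sketch simply checks whether a fuel/air mixture falls inside its quoted range; treating the limits as inclusive is my own assumption, and the fuel names are just illustrative keys:

```python
# Flammability ranges in air (percent fuel by volume), as quoted above.
FLAMMABILITY_RANGES = {
    "hydrogen": (4.0, 75.0),
    "diesel vapor": (0.6, 5.5),
    "natural gas": (5.0, 15.0),
}

def is_flammable(fuel, percent_in_air):
    """True if the mixture is within the flammable range for the fuel.
    Below the lower limit the mix is too lean; above the upper, too rich."""
    low, high = FLAMMABILITY_RANGES[fuel]
    return low <= percent_in_air <= high
```

For example, a 30 percent hydrogen mixture is flammable, while a 6 percent diesel-vapor mixture is already too rich to ignite.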
It takes very little energy to ignite a hydrogen-air mixture—a common static electric spark may be sufficient. As shown in Table 4, it takes less than one tenth of the energy to ignite a hydrogen-air mixture as it does to ignite a mixture of gasoline vapors in air. Over much of its flammable range, common static electricity would be enough to ignite a hydrogen-air mixture. In some cases, the electrostatic charges or heating created by the flow of hydrogen from a leaking vessel would be enough to ignite the leaking hydrogen (Murphy, et al., 1995; Argonne, 2003). Hydrogen flames burn very cleanly, producing virtually no soot. It is the soot created by most fuels that makes a flame visible. In addition, much of the energy radiated by a hydrogen flame is in the ultraviolet range, rather than the infrared or visible ranges of the light spectrum. Therefore, a hydrogen flame is virtually invisible to the human eye in daylight, though the energy being released by the flame may create a visible “shimmer” in the surrounding air due to changes in the air density. At night, hydrogen flames are visible to the unaided human eye, and in daylight, they can be “seen” by an ultraviolet light sensor. Read the full report Guidelines-H2-Fuel-in-CMVs-Nov2007 FINAL_0
This video from California in the USA says about itself: Yosemite mountain lion appears to be treed by two coyotes. In fact, what is going on is a mother mountain lion distracting the coyotes from her two cubs. The mother mountain lion and her cubs were feeding on a deer in the Ahwahnee Meadow when the coyotes arrived on the scent. From the Christian Science Monitor in the USA: Coyotes, bears, and lions: the new urban pioneers? By Patrik Jonsson, Staff writer, October 6, 2012. Americans have been moving from the country to the city for decades, so maybe it’s not surprising that researchers are finding a similar pattern among other North American apex predators. New research suggests mountain lions and bears may be following the urban pioneering of raccoons, foxes and, most notably, coyotes as they slowly encroach on major US metro areas from New Jersey to California. In the case of coyotes, they don’t even mind the density, with some coyote packs now confining themselves to territories of a third of a square mile. “The coyote is the test case for other animals,” Ohio State University biologist Stan Gehrt told the EcoSummit 2012 conference on Friday in Columbus, Ohio. “We’re finding that these animals are much more flexible than we gave them credit for and they’re adjusting to our cities. That’s going to put the burden back on us: Are we going to be able to adjust to them living with us or are we not going to be able to coexist?” Cougars survived the late Pleistocene extinction because they’ll eat just about anything meaty: here. Mountain lions in southern California are facing an uncertain future as urbanisation forces them to live in isolated groups, and suffer a severe loss of genetic diversity: here. Most Mountain Lion deaths in Southern California caused by humans: here.
The Kite from Days with Frog and Toad Journeys Unit 6 Lesson 28 Common Core aligned Created and tested by a first grade teacher Table of Contents and explanation P. 3 Help the Frogs Find Their Lily Pads – Unscramble the spelling words on the lily pads and match each frog with the correct spelling word. P. 4 Compare and Contrast the Characters – compare and contrast Frog and Toad P. 5 Adjectives! – match the adjective with one of the five senses P. 6 Flying High with the “I” Sound – match and color the igh, ie, and y spellings of the long i sound. Circle all of the spelling words. P. 7 Story Structure foldable book – match adjectives to the characters, draw the setting, and write about the plot of The Kite P. 8 Homographs! – draw the homograph that has a different meaning than the picture P. 9 My Opinion! Write about it. Who would you like to spend the day with, Frog or Toad? Why? P. 10-16 Help Frog and Toad Get Their Kite Up Into the Air! – a words-to-know reading game for 2 to 4 people. P. 17-22 Frog and Toad’s New Adventure – creative writing and craft project – write a new adventure for Frog and Toad. P. 23-24 Let’s Make a Kite! – students get to make their own kites. P. 25 Thanks and Credits
Bulbs are almost a guaranteed success. The embryonic plant is tucked away inside the bulb, just waiting for the right conditions to get growing. Provide those conditions and you'll have a beautiful flowering plant in no time. Most bulbs started indoors are spring flowers. After the bulb has bloomed, transplant it to your garden. Spring Bulbs that Require Chilling Bulbs like daffodils, hyacinths, crocus and tulips require a period of dormancy in cold conditions. Place the bulbs in the crisper container of the refrigerator for eight to 16 weeks, depending on the bulb. Do not place them in the freezer. If temperatures outside don't get below freezing but stay below 45 degrees Fahrenheit during the day, the bulbs can go outside. Critters like squirrels love to eat bulbs, so be careful where you place them outside. Daffodil bulbs are poisonous, but the other types may be eaten by animals. Remove the bulbs from cold storage and plant in potting soil. The bulb should be planted at least its own length under the soil. The optimum planting depth is twice its length. For example, a 2-inch tulip bulb could be planted at 2 inches, but should be planted 4 inches under the soil in the pot. Keep the pots in a dark, cool room until the bulb sprouts. Move to a sunny window. Turn the pot on a regular basis so the leaves don't bend toward the light. Alternative Method for Chilled Bulbs Another method for forcing spring bulbs that require chilling is to plant the bulbs in the pots. Water and let them drain thoroughly. Place the pots in the refrigerator for the required amount of time. If you plan on forcing quite a few bulbs using this method, buy a second-hand fridge; the pots take up quite a bit of space. Water Method for Chilled Bulbs Chill bulbs for the required amount of time. Place in shallow glasses that have a few inches of marbles or glass pebbles in the bottom of the glass. Fill with water until the water reaches about 1/3 of the way up the bulb. Do not cover the bulb with water.
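The depth rule of thumb above is simple arithmetic, and it can be sketched as a tiny helper; the function name is my own, and the units are whatever you measured the bulb in:

```python
def planting_depth(bulb_length):
    """Minimum depth: the bulb's own length; optimum: twice its length."""
    return {"minimum": bulb_length, "optimum": 2 * bulb_length}

# A 2-inch tulip bulb: at least 2 inches deep, ideally 4 inches.
depth = planting_depth(2)   # {"minimum": 2, "optimum": 4}
```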
Place in a sunny window away from heating sources. Don't put the glass on a radiator or under a heating vent. Spring Bulbs That Require No Chilling Bulbs like freesia, narcissus, belladonna lilies, and ranunculus don't require a chilling period. Plant in new potting soil at a depth that is twice the length of the bulb. Water and place in a sunny window away from heat sources.
What is POSITIVE DIRECTION? Positive Direction is a year-long program for middle school students at St. Joseph School (with support and encouragement from elementary students). The purpose of the program is to demonstrate to our students the importance of setting goals and making positive choices to achieve these goals. During the year, students will have the opportunity to learn from motivational and inspirational speakers who will cover important topics such as respect for self and others, and the dangers to their well-being from drugs and alcohol. They will also learn about the importance of faith and living a life that is healthy in mind, body, and spirit. In May, the Positive Direction program closes with three days of special events. These events include Mass for grades 3-8, a breakfast with parents, a day of community service, a spiritual retreat and speakers on various topics. In an effort to celebrate the talents of many, there is an essay contest, a slogan/theme contest, and a T-shirt design contest. On Friday, the final day of this special week, the 7th and 8th grade students will participate in a 3K/5K Run/Walk Challenge (student's individual choice) and will celebrate the week's completion with an afternoon of fun.
Celebrities often have streets or buildings named after them, but famed singer-songwriter Johnny Cash has a tarantula named after him. The spider, Aphonopelma johnnycashi, was one of 14 new spiders discovered in the southwestern United States, doubling the known number in the region. The Johnny Cash spider got the name because “the species is found in California near Folsom Prison,” about which Cash penned a famous song. Also, the spider is generally solid black, and Cash was known as “The Man in Black” because of his style of dress. The new spiders were described by biologists at Auburn University and Millsaps College in the journal ZooKeys. "We often hear about how new species are being discovered from remote corners of the Earth, but what is remarkable is that these spiders are in our own backyard," said Chris Hamilton, lead author of the study. "With the Earth in the midst of a sixth mass extinction, it is astonishing how little we know about our planet's biodiversity, even for charismatic groups such as tarantulas." Tarantulas belong to the genus Aphonopelma, which researchers say is found in 12 states in the southern third of the U.S. The hairy spiders can grow to be up to 15 centimeters or more in leg span. Others are small, less than 2 centimeters across. The discovery of the new spiders marked the culmination of more than a decade of research that was called “the most comprehensive taxonomic study ever performed on a group of tarantulas.” Researchers say the additional 14 bring the total number of tarantula species in the U.S. to 29. Tarantulas are often hard to distinguish from one another because they are often similar looking. The large hairy spiders are often portrayed as dangerous, but the researchers say that is “unfounded,” as “they do not readily bite.” They likened them to “teddy bears with eight legs.”
The Hobbit, (c) 1937 by J. R. R. Tolkien: the encouraging story behind it all. Bilbo is a quiet hobbit who is very content to stay at home, and doesn't want to go on any adventures, thank you very much. "Nasty, disturbing things! Makes one late for dinner!" Yet still, he finds himself off in the wild with Gandalf and thirteen dwarves, doing things he never expected he'd do, and finding courage he didn't think he had. I enjoy the story of The Hobbit partly because of the fun, fantastical adventure Bilbo has, and partly because of what I learn from Bilbo. People can move out of their comfort zones. People can confront frightening things, and not back down. People can stand up to bullies. And most importantly, those bullies, whether they're dragons, orcs, or anything scary, can be beaten.
Psychic ability is in fact a form of ESP. There are 4 main forms: telepathy, which is the ability to communicate information through thoughts (an individual can perceive another's thoughts and send thoughts to another person); clairvoyance, the ability to sense an event or object in another place or time (one example is the perception that a friend's house is on fire); precognition, the ability to see future events; and psychokinesis, the ability to control an object with the mind. Clairvoyance is the power to perceive objects that are not accessible to the senses. This is sometimes called clear sight, and it is this ability that shows itself as images within the mind's eye. If you think of an orange (what it looks like, the texture, the peel, the centre of the orange), you use your imagination to see this in your mind's eye. Psychics see like this, only they are seeing images that are put there by spiritual beings. They could be your spirit guides, their spirit guides, angels who have called in to help, or guides who have never incarnated as humans but who like to help out on the earth plane. So clairvoyance is also part of mediumship, as the deceased spirit uses this channel to communicate. Telepathy is mind-to-mind communication. So in essence, when you are having a psychic reading and something is on your mind, these emotions will sit in your energy field, also known as an aura, which can be read by the reader. The other way they can pick up information is by reading your mind. For example, if a pet is lost, then a pet psychic will ask the caller or face-to-face client to imagine the pet's face and image. The psychic will then link into this using telepathy. Then, using remote viewing, which is a form of clairvoyance, the psychic will often see an image of where the pet has been lost.
There have been some very accurate descriptions of this type, where the owners have been able to go and retrieve the pet based on the information given by the reader. This is a very real and very useful service that one person who has the ability can offer another. In the same way, a psychic reader can link into a person when the client has been asked to imagine their face, and so help out with interpersonal issues, as they can also read the energy of the client's partner in addition to the energy of the client. So you can see ESP has a very real and useful place in society. Those people who have the gift are using their right-brain ability, when most of us use our left-brain ability. It is the opinion of many spiritual gurus, philosophers and teachers that this world is in a mode of change and that a re-birth of sorts is taking place.
Food allergies plague approximately 15 million Americans, and according to a CDC study released in 2013, food allergies among children increased by nearly 50% between 1997 and 2011. There are actually just 7 food allergens that cause most of this distress: Wheat, Soy, Dairy, Corn, Eggs, Peanuts/Tree Nuts and Shellfish. Reactions to nuts and shellfish account for most cases of allergic anaphylaxis and related ER visits, but they aren't necessarily the most common allergens. The other 5 offenders listed above are much more common, and while the symptoms can be mild, they can also be pervasive. Most Common Symptoms of a Food Allergy A food allergy or sensitivity can cause a range of symptoms, from anaphylaxis on the extreme end to foggy-headedness on the low end. What Is a Food Allergy? A food allergy is actually the formation of an antigen-antibody complex in the bloodstream which stimulates an overactive immune response. It is this hyperactive immune response that you feel when you get itchy, puffy, nauseated and tired. When everything else in your body is going smoothly, it actually takes two weeks for your body to naturally eliminate an allergen, meaning you can still experience symptoms for two weeks even if you aren't exposed to the allergen anymore. If the specific allergen in question is gluten (wheat), it can take as many as four weeks to clear. How to Identify a Food Allergy There are many ways to pinpoint a food allergy. Symptoms are the first clue that something may be amiss, and then it is time to pursue testing. Food allergen identification can be done by a naturopath or allergist through a blood test and/or scratch test to see what you are allergic to.
While blood tests are accurate, it is important to note that food sensitivities can produce all the same symptoms as a food allergy but not create a strong enough blood-mediated response to show up in a blood test. If you get a blood test and scratch test, identify allergens, and find that even when you eliminate these allergens from your life and environment you are still having symptoms a month later, it may be time to pursue an Elimination Diet. An Elimination Diet is a food allergy identification technique that can be done at home, on your own, any time. It is not fun and there is no ideal time to do one, but it gives you a crystal clear picture of which foods you react to and how you react to them. Full PDF instructions for an elimination diet can be downloaded below. It takes approximately a month to complete and involves eating a very simple anti-allergenic and anti-inflammatory diet for 2-4 weeks, then testing yourself for sensitivities by reintroducing suspect allergenic foods in a systematic manner while you watch for symptoms. When you directly experience a food allergy or sensitivity for yourself, you know exactly what causes the symptoms, and you can consciously choose to eat that food and suffer or not eat that food and feel better; if you choose to indulge, you know exactly how long the symptoms will last as well. I often recommend Elimination Diets in my clinic because I feel they give you a lot of control over your symptoms as well as a clear understanding of your food choices. Furthermore, once you've done it, you have the information. Even if nothing shows up through an Elimination Diet, at least you know that none of these dietary factors are in play and you don't have to worry about allergenic foods contributing to your symptoms.
Until Next Time,
The Wikipedia example is a very nice example, but ironically, it can lead to a lot of confusion as well. It should be noted that the 24-bit gradient example achieves a nicely smooth gradient (ymmv, depending on whether you are viewing on an LCD or CRT), so that should suggest that the bit depth of the color used in these end-user formats is quite sufficient to do the job (naturally, footage capture and mastering should continue to strive for higher bit depths, to ensure that the best of 24 bits can make it to the consumer). "Deep color" would not solve the problem, if the problem is occurring elsewhere. It would just result in even more bits getting "lost" at the problem point, rather than more bits making it to the end result. The problem point is lossy compression, which attempts to eliminate finer bit depths it thinks we would not miss if they were absent from the picture. The result is a reduction in data size, but also the subtle loss (or sometimes not so subtle) of intermediate shades of color which would smoothly fill in a gradient. What's left is a gradient with clearly visible steps, where adjacent colors fail to blend together. It's not that 24-bit color is not enough. It's the lossy compression that leaves the end result with only a subset of colors available from that 24-bit color palette. So what of this "confusion" I mentioned earlier? The Wikipedia example chooses a single primary color of red to show a gradient. Hence, it is no longer using the "full" 24 bits to achieve that color. It is only using 1 of the 3 primary components, since it is red. Technically, this isn't a 24-bit gradient, but an 8-bit gradient (since the red can only utilize 1 channel out of the 3). So it is actually possible to show a pretty smooth gradient of a single primary color with a "mere" 8 bits of actual data (shown in the 3rd figure). So what about the very coarse gradient that is labeled "8-bit gradient"?
Why is it so coarse at 8 bits (the 1st figure), when I just explained that 8 bits is enough to show the color red in a very smooth gradient (the 3rd figure)? The answer is that the 1st figure is not technically using all 8 bits to achieve that gradient in that color. Just like before, the color red corresponds to 1 of 3 available color channels. So fewer than 3 bits out of the 8 are actually getting used to show the red gradient. If you count the shades in the coarse gradient example, you see it works out to 6 discrete shades, which is what would be possible at a color depth greater than 2 bits but not quite 3 bits (4 shades vs. 8 shades). Naturally, if the example used a color resulting from a mixture of red, green, and blue, then all 3 channels with their associated bits would get a workout, and the labeling of 8 bits and 24 bits would be more technically valid. Also forgot to mention: the whole situation as it relates to incidences of banding on PE gets even more complex, as the bits get shuffled here and there when/if a colorspace conversion has to occur during the production process between RGB and YUV, or vice versa. Shades of colors don't match up quite right between the 2 systems, so therein lies another opportunity for subtle color information to get lost/mangled between generational cycles. The end result is, again, color performance which is only using a subset of what would be available in a fully intact 24-bit palette. There are even colors in one system that are outright illegal in the other color system. In this scenario, I believe this applies to the high-intensity shades of the primary colors, rather than the near-black intensities. So it is certainly possible for high-saturation colors to get clipped to a lower level when converting between colorspaces. So it is possible that the right scenario gives a double whammy in sacrificed color depth.
You lose some bits on the low end, get clipped on the high end, and the end result may have to contend with a shockingly limited color range compared with that of a fully intact 24-bit palette.
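For anyone who wants to see the quantization effect in isolation, here's a quick Python sketch (entirely my own toy example; the ramp width and level count are made up, not taken from any real codec):

```python
# Toy illustration: crushing a smooth 8-bit ramp down to a few levels
# produces the stepped "banding" look discussed above.

def make_ramp(width=256):
    """A smooth 8-bit red ramp: one shade per column, values 0..255."""
    return [round(x * 255 / (width - 1)) for x in range(width)]

def quantize(ramp, levels):
    """Crush the ramp down to `levels` distinct shades."""
    step = 256 / levels
    return [int(min(int(v / step), levels - 1) * step) for v in ramp]

smooth = make_ramp()
banded = quantize(smooth, 6)  # roughly the 6 visible steps counted above

print(len(set(smooth)))  # 256 distinct shades: smooth gradient
print(len(set(banded)))  # 6 distinct shades: obvious banding
```

Run it and the shade counts tell the story: same ramp, but the quantized version has only 6 distinct values left to paint with.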
Over the last few years, a lot of work has gone into digitally scanning printed books, both as a way of preserving their contents and to make them available to a wider audience online. As of March this year, Google had scanned more than 20 million works for their Google Books project. And other initiatives, like Project Gutenberg (the oldest digital library) and the Million Book Project, are also working on creating digital copies of printed works. But a lot of those efforts have faced trouble: scanning large numbers of printed works requires either a whole lot of time or a whole lot of destroyed books. The cheapest and fastest method of scanning a book is to tear it apart and feed the pages through a scanner. Other methods have traditionally involved a person standing by to turn pages. It takes a long time, and it's no fun for the person. The BFS-Auto (it stands for "Book Flipping Scanning") can scan 250 pages per minute, it turns the pages itself, and it causes no damage to books. The machine uses 3D sensing technology to figure out the optimum moment to take a photograph of each page. It's also programmed to automatically convert a shot of a curved page into a flat image for easy digital reading - Google patented a similar technology back in 2009. There's no word yet on how much the BFS-Auto will cost. But judging by the high-tech features, you probably won't be able to get one for your home library. Still, it should help accelerate the rate at which some organizations are able to digitize their book collections, while leaving the books it scans unscathed. This may be good news for the long-term preservation of the printed word.
Usage and examples of "acceleration of charged particles":
- The Van Allen belts are composed of charged particles trapped by the earth's magnetic field.
- A continuous stream of charged particles, called the solar wind, impinges on the earth's magnetosphere.
- What ACE detected was a violent gust of "solar wind", the constant flow of charged particles from the sun.
- In an accretion disk composed of charged particles, magnetic field lines work in exactly the same way.
Many people would argue that the American political process is unfair, but they would say that for different reasons. Some people would say that the American political process does not accurately reflect the will of the people and that this is unfair. Other people would argue that it is not feasible for this to be the case and that certain people deserve more influence in American politics because of their greater contribution to society or because they are more qualified for the job. These two sides have been in conflict since the early days of the American political process. The representative democracy of the United States does render the opinions of individual voters relatively unimportant. While voters and their votes do matter and candidates spend millions of dollars trying to sway the opinions of voters, many individual voters are frustrated by the fact that it barely seems to make a difference whether they vote or not, and voting seems to be a matter of principle. However, the fact that every voter is in the same situation does seem to make the process fair in its own way. People have been arguing since the beginning that American democracy has to be representative. Pure democracy with no representatives is very rare when the voting public has hundreds of millions of people. It usually only works in much smaller societies. While some people would argue that this does not mean the situation is fair, they might still make a case for the system in a pragmatic sense. Democracy requires an educated middle class to be sustainable, or people will often vote for the very same individuals that democracies seek to eliminate.
In Principle and In Practice

It should be noted that a lot of Americans functionally never vote for reasons beyond their control. Even getting to the polling booths or getting absentee ballots is tough in some areas, which is genuine discrimination against poorer people and people who live in certain regions. Disabled individuals often find it difficult to vote for various reasons, so their voice gets excluded from American politics. Some wealthy people argue that since they pay most of the taxes, they deserve a bigger voice in American politics. However, wealthy people pay less in taxes in America than they do in other countries. Also, wealthy people have more control over American elections than almost anyone even though they each have one vote. Wealthy people can give campaign contributions to the candidates of their choice, so the candidates of their choice will have an advantage during the election. Elections are automatically slightly biased in favor of the wealthy on this basis alone. Wealthy people represent a small portion of the population, and the policies that favor lining their pockets further will directly go against the interests of most of the country. More and more wealth has been directed to the wealthy over the past thirty years, and campaign contributions towards certain candidates have had a huge impact on that. The situation involving wealthy people buying elections is reflective of faulty laws in the sense that there could be laws limiting campaign contributions. However, this situation does not directly reflect a problem with the baseline American political process or democratic structure itself. If anything, this problem demonstrates that the American political process is not working as it was intended. Wealthy people who have no political experience and who are acting purely in their own self-interest have more political power than many politicians.
The overall system for American voters and the American representative democracy isn't perfectly fair, but having a direct democracy that was perfectly fair would be too difficult. However, the fact that wealthy people are able to subvert the political process and control it so substantially automatically taints the American political process, rendering it unfair even though there are no laws mandating that this should be the case. The disproportionate influence of the wealthy has made the American political process unfair, and not the representative democratic structure.
Shemale (also known as she-male) is a term primarily used in sex work to describe a trans woman with male genitalia and female secondary sex characteristics, usually including breasts from breast augmentation or use of hormones. Many transgender people regard the term shemale as offensive, arguing that it mocks or shows a lack of respect towards transgender individuals; in this view, the term emphasizes the natal sex of a person and neglects their gender identity. Using the term shemale for a transsexual woman often implies that she is working in the sex trade. The phrase is commonly used in pornography. The term shemale has been used since the mid-19th century, when it was a humorous colloquialism for female, especially an aggressive woman. Some biologists have used shemale to refer to male non-human animals displaying female traits or behaviors, such as female pheromones being given off by male reptiles. Biologist Joan Roughgarden has criticized the use of the term in the reptile literature, which she says is "degrading and has been borrowed from the porn industry." She writes that gynomorphic male and andromorphic female are preferred in scientific literature, adding, "I hope future work on these animals is carried out with more professionalism." Some mental health researchers consider attraction to transgender people to be a paraphilia. John Money and Margaret Lamacz proposed a series of terms along these lines. Gynemimetophilia denotes sexual attraction to male-assigned people who look or act like women, including genetically male crossdressers. It can also refer to an attraction to trans women. A related term is gynemimesis which refers to a homosexual male who engages in female impersonation without sex reassignment or to describe the adoption of female characteristics by a male.
The terms were used by Money for classification purposes in his gender-transposition theory. He also proposed gynandromorph and gynemimetomorph as technical terms for trans women. A gynandromorph is an organism that contains both male and female characteristics. Gynandromorphy is a term of Greek etymology which means to have some of the body morphology and measurements of both an average woman and man. Psychologist Ray Blanchard and psychiatrist Peter Collins coined the term gynandromorphophilia. Sociologist Richard Ekins writes that this attraction can include both identification and object choice in "fantasy femaling" masturbatory scripts. Blanchard has proposed that this is "partial autogynephilia." Psychiatrist Vernon Rosario has called labels like these "scientifically reifying" when applied to those attracted to trans women. As an alternative to a paraphilic model, sexologists Martin S. Weinberg and Colin J. Williams have used the term Men Sexually Interested in Transwomen (MSTW). Slang terms for individuals with such preferences include transfans, tranny chasers and admirers. In Japan, the term "New Half" is used for trans people. It is a variation on the familiar term "hafu" (half or ハーフ) that is commonly used for people of mixed Japanese descent, signifying that transgender people are a new type of "half". Since the mid-19th century, the term she-male has been applied to "almost anyone who appears to have bridged gender lines", including effeminate men and lesbians. In the early 19th century, she-male was used as a colloquialism in American literature for female, often pejoratively. 
Davy Crockett is quoted as using the term in regard to a shooting match; when his opponent challenges Davy Crockett to shoot near his opponent's wife, Davy Crockett is reported to have replied: "'No, No, Mike,' sez I, 'Davy Crockett's hand would be sure to shake, if his iron pointed within a hundred miles of a shemale, and I give up beat...'" It was used through the 1920s to describe a woman, usually a feminist or an intellectual. Flora Finch starred in The She-Male Sleuth, a 1920 film comedy. The term came to have a more negative connotation over time and came to be used to describe a "hateful woman" or "bitch." Up through the mid-1970s, it was used to describe an assertive woman, "especially a disliked, distrusted woman; a bitch." The term later took on an implicit sexual overtone. In her 1990 book, From Masculine To Feminine And All points In Between, Jennifer Anne Stevens defined she-male as "usually a gay male who lives full-time as a woman; a gay transgenderist." The Oxford English Dictionary defines she-male as "a passive male homosexual or transvestite." It has been used as gay slang for faggot. In 1979, Janice Raymond employed the term as a derogatory descriptor for transsexual women in her controversial book, The Transsexual Empire: The Making of the She-Male. Raymond and other cultural feminists like Mary Daly argue that a "she-male" or "male-to-constructed female" is still male and constitutes a patriarchal attack by males upon the female essence. In some cultures it can also be used interchangeably with other terms referring to trans women. The term has since become an unflattering term applied to male-to-female transsexual people. Psychologists Dana Finnegan and Emily Mcnally write that the term "tends to have demeaning connotations."
French professor John Phillips writes that shemale is "a linguistic oxymoron that simultaneously reflects but, by its very impossibility, challenges [gender] binary thinking, collapsing the divide between the masculine and the feminine." Trans author Leslie Feinberg writes, "'he-she' and 'she-male' describe the person's gender expression with the first pronoun and the birth sex with the second. The hyphenation signals a crisis of language and an apparent social contradiction, since sex and gender are 'supposed' to match." The Gay and Lesbian Alliance Against Defamation has said the term is a "dehumanizing slur" and should not be used "except in a direct quote that reveals the bias of the person quoted." Some have adopted the term as a self-descriptor but this is often in context of sex work. Gender non-conforming author Kate Bornstein wrote that a friend who self-identified as "she-male" described herself as "tits, big hair, lots of make-up, and a dick." Sex researchers Mildred Brown and Chloe Rounsley said, "She-males are men, often involved in prostitution, pornography, or the adult entertainment business, who have undergone breast augmentation but have maintained their genitalia." According to Professors Laura Castañeda and Shannon Campbell at the University of Southern California's Annenberg School of Journalism, "Using the term she-male for a transsexual woman would be considered highly offensive, for it implies that she is working 'in the [sex] trade.' It may be considered libelous." Melissa Hope Ditmore, of the Trafficked Persons Rights Project, notes the term "is an invention of the sex industry, and most transwomen find the term abhorrent." Biologist and transgender activist Julia Serano notes that it remains "derogatory or sensationalistic." 
According to sex columnist Regina Lynn, "Porn marketers use 'she-male' for a very specific purpose — to sell porn to straight guys without triggering their homophobia — that has nothing to do with actual transgendered people (or helping men overcome their homophobia, either)." According to sex columnist Sasha, "The term shemale is used in this setting to denote a fetishized sexual persona and is not typically used by transgendered women outside of sex work. Many transgendered women are offended by this categorization and call themselves T-girls or trans." In popular culture In addition to its use in pornography, the term has been used as a punch line or for rhetorical effect. As part of the 42nd Street Art Project in 1994, designer Adelle Lutz turned a former shop in Times Square called American Male into "American She-Male", with brightly colored mannequins and clothes made of condoms. The 2004 Arrested Development episode "Sad Sack" had a gag where Maeby tricks Lindsay into wearing a shirt that says "Shémale", in order to convince a suitor Lindsay is transgender. Film critic Manohla Dargis has written about the lack of "real women" in summer blockbusters, claiming Judd Apatow comedies feature men who act more like leading ladies: "These aren't the she-males you find in the back pages of The Village Voice, mind you. The Apatow men hit the screen anatomically intact: they’re emasculated but not castrated, as the repeated images of the flopping genitals in Forgetting Sarah Marshall remind you." The word came under extreme criticism when it was used during episode four of RuPaul's Drag Race Season 6. Logo TV, the show's broadcast station, released a statement on April 14, 2014 saying: "We wanted to thank the community for sharing their concerns around a recent segment and the use of the term 'she-mail' on Drag Race. Logo has pulled the episode from all of our platforms and that challenge will not appear again. 
Furthermore, we are removing the 'You've got she-mail' intro from new episodes of the series. We did not intend to cause any offense, but in retrospect we realize that it was insensitive. We sincerely apologize."
The Congress party-led government, and especially the party’s president, Sonia Gandhi, believes the answer is to set aside – by law – a larger number of parliamentary seats for women. Supporters point to quotas introduced two decades ago reserving seats for women at village council level elections, a measure they say has boosted female representation and – as a result – increased the focus of these bodies on issues like sanitation and water. Critics of quotas, though, have blocked attempts to set aside legislative seats for women at a national level. Some complain the proposals will push more higher-caste women into politics, leading to greater underrepresentation of lower-caste groups. Others say the key is first to improve education of women, so they don’t become proxies for male relatives under a quota system, which has happened in some cases at the local village – or “panchayat” – level. A new report by the United Nations Development Program, released last month, gives backing to the supporters of a broadening of quotas. The report looked at eleven countries in the Asia-Pacific region that introduced laws to reserve seats for women in lower houses of parliament. It found that between 2000, before any of the countries introduced these laws, and 2010, the proportion of women in these legislatures jumped by 10 percentage points on average. Even factoring in other reasons for this rise – say, improved educational opportunities for females – the report estimated the quotas led to a 5-percentage-point rise in the representation of females in these parliaments. “The uses of reserved seats for women members or gender quotas for candidates generally expand women’s representation,” the report said. India could look at its neighbor Nepal, which revised its quotas in 2007. In the decade to 2010, the proportion of women in its lower house of parliament rose to 33% from 6%.
These kinds of measures mean Nepal now stands at 21st position on this global survey ranking countries in descending order of women’s representation in lower houses of parliament. While Nepal was one position above Germany, India was in 109th position, tied with Liberia and the Ivory Coast... Click here to read full article on WSJ Blogs: http://blogs.wsj.com/indiarealtime/2012/10/01/u-n-report-backs-india-push-for-quotas/
Economic analysis of climate change and climate policies is fraught with many problems. Uncertainty and imprecision surround fundamental scientific and economic factors, such as how much warming will result from a doubling of carbon-dioxide (CO2) levels and how much damage is done by a given change in average world temperature. This paper addresses the following question: How does economic analysis compare costs and benefits when they occur at different times? In a broad sense, this question can be addressed by looking at climate policy as an investment. Investment requires giving up something now and receiving something in the future. An investment that will generate less than its initial cost makes little sense. Though investment opportunities are virtually unlimited, there are limited resources for undertaking them, so not all investments that pay more than their costs are worth undertaking. Any particular investment should be undertaken only so long as it does not crowd out a superior investment. In practice, this standard is tested by comparing an investment’s rate of return to a comparable market rate of interest. The tool for performing this test is discounting. Discounting is compounding run backwards. The product of discounting is called “present value” or “discounted present value” (henceforth referred to as “present value”). For instance, if $100 left in the bank for two years would be worth $121, then $121 received two years from now would have a present value of $100.

Applying Discounting to Climate Policy

Climate economists use mathematical models to estimate the impact that human emissions of carbon dioxide will have on the climate. The models estimate a range of possible future temperature changes caused by the emissions.
These temperature changes are then plugged into different models that attempt to quantify the potential cost of environmental damage (from things like sea-level rise and changing weather patterns) at some future point in time. The point in time chosen is often far in the future, and the projected cost of environmental damage may be quite high. In general, the payback for cutting CO2 emissions comes with such delay that those making the “investment” now are different from those who will reap the benefits later. Of course, climate policy is not the only investment those in the present can make to provide benefits to those in the future. Discounting allows comparison of this climate investment to a set of alternative investments, with the end goal of finding whatever investment will provide the greatest benefit to its recipients in the future. Some economists argue that using any positive discount rate is inappropriate. In essence, this argument holds that discounting more heavily weights the welfare of those alive today relative to that of those who are yet to be born and is, therefore, immoral. However, this view is based on a flawed understanding of discounting. The discount rate reflects the value of alternatives to the policy in question. It is not a tool for diminishing relative importance of life or utility in the future. The following hypothetical example can illustrate the absurdity of denying the use of positive discount rates. Suppose a salesman markets to grandparents an investment fund that will provide college tuition for their grandchildren. The salesman argues that the return on the fund will be zero since it is immoral to weigh a dollar’s benefit for the grandchildren at a value of anything less than a dollar to the grandparents. That is, the grandparents should put $100,000 into the fund today in order to provide tuition of $100,000 two decades later. This sales pitch is as nonsensical as the argument that discounting is immoral. 
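To put numbers on the two examples above: the 10 percent rate below is implied by the text's $100-to-$121 bank figures, while the 7 percent tuition return is purely my own assumption, chosen for illustration.

```python
# Discounting is compounding run backwards. The 10% rate is implied by the
# text's $100 -> $121 bank example; the 7% tuition return is an assumed
# alternative investment, used only for illustration.

def future_value(pv, rate, years):
    """Compounding forward: what a deposit grows to."""
    return pv * (1 + rate) ** years

def present_value(fv, rate, years):
    """Compounding backwards: what a future sum is worth today."""
    return fv / (1 + rate) ** years

# The bank example: $100 for two years at 10% is $121, and vice versa.
print(round(future_value(100, 0.10, 2), 2))   # 121.0
print(round(present_value(121, 0.10, 2), 2))  # 100.0

# The tuition example: at a 7% return, far less than $100,000 is needed
# today to deliver $100,000 of tuition twenty years from now.
print(round(present_value(100_000, 0.07, 20)))  # roughly $25,842
```

The last figure is the point of the sales-pitch critique: a positive rate of return means the grandparents need to set aside only about a quarter of the tuition bill today.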
Grandparents can do better for their grandchildren than a zero rate of return. Policymakers can do better as well and should refrain from pursuing zero-rate-of-return policies that arise when discounting is ignored.

How Discounting Works

Which stream of receipts is better, a million dollars per year for 20 years or $21 per year for a million years? To choose the first stream is to use discounting, at least implicitly. Even though simple addition shows the second stream of payments generates an additional million dollars, the reasonable choice is the first stream. Since ten hours’ worth of income from the million-dollars-per-year stream could be invested at a modest rate to replicate the $21-per-year stream of payments, the choice is especially easy. Not all questions of discounting, however, are so obvious. When the costs and benefits of an investment or a policy occur at different times they need to be compared in a way that accounts for these time differences. The guiding question for the comparison is straightforward: How much would have to be invested today to generate the future value in question? The three factors needed to calculate a present value are:
- The future value,
- The length of time, and
- The interest or discount rate.

The lower the future value in question, the lower will be its present value. A higher interest (discount) rate or a longer time horizon will also lead to lower present value. The guiding question can be rephrased as how much money needs to be put into a savings account to generate the future value in question? The higher the interest rate or the longer the time interest accrues and compounds, the less is needed for the initial bank deposit. The most controversial part of present value calculations for climate impacts is choosing the appropriate discount rate.
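The three factors can be sketched numerically (a minimal illustration; all dollar amounts and rates here are mine, not from the paper):

```python
# Illustrative figures only: how each of the three inputs moves present value.

def present_value(fv, rate, years):
    """Amount needed in the bank today to grow to `fv` in `years` at `rate`."""
    return fv / (1 + rate) ** years

# A lower future value means a lower present value:
print(present_value(500, 0.05, 20) < present_value(1000, 0.05, 20))

# A higher discount rate lowers present value ($1,000 due in 20 years):
for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%}: ${present_value(1000, rate, 20):,.2f}")

# A longer horizon also lowers present value ($1,000 at 5%):
for years in (10, 20, 40):
    print(f"{years} years: ${present_value(1000, 0.05, years):,.2f}")
```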
The case of the Environmental Protection Agency/Interagency Working Group (EPA/IWG)’s estimation of the social cost of carbon (SCC) illustrates the calculation’s high sensitivity to choice of discount rate. The Office of Management and Budget (OMB) guidance to regulatory agencies stipulates discount rates of 3 percent and 7 percent per year for benefit-cost analysis. The EPA/IWG, however, used rates of 2.5 percent, 3 percent, and 5 percent. The EPA/IWG settled on 3 percent as the best choice, but its omission of 7 percent was glaring to those who follow this regulatory issue. When Heritage Foundation researchers re-ran the models using the 7 percent discount rate, the SCC dropped by more than 80 percent in one of the models and actually went negative in the other.
How Is the Social Cost of Carbon Used by Policymakers?
The SCC is a benchmark to guide policymakers on the optimal or efficient amount of CO2 reduction. It is not intended to compel the elimination of all future climate damage from CO2 emissions. In the world of the economist, all or none is rarely the right answer. Instead, their answers revolve around optimal adjustments—giving a little here to get a little more there. The SCC is an exercise in comparison—comparing a monetized cost of today’s action with a monetized future benefit of today’s action. Some may find it totally illegitimate to express environmental impacts in dollar terms. These policymakers may feel it is illegitimate to compare values from market transactions to changes in sea level or temperature or other environmental changes. For them, social cost of carbon calculations and, perhaps, environmental economics altogether are irrelevant to their world view. In that world view, the SCC cannot be redeemed by reducing discount rates or even setting them to zero. Such discount-rate manipulation is simply an attempt to rig an economic model so that it produces a result pre-determined in the alternate world view. 
Policymakers who find the SCC a useful tool must estimate it using a discount rate that reflects the greatest, reasonably expected return on alternative investments. Economic efficiency across time is not achieved by providing future benefits at too high a cost or by providing too few future benefits for a given cost.
A Hypothetical Climate Example
Suppose emitting a ton of CO2 today will cause $2,000 of damage (adjusted for inflation) in the year 2116. How much should be invested to prevent this emission? In other words, how much should be invested today so that in 2116 the damage of the ton of CO2 is offset? Assume the best alternative to cutting CO2 is a no-maintenance tree farm. Suppose further that $1.15 worth of seedlings will grow to $1,000 worth of trees in 100 years. In this example, the value of the trees grows at a 7 percent compound rate. Discounting the $2,000 of CO2 damage for 100 years at 7 percent would give a present value of $2.30. That is, the SCC in 2016 is $2.30 (assuming, for simplicity, the damage is done only in 2116). In short, $2.30 worth of seedlings would create enough value in 2116 to offset the damage of the ton of CO2 emitted today. It would make little sense, then, to spend more than this for any other investment that produces only $2,000 of benefit in 2116. However, that is just what some, including the EPA/IWG, recommend. Using the EPA/IWG’s too-low 3 percent rate for discounting the climate damage gives a value of $104.07—roughly 50 times the value for the SCC obtained with the 7 percent discount rate. With this higher value as a guide, policies that cost $104.07 to cut CO2 emissions by one ton would pass muster, even though they provide no more benefit than planting $2.30 of seedlings. Table 1 illustrates how the future is shortchanged if discounting is done with 3 percent when 7 percent returns would be available. 
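The figures in this hypothetical are straightforward to reproduce. The short script below (Python; it mirrors the example's arithmetic, not the actual integrated assessment models) discounts the $2,000 of damage at both rates and then grows the $104.07 forward again:

```python
def discount(future_value, rate, years):
    """Amount that must be invested today at `rate` to grow to `future_value`."""
    return future_value / (1 + rate) ** years

damage = 2000   # dollars of damage in 2116 from one ton of CO2 emitted today
years = 100

scc_at_7 = discount(damage, 0.07, years)
scc_at_3 = discount(damage, 0.03, years)
print(f"{scc_at_7:.2f}")  # 2.30
print(f"{scc_at_3:.2f}")  # 104.07

# The comparison behind Table 1: invest the $104.07 at each rate for 100 years
print(round(104.07 * 1.03 ** years))  # ~2,000 -- just offsets the assumed damage
print(round(104.07 * 1.07 ** years))  # ~90,300 -- what a 7 percent alternative earns
```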
The chart shows how a 3 percent discount rate equilibrates the present and future values when the cost of cutting CO2 is $104.07 per ton. With a 3 percent rate of return, $104.07 in 2016 would grow to $2,000 in 2116. However, if the current generation undertook projects with a 7 percent return rate, the return would grow to a value of more than $90,300. What, then, is the best interest/discount rate to use?
Choosing the Right Discount Rate
Discounting is an opportunity cost exercise. The rate should reflect the best alternative return that an investment of the same size could reasonably be expected to generate. What, then, is the best reasonable return on investment? While one cannot predict what future rates will be, past rates of return on broad indexes are an excellent guide. The return on the Standard & Poor’s 500 from 1928 to 2014 was 9.60 percent. Over this time inflation was a compounded 3.1 percent. The real rate of return would be the difference—6.5 percent per year. Another source estimates the return for all stocks in the U.S. from 1802 to 2002 and gets the same 6.5 percent real return on capital. Yet another source calculates the real return on stocks between 1802 and 2002 to be 6.8 percent per year. These estimates reflect the returns after corporate income taxes are paid. Adjusting for corporate profits taxes increases these rates to between 7.5 percent and 9.9 percent. (See Appendix 2 for these calculations.) In any event, the 7 percent discount rate that is part of the Office of Management and Budget’s guidance does not seem too high. It is important to note that private investment differs in many ways from climate investment.
- The individuals making investments in climate improvement are not the same as those who receive the benefit of the investment.
- Climate improvement is a public good, which breaks the link between those who pay for the benefit and those who receive it. 
- There are not well-developed markets for climate amenities that can give accurate valuations for the benefits and costs.
These differences may be problematic, but they are not problems with discount rates. As a result, they cannot be resolved by arbitrarily lowering the discount rate in a benefit-cost analysis. Discounting is a critical component of cost-benefit analysis, especially when the costs and benefits occur at separate and temporally distant points. To be done properly, the discount rate should reflect the best rate of return that could reasonably be expected in capital markets. Over the past two centuries, the stock market in the U.S. has generated a return of more than 7 percent (after adjusting for the portion paid in taxes). Therefore, the 7 percent discount rate stipulated by the Office of Management and Budget for benefit-cost analysis seems very appropriate for use in analysis of climate policies.
—David W. Kreutzer, PhD, is Senior Research Fellow for Energy Economics and Climate Change in the Center for Data Analysis, of the Institute for Economic Freedom and Opportunity, at The Heritage Foundation.
Appendix 1: Compounding and Discounting
Most people are familiar with the concept of compound interest. For instance, if $100, the principal (present value or PV), were compounded at a 10 percent rate of interest, it would be worth $110 in one year. If none of the money were removed and the $110 were compounded for another year, the value at the end of this second year would be $121.00 (future value or FV). The value at the end of the first year is $100 x (1 + 0.10) = $110. The interest rate is expressed as a decimal fraction, 0.10 = 10%. The value at the end of the second year can be expressed as: $110 x (1 + 0.10) = $121. Another way of writing this is: ($100 x (1 + 0.10)) x (1 + 0.10) = $121; or $100 x (1 + 0.10)^2 = $121. 
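The $100/$121 example above can be verified, and run in reverse, with a couple of lines (Python, illustrative only):

```python
def future_value(pv, rate, periods):
    """Compound a present value forward: FV = PV * (1 + r)**n."""
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    """Discount a future value back: PV = FV / (1 + r)**n."""
    return fv / (1 + rate) ** periods

print(round(future_value(100, 0.10, 2), 2))   # 121.0  -- the $121 above
print(round(present_value(121, 0.10, 2), 2))  # 100.0  -- back to the principal
```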
The general formula for compounded future value is: FV = PV x (1+r)^n, where “r” is the rate of interest and “n” is the number of periods (typically years) over which the interest is compounded. In the example above PV = $100; r = 10 percent per year (written 0.10); and n = 2. Discounting is the inverse of compounding. For any amount in the future the present value (or discounted value) is the amount that, if invested today, would compound out to that future value. In the example above, the present value of $121 to be received two years from now discounted at 10 percent per year is $100. The general formula for compounding can be rearranged to give the general formula for discounting as: PV = FV / (1+r)^n.
Appendix 2: Rates of Return
Taxing corporate profits transfers the taxed portion of these returns on capital to the government. Because of that transfer, this portion of the return to capital is not reflected in the standard rate of return estimates. The average real rate of return on the Standard & Poor’s 500 from 1928 to 2014 was 6.5 percent. The average federal corporate profits tax for firms in the top income bracket (where most corporate profits are earned) from 1928 to 2014 was 38.3 percent. Not all profits are taxed at that rate nor does that rate include taxes at the state and local levels. The Bureau of Economic Analysis, however, did an in-depth study of the taxes paid by corporations in 1996. Its numbers indicate an overall effective profits tax of 30.8 percent. In 1996 the statutory federal rate for corporations in the top bracket was 35 percent. Assuming that the same 4.2-percentage-point differential between the federal rate and the effective overall tax rate holds in general puts the average corporate tax rate for 1928 to 2014 at 34.1 percent. Therefore, the before-tax profits are larger than the after-tax profits by a factor of: 100 / (100 – 34.1) = 1.52. That is, the rate of return to the country as a whole (as opposed to the return paid to stockholders) on the capital in the U.S. 
stock market for 1928–2014 was: 6.5 percent x 1.52 = 9.9 percent. While there may have been other forms of corporate taxes before 1909, there was no federal corporate profits tax before that date. The average rate for the top profits bracket (which was not always the highest marginal tax rate) for the years 1909–2002 was 32.4 percent. Subtracting 4.2 percentage points, to convert the federal nominal rate to an overall effective rate, yields 28.2 percent. Since the federal corporate profits tax was in effect for only 46 percent of the years between 1802 and 2002, the average corporate tax would be estimated by: 28.2 percent x 46 percent = 13 percent. The adjustment factor for this time frame is: 100 / (100 – 13) = 1.15. Using the lower of the two estimates for the rate of return on all U.S. stocks from 1802–2002 gives a rate of return to the country as a whole of: 6.5 percent x 1.15 = 7.5 percent. After including the value of corporate taxes, the return on stocks in the U.S. over long periods of time (1802–2002 in two cases and 1928–2014 in the other) ranges from 7.5 percent to 9.9 percent.
Appendix 3: The Logic (and Illogic) of Climate Investments
Some proponents of low discount rates argue that the forced investment in future climate amenities will come primarily at the expense of reduced consumption instead of alternative investment. If this were true, then the cost of reduced consumption is even greater than the cost of reduced investment. Why would people who consider a 7 percent return not worth sacrificing current consumption somehow think that a 3 percent return would be worth it? Appeal to social-welfare functions and theoretical measures of how much additional income would be worth in the future, however, turns the argument on its head. Arguing that climate policies sacrifice current consumption instead of current investment is nonsensical with respect to basic economics. 
Sacrificing current consumption for greater future consumption is how all investment is done. Requiring people to invest in future climate benefits by imposing climate regulations, but saying that it is unfair to compare that rate of return to rates from alternative investments, is tantamount to an investment advisor recommending a poorly performing fund. He may say it is illegitimate to compare the expected return to better investments because one is not saving for retirement in the first place. Yet who would want such advice? Anyone moving from not saving to saving will want the best return on the savings. In a similar vein, any investment for the future (whether voluntary or coerced) should be the one with the highest rate of return the investors could reasonably expect to earn. Biasing cost-benefit analysis so that investment is shunted to projects with lower rates of return is a poor strategy for helping those in the future.
b) What is the relative change in volume of the system (in other words, what is V2/V1)?
Ideal gas law: P1V1 = nRT = P2V2, since n and T are constant.
Therefore P1V1 = P2V2 and V2/V1 = P1/P2 = 1 atm / 0.5 atm = 2

c) What is ΔE in Joules?
ΔE = Cv ΔT = 0 because ΔT = 0

d) What is ΔH in Joules?
ΔH = Cp ΔT = 0 because ΔT = 0

e) What is Δ(PV) in Joules?
Δ(PV) = Δ(nRT) = nR ΔT = 0 because ΔT = 0

f) What is w in Joules?
w = -∫P dV = -∫(nRT/V) dV = -nRT ∫dV/V = -nRT ln(V2/V1)
w = -(1 mol)(8.314 J mol⁻¹ K⁻¹)((25 + 273.15) K)(ln 2) = -1718 J

g) What is q in Joules? Which way is heat transferred (to the system or out of the system)?
q = ΔE – w = 0 – (-1718 J) = +1718 J. Positive value means heat into the system.

You have to show your work. No partial credit for unsubstantiated answers!

This note was uploaded on 02/09/2012 for the course THERMO 306 taught by Professor Berthuiame during the Spring '11 term at Rutgers.
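As a quick sanity check on parts f and g, the arithmetic can be reproduced numerically (Python; all values taken from the problem above):

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
n = 1.0            # moles of ideal gas
T = 25 + 273.15    # isothermal temperature, K
P1, P2 = 1.0, 0.5  # initial and final pressure, atm

V_ratio = P1 / P2                    # V2/V1 from P1V1 = P2V2
w = -n * R * T * math.log(V_ratio)   # reversible isothermal work, J
q = 0.0 - w                          # ΔE = 0 for an isothermal ideal gas, so q = -w

print(V_ratio)   # 2.0
print(round(w))  # -1718
print(round(q))  # 1718
```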
“Lieut. Bradley sends word that he has counted 196 dead cavalrymen on the hills to the left; what appeared yesterday in the distance like buffalo lying down are dead troopers and horses.” So reads the journal of Edward J. McClernand, 2nd Lieutenant of the Montana Column. The scene he describes is the aftermath of the Battle of Little Bighorn. On the afternoon of June 25th, 1876, George A. Custer, Lieutenant-Colonel of the Seventh Cavalry, along with five of the regiment’s companies, faced a force of Sioux and their allies near a tributary of the Big Horn River. All of Custer’s forces perished, save for a single horse. The battle was part of the Sioux War, the outcome of the United States government’s failure to honor the Treaty of Fort Laramie, which granted territory in the Dakotas, Wyoming, and Montana to the Sioux nation. Special Collections houses a copy of Custer’s Last Battle by Brigadier-General Charles Francis Roe. Our copy was signed by Custer’s widow, Elizabeth Bacon Custer. She presented this copy to the late MU professor John Neihardt, whose entire library is now housed in Special Collections. The library is an especially rich source of Americana. Custer’s Last Battle (Rare JGN E 467.1 C99 R7 1927) presents the reports of Charles Roe and other veterans of the Sioux War, accompanied by photographs and maps.
This ever-expanding reference list provides background on a diverse spectrum of illustrators across time, cultures, and artistic styles.
- Hungarian-born children's book illustrator, commercial illustrator, and animator.
- Surrealist painter who created a new art form of interpretive landscapes and portraits.
- Illustrated Americans in everyday activities.
- Successful commercial illustrator and founder of the Famous Artists School.
- William Hanna and Joseph Barbera animated Saturday morning TV for generations of children.
- His long career encompasses story illustrations for pulp magazines, advertising, and historical depiction.
- A dedicated artist with a distinct personal vision, he quietly transformed the art of the book with unique imagery that defied convention.
- Prolific painter of pulp covers and film posters.
- An influential mid-20th century female illustrator known for her vibrant psychedelic-style art.
- Known for drawings and watercolors during the 1960s and 1970s, she embraced digital art in the 1980s.
- Best-known for his "Saturday Evening Post" covers, he depicted civil rights struggles for "Look" in the 1960s.
- Ross has revitalized classic superheroes into works of fine art.
- Illustrator of fairytale picture books, young adult novels, and related products.
- Award-winning illustrator and Founding Director of the MFA Illustration Practice program at MICA.
- Artist best known for his distinct style of comic book art.
- Best known for his gritty, urban scenes, and one of the famous Eight.
- A highly acclaimed humorous illustrator and animator.
- Illustrator, art director, and educator.
- Artist who created Coca-Cola's iconic Santa Claus.
- Disney animator who became Creative Producer for Hanna-Barbera.
Today we published a story and interactive news application revealing why the flood risk maps in effect across New York and New Jersey predicted Sandy’s flooding so inaccurately. Instead of the latest technology available, which would have painted a far more accurate picture of the risks for homeowners and flood planners, FEMA’s maps relied on a patchwork of technologies, some dating to the 1980s. For our project, we did an analysis of a few geographical data sets. In order to rank how well each county’s maps predicted Sandy's flooding, we compared the area of Sandy's storm surge as measured by the FEMA Modeling Task Force (specifically, the February 14, 2013 update) to the area of storm hazard zones in the effective maps in coastal New York and New Jersey counties. To make the calculation, we only considered land in zones starting in A or V. These are areas in which FEMA requires mandatory flood insurance. We wrote software to compare the areas of the two maps using the C++ OGR API. The software calculated the amount of overlap between the geographical areas — how well the maps predicted where Sandy would flood. There are some important caveats to note when comparing predictive flood risk maps and actual flood inundation maps. Floods like the ones that came ashore with Superstorm Sandy do not hit everywhere equally. Although FEMA estimates that flood zones starting with an A or a V carry a one percent risk of flooding in any given year, there were only certain places in which Sandy was actually a “100 year storm.” In some places, Sandy was a rarer flood and in other places, a more common flood. This explains some, but not all, of the variation in accuracy. 
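At its core, the overlap measure is a ratio of intersected area to observed flood area. The toy sketch below (Python, with axis-aligned rectangles standing in for real map polygons; the actual analysis used the C++ OGR API on full geometries) illustrates the idea:

```python
def area(rect):
    """Area of an axis-aligned rectangle given as (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = rect
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

def intersection(a, b):
    """Overlapping rectangle of a and b, or None if they are disjoint."""
    xmin, ymin = max(a[0], b[0]), max(a[1], b[1])
    xmax, ymax = min(a[2], b[2]), min(a[3], b[3])
    if xmin >= xmax or ymin >= ymax:
        return None
    return (xmin, ymin, xmax, ymax)

# Hypothetical extents in arbitrary map units
hazard_zone = (0, 0, 10, 10)   # A/V zones from the effective flood map
actual_surge = (5, 5, 15, 15)  # measured Sandy inundation extent

overlap = intersection(hazard_zone, actual_surge)
accuracy = area(overlap) / area(actual_surge) if overlap else 0.0
print(accuracy)  # 0.25 -> the map anticipated a quarter of the flooded area
```

With real map geometries, the same ratio comes from polygon intersection and area routines rather than rectangle arithmetic.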
Also, a FEMA official told ProPublica that the accuracy of the inundation maps may vary from location to location, but overall it represents a “very accurate overall depiction of the extent of flooding from Sandy.” Nevertheless, we believe, as do experts we spoke with, that there is such a striking difference in the accuracy of the flood risk maps -- look, for instance, at the gap in accuracy between New York’s adjoining Queens and Nassau counties -- that these differences were notable. Areas with newer maps using newer technology predicted the flood extents far more accurately overall. To find buildings that the maps updated in 2007 left out, ProPublica analyzed data from several New York City agencies, Preliminary Work Maps FEMA released in June and FEMA’s Sandy structure damage assessments. We counted 9,503 buildings damaged during Sandy that are included in FEMA's 2015 preliminary risk maps, still in the works, but not in the 2007 risk maps, which were in effect when the storm hit. Of those buildings, 398 were built or altered in or after 2007 -- when more accurate flood maps, had they been ready, could have alerted owners and builders to the dangers of building in those areas. This is the subset we used to find affected homeowners to interview for our story. We chose buildings that fell into the following categories according to the US Army Corps of Engineers and FEMA: “affected,” “major,” “minor” and “destroyed” (see their methodology). To find the subset of these buildings that were built or altered in or after 2007, we restricted our query to buildings the city's assessment roll specified had “year built” or “year altered” values of 2007 or higher. We wrote software using the C++ OGR API to find these buildings. We’ve made the scripts and the data they rely on available to download. We welcome further analysis and research. If you see any problems or ways we could have done our analysis better, please let us know by emailing email@example.com.
In general, Geoengineering methods could offer a way to stop climate warming. Whether through CDR or SRM, both approaches offer possible solutions. But at what price? Most of the solutions carry high risks or uncertainties, because most of them are still just theories. Only a few scholars actually believe in Geoengineering, and most of them only on the condition of more research. For example, Marchetti (1977) recommends CO2 storage underground as a temporary solution in a worst-case scenario. Some other scientists support this idea as well, but mostly on the condition of first trying to reduce CO2 in a ‘normal way’. ’Climate geoengineering is best considered as a potential complement to the mitigation of CO2 emissions, rather than as an alternative to it.’ (Lenton and Vaughan 2009: 5556) This represents quite well the general opinion on Geoengineering in the scientific world. Moreover, the idea of CDR is far more popular than SRM, because of its lower risks and greater controllability. (Lenton and Vaughan 2009) According to Keith (2000), there are not just natural risks but social aspects as well. Take politics: it is very likely that not every country would want to go along with geoengineered solutions. A few countries would therefore make a decision that affects everybody, and the questions of security, sovereignty and liability this raises could lead to international conflicts. Furthermore, Keith (2000) underlines the problem of environmental ethics. He stresses three main problems that would arise from using Geoengineering: 1. If we do it once, we will do it again. 2. Rather than address the causes, we are just trying to fix the symptoms. 3. We would be playing god in a system we barely understand. He ends his essay with a statement whose meaning I completely support. 
‘Humanity may inevitably grow into active planetary management, yet we would be wise to begin with a renewed commitment to reduce our interference in natural systems rather than to act by balancing one interference with another.’ (Keith 2000: 280) Another good, critical article by Robock (2008) gives 20 reasons why Geoengineering may be a bad idea (which, incidentally, is also the article's headline). He concludes by warning about the risks of Geoengineering, but at the same time encourages more theoretical research in this area. Moreover, he sees the problem of increased atmospheric CO2 mainly as one of bad politics. ‘If global warming is a political problem more than it is a technical problem, it follows that we don’t need geoengineering to solve it.’ (Robock 2008: 18) On account of that, my next post will focus on the political opinion concerning Geoengineering, and in particular the importance of this topic in current international environmental discussions such as the upcoming COP21 conference in Paris. To conclude, most scholars are not convinced by the idea of Geoengineering. There is a lack of knowledge, and uncertainties that should be filled by more theoretical research before we try it on our environment.
As more Americans are hopping on bikes, it’s no surprise that more cyclists are getting injured. In the U.S., there’s been a 120 percent bump since the late 1990s in hospital visits due to bike crashes. And more than 800 riders died in car-on-bike incidents in 2015, averaging out to about two fatal wrecks each day. What is less evident, though, is that on a case-by-case basis the costs of these incidents are increasing. While an adult rider who suffered a serious (but nonfatal) crash in 1997 might expect it to cost roughly $52,495—including medical bills, missed work, and loss of quality of life—the inflation-adjusted price tag grew to $62,971 in 2005 and a whopping $77,308 in 2013. That’s according to a new paper in Injury Prevention revealing that the total costs of bike injuries in the U.S. have risen an average of $789 million yearly since the late ‘90s, reaching a sky-high $24 billion in 2013. “Our overall message is to remember that the health benefits of cycling certainly outweigh the potential drawbacks. Many, many people cycle every day injury-free,” says Thomas Gaither, a study coauthor and medical student at the University of California, San Francisco. “However, our hope is that by quantifying these costs it will help to spur discussion and policy surrounding infrastructure for safe cycling.” What’s behind the worsening consequences of eating face? Age has something to do with it: The pool of cyclists in the U.S. is turning grayer by the year, with the number of miles traveled by bike annually by folks 45 and older increasing from 1.9 million in 2001 to 3.6 million in 2009, according to Gaither and his compatriots at the Zuckerberg San Francisco General Hospital and Trauma Center, Maryland’s Pacific Institute for Research & Evaluation, and elsewhere. In some medical circles, being over 39 is considered a risk factor for incurring a life-altering cycling injury. 
All the fun stuff that can happen to your noggin in a bike wreck, such as intraventricular bleeding and subdural hematomas, occurs more frequently in older populations. If you’re over 55, meanwhile, you have double the chances of dying when hit by a car compared to a younger person. “Older patients not only require longer recovery periods,” says Gaither, “but they also are more susceptible to more-severe injuries and have more medical comorbidities, which drive hospital costs.” Where you ride also influences the costs of wrecking, with urban environments seeming more prone to costly crashes. In the past, says Gaither, a lot of bike accidents arose from non-street incidents. But as people continue to flock to cities and pedal in high-traffic, often chaotic avenues, more are experiencing high-impact vehicle collisions requiring longer hospital stays. According to the study, costs stemming from crashes on streets and highways have risen about 0.8 percent every year since the late ‘90s. “We found that along with increasing costs, crashes on urban streets have increased,” he says. “Cycling is becoming more popular in urban areas and may be a reason for the increased cost. Certainly, crashes in urban areas may be more severe as they are more likely to encounter motor vehicles.” There is a bit of silver lining this grim research-wad, however. And that’s that injury costs per mile ridden in the U.S. have dropped from $2.85 in 2001 to $2.35 in 2009. That basically means that miles ridden are increasing faster than costs due to injuries, says Gaither. “This is a bit of good news,” he says. “It definitely coincides with bicycle helmet laws in many states and with the general trend of helmet use in the U.S. We know that head injuries are extremely costly and this may be a reason for the slight decline.”
Home Medications and Treatments for Headaches
Headaches are some of the most common afflictions from which people suffer; more than 45 million Americans suffer from chronic, recurring headaches. A headache is pain or discomfort in the head, scalp, or neck. Headaches are often nature's warning that something is wrong somewhere in the body, although they do not necessarily indicate that there is a serious disorder. Most people with headaches can feel much better by making lifestyle changes, learning ways to relax, and occasionally by taking medications.
Common Symptoms of Headaches:
- Pain in the generalized area of the head and neck
- Nausea and vomiting
- Sensitivity to light and sound
- A general feeling of discomfort
Top three classifications for common kinds of headaches:
- Tension headaches are the most common and are characterized by a tightening around the head and neck, and a dull ache.
- Cluster headaches come all at once and then disappear for a period. These affect one side of the head, usually around the eye.
- Migraines are severe pain on one side of the head. The pulsating and throbbing pain is often worsened by exertion.
The common causes of headaches are:
- Not getting enough sleep
- Emotional stress
- Hormonal changes
- Eye strain
- High blood pressure
- Food additives
- Caffeine withdrawal
- Low blood sugar
- Nutritional deficiency
- The presence of poisons and toxins in the body
There are several remedies for various types of headaches. Lemon is beneficial in their treatment. The juice of three or four slices of lemon should be squeezed into a cup of tea and taken by the patient; it gives immediate relief. The crust of lemon, which is generally thrown away, has been found useful in headaches caused by heat. Lemon crusts should be pounded into a fine paste in a mortar and applied as a plaster on the forehead. Applying the yellow, freshly pared-off rind of a lemon to each temple will also give relief. 
Apples are valuable in all types of headaches. After removing the upper rind and the inner hard portion of a ripe apple, it should be taken with a little salt every morning on an empty stomach. This should be continued for about a week. The flowers of henna have been found valuable in headaches caused by the hot sun. The flowers should be rubbed in vinegar and applied over the forehead. This remedy will soon provide relief. Cinnamon is useful in headaches caused by exposure to cold air. A fine paste of this spice should be prepared by mixing it with water, and it should be applied over the temples and forehead to obtain relief. The herb marjoram is beneficial in the treatment of a nervous headache. An infusion of the leaves is taken as a tea in the treatment of this disorder. The herb rosemary has been found valuable for headaches resulting from cold. A handful of this herb should be boiled in a liter of water and put in a mug. The head should be covered with a towel and the steam inhaled for as long as the patient can bear. This should be repeated till the headache is relieved. 
Sample meal and recipe ideas for headache sufferers:
- Non-citrus juice such as apple, pear, or peach
- Whole grain, calcium-fortified cereal topped with skim milk or soy milk and fresh berries
- Scrambled eggs (purchase eggs high in omega-3 fatty acids), or add in some fresh cooked or canned salmon and fresh herbs such as basil or cilantro
- Fresh blueberry muffin or toasted whole grain bread
- French toast, such as Seattle Apple French Toast (using skim milk)
- Low-fat vegetable cottage cheese in a whole-wheat pita with lettuce or sprouts
- Homemade soup that doesn't contain prohibited foods, such as Asparagus and Sesame Chicken Soup (substituting cider vinegar for the rice wine vinegar)
- Calcium-fortified juice
- Tuna salad sandwich on whole grain bread with lettuce
- Strawberry Sports Shake
- Pasta stir-fry, such as Linguini Honey-Sauced Prawns
- Garlic bread sticks
- Fresh fruit salad
- Broiled fish, such as salmon or tuna
- Microwave Rhubarb Crisp
- Gingered Pork and Peaches (made without the lemon juice or peel)
- Mixed green salad
- Cinnamon-Scented Raspberry Rice Pudding

Lifestyle changes can reduce the chances of getting a headache and reduce the frequency of headaches.
- Recognize what causes your headaches and avoid or reduce these conditions.
- Cut down on alcohol and nicotine consumption.
- Manage stress by practicing relaxation techniques such as yoga, deep breathing, and meditation.
- Exercise on a regular basis.
- Get an adequate amount of sleep each night.
- Maintain a healthy diet.

Hot foot baths are also beneficial in the treatment of chronic headaches. The patient should keep his legs in a tub or bucket filled with hot water at a temperature of 40°C to 45°C for fifteen minutes every night before retiring. This treatment should be continued for two or three weeks. Yogic kriyas like jalneti and kunjal; pranayamas like anulomaviloma, shitali, and sitkari; and asanas such as uttanpadasana, sarvangasana, paschimottanasana, halasana, and shavasana are also beneficial in the treatment of headaches.
WebMD Medical News
Reviewed by Louise Chang, MD

Oct. 12, 2012 — Don’t have time to exercise? That excuse no longer works. Increasing evidence, including new research presented this week, shows that even short workouts that include surges of very high intensity can boost fitness and potentially shrink the waistline. In the new study, exercise physiology graduate student Kyle Sevits of Colorado State University and his team demonstrated that a mere 2.5 minutes of giving it your all on an exercise bike can burn up to 220 calories. That doesn’t mean that you can do an entire workout during a commercial break. Instead, those 2.5 minutes should be divided into five 30-second sprint intervals, each followed by a four-minute period of light, resistance-free pedaling. All told, that is less than 25 minutes, during which you will burn more calories than if you did 30 minutes of moderate cycling. “You burn a lot of calories in a very short time,” says Sevits. “Nearly all the calories are burned in those 2.5 minutes; you burn very few during the rest period.” He also points to additional benefits that come from interval training, including increased insulin sensitivity and glucose tolerance, both of which are important for overall good health. “This kind of research could help motivate people to get fitter and burn more calories,” says Heather Gillespie, MD, a sports medicine specialist at UCLA Medical Center in Santa Monica. She was not involved in the research. “It’s a very small study, but it’s very promising and adds more evidence to the benefits of interval training.”

Sprinting in the Lab

For the study, Sevits and his colleagues recruited 10 healthy men with an average age of 25. For three days, the recruits prepared for the study by eating a strict diet based on their caloric needs so that the researchers could be sure they were neither overfed nor underfed. Then they were checked into the lab.
The rooms where they spent the next two days were outfitted with equipment that allowed the researchers to measure the number of calories each recruit burned during their stay. They stuck to the same diet while they sat in front of the computer or watched movies. On one of the days, though, they had to exercise. The sprint interval workout went like this: After a two-minute warm-up came 30 seconds during which each man had to pedal as hard and fast as he could against high resistance. Four minutes of relaxed riding followed. Then, he went all out for another half minute. All told, the participants each did five bursts in which they pushed themselves to their limits. They each burned approximately 220 calories for their efforts. Previous studies have shown that high-intensity interval training such as this can aid the heart, both in healthy people and in those already suffering from heart disease. But while its health benefits may be established, its effect on calories has been far from clear, according to the authors. This study provides preliminary evidence that this kind of exercise may help maintain a healthy weight and, potentially, help shed pounds.

Do Try This at Home — With a Bit of Caution

Gillespie says that, like any workout, sprint interval training comes with caveats. “Everybody’s 100% is different,” she says, so people should know their limits. “I want people to move, but I also want to prevent injury.” She points out that interval training on a stationary bike is a low-impact exercise, which means it’s easier on the joints. People should be more cautious with higher-impact exercises, like running, especially if they are overweight or obese. Gillespie also cautions that no one should try to cram their workout into just a couple of minutes. “You can’t sustain that high intensity for 2.5 minutes, and the rest period is just as important as the workout,” she says.
“If you want, you can always check your email during those four minutes.” When it comes to reaping the benefits of interval training, Sevits says people face some significant hurdles. “The biggest barriers are the difficulty of this type of exercise and maintaining the commitment to do it,” says Sevits. He says that working with a personal trainer, who can encourage their clients to really push themselves, may be a way to go. “That kind of coaching can be really motivating,” he says. Beginners, Sevits continues, should ease into interval training. “First, build up your endurance, confidence, and comfort on whatever machine you have chosen before you start to really push yourself, then toss a few sprints into your regular 30-minute workout.” And if you find yourself struggling to maintain your max for those 30-second sprints? Don’t sweat it too much. “In reality, there’s a whole continuum of benefits to reap as you get closer to your max,” says Sevits. The study was presented in Westminster, Colo., at a joint meeting of the American Physiological Society, the American College of Sports Medicine, and the Canadian Society for Exercise Physiology. These findings were presented at a medical conference. They should be considered preliminary as they have not yet undergone the “peer review” process, in which outside experts scrutinize the data prior to publication in a medical journal.
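The article's timing figures are easy to verify: five 30-second sprints total 2.5 minutes of all-out work, and with the two-minute warm-up and four minutes of easy pedaling after each sprint, the whole session comes to 24.5 minutes, under the stated 25. A quick arithmetic sketch (the variable names are mine, not from the study):

```python
# Sprint-interval session from the article: 2 min warm-up, then
# 5 rounds of (30 s all-out sprint + 4 min easy pedaling).
WARMUP_MIN = 2.0
ROUNDS = 5
SPRINT_MIN = 0.5
RECOVERY_MIN = 4.0

sprint_total = ROUNDS * SPRINT_MIN  # total all-out work
session_total = WARMUP_MIN + ROUNDS * (SPRINT_MIN + RECOVERY_MIN)

print(f"all-out work: {sprint_total} min")    # 2.5
print(f"whole session: {session_total} min")  # 24.5, under 25
```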
Second and Third Grade finished up their positive and negative space projects last week with some interesting results. We talked about the final products and discussed how the magazine strips in the positive space draw more attention when they have brighter colors and more details, and how the negative space (the silhouette in this case) is easier to identify when it has a lot of defined edges with less details. Scissors, for example, were easy to recognize, but a koala in a tree was more difficult. Before moving on to the next project, we also took a day to paint some ornaments for the school’s entry for the festival of trees. This week, we moved on to lessons on landscapes and how the painting style varies by time period and location. We looked at American landscapes versus Japanese ones, and talked about the different subject matter for each region.
Qualitative observation in science is when a researcher subjectively gathers information, focusing more on differences in quality than on differences in quantity, and usually involving fewer participants. Qualitative observation aims to bring out the intimate details about each participant and is conducted on a personal level, so that participants come to confide in the researcher. When participants feel comfortable with the researcher and confide in him or her, the researcher is able to get the information needed to make concrete observations. Most qualitative observational studies take place in a natural setting, such as a public place, and ask participants to answer questions in their own words. Qualitative observations and studies are usually done by social scientists, psychologists, and sociologists with the goal of better understanding human and animal behavior. Quantitative observation, on the other hand, is an objective gathering of information. It focuses on things such as statistics, numeric analysis, and measurements. Quantitative observation typically measures things such as shapes, sizes, color, volume, and numbers, looking for differences between the test subjects. It is the most commonly used observational method except in the social sciences, where qualitative observation is more common.
The people who take the time to catalog such things report that there are about 900 species of wildflowers in Grand Teton National Park. That’s good to know, because most of us are never going to look down while we are here: We are going to be looking up. The Grand Teton range is a relentlessly spectacular, 40-mile-long series of serrated peaks. Jutting dramatically from the broad Jackson Hole (pioneers’ term for a valley), the Tetons may be North America’s most impressive mountain panorama. To stand awhile gazing at them is to ponder mankind’s tentative position in the planet’s scheme. As with Yellowstone, just a few miles to the north, this park is the result of massive geologic activity: About 9 million years ago, two huge slabs separated, one rising to fashion the mountains, one dropping to form the valley. While the tallest peak, Grand Teton, soars to 13,770 feet, it has to vie for attention with 11 partners that top 12,000 feet. Their jagged, gray granite faces are laced with patches of snow and with glaciers. Trees seem to quit their climb early on these slopes; even the valley’s green carpet abruptly halts to let the mountains rise.

Awesome yet approachable

But the Tetons can be approached and even scaled: There are more than 200 miles of hiking trails that wend around the sparkling lakes and up into the mountains. For instance, you can circle pretty Jenny Lake in just six miles or take a turn-off at the south end to find the aptly named Hidden Falls, whose sound reaches the hiker’s ears long before the waterfall appears through the trees. Two paved roads run north and south through the park, roughly parallel to the mountains on the west, and there are enough scenic overlooks to fill even a big memory card. But for a languid look at the Tetons, get aboard one of the popular raft-floats on the Snake River, flowing about 6-8 miles from the mountains.
The trip is calm, and the young people handling the steering oars are full of history, corny jokes, and naturalist lore. They are also quick to point out the eagles, ospreys, waterfowl, wading birds, and beavers’ lodges on the river and its shores.

When people lived here

While several Indian tribes had migrated regularly through the flat valley, the first white settlers brought cattle herds here in the late 19th century. Just a trace of this pioneering effort remains, so it’s worth a stop at the Cunningham Cabin Historic Site, on the eastern edge of the park. Pierce Cunningham had led the effort to have the area proclaimed a national park, which came to pass in 1929; more land was added in 1950, bringing the park to 485 square miles. Another remnant is the Menor’s Ferry Trail, where a half-mile path takes visitors to look at homesteading ways, including a replica of a turn-of-the-century ferry across the Snake. Close by is the 71-year-old Chapel of the Transfiguration, a tiny church that features a special backdrop to its altar: a picture window showcasing the Tetons. Horseback rides, lasting from an hour or so to overnight camping trips, are a special way to enjoy the back country, or you can pedal your bicycle along the paved roads – no bikes allowed on the trails. For a brief foray on the water, check at the Colter Bay Visitor Center for the breakfast and dinner trips to an island in big Jackson Lake. The grilled steaks taste special amid the natural splendor. The wildlife enhances the meals: Rare sandhill cranes shatter the stillness as they call from their nesting area, and white-tailed deer prance by the picnic tables. Back at the Colter Bay Visitor Center, make time to visit the well-done Native American art exhibit, where creativity and craftsmanship are the focus. The center also shows films on wildlife and on Native American history. Best of all, when you step back outside and turn around, those marvelous mountains are there, defining the horizon and encouraging you to dream.
If you go Grand Teton National Park is on the western edge of Wyoming, just north of the city of Jackson, which has commuter plane service. The park is open year-round, but visitor centers and concession services in the park close in the late fall through the winter. Snowshoe and snowmobiling trips are available in the winter. For information about Grand Teton National Park, call (307) 739-3300 or go to www.nps.gov/grte/index.htm. The park has five campgrounds with 865 sites, and five hotels that offer rooms and rustic cabins. For information on accommodations, contact the Grand Teton Lodge Co., (800) 628-9988 or go to www.gtlc.com. For lodging in Jackson, a few miles to the south, go to the Jackson Hole Chamber of Commerce site, www.jacksonholechamber.com/lodging/hotels-motels-lodges.php.
Diabetic gastroparesis is the medical term for an uncomfortable condition in which the stomach takes too long to empty its contents. Gastroparesis is usually caused by damage to the vagus nerve, which is responsible for moving food through your digestive system. When the nerve doesn’t send signals to the stomach to empty, or the signals don’t get through, a number of symptoms can result. Heartburn, nausea, weight loss, abdominal bloating, stomach muscle spasms, and vomiting undigested food are all common problems associated with diabetic gastroparesis. Other conditions can also trigger gastroparesis. These include anorexia nervosa, stomach surgery that affects the vagus nerve, certain medications (anticholinergics and narcotics), reflux disease, smooth muscle disorders (including amyloidosis and scleroderma), hypothyroidism, and Parkinson’s disease. Diabetics often suffer from gastroparesis because high blood glucose levels can damage the vagus nerve over time. High blood glucose seems to trigger adverse changes in many nerves in the body, including chemical changes which damage the blood vessels that carry essential oxygen and nutrients to nerves. So how do you protect yourself from gastroparesis, especially if you’re diabetic? You might want to consider trying acupuncture, which has shown favorable results in clinical trials. One recent trial compared the effectiveness of domperidone (a common medication used to treat gastroparesis) and acupuncture for the treatment of diabetic gastroparesis. Patients diagnosed with both diabetes and gastroparesis were recruited for the study. The patients were given a 20 mg dose of domperidone four times a day for 12 weeks. The researchers then stopped the medication and gave the patients a two- to three-week washout period. This was followed by biweekly acupuncture treatments for eight weeks. The researchers then compared the effectiveness of both treatments.
They found that there were no changes in blood glucose or symptoms of gastroparesis in the domperidone treatment group. Conversely, acupuncture treatments resulted in a decrease in all symptoms associated with gastroparesis as well as improved overall quality-of-life scores. In another clinical trial, researchers investigated the effects of electroacupuncture on gastric emptying time and blood glucose levels in 19 diabetic patients with gastroparesis. Participants were randomized to receive either placebo treatments or four sessions of acupuncture spread over two weeks. The researchers found that gastric emptying time significantly improved in nine patients receiving the acupuncture treatments. In contrast, gastric emptying time did not improve in any of the patients in the placebo group. Symptom severity improved significantly in the acupuncture group—both at the end of the trial and at follow-up two weeks later. The research team concluded that short-term electroacupuncture reduces symptoms associated with gastroparesis. If you’re diabetic and having trouble getting your digestive system to work properly, acupuncture may be an alternative treatment you can turn to. Talk to your healthcare provider about finding a professional in your community.
- Miller, N., et al., “Benefits of acupuncture for diabetic gastroparesis: a comparative preliminary study,” Acupunct Med, December 9, 2013.
- Wang, C.P., et al., “A single-blinded, randomized pilot study evaluating effects of electroacupuncture in diabetic patients with symptoms suggestive of gastroparesis,” J Altern Complement Med, September 2008; 14(7): 833-9.

Richard M. Foxx, MD has decades of medical experience with a comprehensive background in endocrinology, aesthetic and laser medicine, gynecology, and sports medicine. He has extensive experience with professional athletes, including several Olympic competitors. Dr. Foxx practices aesthetic and laser medicine, integrative medicine, and anti-aging medicine.
He is the founder and Medical Director of the Medical and Skin Spa located in Indian Wells, California, at the Hyatt Regency Resort. Dr. Foxx is certified by the National Board of Medical Examiners; he is a member of the American Academy of Anti-aging Medicine, the American Academy of Aesthetic Medicine, and the International Academy of Cosmetic Dermatology, and a Diplomate of the American Board of Obstetrics and Gynecology.
In people with dementia, exercise may significantly improve cognitive functioning and the ability to perform activities of daily living (ADLs), according to an updated research review from the University of Alberta in Edmonton. Researchers are motivated to find ways to treat or to slow dementia’s progress, since rates of the disease are expected to rise exponentially along with the aging population. Twelve new studies were included in the current review, bringing the number of participants who completed the trials to 798 (compared with just 208 in the original four studies reviewed in 2008). While research findings are positive for improvement in cognitive function and performance of ADLs, no improvement was found for those with mood issues, such as depression. “Clearly, further research is needed to be able to develop best practice guidelines to enable healthcare providers to advise people with dementia living at home or in institutions,” said lead study author Dorothy Forbes, associate professor of nursing at the University of Alberta in a Wiley news release. “We also need to understand what level and intensity of exercise is beneficial for someone with dementia.” The review of studies appeared in the Cochrane Database of Systematic Reviews (2013; 12, Article CD006489). PHOTOGRAPHY: Jeff Marchant
3. Ecology And Climate

The earth’s remaining forest cover is being destroyed by human exploitation at an almost unbelievable rate: about 50 million acres a year, or nearly 100 acres every minute. Trees are cut down by hungry people to get fuel or a few more crops off demineralized jungle soils, and the lumber business takes its own heavy toll. Our forests and jungles must be saved. Our rain forests have been called “the lungs of the earth”, because so much of the earth’s life-giving vegetation is contained in them. Atmospheric carbon dioxide is expected to rise 50% above “normal” levels in the next decade. Many jungles are now living off the minerals in the decaying wood of dead trees, but they are usually in areas of high rainfall, and if minerals are added to the decaying organic matter, the trees will increase their growth rate and be immensely valuable in taking up and storing carbon from the atmosphere. When water evaporates, oxygen is released into the air. Photosynthesizing plants are also a source of oxygen; leaves of the trees absorb carbon from the air and produce oxygen, releasing it into the air. We are disturbing the whole oxygen-carbon dioxide balance of our biosphere with our unwise activity. The Volkswagen Foundation has about 300,000 acres of former virgin forest land in Brazil that is now used for an expanding cattle export operation involving deforestation at an average of 13,000 acres per year (Grainger, 1980). Weyerhauser Corporation has 6,000 square kilometers of timber concessions in the fragile rain forests of Indonesia (Myers, 1979). If the jungles are not saved, John Hamaker says we have no chance at survival, and they cannot be saved unless the croplands of starving people are remineralized. Rain forests have been virtually eliminated from most parts of West Africa, Southern Asia, and the Caribbean.
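As a quick unit check on the annual figure above (simple arithmetic of my own, not from the source): 50 million acres a year, divided over the 525,600 minutes in a year, works out to roughly 95 acres lost every minute.

```python
# Convert the chapter's annual deforestation figure to a per-minute rate.
ACRES_PER_YEAR = 50_000_000
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

acres_per_minute = ACRES_PER_YEAR / MINUTES_PER_YEAR
print(f"about {acres_per_minute:.0f} acres lost per minute")  # ~95
```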
The world’s forests are also affected by climatic extremes, soil degeneration, insects, diseases, worsening climate, air pollution, and acid rains. Fires also ravage our forests, especially in dry seasons and times of drought. As more forests burn, a cycle of destruction takes place, because forest fires contribute to adverse conditions that, in turn, accelerate the destruction of more forests. In forest fires, not only are precious trees lost, but destruction occurs on all these levels:
- climatic stress (including record heat and drought)
- when trees burn, carbon dioxide increases in the atmosphere, so pollution—and acid rain—are increased (they’re already caused by burning fossil fuels and by auto/vehicle and industrial exhausts/emissions)
- deforestation and spreading deserts
- chronic insect and/or disease epidemics

Data on tropical forest fires is scarce, but it is reported that the nutrient-poor soils and highly carbonaceous (mineral-poor) vegetation there burn quickly when moisture is withheld for a time. Wide-scale drought and acid rains not only lead to the destruction of forests; they can also lead to more tropical forest fires. At present rates of human deforestation and desertification, most researchers say these forests are scheduled for virtual extinction in 15-30 years. The April 1961 American Forests magazine warned of the explosive fire situation building up in U.S. forest lands—this was already 23 years ago. [Table: average number of U.S. forest fires per year — data not preserved in this reproduction.] “War Technology Comes to the Forests”, by J. A. Savage, was printed in Friends of the Earth’s Not Man Apart (December 1980), and described how the U.S. Forest Service is adapting technologies used in Vietnam to “modern” silviculture. In addition to the arboricide Agent Orange, flame-throwers and bombs of napalm-like jelly are used to achieve a “clean burn” of all the “debris” left after clearcutting. With these methods, no “slash” (from the slash-and-burn technique) is left, only “charred dirt”.
I assume their “clean” overlooks the damage to the environment and the toxicity of the chemicals involved. In 1984, nationwide publicity about Vietnam veterans who had been exposed to Agent Orange revealed its effects in victims and their children; I hope the U.S. Forest Service isn’t making more victims. American Forests (March 1969) reported that, over a few years, trees of all varieties were dying in a tract of forest in the Adirondack Mountains, except for hemlock and tamarack. Insects that attacked the trees multiplied greatly in the same span of time. What happened to this forest land is happening in all of the forests and jungles. The last of the minerals have come up in the forest lands, as in the croplands. Over the last 30 to 60 years, the finer fraction of used rock has been turned into subsoil, greatly reducing the surface area and, therefore, protoplasm production. Because these compounds build health and resistance to disease and insects, the trees become easy prey to parasites. Acid rain (heavy in the northeastern states) has wiped out the last of the carbonates, resulting in excessive acidification of the soil. The lakes of that region have also been acidified. When the pH of water and soil drops below about 5.5, it begins to kill off various kinds of microorganisms. Only a few acid-tolerant organisms can survive, and only a few acid-tolerant trees and plants can survive on the poor quality and quantity of protoplasm which the soil provides. No amount of pesticides can stop this dying in a forest—only immediate aerial remineralization can save what’s left of it. In September 1961, W. Schwenke presented a paper on “Forest Fertilization and Insect Buildup”. The paper described work done over the previous nine years at the Institute of Applied Zoology at the Forest Research Center, Munich, Germany.
The work was based on the observation that forest parasites had greater population density on poor forest soil than on more fertile forest soil, and on the observation that forest soils can be improved by fertilization. They used 1/2 to 1 1/2 tons per acre of limestone plus a light application of NPK. This minimal soil remineralization cut parasite populations by 30 to 50%. On some of the soils the effect was still observable nine years after the application. The increase in growth rate produced a value that far exceeded the cost of fertilizing the soil. Limestone probably supplies a broader range of elements to support living organisms. This was shown by the observed fact that the lasting effect of the fertilization depended on the minerals that were in the soil before fertilization. Severe deterioration of tree foliage and declining tree growth are also being observed throughout the Ohio Valley (AP news, April 16, 1984). The damage is a result of air pollution more acidic than the acid rain believed to be destroying freshwater life in the Northeast, according to a scientist who studies the valley trees. Dr. Orie Loucks said the decline can best be explained by the cumulative impact of over 20 years of stress from a combination of air pollutants. One important pollutant was the sulfate emitted from power plant and factory smokestacks. The acidity of the sulfate particles exceeds that of battery acid, he said. The major difference between the air quality of the Ohio Valley and that in the Northeast, he said, is that the sulfate content of the air is significantly higher in the Ohio Valley region (which includes Ohio, Indiana, Pennsylvania, West Virginia, and Kentucky). For some years forest deterioration has been reported in parts of the Northeast and other areas of the world—now Loucks has found that tree damage may be even more severe in the Ohio Valley, where there is a heavy concentration of coal-burning power plants that lack devices to clean emissions.
The region is also believed to be the source of much of the acid rain now falling over the Northeast and Canada. States in the Ohio Valley have been resisting legislation aimed at curbing acid rain through programs requiring modifications of power plant smokestacks, because such measures would mean higher costs for public utilities—but it’s now obvious that the cost to life is far greater in the long run. In August 1984, New York became the first state to pass a law to curb acid rain, with legislation designed to reduce smokestack emissions 30% in the next decade. State environmental officials said the cost of the program, including pollution control devices, would add from $2.40 to $4.80 to the monthly utility bill for the consumer by 1991. Which of us would not gladly forfeit the price of a movie or a few magazines if it would mean better air quality for everyone? As we said, acid rain also comes from sulfur dioxide from lignite and coal-burning power plants and nitrogen oxides from auto exhausts and factories. It changes chemically in the atmosphere before falling to earth, killing freshwater life and damaging crops and forests. Acid rain has destroyed fish populations in 200 lakes in the upstate Adirondacks (many lakes have become so acidic that no life can exist in them) and, as we said, has damaged millions of acres. Congress must adopt legislation to require a nationwide reduction of 10,000,000 tons of emissions. Lewis and Grant (Science, 1/11/80) also present some frightening statistics. On the Colorado section of the Continental Divide, where there is very little industrial pollution in the direction of the prevailing wind, the pH of all precipitation still dropped from 5.43 to 4.63 in just three years. Neutral pH is 7.0. Hamaker says that since the CO2 curve is almost vertical at the year 1995, we can go back 20 years to 1975 for the start of the 20-year critical period (to be mentioned in a moment) and not be off by more than a couple of years.
The pH then must have been about 6. Acid rain occurs “naturally” in some places—in the Canadian arctic, natural fires in exposed lignite coal beds produce tremendous amounts of sulfur oxides. These chemicals fall to earth, rendering nearby lakes as acidic as lemon juice. Studies of the Greenland ice cap show that acidic depositions on the earth’s surface have been rising since the beginning of the industrial age, with the greatest increase occurring since the 1940s. Central Europe seems hardest hit. Forests are dying throughout Czechoslovakia, Poland, and East Germany. In West Germany, 3,700 acres of woodland died from 1978-1983, and 200,000 acres were seriously damaged, the most vulnerable being dense, pure stands of conifers between 20 and 40 years old that will probably not survive another 10 years (Bernhard Ulrich, German biochemist, 1983). Mr. Ulrich estimates that almost 5,000,000 acres of German forest soils are at the threshold where toxic aluminum will begin its lethal work. Industrial emissions drift from England to Scandinavia. The industrial Ruhr and Rhine area in Germany affect most of central Europe, and Russia (the largest burner of sulfur-bearing fuels) is also polluting Finland. America’s industrial Midwest helps render the rain acidic in virtually every state east of the Mississippi; much of the Midwest’s emissions join those from Canada, acidifying eastern Canada and threatening its fish and forests—two of its chief resources. In the U.S., only some of the Rocky Mountain states and parts of the Southwest enjoy healthy rains of pH 5.5 or more. Crops and temperate zone vegetation cannot grow on acidic soils, so the large number of dead and dying trees in our forests is attributable both to increasing soil acidity and decreasing quantities of available elements. Dead forests burn easily with a hot fire which oxidizes large quantities of atmospheric nitrogen. Lewis and Grant found that the oxides of nitrogen were dominant in the acidic precipitation. 
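Because pH is a logarithmic scale, the drop Lewis and Grant reported (5.43 to 4.63) is larger than it looks: hydrogen-ion concentration rises by a factor of 10^0.8, roughly 6.3-fold. A short check of that arithmetic (my own illustration of the cited figures):

```python
# pH = -log10([H+]), so a pH drop of d units multiplies [H+] by 10**d.
ph_before, ph_after = 5.43, 4.63
factor = 10 ** (ph_before - ph_after)
print(f"[H+] increased roughly {factor:.1f}-fold")  # ~6.3
```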
The more trees die and burn, the more the soils become acidified and the more trees must die. There are also a number of mildly acidic gases released from burning wood. These, plus the acidic gases from volcanism (volcanic power or action), are nature's way of bringing on glaciation. Man's fossil fuel fires are also a big factor in the destruction. Belgian scientist Genevieve Woillard showed that the final changeover to sub-arctic climate and vegetation (to be discussed later) took only 20 years at previous interglacial-to-glacial transitions, as recorded in the undisturbed pollen deposits of Grande Pile, France. In Woillard's study, the change in vegetation was from hazel, oak, and alder to pine, birch, and spruce—that is, a change from warm-weather to cold-tolerant trees. But even more significant: this change is from nut-bearing trees to trees that can't yield a proteinaceous crop. That translates to a decline in soil minerals to the point where there are insufficient microorganisms in the soil to grow proteinaceous trees. It now appears that the 20 years for the change in vegetation can be shortened because of industrial pollution; we are actually speeding up the deterioration process on all fronts, by the sum total of all our environmental errors. Hamaker said: "Judging from the CO2 curve, we are actually 5 years into such a period." (This was at the time his book was written.) The Amazon forest is the largest tropical rain forest left in the world, but it is paying a heavy price for "progress". Deforestation of large tracts (such as Volkswagen's aforementioned tract) is causing a change in the region's climate, something climatologists have warned of for some time. A change in the region's water balance seems to be the result of increased runoff due to deforestation. If so, the long-predicted regional climatic and hydrological changes expected as a result of Amazon deforestation may already be beginning.
Increased flooding is the first sign of damage to the Amazonian ecosystem. A heavily-deforested area has developed along the edge of the mountains in upper Amazonian Ecuador and Peru during the past 10-plus years, the result of large slices of forests being cleared for roads, housing, and other development, all of which are exposing the land to increased runoff and erosion. Scientists have found that runoff is increasing in the area while rainfall patterns remain the same; this is caused by interference in the process of transpiration—trees take up moisture that falls and send it back into the air. Now that the trees have been eliminated, the recycling process has been curtailed to an extent that the report warns “might eventually convert much of now-forested Amazonia to near desert.” Note: While in most areas (such as the North American Great Plains or Western Europe) most of the rainfall represents moisture blown in from the sea, about half of the Amazonian rainfall is water that is recycled within the basin. Thus, in tampering with the balance of ecology in the Amazon rain forest, one tampers with its rainfall cycle as well. Since population and farming are concentrated along the Amazon’s seasonally-flooded river margins, scientists warn that the magnitude of damage is potentially great, and say that the “rapidity with which relatively-limited forest destruction (which has since increased) appears to have altered the Amazonian water balance, suggests the need for planned development.” This is obviously an understatement—planned reforestation and remineralization are also needed to save the Amazon area, before going about any so-called “planned development”. 
When viewing the earth via satellite, you can literally see the moisture that swirls and sweeps outward from the Amazon area—it covers such a large area that it is seen as a giant moving form that takes on a life of its own. Rapid development in the Amazon thus not only tampers with local ecology, it also affects areas farther away that are normally influenced by these huge, moving atmospheric systems. Throughout the Third World, unchecked erosion is washing away valuable topsoil. Reforestation could stop the process, aid in CO2 removal, and aid rainfall cycles—it must be a top survival priority. Because it can take years for reforestation's results to be felt, local governments and villagers have been reluctant to take on what appear to be long-term, labor-intensive projects, but they fail to realize what not doing so will mean to their ecosystems. Researchers are working on what they call a "miracle tree", Leucaena leucocephala, an extraordinarily fast-growing, all-purpose, self-fertilizing tree used for both fodder and timber. Under ideal conditions it reaches a 10-inch circumference in one year. Arbor Day began in Nebraska in 1872, when more than one million trees were planted to help prevent erosion and moisture loss in a state with few trees. Within two decades, 100,000 acres had been turned into forested preserves. Arbor Day is now a legal holiday in four states and is celebrated in all the states, but please don't wait for Arbor Day to plant trees—do so whenever you can. Fruit trees are especially needed everywhere. Over 100 countries grow tobacco. Flue-curing the roughly 2,500,000 tons produced annually uses about one hectare of trees for every ton—about 12.5% of the 18-20 million hectares of trees cut yearly—which means roughly 1 tree in 8 is axed just for drying tobacco! Cropland used for tobacco should be used for growing food instead.
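The tobacco figures in the paragraph above can be checked directly; a quick sketch using only the text's own estimates:

```python
# Rough check of the flue-curing arithmetic quoted in the lesson
# (all numbers are the text's own estimates, not new data).
tons_cured = 2_500_000               # tons of tobacco flue-cured per year
hectares_per_ton = 1                 # about one hectare of trees per ton
hectares_cut_yearly = (18e6, 20e6)   # 18-20 million hectares cut per year

hectares_for_tobacco = tons_cured * hectares_per_ton
shares = [hectares_for_tobacco / total for total in hectares_cut_yearly]

# Tobacco's share of all trees cut: about 12.5-13.9% of the total.
print([f"{s:.1%}" for s in shares])

# At the 20-million-hectare end, that is 1 tree in 8.
print(f"about 1 in {round(1 / shares[1])}")
```

The "about 12.5%" and "1 in 8" claims in the text are consistent with each other at the high end of the 18-20 million hectare range.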
3.2 Carbon Dioxide—Global Climate Changes—Weather Patterns

The increase of carbon dioxide in the atmosphere is our most urgent problem. John Hamaker drew a carbon dioxide curve projection in 1979 and said that unless we gained control of the curve shortly after 1985, by 1990 the rate of breakdown of the environment would be occurring much faster than we could repair the damage. However, in order to gain control by 1985, we would have had to start in 1980 with a fully-operating program of soil remineralization, pollution reduction, and so on. As of 1985, few people took seriously what the curve was saying—nevertheless, Hamaker hasn't given up hope for humanity's survival, even though he's also considered the possibility that "if we were to start to work in the next few months, we could have less than a 50% chance of success". He's written countless letters and says three world science organizations finally agreed to meet in 1985—he thinks action is long overdue, with "nature just beginning to show her teeth". While we wait around for statistics and more data, the power of centralized wealth is holding us to a system of soil destruction. World leaders, concerned with what they must do to get re-elected (if they are indeed elected), merely serve the interests of a wealthy minority that controls an economic system that is ruining our lands, keeping millions of people poor and/or in debt, keeping our countries in debt, and threatening our very survival with destructive weapons, aggressive foreign policies, and decisions that continually compromise the quality of our environment. The Global 2000 Report to the President was commissioned in 1977 by President Carter and finally released in July 1980, as a three-volume work of over 1,000 pages. The report's findings aren't represented as predictions, but as depictions of conditions likely to develop if there are no changes in public policies.
Some of its findings on CO2 were:
- CO2 emissions will increase to 26 to 34 billion short tons per year, roughly double the CO2 emissions of the mid-70s.
- 446,000,000 hectares (one hectare is 2.47 acres) of CO2-absorbing forests will be lost.
- Burning of much of the wood on those 446 million hectares will produce more CO2.
- Decomposition of soil humus will release more CO2.
By June 1979, the percent of increase of CO2 over an assumed "normal" level of 290 ppm was about 15%. In 1985, it could be 18%. By 1990, it could be 22% (roughly 50% more than the 1979 increase). Yet we go on bringing carbon out of the ground and putting it into the atmosphere. John Gribbin (New Scientist, 4/9/81), noting the intensification of worldwide forest destruction and fossil fuel combustion, reports that the present annual CO2 increase has jumped to 2 to 4 ppm, and "is increasing rapidly today, in 1981". (Hamaker's CO2 curve projection could even prove conservative.) "The Role of CO2 in the Process of Glaciation", published in April 1980, was written as a concise explanation of the glacial process which could be understood by the U.S. Congress, at a time when the CO2 problem was just being recognized by some of its members. It appears in Hamaker's book, and refers to a relationship that has gone virtually unconsidered by the hundreds of researchers of glaciation, starting with the first "Great Ice Age" theory of Louis Agassiz in 1837 (Imbrie and Imbrie, 1979). This excess carbon dioxide is causing what is known as "the greenhouse effect" because carbon dioxide behaves like the glass in a greenhouse, permitting the sun's rays to reach the earth, but not allowing the heat to escape. The effect is like that of a "thermal blanket" around the globe. As a result, some scientists think that the earth will become warmer, but others, including John Hamaker, say that it is now getting colder.
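The percentage figures above can be turned back into concentrations; a rough sketch using the text's assumed 290 ppm baseline and its early-1980s projections (these are the lesson's numbers, not modern measurements):

```python
# Convert the text's percentage increases over an assumed "normal"
# level of 290 ppm CO2 back into concentrations.
baseline_ppm = 290

for year, pct_increase in [(1979, 15), (1985, 18), (1990, 22)]:
    ppm = baseline_ppm + baseline_ppm * pct_increase / 100
    print(f"{year}: ~{ppm:.0f} ppm")   # roughly 334, 342, and 354 ppm

# Sanity check on "(roughly 50% more)": a 22% increase over baseline
# is about half again the 15% increase measured in 1979.
print(22 / 15)   # ~1.47
```

Note the arithmetic only checks internal consistency of the projections; it says nothing about whether the 290 ppm "normal" assumption is right.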
All scientists now agree that carbon dioxide levels are too high, and with acid rain, forest fires, deforestation, and trees dying from soil demineralization, CO2 levels continue to increase. Nature will complete her necessary cycles and go about her own self-healing processes, just as our bodies do. We'd do well to understand her cycles and healing crises better, and offer help instead of waiting for chronic illness to set in. We tend to forget that the earth is very much alive, a living being/entity (albeit a large one!) that regulates itself as surely as our bodies do. Because we need the earth to survive, its state of health is, quite literally, as important as our own. I'll present both the warm and cold predictions to show how complex climate "analysis" becomes—all environmental factors interrelate to affect it. Having considered their total impact on our ecology and weather, heard both sides (warm/cold) of the story, and watched worldwide weather trends these past years, my intuition sides with the scientists who say the world is cooling. In any case, we can't deny that our planet is being manipulated (and often assaulted) on all sides daily by millions of its inhabitants. Some of these assaults are very serious; we discussed the long periods of time that some radioactive waste materials remain dangerous in Lesson 53—this is only one example. Life Scientists know of chemical medicines' adverse effects in the body. Can you imagine how our planet's health is affected and weakened by millions of daily assaults on its body? The saying "do unto others as you would have them do unto you" is not just a suggestion on how to be "nice". It says, in essence, that what you do unto others you do unto yourself—more and more we see how true this is. Now we must also do unto our planet as we would have our planet do unto us, for what we do to our planet, we do to ourselves.

3.3 Warmer or Colder?
Before continuing, let's clarify the fact that scientists who see the world as cooling do not necessarily dispute the greenhouse effect's warming potential in and of itself—some see a preliminary warming as part of an "energy booster" or catalyst in the Ice Age transition process: the tropics do become hotter/drier as precipitation increases farther north, but increased cloud cover and other factors, to be discussed later, lead to increased cooling conditions. Let's take a look at the two opinions … warmer or colder: In the fall of 1983, the federal government, based on an Environmental Protection Agency report, said that a "dramatic warming of the earth's climate could begin in the 1990s because of the greenhouse effect, with potentially-serious consequences for global food production, changes in rainfall and water availability, and a probable rise in coastal waters". The report said that "levels of CO2 in the air created by burning of fossil fuels could result in an increase of 3.6 degrees Fahrenheit by the middle of the next century and a 9-degree rise by 2100, representing an unprecedented rate of atmospheric warming". "It's going to have a very profound impact on the way we live," said John Topping, staff director for the EPA's office of air, noise, and radiation. "Some of the effects will be beneficial; some will be detrimental. But our ability to accommodate them will depend much on our planning beforehand. Temperature rises are likely to be accompanied by dramatic changes in precipitation (more rainfall in some areas, more drought in others) and storm patterns and a rise in global average sea level," the study said. "As a result, agricultural conditions will be significantly altered, environmental and economic systems potentially disrupted, and political institutions stressed." Stephen Seidel, one of the authors of the report, said that milder winters and much warmer summers may no longer be unusual by the 1990s.
The report said the trend will occur regardless of what steps are taken to reduce the burning of fossil fuels. The study said a warmer climate would raise the sea level by expanding the oceans and by melting ice and snow now on land. An increase of only two feet “could flood or cause storm damage to many of the major ports of the world, disrupt transportation networks, alter aquatic ecosystems and cause major shifts in land development patterns”. The warming is expected to be greater at the North and South Poles and less at the equator, the EPA said. John Hoffman, head of strategic studies for the agency, said “New York City could have a climate like Daytona Beach (Florida) by 2100”. A major report issued in 1983 by the National Academy of Sciences said that the approaching warming of the earth “is reason for concern, not panic”. The report warned, however, that a warming trend and decreased precipitation could “severely affect” the Texas gulf, Rio Grande, upper and lower Colorado River regions; California; and other Western regions. One projection in the report shows a possible reduction in water supply of nearly 50% when the full force of the warming phenomenon is felt after the year 2000. The tone of the academy warning was less urgent than the EPA’s, stressing the need for “more intense research”. However, the academy found that since (in their opinion) there is no politically or economically realistic way of heading off the greenhouse effect, strategies must be prepared to adapt to a “high temperature world.” The EPA report said that even a total ban on coal would only delay the process for a few years, and said that, because the CO2 in the earth’s atmosphere retains heat rather than permits it to escape into space, thus creating the greenhouse effect, the buildup of gas will be accompanied by a rise of global surface temperatures, most likely in the range of 2 to 8 degrees F. 
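These reports quote warming in degrees Fahrenheit, while much of the climate literature uses Celsius; a small conversion sketch for the ranges mentioned above (the conversion formula is standard, not from the reports):

```python
def f_delta_to_c(delta_f):
    """Convert a temperature *difference* in Fahrenheit to Celsius.

    Note: differences scale by 5/9 only; the 32-degree offset applies
    to absolute temperatures, not to changes.
    """
    return delta_f * 5 / 9

# The EPA's 3.6 F (mid-century) and 9 F (2100) figures, and the
# "2 to 8 degrees F" range quoted just above.
for delta_f in (2, 3.6, 8, 9):
    print(f"{delta_f} F of warming = {f_delta_to_c(delta_f):.1f} C")
```

The 3.6 °F and 9 °F figures correspond to 2 °C and 5 °C, which is how such projections were usually stated in the scientific literature of the period.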
These projections are roughly similar to those in the EPA report; it is expected that this rise will be accompanied by "rapid climate change, including changes in rainfall patterns, as well as a rise in the sea level of over two feet". Some additional notes on the greenhouse effect are of importance: Recent investigations have established that other man-made pollutant trace gases may increase the greenhouse effect by another 50% (Flohn, 1979; Kellogg and Schware, 1981). These gases come primarily from burning vegetation, release of industrial halocarbons (freons), and the denitrification of nitrogen fertilizers in the soil. The Greenhouse Effect, by meteorologist Harold Bernard, issues a strong warning that the heating effects alone will likely be devastating to humanity due to increasing climatic stress; agriculture in particular will suffer greatly. He cites increasing storminess with tornados, hurricanes, floods, searing "dust bowl"-type droughts, water depletion, and massive forest fires if we continue on the fossil fuel route, presenting a whole bank of reasons against doing so. The last few years have seen dramatic changes in precipitation—more rainfall in some areas, more drought in others—but these are also part of the weather forecast given by scientists who say the world is cooling. Apparent warming trends could be superseded by cooling trends in the long run, if we are due for transition into a glacial period. Systematic measurements of atmospheric CO2 began only as late as 1958 (Calder, 1975). Most climatologists seem fond of repeating the dangerous oversimplification of CO2's greenhouse effect, that is, that the earth will warm up as a result. In a 1977 paper, Hamaker asked, "How Rapidly is CO2 Increasing in our Atmosphere?" In 1977, a National Academy of Sciences panel on energy and climate provided a frightening statistic (Charles Keeling, Science, 9/2/77). Keeling said there'd been a 13% rise since the Industrial Revolution began.
More alarming is the fact that five points of this 13% rise had occurred since 1962. That same Science article discussed the oversimplified computer models of CO2's "general warming" effect, and stated that there are some scientists who "privately suggest" that because of "complex feedback phenomena", global cooling could result. Hamaker says that even if the average temperature of the atmosphere is getting warmer, it is false to assume polar ice will melt and temperate zones will move toward the poles. According to Hamaker, "the experts have given us a time scale for weather changes that is longer than we have. Many things are operating at once to affect climate. They all have long overlapping time lags so that we cannot say that this happens, then this, and then this. But the first stage of glaciation, which is initiated by a change from temperate zone to northern latitude types of trees, and by dying of tropical forests, is here now." Hamaker says "the theory that the world will get warmer is based on the absurd idea that the earth's average temperature depends solely on the sun's energy and the heating effect of atmospheric CO2. On that basis these scientists have projected a rise in temperature in the next century when the CO2 has doubled, so they have drawn a line tangent to the recorded curve and ending up in the next century." He disagrees with the projection, saying that nature is clearly drawing a curve that is increasing at an accelerating rate, and the scientists have merely decided that nature must change her ways to suit their predictions. The time to stop the onset of glaciation is before it starts, because it starts with the destruction of agriculture. Hamaker says that we must act now, before our technological capacity to remineralize the soil is lost in the chaos of a world of starving and dying nations.
As we said, climatic cycles and factors may overlap, but we can identify a point in the whole climate cycle at which the temperate zone climate is destroyed and we stop eating! We can chart the CO2 content of the atmosphere and know whether we have enough minerals in soil and water. The CO2 curve is showing us that the time of no temperate zone could be approaching. We must remineralize the world's soils and put carbon back into the earth as fast as we can to reverse the CO2 curve and bring it back to a safe level. Hamaker says that scientists predicting a warming also aren't taking into consideration the role of life in and on the soil in demineralizing it over a period of 10,000 to 15,000 years, depending on the amount of ground rock supplied by the last glacial advance, nor do they all understand the earth's tectonic system and its role in determining the weather. The climate cycle is a by-product of the entire life system, all of which rests on the expenditure of atomic energy in the tectonic system. There are two energy systems which are powerful in comparison to other factors (such as sun spots, Milankovitch's theory, or the alignment of planets in space)—the effects of these other factors may be noted, but they don't substantially alter the glacial process—both of the primary energy systems use the energy in the atom. One is the sun and the other is the tectonic system. The earth constantly intercepts the sun's energy. If the energy incident to the earth at the higher latitudes is deflected into space instead of being absorbed at ground level, the total amount of energy available to warm the earth is decreased by that amount. During a glacial period the total amount of sun energy reaching the earth is decreased because the CO2 (from the tectonic system) directs a heavy cloud cover to the polar latitudes. The clouds have a very high albedo, that is, ability to reflect the sun's rays back into space.
The tectonic system constantly removes materials from the mantle of the earth, separates the compounds containing a balance of elements useful to living organisms, and moves them into the mountains or into the atmosphere. Compounds containing elements not required for life processes are consigned to the core or are recycled to build the basic ocean floor at the ridge. Everything on earth is totally dependent on the tectonic system; if it were to run out of fuel, the earth would be cold and lifeless like Mars. Climate is directly controlled by the discharge of carbon and sulfur oxides by the tectonic system. Now that mankind has a hand in adding CO2 to the air (and making other environmental errors), climate is also affected by the human factor. There is a scarcity of minerals on the land and in the sea, further contributing to the CO2 buildup in the atmosphere as more and more CO2 is supplied by the tectonic system and less and less is put back into the earth's crust by the living organisms. All these factors overlap and affect climate. We can say that the minerals (those available to microorganisms) and the carbon released by the tectonic system can be monitored—and thus, theoretically, can be controlled to some extent—we still have much to learn in this area, but we can and do have an effect on climate. The burning of temperate zone vegetation will carry huge quantities of CO2 into the atmosphere. In the zones of latitude where the sun's rays are most intense (the equatorial region), CO2 holds the sun's heat at the surface of the earth, increasing surface temperature and providing the energy to increase the evaporation and to move the massive cloud cover to the polar regions; CO2 has no heating effect at the poles in the winter when it's dark 24 hours a day. The warm, demineralized ocean can't take up the CO2 as fast as it is being put into the air, and decreasing plant life and fewer trees also mean less CO2 is being converted.
We cannot allow the CO2 increase to reach the point of no return—that is, the increase in CO2 from the tectonic system and our own input must not be allowed to exceed the capacity of the remaining forests and sea life to remove the CO2. When the minerals are too few to support enough life to hold down the CO2 level, the level begins to rise and the death of the temperate and tropical zone forests swiftly initiates the air flow pattern which brings glaciation to polar latitudes and extreme, killing heat and drought in between. When air gets hotter, its atmospheric pressure decreases. It's then easier for the cold air moving down over a cold land mass to displace the warm equatorial air and force it to move poleward over the warm ocean to replace the cold air moving toward the equator. This is the normal air circulation pattern impressed on the west winds. During glaciation, when there is an extensive ice field, there is no summer because the refrigerated air from the ice field maintains the temperature differential required to carry the clouds to the northern latitudes. Thus there can be unusually large masses of hot air in the equatorial latitudes and unusually large masses of cold air in the polar latitudes. Glaciation, or for that matter, anything else on earth, can't take place without an expenditure of energy. Without a buildup in CO2 and hence temperature, glaciation cannot happen. Hamaker says that the average temperature at the start of a glacial period must be higher than the interglacial temperature, and must remain higher until the cooling effect of the ice sheets starts bringing it down, but says this won't help agriculture: the southern temperate zone will have excessive heat and drought; the northern temperate zone, summer freezes and frosts; and cloud cover lowers the temperature and increases the quantity of cold air which flows south over the land masses.
With early cold snaps and longer, colder winters, the temperate zone will become a part of the subarctic zone. The summer frosts/freezes, short growing seasons, drought and violent storms, rapidly diminishing soil minerals, and increasing rain acidity will destroy the world's grain crops; we can't grow grain in the subarctic. Growing seasons have already been shortened and interrupted by freeze damage. (The local areas to survive will be the few near the equator that are blessed with a constantly renewed supply of basic minerals sufficient to maintain a neutral soil in spite of the acidic rains, says Hamaker in Survival of Civilization.) We've already seen indications of these patterns. He says we can stand cold winters for some time, but not if they carry over into summers to destroy crops and trees. Cold waves, just a few degrees lower in temperature, can cause major crop losses in Canadian and Eurasian grain crops that are at the latitude of Michigan or farther north. Hamaker says food production in the northern hemisphere in 1980 had lost about 20% of potential because of adverse weather (drought/heat in the U.S.; cold, wet weather on the Eurasian continent; and, in the southern hemisphere, a growing season that started with drought in Australia, Africa, and South America). He fears that famine could begin soon, that it could be only a few years away; 1978 and 1979 fruit and vegetable losses in California, Texas, and Florida, as well as winter-crop losses in 1983/84, show what could happen to crops in the years just ahead. Anyone interested in studying the whole glacial process in more depth is urged to read Hamaker's book—there is an entire section on the tectonic system, plus more details on the role of CO2 in glaciation and many other facts and figures on the glacial process, including the period of glaciation itself.
Our space in this lesson requires us to focus more on the transition period from interglacial (warm) to glacial (cold) so that we may become more aware of the signals observable during a change to glaciation. Let's take a look at what some other scientists who foresee a cooling have to say about the energy expenditure required for glaciation; we've seen that scientists agree, in general, on some information about past glacial periods and our present interglacial, but they don't all agree on why glaciation happened. What force could bring such a change about? We've said that Hamaker saw the greenhouse effect as occurring differentially: the increasing temperature differential between warmer (hotter/drier) and colder (colder/wetter) latitudes has taken on a life of its own and is accelerating the whole process. When the supply of minerals ground from rocks by the last glaciation is used up in the soil, this exhaustion of soil minerals by the life in and on the soil initiates a whole chain of events which results in restocking the soil with minerals and a new proliferation of life. David P. Adam of the U.S. Geological Survey, a longtime student of glacial periods, has emphasized that to understand their causes, one must solve the "energy problem" they present. His Quaternary Research paper (1976, "Ice Ages and the Thermal Equilibrium of the Earth (II)") shows that an essential requirement to begin and sustain a glacial period is an increased transfer of (excess) energy towards the glaciated regions, and that energy is in the form of moisture. This is of course precipitated largely as snow, thus forming the initial perennial snowfields and subsequent ice sheets.
He states that some increased energy source must therefore be invoked to sustain these vast energy transfers, yet his paper does not consider excessive CO2's solar heat-trapping effect as the possible "booster" for providing this increase of effective energy, which, as Adam points out, is "required to fuel a continental glaciation". In a personal communication to Hamaker, Adam agreed that Hamaker's CO2 theory indeed fulfills the requirements of providing the glacial energy fuel. Yet, surprisingly, Adam knew of no one in the history of modern Quaternary research who had postulated a CO2-glaciation relationship, perhaps due to the relative infancy of modern CO2/climate studies, but he said there was one well-respected climatologist who had presented an explanation of the basic glacial process very similar to Hamaker's: Sir George Simpson of Britain. Simpson was the first to point out that the glaciation that characterizes an ice age can't come about by a general cooling of the earth's atmosphere—because some source of increased energy is required to transport poleward the huge amounts of moisture which make up the glaciers. Most climatologists now agree, because a general cooling would lower the mean temperature of the earth's surface (especially in the tropics), decrease the equator-to-pole temperature gradient, and distinctly lower the moisture content of the atmosphere. Simpson realized that it's obviously paradoxical to expect fulfillment of certain fundamental requirements for glaciation (intensified equator-to-pole temperature gradients, stepped-up atmospheric circulation, and increase of poleward heat and moisture transfer) with a declining surface temperature, especially in tropical regions. John Hamaker, while unaware of Simpson's theory, was apparently the first to correlate the basic heating and circulation principles operating at glacial initiation with the soon-to-be-infamous "differential greenhouse effect".
Other recent warnings on this differential heating effect have come from Lester Machta (head of the National Oceanic and Atmospheric Administration (NOAA) Air Resources Labs), saying that CO2 could indeed cause the massive cloud coverage and cooling at the poles, and from Justus (1978) of the Congressional Research Service: "If the earth's temperature rises, the water vapor content of the atmosphere is likely to rise. A rise in water vapor would quite likely increase the fraction of the globe covered by clouds. Such an increase could cause the amount of primary solar radiation absorbed by earth to fall." In a document prepared for Congress ("Weather Modification: Programs, Problems, Policy, and Potential," Chapter 4), Justus says: "In geological perspective, the case for cooling is strong. … If this interglacial age lasts no longer than a dozen earlier ones in the past million years, as recorded in deep-sea sediments, we may reasonably suppose that the world is about due to begin a slide into the next Ice Age." (p. 153.) Hamaker says that failure to remineralize the soil will cause continued mental and physical degeneration of humanity and quickly bring famine, death, and glaciation, in that order. The majority of the world's people fall into one of these categories: those who are aware of problems and take action; those who are angered by problems but only talk or worry about them and don't take action; those who just give up hope; those who trust in the system, right or wrong, problems or no problems; those who are just plain indifferent to problems; and even those who are unaware that problems exist at all! Most people probably think that the last ice age was "a million years ago", but the fact is, it ended only about 10,000 years ago—a few seconds in geological time. Everything that we know in terms of our "civilization" has taken place in that brief span of time since the earth last warmed up.
The potential global climate changes that face all of humanity could re-arrange everything on the planet, and affect every living creature on earth more than any other ecological issue in question—even beyond such crucial concerns as world peace—for the issue here is whether we want to have a world at all in which to live in peace. We must make the ecological changes necessary for survival. Because most of the subsoil and topsoil of the world have been stripped of all but a small quantity of elements (by time, water, erosion, chemical fertilizers, pesticides, and so on), Hamaker says man can stay on this earth only if the glacial periods come every 100,000 years to replenish the mineral supply—or if we get smart enough to grind the rock ourselves and apply it everywhere on soil that is depleted. Glaciation is an acceleration of the normal process of using evaporated water to carry excessive heat energy from warm zones to cold zones, and the effect of increased atmospheric CO2 (the greenhouse effect) is to increase cloud cover over polar latitudes. The clouds have a cooling effect as well as providing the snow for glaciation. The energy is dissipated in arctic space. Glaciation occurs whenever the soil minerals left by the last glacial period are used up and the plant life (forests are the major factor in CO2 control) can no longer regulate the carbon dioxide by growing faster in response to its increase in the air.

3.4 The Glacial-Interglacial Cycle

The glacial-interglacial cycle was revealed by numerous workers in many fields of Quaternary research as of the 1970s. (The Quaternary is the present geological period, including the Pleistocene epoch and the Holocene—recent—epoch, the present interglacial in which we now live.)
A National Academy of Sciences (NAS) publication, Understanding Climate Change (1975), says: “The present interglacial interval—which has now lasted about 10,000 years—represents a climatic regime that is relatively rare during the past million years, most of which have been occupied by colder, glacial regimes. Only during about 8% of the past 700,000 years has the earth experienced climates as warm or warmer than the present. The penultimate interglacial age began about 125,000 years ago and lasted for approximately 10,000 years. Similar interglacial (warm) ages—each lasting 10,000 (±2,000) years and each followed by a glacial (cold) maximum averaging 90,000 years—have occurred on the average every 100,000 years during at least the past half-million years. During this period, fluctuation of the northern hemisphere ice sheets caused sea-level variations of about 100 meters.” This NAS publication concludes that: “If the end of the interglacial is episodic in character, we are moving toward a rather sudden climatic change of unknown timing. … If, on the other hand, these changes are more sinusoidal in character, then the climate should decline gradually over a period of thousands of years.” All factors considered, Hamaker doesn’t think we have that long. Paleoclimatologists agree that the major warm periods (interglacials) that followed each of the ends of the major glaciations (cold periods) have lasted from about 10,000 to 12,000 years, and that, in each case, a period of considerably colder climate has followed immediately after these intervals. About 10,000 to 10,800 years have now passed since the onset of our present period of warmth, so the question certainly arises as to whether we are really on the brink of a period of colder climate. The 100,000-year cycle of glaciation is now recognized as occurring with regularity, so, technically speaking, we could be due for another ice age “any time during the next 1,200 years”.
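The timing arithmetic behind that “next 1,200 years” remark can be sketched directly from the round figures quoted above. This is a minimal illustration only, assuming a typical interglacial length of 10,000 to 12,000 years and roughly 10,800 years already elapsed; the constant and function names are ours, introduced just for the sketch:

```python
# Illustrative arithmetic only, using the round figures quoted in the text;
# not a climate model. All names here are our own choices for the sketch.

INTERGLACIAL_MIN = 10_000   # shortest typical interglacial duration, years
INTERGLACIAL_MAX = 12_000   # longest typical interglacial duration, years
ELAPSED = 10_800            # upper estimate of years since the present warm period began

def remaining_window(elapsed, min_len=INTERGLACIAL_MIN, max_len=INTERGLACIAL_MAX):
    """Return (earliest, latest) years until the interglacial could end,
    clamping at zero when the elapsed time already exceeds a bound."""
    earliest = max(0, min_len - elapsed)
    latest = max(0, max_len - elapsed)
    return earliest, latest

earliest, latest = remaining_window(ELAPSED)
print(earliest, latest)  # prints: 0 1200
```

With the upper elapsed estimate, the earliest bound has already passed and the latest is 1,200 years away, which is exactly the window the text quotes.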
As we said, though, signs that signal the changeover or transition from temperate to colder climate are already in evidence, and increasing due to our environmental errors. Most scientists are noncommittal, but those who are beginning to express concern say that these signs mean that we may be much closer to the first stages of the next ice age than anybody would like to think. Let’s review some of the signs we’ve already talked about: We have already seen that the earth’s total soil microorganism and earthworm populations have been dying back over the recent centuries and decades due to soil demineralization, and so the earth’s plant and tree life has been forced to die back—known as “retrogressive vegetational succession” in the literature of ecology. Deserts (now growing at a rate of 15 million acres per year) are generally a final stage of this retrogression process. Our abuse and neglect have reinforced this desertification, as they have deforestation. Soil demineralization (with acid rains accelerating the devastation) is causing the increasingly rapid sickening and dying of whole forests. The massive death and burning of the forests is signaling the “telocratic” or end phase of our present interglacial period. Svend Th. Andersen saw the broad picture of glacial/interglacial stages and said that the interglacials were stable intervals between the glacial stages of disturbance and chaos. The vegetation had a chance to develop until the new glacial released its destructive forces. He divided the interglacials (warm intervals) into four broad phases:

- Protocratic phase. At the start of warm intervals, open forests of pioneer species entered—these were quickly-spreading trees and shrubs with modest requirements as to climate and soils. Birch, pine, poplar, juniper, and willow were most important in Denmark, Andersen’s home.

- Mesocratic phase. The soil had developed a high fertility, and plants of rich soils reached maximum frequencies.
Immense forests covered great portions of the earth in the last mesocratic phase (from about 6,000 to 3,000 B.C.). Some of these trees, such as oaks, were reported to be often of remarkably large size; these are found preserved in now-degenerate treeless peat soils in England and elsewhere. The phase is dominated by trees such as elm, oak, lime, hazel, ash, hornbeam, and alder, growing on stable mull soils which Dr. Johannes Iversen (State Geologist, Geological Survey of Denmark) showed to eventually begin to retrogress. Iversen tried to find out at what point in the interglacial the retrogressive vegetational succession starts, and said it is “when the yearly disintegration of the plant debris no longer keeps pace with the fresh supply from the living plants, and consequently a layer of ‘mor’ (raw humus) is accumulated on top of the mineral soil”. “Mull” humus has a richness of available minerals; “mor” is acidifying humus. He studied soil conditions and said that, from the point approximately 10,000 years ago commonly accepted as the beginning of our present (warm) interglacial, it took about 3,700 to 4,500 years for the first of the glacially-deposited raw mineral soils of basic or alkaline pH to “mature” and then go into a gradual “irreversible” degradation/depletion. Iversen says this degradation process is characterized by reduced soil organisms, earthworms dying out, and by the vegetation regression that comes when soil is depleted and lacks minerals. Andersen and Iversen have similar descriptions of this process. In these mull soils, of roughly 6000 to 3000 B.C., the leaching of the soil salts is to some extent counteracted by the mixing activity of the soil fauna and the ability of the prevailing trees and shrubs to extract bases from the deeper soil layers and contribute them to the upper layers during the decomposition of their litter.
However, a slow removal of calcium carbonate will bring the soils into a less stable state, where the equilibrium may be more easily disturbed. This leaching of calcium carbonate (lime) is shown to be so significant to the topsoil ecology because, according to Andersen, “the leaching of soil minerals other than lime will be insignificant, until the calcium carbonate has been removed”. With this gradual leaching, the mull forest could not maintain itself, and with the lapse of time, caused itself a depauperization and acidification of the upper soil layers, which extended so far that the dense forest receded and more open vegetation types expanded. The changeover from mineral-rich mull soils to acidifying mor soil conditions begins in the mesocratic, and with the gradual demineralization of formerly-calcareous soils, growth of impenetrable hardpans and soil life die-outs follow. This creates shallow topsoils susceptible to drought or to being easily swamped; and this infertile state leads to takeover by heathlands, peat bogs, and trees with the ability to survive on acidic soils—spruce, pine, birch, poplar, etc.

- Oligocratic phase. This acidified condition becomes prevalent in this phase, and is brought on as a result of degeneration of soils. The increasing podzolization, characterized by increased demineralization and acidity, continues up through the telocratic (end) phase. (Podzolization is a process of soil formation, especially in humid regions, involving principally leaching of the upper layers with accumulation of material in lower layers and development of characteristic horizons; specifically, the development of a podzol. Podzol: any of a group of zonal soils that develop in a moist climate especially under coniferous or mixed forests and have an organic mat and a thin organic-mineral layer above a gray leached layer resting on a dark illuvial horizon enriched with amorphous clay.)

- Telocratic (end) phase.
The final interglacial phase is the time when the demineralized soils begin to be removed. The rigorous conditions at the end of the interglacial are reflected by an increase in allochthonous mineral matter, no doubt due to increasing surficial erosion. The information in virtually every textbook on soils, forestry, or ecology leaves no doubt that the present world civilization is at least deep into the oligocratic phase. Andersen’s work also shows that the Scandinavian lakes and soils reflect a close parallel development from basic to acidic conditions—again, many thousands of lakes there, as well as in other parts of the world, are now already acidified into lifelessness from acid rain. Rapidly accelerating worldwide erosion rates are evident; the figure in 1981 was already 6,400,000,000 tons of topsoil lost per year to erosion. These facts, along with increasingly rigorous conditions imposed by the weather since at least 1972, very strongly indicate that the telocratic end phase may indeed have begun. As we said, the final changeover to sub-arctic climate and vegetation has been seen to have been made in only 20 years in other interglacial to glacial transitions. What other changes come with the end of a period of interglacial warmth? From studies of sediments and soils, George Kukla agreed that “major changes in vegetation occurred at the end of the previous warm period. Deciduous forests that covered areas during the major glaciations were replaced by sparse shrubs, and dust blew freely. The climate was considerably more ‘continental’ than it is now, and agricultural productivity would have been marginal at best.” George Kukla and Julius Fink studied interlayered soils exposed in excavated brickyards of Czechoslovakia. 
Seventeen major cycles of glacial loess deposition (loess is mixed rock dust and silt ground by the glaciers and swept by the winds) and subsequent interglacial soil “decalcification” (and overall demineralization) over the last 1.7 million years were revealed. The interglacial soils are shown to have supported the deciduous forests native to northwest and central Europe until in some way they died off and gave way to the steppe vegetation of a chilled, wind-torn glacial desert with blowing dust. Loess always returns to cover the demineralized soils. Then, again, over the centuries, the loess becomes mostly consumed by the soil formation and development process. The cycle of glaciation is complete when the supply of minerals ground from rocks by the last glaciation is used up and glaciation occurs again. Whereas plant life normally removes all excess CO2 from the atmosphere by growing faster as CO2 increases, it can no longer do so, since it gets its cell protoplasm from the soil microorganisms and, as we know, the microorganisms start dying too when insufficient elements are available to them. A conference was held at Brown University in 1972 with paleontologists, sedimentologists, stratigraphers, paleoclimatologists, and others, entitled The Present Interglacial, How and When Will It End? They strongly confirmed the 100,000-year average glacial-interglacial cycle, and many stressed the fact that we should be at or close to the end of the present interglacial. The search for causes of the Ice Age began over a century ago, and Hamaker says the answer literally lies beneath our feet: progressive soil demineralization of the earth’s soil mantle causes an eventual collapse of the global carbon cycle. 
The cycle is: soil remineralization -> interglacial soil demineralization -> vegetational succession and collapse -> the glacial process -> soil remineralization.

Hamaker also believes the large increase in earthquakes can be attributed to the steadily-increasing weight of snow and ice cover pressing on the molten layers just underneath the earth’s crust, causing shifting and slippages. He notes that the sharp rise in major earthquakes began about 10 years after the climate began to get noticeably colder beginning in 1940. He also predicts a steadily-increasing incidence of volcanic eruptions, for the same reason, and suggests this has already begun in the last few years. Glaciation usually comes at a time when the earth’s tectonic system has fired up volcanic activity by feeding ocean floor into the continental heaters, mostly located in the Pacific “ring of fire”. Volcanic action releases larger amounts of liquified gases trapped in the molten rock. Carbon dioxide and sulfur dioxide are the main gases released, and both cause the greenhouse effect, resulting in our present “100-year cold cycle”. These cycles vary in their time interval, intervals being determined by the pressure in the tectonic system. Carbon dioxide from decaying and burning mineral-starved vegetation is then added to these volcanic gases—together, they initiate the change from interglacial to glacial climate. Acidic gases from volcanism and burning forests can then stifle life on earth by leaching the few remaining basic elements into the subsoil. In this way the change from interglacial to glacial conditions can be made in 20 years (Nature, G. Woillard, 1979). Hamaker says that man may have moved the present glacial process forward in time by 500 years by the continued pouring of CO2 into the air, by acidic gases and acid rain, and by forest and jungle destruction by people seeking lumber and fuel or farmland … the 20-year change period can also be shortened.
Hamaker estimates that the beginning of a 20-year changeover period from interglacial to glacial conditions was about 1975. If this estimate is accurate, then tremendous weather changes should have begun by that time, signified by growing intensification of all storm effects, including unusually heavy rains and snows, record cold and heat, drought, hail, tornadoes, etc., all symptomatic of increasing temperature and pressure differentials, greater evaporation of moisture, and an overall speeding up of global atmospheric circulation. Iversen warns us that in former interglacial epochs, the anthropogenic factor was negligible; i.e., man’s impact on nature was less dramatic than it is today. According to Hamaker, all the requirements for glaciation are now in place and accelerating in intensity at a very fast pace: CO2 increases; precipitation pH moves toward intolerable acidity; the earth’s soils (demineralized) can’t support a strong, healthy plant/forest cover; the carbon of the soils and trees is being transferred back to the atmosphere in huge amounts as carbon dioxide gas. As the primary infrared heat-trapping “greenhouse effect” gas, excess CO2 causes the sustained overheating of the vast oceans (especially tropical oceans), thus causing the sustained evaporation increase required to nourish the polar regions with the “food” of glaciers: water, snow—and keep them shaded from melting with clouds. This increase of glaciation is now occurring and has been since about 1950, so, although some scientists expect a warming from the greenhouse effect, the expected rise isn’t being found over the last century—on the contrary, the earth seems to have been cooling in recent decades. The polar ice field is expanding and growing in northeast Canada (more on this in the weather section), and pressure is rising in the tectonic system, indicated by the accumulation of lava flows along the ridges, and by increased volcanic activity.
We’re in the high-pressure part of the “ocean floor feeding cycle”, which has occurred about every 100 years, at least for a few centuries. It’s certainly not a good time for CO2 to rise!

3.5 “Hope Springs Eternal”

Scientists tell us of a glacial/interglacial cycle of 100,000 years, and say we are now about 10,000 to 10,800 years into a warm interval that can last from 10,000 to 12,000 years. Some scientists also say there is a “magnetic pole reversal cycle” of 200,000 to 1,000,000 years and that, since the last one took place about 710,000 years ago, we could be “due” for one “some time in the future”. As of 1984, there hasn’t been much talk among the general public about Ice Ages or magnetic pole reversals; if either of these possibilities does exist, even remotely, as calculated by scientists, one would expect at least some debate on these issues to have hit the national/international media by now. There are several explanations for the apparent lack of awareness. For one thing, countless brilliant minds go into fields totally unrelated to science, so Ice Ages and pole reversals aren’t necessarily familiar to them. Then, within the field of science, scientists specialize, usually in one specific area of research, often depending on the project(s) they’ve received funds and grants for. They may be experts on one particular subject, but unfamiliar with other fields of science (even related fields) or even with other areas of study within their own fields. They may have spent years refining a certain body of knowledge and focusing on one aspect of one branch of science. This narrows down the number of experts available on any given subject, let alone that of glaciation or magnetic pole reversal. Most scientists accept as fact many things they don’t have the time, knowledge or money to prove for themselves, relying on research done by other scientists to fill in the gaps.
This means the number of informed people who could “accurately” predict the onset of another Ice Age is quite limited anyway. Within this number of informed people there are: scientists too busy working on something else to become involved in speculation about an Ice Age; others uninterested one way or the other; some who have considered it, then given it no further thought; others who may have speculated on when it could come, but don’t want to give their opinion because they don’t want to make a mistake or prefer not to contradict scientists who think the world will warm up; others who don’t want to alarm the general public (or perhaps fear causing “mass panic” or migration?); and finally, there might be a few who are willing to make a statement. As we said, this will be a rare person, one with the courage of his/her convictions, faith in his/her calculations, enough concern about humanity to bring something of such epic proportions out into the open, and the nerve to contradict other scientists’ theories, such as the theories of the scientists who initiated the Environmental Protection Agency report and the National Academy of Sciences report. Anyone who disagrees with them has to prove his own theory and discredit theirs—somewhat comparable to a single doctor challenging the entire American Medical Association—it happens, but this is probably considered an awesome task, one many professionals would undoubtedly prefer to avoid if at all possible, having their “careers” and “reputations” to think about. Scientists and experts need more than knowledge and facts—they also need intuition, the ability to synthesize what they know into an overall picture from all the little random bits and pieces of information. Beyond book learning, they need sensitivity and awareness, consciousness and creativity. Educated experts often lack some of these qualities needed to make good judgments and a proper diagnosis.
We can see, in light of the above “analysis”, that it could indeed be possible for the general public to miss something of such magnitude, even if it were true. John Hamaker puts it this way: “It may seem incredible that up to now this work could have escaped becoming common knowledge, at least to workers in agriculture, forestry, geology, climatology, and other such immediately-related fields. Apparently the many diverse pieces of the glacial/interglacial climate cycle ‘puzzle’ had to be gradually discovered through various disciplines over decades, before at least enough pieces were evident to be joined in a coherent picture by a trained ecological thinker.” (John Hamaker in this case.) Yet now everyone may see for themselves the truth in his synthesis. He continues: “Congress has evaluated the CO2 problem on the basis of a consensus reached by ‘specialists’. They freely admit that they do not know what causes glaciation, yet say the average temperature must drop several degrees C before we can have glaciation simply because they have evidence that it does get much colder during glacial periods. They ignore the fact that, historically, glaciation has alternated with interglacial periods on a roughly 100,000-year cycle and the fact that glaciation is due. Do they think that crop soils turning to deserts (due to erosion and soil demineralization, etc.), and the weather catastrophes we’ve observed, are all just coincidence? They haven’t thought about soil and its relation to glaciation, nor the role of the tectonic system in the glacial process. “The people charged with the responsibility for the CO2 problem are simply not trained to solve problems. They are trained to be observers and have done a creditable job of that. But the job of making a rational synthesis of the facts as a basis for Congressional action ought to have been assigned to engineers and physicists, both of whom have been trained to work with the facts and laws of Nature.
The fault lies at the higher levels of education, which have neglected the necessity for interdisciplinary education and action in favor of specialization.” The meteorologist Harold Bernard, who also warned of CO2 increases and their effects on climate, wrote a chapter, “We Can’t Put Weather in a Test Tube,” which criticizes scientists’ incorrect assumptions, inaccurate modeling techniques, and ignorance of important processes through lack of knowledge. It is clear that interglacial soil demineralization is one such process they have ignored. The knowledge is now freely available. Let’s consider a parallel that Life Scientists are very familiar with by now. The concept of the body as self-healing and the body of knowledge found in the Life Science philosophy both follow the laws of common sense, of Nature, and of logic. We need only try it for ourselves if we want “proof”, since Truth is self-evident. We have come to accept as obvious the fact that live food (uncooked fruit, vegetables, nuts and seeds) imparts the most perfect state of health possible. We have experienced our bodies’ self-healing powers and learned about fasting as a means of allowing our bodies the chance to rest and divert all their energy into healing. We have decided that medicine and herbs interfere with the body’s self-directed healing actions, and that suppression of symptoms (which are manifestations of the healing process going on) likewise interferes with the body’s innate wisdom. We have found that health is produced only by healthful living and that sickness will vanish only when the cause is removed (not when symptoms are suppressed). That about sums it up in a nutshell. What I’m getting at is this: if all the above is so obvious to us, why isn’t it obvious to the countless doctors and “health” professionals all over the world? Why is it obvious only to a few people? How can something be true and not be recognized by more people?
All we can say is, truth is still truth, in and of itself, even if not one single person sees it. Truth doesn’t need believers in order to be true; it doesn’t need followers or majority acceptance in order to be valid. Truth doesn’t have to wait for everyone to catch up. The earth was still round when everyone believed it was flat, despite what “everyone” thought. Microscopic life existed long before we saw it in microscopes; it didn’t have to wait for us to see it in order to exist. If we are sliding into another Ice Age, and the scientists who foresee its arrival are correct, an Ice Age won’t need our approval or belief in order to be a reality, that much we can be sure of. Of course, it would be easier for our own “practical purposes” if some of their calculations were “off.” After all, many so-called scientific theories have fallen by the wayside throughout the years, as new knowledge superseded old knowledge. Even the “world is flat” theory fell prey to the test of time. Whereas truth is truth despite what people believe, knowledge may or may not be true despite what people believe. Even if it isn’t true, it may be paraded around as fact for years, centuries, or even indefinitely. In the meantime, many people continue to believe what they’re told, looking to “experts” for answers and depending on them for knowledge; it’s not a foolproof learning technique, but it’s often the best they can do. So, when the experts themselves make mistakes, it doesn’t matter how big their herd of followers is—but, of course, many people are influenced by the size of the herd when choosing their beliefs. They feel safety in numbers, and prefer the comfort and “security” of a large herd. If “everyone else” believes something, it must be true, says their inner logic, or if nothing else, they’d still rather be with the majority. There is an alternative to joining herds and following experts: intuition. If you can trust your intuition, you are fortunate.
As a free thinker, you can ask yourself what your intuition tells you about the world’s current situation, the state of our environment, weather patterns, and Ice Ages. I’ve tried to present various opinions on these subjects, but I don’t presume to have all the answers. My intuition tells me to keep an open mind, and not to give up hope. If the observations and premonitions of the scientists who see the world as cooling are correct, I for one would rather have had a hint ahead of time than be surprised at the last minute! At least this leaves us with the option to take action, and to try to survive on this planet. It’s been said that we don’t fail until we give up trying. Hope is our strongest ally—it reinforces our will to live. Without it, we are lost, for without hope, nothing matters anymore. So, even if an Ice Age were approaching during our lifetime, we would still have hope as our “open door”. For one thing, we have the potential for change. Some people believe that there is a future that can be known in the present (often called destiny), but that, at the same time, there is still our free will—a powerful force that can change or alter “what is meant to be”. This gives us control over our “destinies” and the ability to create the lives we choose. As we said in an earlier lesson, we ourselves are responsible for our states of being; we underestimate our power as individuals when we believe that random outside influences alone shape our lives. Ironically, though, there is also some element of “chance” in life that can weave its influence into what we are busily creating; while we often tend to define things in simple dualities of yes and no, we actually have yes, maybe, maybe not, and no. We can predict that something will or will not happen, and we can be very sure that it will or will not happen, if we are accurate.
Even so, the fact still remains that, beyond our free will or any so-called destiny, there are also other powers and forces of life in the universe that can enter into every situation and coincide with any variables involved, and these sometimes alter the outcome or cause slight variations between what we expect and what actually happens. For this reason, when considering the return of an Ice Age, we can still allow for the possibility, however small, that something completely unpredictable at this present time—some unforeseeable factor—could still come to pass, something we cannot even conceive of or envision with our present knowledge or awareness. This is not to say that we should resort to an escapist mentality or rationalize our way out of solving our serious environmental problems by using the excuse that “a miracle could happen” as a justification for inertia—this would be wishful thinking and sheer delusion! We’re merely trying to show that everything that happens in life is affected by the intricate interworkings of many multi-faceted forces, and that this includes our attempts to predict specific global climate changes. We’ve attempted to speculate on the past and present factors pertaining to Ice Ages, so now we’re considering future factors, which, of course, also lead us to the unknown. Technology and scientific knowledge that we use daily and now take for granted were unimaginable to people a century ago, so it is conceivable that someone could still discover an energy force/source that is presently unknown to humanity, or find a new technique for cleaning and restoring the environment, or invent something that we can’t even imagine that would change our world or its course of events. We can hope that our ingenuity will prove itself once more; we’ve gotten ourselves into our present world state—maybe we can get ourselves out of our problems, as well. 
There is a tremendous growth in spirit evident all over the planet—we ourselves can perform the miracle of increased awareness—with a quantum leap in consciousness, we could save ourselves by realizing what must be done before it is too late. It has been said that our strongest instinct is to survive. When I finished reading Hamaker’s book, I began to see our world ecology as a whole, and realized the importance of seeing our environmental problems collectively, as they interrelate, rather than individually. There’s an old expression that comes to mind: “Couldn’t see the forest for the trees.” We’ve been looking at the trees so long that we’ve forgotten what the whole forest looks like. Few things can make us appreciate life more than the realization that it can end. The suggestion that time could run out for our planet forces us to reassess our values as human beings. Where are we going? What are we doing to our environment, our source of life? What are our real priorities? Ask anyone who’s ever been told s/he had “only 3 months to live”. The first thing that happens is a total overhaul of priorities, a total rethinking of what the person can still do. Time becomes more precious than ever before. Energy becomes focused as never before. Life is no longer taken for granted. I guess we never wake up until after we’ve been asleep. Let’s hope we wake up in time—it seems we’ve ignored the alarm clock already. Even if we are “let off the hook” somehow and an Ice Age is averted or postponed, or its timing was miscalculated to some extent, we still have some very important moral decisions to make regarding our ability—and, moreover, our will—to revitalize the world for our continued survival on this planet, because we are still left with our CO2, soil, water, and other pollution problems, and as long as we continue to put money and technological “advances” before the welfare of humanity and our ecosystem, we still have our greed to deal with.
And we still have to figure out a way to keep from destroying ourselves in nuclear war. One way or the other, we have to get together worldwide and face the problems that we ourselves have created. We call ourselves civilized, and we want to believe that we have advanced and evolved, but an honest appraisal of our collective self-portrait reveals that we are painting ourselves into a corner every time we compromise our ethics and assault Nature’s principles. We cannot hope to survive if we destroy our planet, because it is our source of life, but we must also understand that our survival is just as surely threatened by the destruction of our basic human values— love for humanity—and that we now have a profound need to revive and restore these basic values. Only by realizing that we co-exist—what we do to others (both psychologically and environmentally) we do to ourselves—can we expect to rally on the large scale necessary at this point for our survival on this planet. It’s obvious that we’ve been born into a time of incredible challenge, so let’s meet this challenge with all our strength—and with a smile—for as always, life continues amidst the chaos. We must see the world as we want to be, as it must be for our survival, and use this positive image to create this world. The key to our survival lies in visualizing and acting for our survival over and over again until it becomes a reality. Every time another individual loses hope and gives up, our survival as a group is also threatened, because the force of our collective will to live is diminished once again. Every time our basic values of faith, hope, and charity are abandoned, the quality of life on earth is tarnished for everyone, and if we continue on a collision course with Nature, life on earth will only become more miserable. Without love, food, natural resources, and an environment clean enough to support life, people everywhere would have little to live for or to look forward to. 
We create our reality, and if this is the reality we choose to create, humanity as a whole will despair, and it doesn’t take a genius to imagine what will happen if no one cares. As surely as we need faith, love, and action, we need hope. Fear is the lock and laughter the key to your heart.
This is a test after a unit on punctuation. Students will answer two fill-in-the-blank questions. Students will then complete 10 matching questions. Students are provided with words for 10 types of punctuation and 10 sentences describing the job of a type of punctuation. Students will write the letter of the definition next to the correct type of punctuation.
The 63 United Nations Information Centres, or UNICs, make up the global network of field offices of the UN Department of Public Information (DPI), which was established in 1946 by General Assembly resolution 13 (I) to promote global awareness and understanding of the work of the United Nations. DPI undertakes this goal through radio, television, print, the Internet, video-conferencing and other media tools. The Department reports annually on its work to the UN General Assembly’s Committee on Information. The Committee, which meets once a year, is responsible for overseeing the work of DPI and for providing it guidance on policies, programmes and activities of the Department. UNICs are key to the UN’s ability to reach the peoples of the world and to share the United Nations story with them in their own languages. These centres, working in coordination with the UN system, reach out to the media and educational institutions, engage in partnerships with governments, local civil society organizations and the private sector, and maintain libraries and electronic information resources. The United Nations Information Centre (UNIC) in Australia was established in November 1948 in Sydney. Later moving to Canberra, it is the formal UN presence in Australia and the principal local source of information about the United Nations system. Its information-related responsibilities also extend to Fiji, Kiribati, Nauru, New Zealand, Samoa, Tuvalu, Tonga and Vanuatu. An internationally recruited Director heads UNIC, and is the official representative of the United Nations Secretary-General in Australia and the South Pacific. With the United Nations playing an ever-increasing role in many of the world’s conflict zones to resolve key political, social, and economic issues, UNIC aims to help people in the region better understand what the UN is doing to make a difference in their lives. 
Its activities include:
- disseminating information on the UN to government agencies, non-governmental organizations, educational institutions, and the general public
- assisting the media on all UN-related issues
- promoting better public awareness and understanding of the aims and principles of the United Nations by organizing public events and coordinating seminars and conferences
- speaking engagements and addressing events organized by government, non-governmental organizations and other community groups
The UNIC network gives global messages a local accent and helps bring the UN closer to the people it serves.
(National Health Care System in Great Britain)
by Allyson Pollock, New Internationalist magazine
Since its birth in 1948 Britain's National Health Service (NHS) has been a model for the rest of the world. It's been a national system of publicly owned and accountable hospital and community services funded from central taxation - where hospital doctors and nurses are salaried, under national terms and conditions of service. Universal healthcare, provided free and fairly, released the population from fear of the risks and costs of care. Before the NHS more than half the population - mainly women, children and the elderly - had no health coverage. However, a relentless concern with cost-cutting and market-defined 'efficiencies' over the last two decades has drastically eroded the central premises of universal healthcare in Britain. The undermining of central taxation as the funding base has been accompanied by governments shifting the costs and risks to patients and their families. The internal market introduced by Margaret Thatcher in the 1980s was the most visible aspect of these changes, but Tony Blair's Labour Government has followed the same privatizing path. The 1948 contract with the people is slowly being shredded. In 2000 the Government launched a 10-year 'reform' programme called the NHS Plan which continued the market-oriented, pro-business policies begun under the Tories. The Blair administration maintains that it doesn't matter who provides care - so long as it is publicly funded. And the extra costs of private profits? They're to be offset by increased efficiency and access, a claim which has been neither tested nor subject to scrutiny. The reality is that under the new plan people will pay more tax for fewer services and be hit with extra patient charges, plus the cost of private insurance. The NHS will be funder and regulator - but business will run the show. Choice and competition: that's the promise. 
With 'money following the patient', competition between providers is intended to improve both efficiency and quality of care. Doctors, nurses, hospital and community services will be more responsive to patient needs. The Government claims that the NHS is centralized, bureaucratic and inflexible - a claim which has little evidence other than popular myth to support it. The Health Secretary, Alan Milburn, talks about 'redefining' the National Health Service: 'Changing it from a monolithic, centrally run, monopoly provider to a system where different healthcare providers - public, private, voluntary and not-for-profit - work to a common ethos, common standards and a common system of inspection... This is the modern definition of the NHS.' In 1992 the Conservatives created the Private Finance Initiative (PFI) as a scheme for luring private capital into new hospitals, instead of using tax money. It seems simple. Bankers, builders and service operators (like cleaning, catering and laundry firms) produce the cash and in return they get to lease the building back to the Government or to sell their services to the hospital. The contract is ringfenced and guaranteed, usually for 30 years. Predictably these public-private partnerships have turned out to be a boon for investors but not so good for the public. Shareholder returns in the range of 15-25 per cent and the need for profits increase the costs to local communities. And the private sector's view of 'efficiency' has meant reduced services and job redundancies. Because the cost of PFI is met from the annual operating budgets of the hospital, less is available for direct patient care. The high costs of the first wave of PFI hospital schemes resulted in a 30-per-cent reduction in beds and a 25-per-cent reduction in budgets for clinical staff. More than 12,000 NHS beds have closed since 1997. 
Low-paid, non-union jobs
Britain has also been exporting this model abroad to Canada, Australia, Aotearoa/New Zealand and Europe - with similar results. In Abbotsford, British Columbia, a plan to rebuild the local hospital with private funding has run into stiff opposition from the Hospital Employees Union. A report on the scheme by PricewaterhouseCoopers assumed that collective bargaining rights would be destroyed and that cost-savings would be based on low-paid, non-union jobs. Back in Britain, in 2000 the Secretary of State signed a new 'concordat' with the private sector, describing it as 'a permanent feature of the new NHS landscape'. The agreement allows private clinics and hospitals to provide the public with up to 150,000 procedures a year - things like cataract surgery, hip replacements or hernia operations. It also allows business to run NHS hospitals, form joint ventures with NHS organizations and to recruit overseas clinical teams for existing hospitals. Alan Milburn has allowed eight private corporations to bid for public hospitals which don't meet the Government's draconian performance targets. These are BUPA and BMI (which together control 70 per cent of the British private health-insurance market), the Swedish-owned Capio, Interhealth Canada, Hospitalia Active Health from Germany and the British-owned Serco, Secta Group and Quo Health. Some of these outfits have never run hospitals before. The others have never run hospitals like those of the NHS, which are at least 10 times the size of a typical private hospital. The most controversial element is the creation of independent public-interest corporations with 'Foundation status'. These organizations will have NHS assets transferred to their ownership and be granted a licence to operate by an independent regulator. The proposals were drawn up in consultation with the private operators, including the chief executive of Kaiser Permanente, the giant Californian healthcare company. 
The Foundation Trusts will be freed from NHS controls - they'll no longer be accountable to the Health Secretary but to a locally elected board. They are prohibited from selling their core assets. But they are allowed to raise funds for new building from capital markets and to set up joint ventures with the private sector. All public hospitals are now to be run along business lines - although there will be no shareholders. Free from NHS control, hospitals will be able to break with national bargaining arrangements and negotiate or impose their own pay scales and conditions of service. The end result will be widening gaps in pay and working conditions. There will be increasing pressure to generate new sources of income. NHS hospitals already do this by opening private beds, leasing out parts of their land or allowing companies to run on-site services. For example, National Car Parks runs hospital car parking. Capita and Serco provide visitor and staff catering. McDonalds and WH Smith operate in hospital lobbies. Patient Line supplies telephones and televisions at astronomical rates. This will now be expanded. New legislation allows hospitals to create companies which can exploit, for research, tissue samples taken during surgery. With ownership of human tissue unclear under British law, genetic data is a valuable commodity that many biotech companies would love to own. At the same time the Government has introduced legislation to 'redefine' some NHS care. For example, an elderly or infirm patient may be fit for discharge but still have health and care needs - washing, dressing or feeding. This used to be called 'nursing' - but it can now be redefined as 'personal care' and is no longer covered by state funding. If local authorities don't pick up the tab then patients can be billed. The costs and risks of continued care will pass to the individual, especially the elderly, who account for around 50 per cent of all hospital admissions. 
The fundamental principle of universal services, free at the point of delivery, will be undermined. The Government has also established new regulatory bodies to smooth the way for privatization. The Independent Regulator has the power to determine the range of services and treatments to be provided by the NHS, which assets it can retain and which can be sold. The Commission for Healthcare Audit and Inspection (CHAI) polices performance standards. CHAI is the direct route to private sector control. It undertakes reports on quality in hospitals, success in which can lead to the 'earned autonomies' of a Foundation Trust - basically entrepreneurial freedoms which uncouple the hospital from the NHS. The next step is to be forcibly subjected to new management and franchised to the private sector. With this new regulatory regime the future of the NHS will no longer be a state responsibility. The Government says it doesn't matter who provides healthcare services as long as they're state-funded. 'Reforms' are sold to the public as improving efficiency and choice and 'changing the delivery system'. The system will continue to be funded through taxation. But a delivery system based on profits and returns to shareholders fragments the ability to pool the risks and costs of care from healthy to sick and from wealthy to poor. It introduces new inefficiencies and transaction costs, making universal healthcare unsustainable. The inherent but unstated logic of the NHS Plan is that the private-sector 'partners' will take over the running of hospitals in all but name. There will be a gradual reduction in free, tax-paid services at the point of use. Profits will compete with needs and, as the British experience with railway privatization and long-term care shows, access to services and quality is sacrificed. There is no country in the world that delivers comprehensive, equitable healthcare through the market and on the back of for-profit providers. 
Yet governments across the world are rushing to follow the British path and are dismantling their healthcare systems. They and their citizens are in for a shock. When the market comes to health, access to care will be a lottery decided at the local level. The fear and uncertainty of the past are set to reappear. Allyson Pollock is Head of the Public Health Policy Unit at University College London and Director of Research & Development at UCL Hospitals NHS Trust. She is also Chair of the Society for Social Medicine.
If we were to have human life on Jupiter we would need to be airborne, since there is no solid surface on Jupiter. Also, there is a lot of hydrogen in Jupiter's atmosphere, so you couldn't breathe on Jupiter unless you had a special gas helmet. An average day on Jupiter would be about 9 hours and 56 minutes, which would be hard for the average human to adjust to. Since humans need to sleep about 8 hours we would only have an hour of consciousness before the day is over. That is not true. Humans have to sleep about one third of the time. The 8-hour figure is just because 8 is one third of 24. Barely 10 hours may be a bit of a short cycle, but the fact that humans live north of the Arctic Circle proves that humans can adjust to non-light-triggered sleep cycles.
The term renal artery stenosis (RAS) applies to a cluster of disease conditions with varying etiologies. The most prominent among them are fibromuscular dysplasia and atherosclerotic RAS (ARAS). Renal artery stenosis is a variant of peripheral arterial disease, and ARAS accounts for more than 90% of all cases. ARAS usually occurs in older individuals, may present with hypertension or renal insufficiency, and has an equal prevalence in men and women. In contrast, fibromuscular dysplasia is more often seen in the young, in women, and is usually associated with hypertension without renal insufficiency. Other causes include:
- Vasculitic conditions
- Congenital fibrous bands
- Compression of the renal arteries by extrinsic masses
- Radiation-induced injury
Risk factors for ARAS include a history of heart disease; it is often detected in patients undergoing cardiac catheterization. Renovascular disease is divided into two broad subtypes: renovascular hypertension and ischemic nephropathy.
Signs and symptoms
Renal artery stenosis may be unilateral or bilateral. The underlying mechanism is commonly underperfusion of the kidney because of proximal stenosis of the renal artery, which activates the renin-angiotensin-aldosterone axis. In most cases this causes no symptoms, but some patients develop hypertension, nephropathy and eventually congestive cardiac failure. Signs of a failing kidney may include:
- Atrophy of one kidney
- Unexplained rapid-onset pulmonary edema
- Atherosclerosis in many leg or heart vessels
RAS may lead to any of the following:
- High blood pressure
- Congestive cardiac failure
- Nephropathy
Diagnosis and treatment
Diagnosis is based on the history backed up by imaging tests, preferably ultrasound scan, with renal angiography or CT angiography (CTA) as required. Treatment is mainly supportive, with antihypertensives and good control of cholesterol and blood sugar. 
Other measures include smoking cessation and control of cholesterol levels. Antiplatelet drugs may be prescribed. Renal revascularization is a final option. Even small accessory renal arteries can be detected by CTA because of its high spatial resolution. It is also preferred for patients who have implanted devices, for patients with limited breath-hold capacity (requiring shorter acquisition times), and for patients with claustrophobia. However, CTA has less specificity than MRA for detecting hemodynamically significant ARAS. It cannot be used safely in patients with borderline renal dysfunction because of the necessity of iodinated contrast agents. Images obtained with CTA are difficult to interpret in heavily calcified arteries, and CTA requires use of ionizing radiation. Magnetic resonance angiography has a reported sensitivity and specificity of 90% to 100% and does not require use of iodinated contrast or radiation. MRA should not be used in patients with certain implanted devices (i.e., pacemakers, defibrillators, cochlear implants and spinal cord stimulators) or in claustrophobic patients. In addition to assessing the severity of ARAS, angiography can detect intrarenal vascular abnormalities and anatomic abnormalities of the kidneys, renal arteries and aorta. Digital subtraction angiography improves contrast resolution and may decrease the volume of contrast needed to as little as 15 mL, though angiography carries risks of arterial trauma, spasm and thromboembolic phenomena. Use of angiotensin-converting enzyme inhibitors and angiotensin receptor blockers to inhibit the renin-angiotensin system is recommended for controlling hypertension and for reducing clinical events in those with known cardiovascular disease. Patients with uncontrolled renovascular hypertension despite optimal medical therapy, ischemic nephropathy, or cardiac destabilization syndromes who have severe RAS are likely to benefit from renal artery revascularization. 
When revascularization is deemed appropriate, atherosclerotic RAS is most often treated with stent placement. However, patients with fibromuscular dysplasia are usually treated with balloon angioplasty. Patients with impaired renal function can develop contrast-induced nephropathy if iodinated contrast is used but generous fluid hydration before contrast administration can effectively prevent this complication. Almost 27% of the patients had progressive renal failure because of the inevitable loss of renal mass. Reviewed by Yolanda Smith, BPharm
Cyclic Moisture Resistance Testing
Cyclic moisture testing is performed for the purpose of evaluating, in an accelerated manner, the resistance of component parts and materials to the deleterious effects of the heat and high humidity which are typical of tropical environments. Most tropical degradation results directly or indirectly from the absorption of moisture by vulnerable insulating materials and from the surface wetting of metals and insulation. These phenomena produce many types of deterioration in constituents of materials, including corrosion of metals and detrimental changes in related electrical properties. The cyclic moisture resistance test, as performed per MIL-STD-883, test method 1004, differs from the steady-state humidity test and derives added effectiveness from its employment of slow temperature cycling, which provides alternate periods of near-condensation and drying. In addition, it produces a breathing action of moisture into non-hermetic packages. Cyclic moisture resistance testing also includes low-temperature “sub-cycles” that act as an accelerant to reveal otherwise indiscernible evidence of deterioration, as stresses caused by freezing moisture tend to widen cracks and fissures. The resultant deterioration can then be detected by the measurement of electrical characteristics. Cyclic moisture testing is typically run as ten 24-hour cycles. Each cycle contains two 8-hour linearly-ramped excursions at 90% to 95% humidity from 25°C to 65°C (soak temperature) and back down to 25°C. Every 65°C soak is 3 hours long and the down-ramp has a relaxed humidity specification of 80% to 90%. Typical requirements for the ten-day test are a minimum of five -10°C sub-cycles. The procedure starts with a subcycle consisting of one 8-hour excursion (humidity uncontrolled) linearly ramped from 25°C to -10°C (soak temperature), then back up to 25°C; each -10°C soak is 3 hours long with ramp times of 1.5 hours. 
It then proceeds into a subcycle of two 8-hour 25°C-to-65°C-to-25°C excursions, which continues into the remaining 8-hour portion of the cycle. All testing is monitored for accuracy.
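The schedule described above can be sketched as data. The following is a minimal, hypothetical Python model of the ten-day profile; the way each 8-hour excursion splits around its stated 3-hour soak, and what fills the remaining hours of a non-sub-cycle day, are illustrative assumptions rather than language from MIL-STD-883 Method 1004:

```python
# Hypothetical sketch of the ten-day cyclic moisture schedule described above.
# Durations the text does not pin down are assumptions, marked in comments.

def hot_excursion():
    """One 8-h 25->65->25 C excursion: RH 90-95% on the up-ramp and soak,
    relaxed to 80-90% on the down-ramp (per the description above)."""
    return [
        # (segment, hours, (start_C, end_C), (rh_min, rh_max))
        ("ramp_up",   2.5, (25, 65), (90, 95)),   # assumed split: 2.5 h up,
        ("soak",      3.0, (65, 65), (90, 95)),   # 3-h soak (stated),
        ("ramp_down", 2.5, (65, 25), (80, 90)),   # 2.5 h down
    ]

def cold_subcycle():
    """One 8-h 25->-10->25 C sub-cycle, humidity uncontrolled (None):
    1.5-h ramps and a 3-h soak as stated; a 2-h dwell is assumed to fill 8 h."""
    return [
        ("ramp_down", 1.5, (25, -10), None),
        ("soak",      3.0, (-10, -10), None),
        ("ramp_up",   1.5, (-10, 25), None),
        ("dwell",     2.0, (25, 25), None),       # assumption
    ]

def build_schedule(n_cycles=10, n_subcycles=5):
    """Ten 24-h cycles; the first n_subcycles begin with a -10 C sub-cycle."""
    schedule = []
    for i in range(n_cycles):
        if i < n_subcycles:
            cycle = cold_subcycle() + hot_excursion() + hot_excursion()
        else:
            # Two stated hot excursions; the remaining 8 h are assumed here
            # to be a humidity-controlled dwell at 25 C.
            cycle = hot_excursion() + hot_excursion() + [
                ("dwell", 8.0, (25, 25), (90, 95))
            ]
        schedule.append(cycle)
    return schedule
```

Summing segment durations confirms that each cycle covers 24 hours and that five of the ten cycles include the low-temperature sub-cycle, matching the minimum requirement stated above.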
(1 of 8 in a series) One of the best ways to understand what changed between the Old and New Testaments, between the Old and New Covenants, is to explore, study, and evaluate what exactly Jesus did when He ushered in the kingdom of God. We often comprehend the meaning of Jesus’ first advent totally around His substitutionary death on the cross. But we cannot separate Jesus’ delivery of a new covenant, a new arrangement with God that was made true by His death and resurrection from His broad description of initiating the kingdom of God on earth. The two are inseparable. When we accept the new arrangement with God that Jesus wrought on the cross as our substitute, when we embrace the gospel message of Jesus Christ, we become citizens of His kingdom immediately in the here and now. Jesus began His earthly ministry with this proclamation, “The time is fulfilled, and the kingdom of God is at hand; repent and believe in the gospel.” (Mk 1:15). That Jesus announced the kingdom of God has arrived should be of no surprise to us looking back since we believe that Jesus is indeed the Messiah King promised as the one to come by various Old Testament prophets. However, in real time 30 AD, it soon became apparent that Jesus was not fulfilling the Old Testament prophecy as His contemporaries expected. Their familiarity with kings and kingdoms involved political and military might, subjection of populations, and ruling with power. Since Jesus avoided these power structures altogether, what kind of kingdom could He be proclaiming? And what is its nature? One of the more perplexing aspects of Jesus’ kingdom is its secret nature. This nature is brought into sharper focus as we investigate the Old and New Testament timeline in the announcement of the new kingdom; the kingdom of God, the kingdom of heaven, the kingdom of His beloved Son as it is variously called. In the Old Testament, proclamation was made loud and clear that a Messiah is coming. 
And this Messiah carries with Him a strong political significance. Quoting the “Messiah” entry in The Zondervan Pictorial Bible Dictionary, we read, “[The Messiah] is to destroy the world powers in an act of judgment, deliver Israel from her enemies, and restore her as a nation. The Messiah is the King of this future kingdom to whose political and religious domination the other nations will yield. His mission is the redemption of Israel and His dominion is universal. This is the clear picture of the Messiah in practically all of the Old Testament passages which refer to Him.” In essence, The Messiah was to come with power and bring deliverance, judgment, and restoration. His future coming was called “the Day of the Lord,” and this proclamation, taking various forms and spokesmen, is a prominent theme throughout the Old Testament. Joel 1:15, 2:1,11,31, 3:14, Amos 5:18, Zeph. 1:14-16, and Mal. 4:1,5 use this phrase – the Day of the Lord – with various adjectives such as great, awesome, and terrible. The summary of the Old Testament prophecies regarding the Messiah King is that He would appear at the great and awesome Day of the Lord. With this background of Old Testament prophecy regarding the nature of the Messiah’s coming, we will continue our timeline next post with the arrival of the last of the “Old Testament” prophets; John the Baptist.
Discs (disks) act as cushions between the vertebrae in your spine. They’re composed of an outer layer of tough cartilage that surrounds softer cartilage in the center. A bulging disc extends outside the space it should normally occupy. The bulge typically affects a large portion of the disc, so it may be seen as an overflowing structure over the vertebral bone above and beneath the disc. The part of the disc that’s bulging is typically the tough outer layer of cartilage. Bulging usually is considered part of the normal aging process of the disc. A herniated disc, on the other hand, results when a crack in the tough outer layer of cartilage allows some of the softer inner cartilage to protrude out of the disc. Herniated discs are also called ruptured discs or slipped discs. Bulging discs are more common and usually cause no pain. Herniated discs are more likely to cause pain, but some cause no pain whatsoever. If any of the above conditions concern you and you would like more information, please call or schedule an appointment. Dr. JP Silvera, DC
On the face of it, it looks pretty embarrassing. A recent Unicef report on the well-being of children in affluent countries suggested Canada’s childhood immunization rate was stunningly low – near the bottom of a list of more than 30 countries. The report, which used data provided by the countries themselves, said only 84 per cent of Canadian children had the appropriate number of doses of vaccine for measles, polio and DPT3 – that’s the three-dose diphtheria, pertussis and tetanus vaccine – for children between the ages of 12 and 23 months. In this day and age, is it possible that 16 per cent of Canadian children are either undervaccinated or unvaccinated? And why would Canada’s rates be lower than those of Britain, the home of the modern anti-vaccination movement, or the United States, where the seeds of vaccine rejection have fallen on fertile soil? It turns out there are really no good answers to those questions. That’s because, though Canadian governments spend oodles to protect children against avoidable diseases that can sicken, maim or kill, they collectively do not gather data on the delivery of those vaccines in ways that are useful for assessing the reach and efficacy of the programs. To put it more simply: Canada doesn’t have a national vaccination registry, so no one is really sure which children have been vaccinated against which diseases. While it seems likely there are communities or neighbourhoods where clusters of unvaccinated children are like dry tinder waiting for a flying ember to ignite an outbreak – someone arriving back from abroad with measles, say – often public health officials can only really guess at where they are. “My first reaction to the Unicef report was: Well, how do they know? Yeah, it looks bad, but how do they know it’s that low?” says Dr. Natasha Crowcroft, chief of infectious diseases at Public Health Ontario. 
Crowcroft’s comment serves both as a defence of the state of vaccination uptake in Canada and an indictment of the way Canadian jurisdictions record it. The gut reaction of many involved in immunization policy in Canada is that the figure in the Unicef report probably does not accurately reflect the immunization status of Canadian children. If rates were that low, Canada would be having more outbreaks of measles, mumps, rubella, and other vaccine-preventable diseases, they say. Because Canada doesn’t have a national vaccine registry – or even a full set of provincial and territorial registries – to draw on, when Unicef asks Canada for immunization estimates for its report (which is issued every two years), Canada must resort to a telephone survey. The 2009 survey, which was used for the most recent Unicef report, was done by a commercial polling company and drew on information from 5,000 households, says Dr. John Spika, director general of the Public Health Agency of Canada’s centre for immunizations and respiratory infectious diseases. The 2011 survey, which was conducted by Statistics Canada, had a stronger methodology, Spika says. The results are still being analyzed, and he won’t reveal a figure, but he says the number was better. “I can tell you that the results that we’re looking at from that methodology put us clearly in the top tier of the countries that are listed.” Spika is also a bit dubious of how the data for the other countries was gathered, noting that some which have had large outbreaks of measles and other vaccine-preventable diseases in recent years are well above Canada on the list. “When you look at some of the countries that ranked higher than Canada – Romania, France, U.K., Germany, Switzerland – where have the big measles outbreaks been in the last couple of years? And Bulgaria ... their rates are 96 per cent or so. 
Bulgaria had a huge measles outbreak, what two, three years ago.” A decade ago, people in public health might have predicted Canada would have a better handle on childhood immunization status by now. In the wake of the 2003 SARS outbreak, provincial and territorial governments knew they had a problem gathering key health information in ways that would allow it to be shared across borders. That had become all too apparent during SARS. With public health needs suddenly a national priority, governments across the country talked about building a system where information could be shared across jurisdictions. For a brief moment, it was thought the program, called Panorama, might serve as a platform for a national vaccination registry, says Spika. But Canada’s decentralized health-care system got the better of those dreams. And the idea of one registry gave way to the notion of a collection of provincial and territorial registries, housed within the Panorama system. Ten years later, that still hasn’t come into being. Five provinces and one territory have some form of electronic registry, some within and others outside of the Panorama system, Spika says. Two others – Ontario and Quebec – have committed to moving towards a Panorama-compatible registry. But progress has been slow. Part of the problem has been differences in delivery systems. In some provinces, public health nurses give the majority of childhood vaccines, making it easier to gather data. In others, children may get their shots from a family doctor, a public health nurse, a pharmacist, or at a school-based clinic. That diversity of providers may provide convenience for parents, but it’s a challenge for those trying to gather data. “I think everyone’s really looking forward to an integration of immunization data that would source from whatever provider gave that vaccine – whether it’s a pharmacist ... or a public health nurse or a family doctor or a First Nations health-care provider. 
They would be entering it into an electronic health record of some kind and it would automatically go where it needs to go,” Naus says. “That’s what hasn’t happened yet, anywhere, as far as I’m aware.” Why does it matter? For starters, consider the Unicef report. Without registries, provinces and territories cannot say for sure how many of their children are vaccinated and how many are not. Nor can they say if the vaccinated kids got their shots at the right time, or if some followed alternate schedules that appear to be growing in popularity among some vaccine-hesitant parents. Alternate schedules are not advised by the experts who recommend immunization policies.
Feeding and swallowing disorders (also known as dysphagia) include difficulty with any step of the feeding process—from accepting foods and liquids into the mouth to the entry of food into the stomach and intestines. A feeding or swallowing disorder includes developmentally atypical eating and drinking behaviors, such as not accepting age-appropriate liquids or foods, being unable to use age-appropriate feeding devices and utensils, or being unable to self-feed. A child with dysphagia may refuse food, accept only a restricted variety or quantity of foods and liquids, or display mealtime behaviors that are inappropriate for his or her age. Dysphagia can occur in any phase of the swallow. Although there are differences in the relationships between anatomical structures and in the physiology of the swallowing mechanism across the age range (i.e., infants, young children, adults), typically, the phases of the swallow are defined as:

Oral Preparation Stage—preparing the food or liquid in the oral cavity to form a bolus, including sucking liquids, manipulating soft boluses, and chewing solid food.

Oral Transit Phase—moving or propelling the bolus posteriorly through the oral cavity.

Pharyngeal Phase—initiating the swallow; moving the bolus through the pharynx.

Esophageal Phase—moving the bolus through the cervical and thoracic esophagus and into the stomach via esophageal peristalsis (Logemann, 1998).
Principles of Analog Electronics Publisher: CRC Press | 2014 | ISBN: 1466582014 | 567 pages | PDF | 11 MB In the real world, most signals are analog, spanning continuously varying values. Circuits that interface with the physical environment need to be able to process these signals. Principles of Analog Electronics introduces the fascinating world of analog electronics, where fields, circuits, signals and systems, and semiconductors meet.
to all schools in Denmark. General education had long been at a relatively higher level, and the lower classes in particular had better educational opportunities. The resulting difference between the two countries is perhaps most strikingly illustrated in the respective conditions of their agricultural populations. For a long period Danish agriculturalists learned much from the south, but during the last generation they themselves, in turn, have exerted an influence extending far beyond the boundaries of their country. Well equipped both technically and intellectually, they have created a co-operative movement which has improved agricultural production to a remarkable degree. It is also characteristic that the labour movement, notwithstanding the fact that Denmark acquired her principles of socialism from Marx, Lassalle, and other practical and theoretical German leaders, has developed on saner lines and with greater strength here than in Germany. Early Condition of the Danish Peasants Some three hundred years ago a French writer, Pierre d'Avity, in reviewing the living conditions and the character of the natives of various countries, said of the Jutlanders: 'They are a strong people who eat and drink a great deal; they are provident and clever and cling to their own; they are quarrelsome, suspicious, and irascible, and fight stubbornly in defence of their opinions.' If this judgement is true to fact, it is difficult to think of the Danish peasants of that day as members of a cowed and oppressed class, in spite of all the burdens that were imposed upon them. From the Middle Ages until late in the eighteenth century there was a continuous change to the disadvantage of the peasants. Freeholds gradually disappeared and were replaced by leaseholds. On taking over a patch of land the leaseholder had to pay a premium to the landowner or lord, besides which he had to make a yearly payment called Landgildet (ground-rent), which in most cases was an incommutable
By Shelley Preston The Indian River Lagoon (IRL), one of the world’s most diverse estuaries, has always offered a spectacular backdrop to the student experience at Florida Tech for both recreation and marine studies. But years of poorly managed human activities have brought the IRL close to collapse. With a massive fish kill earlier this year and frequent algal blooms, scientists say the lagoon is in peril. Thanks to a lobbying effort to bring attention to the wounded waters, government agencies have promised over $4 million to Florida Tech researchers to pinpoint the sources of the lagoon’s problems and, hopefully, find real solutions for the once-thriving estuary. One major research effort is examining a big problem below the lagoon’s surface: muck. Throughout the IRL, thick pockets of black, viscous goo made from decaying organic matter such as suburban yard waste not only prevent sea grass from growing where they settle, but saturate the lagoon with the nitrogen and phosphorus that feed algae. Dredging the muck out of the lagoon is one option. Florida Tech scientists are currently monitoring a dredging operation in Turkey Creek and studying its impact on native plants and animals. On the engineering side, Florida Tech researchers are investigating ideas such as a weir that would flush out the lagoon with periodic infusions of fresh seawater, ways to prevent contaminated storm water from reaching the lagoon in the first place, and aeration systems for when oxygen dips to fish-kill levels. One of those engineers, university research professor Tom Waite, says, “Florida Tech is in an excellent position to make a difference. Not only are we looking at the lagoon’s problems as scientists, but we have the resources to come up with engineering solutions.” And, as a service to the community, more than 20 Florida Tech faculty members formed the Indian River Lagoon Research Institute. 
The group offers the public expertise on developing sustainable solutions for the revitalization and care of the Indian River Lagoon. Find them here: http://research.fit.edu/irlri
Over the past 20 years the costs of natural disasters have escalated significantly, with the lives of over 800 million people disrupted. With the growth in world population, there is an urgent need to understand the potential threats posed by natural hazards and to ascertain the best ways of mitigating their damaging effects. Part 1 Climatic and Atmospheric Hazards: Evaluation of climatic change through harmonic analysis; R. Rodriguez, M.C. Llasat, E. Rojas. Some characteristics of typhoons as revealed by the recent SSM/I microwave radiometry; G.V. Rao. Structure of prefrontal convective rainband in northern Taiwan determined from Dual-Doppler data; Y.-J. Lin, R. Pasken, H.-W. Chang. Part 2 Hydrological Hazards: Recent floods in Bangladesh - Possible causes and solutions; Md. Khalequzzaman. Meteorological factors associated with floods in the North-Eastern part of the Iberian Peninsula; M.C. Llasat, M. Puigcerver. Hydrological response to radar rainfall maps through a distributed model; I. Becchi, E. Caporali, E. Palmisano. Simulation and modelling of rainfall radar measurements for hydrological application; D. Giuli, L. Baldini, L. Facheris. Part 3 Storm Surges: Storm waves in the Canadian Atlantic - a numerical simulation; M.L. Khandekar. Storm surge mitigation through vegetation canopies; M.B. Danard, T.S. Murty. Numerical simulation and prediction of storm surges and water level in Shanghai harbour and its vicinity; Z. Qin, Z. Sher, K. Xu, Y. Wang, Y. Duan. Part 4 Geological Hazards: Mass movements in hilly areas (with examples from Nigeria); A.E. Scheideger, D.E. Ajakaiye. Characteristics and mitigation of the snow avalanche hazard in Khagan Valley, Pakistan Himalaya; F.A. de Scally, J.S. Gardner. Seismic hazard analysis with randomly located sources; M.S. Yucemen, P. Gulkan. Regional fracture analysis south of latitude 20 N of Egypt and their influence on earthquakes; A.F. Kamel. Seismic hazards in Bulgaria; I. Stanishkova, D. Slejko. 
Comparison of different approaches to seismic hazard assessment; L. Peruzza, D. Slejko.
ALS, for those of you who don’t know, is a degenerative disease that targets the all-important nervous system. Otherwise known as amyotrophic lateral sclerosis, or Lou Gehrig’s Disease, this pressing disease is affecting more people as time passes. Even if you have not been diagnosed with the condition, there are many crucial tidbits of information that everyone should be aware of about this medical threat. We’re here to fill you in! Don’t forget to come back for part two of this article, coming soon to reveal the top eight crucial things you didn’t know about ALS!

Number Fifteen: The Ice Bucket Challenge For those of you who partook in the challenge, or remember your social media feeds blowing up with videos from the Ice Bucket Challenge, you have already been introduced to ALS. This trending challenge was created as a fundraiser, in order to create more widespread awareness of this pressing disease.

Number Fourteen: The Effect on the Body Those who experience the onset of this disease are put through an unimaginably frustrating and painful ordeal. The disease targets nerve cells in the brain and spinal cord, leading its victims to a life of progressively increasing impairment of basic bodily movements. Over time, it becomes more difficult for its victims to move their arms, legs, and even face.

Number Thirteen: The Effect on the Mind Much research has been done on how ALS may affect a patient’s cognitive skills, but it is widely accepted that the disease does not hinder intelligence. However, affected persons tend to be more prone to developing depression, or may experience a decline in memory and decision-making abilities.

Number Twelve: It is Genealogically Spontaneous When it comes to inheriting the disease, it is more likely for a person who has no family history of the disease to develop it. Of all of the documented cases, only about five to 10 percent of patients have observed the disease in their family history. 
All that is known of the correlations to its development is that it is much more likely among military veterans, especially those deployed during the Gulf War.

Number Eleven: Risk Factor Other than this particular group of military veterans, the disease has been known to be more prominent in men and people of Caucasian origin. Every year, an estimated 5,600 new cases are discovered. The disease is 20% more likely to be diagnosed in men than in women, and of all of the cases known today, roughly 93% of affected persons have been white.

Number Ten: Risk with Age ALS is primarily observed in older people, mainly between the ages of 60 and 69 years. However, it is entirely possible to develop symptoms at any age. In fact, the man who began the Ice Bucket Challenge was diagnosed in 2012 at the young age of 29.

Number Nine: ALS Symptoms Take Time The symptoms of ALS may take quite a bit of time to notice. Affected persons don’t simply wake up one morning unable to control their limbs; the degeneration takes place over a prolonged period of time. Most people don’t notice the early signs of its onset, but it may be indicated by cramps, stiff muscles, twitching, or decreased function in chewing or swallowing. Eventually, most people affected with ALS will die from the inability to breathe.

Don’t forget to come back for part two of this article, coming soon to reveal the top eight crucial things you didn’t know about ALS!
Easter Island’s famously enigmatic statues still challenge rational explanation. This tiny Polynesian island is thousands of miles both from the coast of South America and its nearest inhabited Pacific neighbour, Pitcairn. But what makes Easter Island legendary are nearly 900 moai – monolithic human figures – carved from volcanic rock and dotted around the island. A visit to Easter Island is one of the world’s great treasure hunts. Moai, many of which are displayed on ahu (stone platforms), are found all around the island’s coast and interior. Some questions remain about the moai: why they were carved in such volume, how they were erected and why so many were toppled. The moai are thought to represent the faces of ancestors and to have been moved on wooden rollers or by rocking from side to side; many were pulled to the ground during civil strife on the island, possibly caused by deforestation.

Exploring Easter Island Visitors can explore on foot, horseback or in a 4WD. Once you’ve seen the moai and the quarries at Rano Raraku from which they were carved, there are clear waters to scuba dive in and mighty waves to surf from beautiful Playa Anakena. Easter Island (or Rapa Nui, to give it its Polynesian name) can be visited year-round. It is only accessible by Lan flights from Santiago (four a week) and from Papeete, Tahiti (twice weekly).

Budgeting for your trip On the island you’ll pay western prices for accommodation, food and tours. But don’t be put off. Most visitors to Easter Island manage the cost of getting here by including it on a round-the-world air ticket – but make sure you overnight rather than just make a refuelling stop or you won’t get out of the airport.

© 2009 Lonely Planet Publications Pty Ltd
NASA's Mars Curiosity rover has measured a tenfold spike in methane, an organic chemical, in the atmosphere around it and detected other organic molecules in a rock-powder sample collected by the robotic laboratory's drill. NASA researchers, including some from JPL, will present new findings on a wide range of Earth and space science topics next week at the annual meeting of the American Geophysical Union in San Francisco. Persistent computer resets and "amnesia" events on NASA's Mars Exploration rover Opportunity that have occurred after reformatting the robot's flash memory have prompted a shift to a working mode that avoids use of the flash data-storage system. A spacecraft built for humans left the domain of low-Earth orbit Friday for the first time in 42 years when NASA's first Orion soared 3,604 miles above Earth and returned safely hours later, having accomplished a flawless flight test as part of NASA's journey to Mars. The Committee on Space Research (COSPAR), an international scientific organization, will have its 2018 meeting in Pasadena, California, hosted by the California Institute of Technology and supported by NASA's Jet Propulsion Laboratory. Two NASA and one European spacecraft that obtained the first up-close observations of a comet flyby of Mars on Oct. 19, have gathered new information about the basic properties of the comet's nucleus and directly detected the effects on the Martian atmosphere. NASA’s new Orion spacecraft received finishing touches Thursday, marking the conclusion of construction on the first spacecraft designed to send humans into deep space beyond the moon, including a journey to Mars that begins with its first test flight Dec. 4. NASA has awarded five-year grants totaling almost $50 million to seven research teams nationwide, including one from the agency's Jet Propulsion Laboratory in Pasadena, California, to study the origins, evolution, distribution and future of life in the universe. NASA will host a briefing at 11 a.m. 
PDT (2 p.m. EDT) Thursday, Oct. 9, to outline the space and Earth-based assets that will have extraordinary opportunities to image and study a comet from relatively close range to Mars on Sunday, Oct. 19. In collaboration with NASA’s Jet Propulsion Laboratory in Pasadena, California, Pacific Gas and Electric Company (PG&E) announced that it is testing state-of-the-art technology adapted from NASA’s Mars rover program. Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and X-rays produced in a solar flare - which can reach Earth at the speed of light in eight minutes - coronal mass ejections are giant clouds of solar material that take one to three days to reach Earth. NASA's Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft successfully entered Mars' orbit at 10:24 p.m. EDT Sunday, Sept. 21, where it now will prepare to study the Red Planet's upper atmosphere as never done before. Media representatives are invited to NASA's Goddard Space Flight Center Visitor Center on Sept. 21 from 8:30 p.m. to 11 p.m. EDT, for local coverage and interviews of the orbit insertion countdown activities for the Mars Atmosphere and Volatile Evolution, or MAVEN, mission. NASA will host a televised media briefing at 1 p.m. EDT, Wednesday, Sept. 17, to outline activities around the Sunday, Sept. 21 orbital insertion at Mars of the agency's Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft. NASA's Low-Density Supersonic Decelerator (LDSD) project successfully flew a rocket-powered, saucer-shaped test vehicle into near-space in late June from the U.S. Navy's Pacific Missile Range Facility on Kauai, Hawaii. The goal of this experimental flight test, the first of three planned for the project, was to determine if the balloon-launched, rocket-powered, saucer-shaped design could reach the altitudes and airspeeds needed to test two new breakthrough technologies destined for future Mars missions. 
NASA's Opportunity Mars rover, which landed on the Red Planet in 2004, now holds the off-Earth roving distance record after accruing 25 miles (40 kilometers) of driving. The previous record was held by the Soviet Union's Lunokhod 2 rover. NASA has issued a Request for Information (RFI) to investigate the possibility of using commercial Mars-orbiting satellites to provide telecommunications capabilities for future robotic missions to the Red Planet. Repeated high-resolution observations made by NASA's Mars Reconnaissance Orbiter (MRO) indicate the gullies on Mars' surface are primarily formed by the seasonal freezing of carbon dioxide, not liquid water. NASA's Low-Density Supersonic Decelerator (LDSD) project plans to fly its rocket-powered, saucer-shaped landing technology test vehicle into near-space from the U.S. Navy's Pacific Missile Range Facility (PMRF) on Kauai, Hawaii, later this week. NASA's Mars Curiosity rover will complete a Martian year -- 687 Earth days -- on June 24, having accomplished the mission's main goal of determining whether Mars once offered environmental conditions favorable for microbial life. On Thursday, June 19, NASA will host a televised update on recent progress and upcoming milestones in the agency's efforts to identify, capture and relocate an asteroid, and send astronauts to take samples of it in the 2020s. NASA's Low-Density Supersonic Decelerator (LDSD) project will fly a rocket-powered, saucer-shaped test vehicle into near-space next week from the U.S. Navy's Pacific Missile Range Facility in Kauai, Hawaii. Researchers have discovered on the Red Planet the largest fresh meteor-impact crater ever firmly documented with before-and-after images. The images were captured by NASA's Mars Reconnaissance Orbiter. 
A mission overview briefing about NASA's upcoming flight test of the Low-Density Supersonic Decelerator (LDSD) experiment will be provided to reporters attending a media day on Monday, June 2, at the U.S. Navy's Pacific Missile Range Facility (PMRF) on Kauai, Hawaii. The public can watch the briefing via live streaming, at 11 a.m. PDT (2 p.m. EDT/8 a.m. HST). In its sixth Martian winter, NASA's Mars Exploration Rover Opportunity now has cleaner solar arrays than in any Martian winter since its first on the Red Planet, in 2005. Cleaning effects of wind events in March boosted the amount of electricity available for the rover's work. Scientists using NASA's Curiosity Mars rover are eyeing a rock layer surrounding the base of a small butte, called "Mount Remarkable," as a target for investigating with tools on the rover's robotic arm. In support of the President's initiative to graduate one million STEM students over the next decade, NASA is the anchor exhibit in the Aerospace Pavilion on the second floor of the Washington Convention Center, April 25-27. NASA's 60' x 60' booth will feature thirty different hands-on demonstrations designed to engage families and students in fun science and engineering activities. On April 9 reporters got a chance to don "bunny suits" (protective apparel that sometimes makes people look like large rabbits) and enter a NASA clean room at the agency's Jet Propulsion Laboratory in Pasadena, Calif. Some 60 scientists and engineers came together March 26-28, 2014, for the first ExoMars 2018 Landing Site Selection Workshop, held at ESA's European Space Astronomy Centre near Madrid. Their task was to begin the process of drawing up a shortlist of the most suitable landing locations for ESA's first Mars rover. 
A team of scientists at NASA's Johnson Space Center in Houston and the Jet Propulsion Laboratory in Pasadena, Calif., has found evidence of past water movement throughout a Martian meteorite, reviving debate in the scientific community over life on Mars. Researchers have determined the now-infamous Martian rock resembling a jelly doughnut, dubbed Pinnacle Island, is a piece of a larger rock broken and moved by the wheel of NASA's Mars Exploration Rover Opportunity in early January. NASA's Mars Odyssey spacecraft has tweaked its orbit to help scientists make the first systematic observations of how morning fogs, clouds and surface frost develop in different seasons on the Red Planet. Opportunity is up on "Solander Point" at the rim of Endeavour Crater. The rover is continuing to investigate this curious surface rock, called "Pinnacle Island," that apparently was kicked up by the rover during a recent traverse. New findings from rock samples collected and examined by NASA's Mars Exploration Rover Opportunity have confirmed an ancient wet environment that was milder and older than the acidic and oxidizing conditions told by rocks the rover examined previously. Opportunity is positioned on the edge of an exposed outcrop where orbital observations suggest the possible presence of small amounts of clay minerals. Opportunity landed on Mars on Jan. 24, 2004 PST (Jan. 25, 2004 UTC) on what was to be a three-month mission, but instead the rover has lived beyond its prime mission and roved the planet for nearly 10 years. The rover is maintaining favorable northerly tilts for energy production.
What is the difference between water flow and water pressure?

Water Pressure – What is Water Pressure? Water pressure is the force that pushes water through pipes. Pressure is needed to get water into homes, businesses and local public services, regardless of whether they are bungalows or tall skyscrapers, and whether they stand on lower or higher ground. Water pressure is measured in ‘bars’: the force needed to raise water 10 metres is equivalent to one bar. Did you know the height of your home can affect water pressure? Homes at the top of a hill may receive a lower pressure than homes at the bottom. Your water pressure can also change depending on the time of day. You will probably find that your water pressure is higher late at night, because less water is being drawn from your water service provider’s network when the majority of people’s taps are turned off. Conversely, pressure is often lower at times when people are taking baths or showers, or filling paddling pools and watering the garden in the summer. The amount of pressure at your tap may depend on how high your water service provider’s reservoir or water tower is above your home, or on how much water is being used by their other customers. It is possible to increase your water pressure by making changes to the internal plumbing in your home or business. A good starting point is to make sure your stop tap is fully open and that any systems that rely on the pressure of water reaching your property are set to the minimum level of one bar (10 metres head).

Water Flow – What is Water Flow? The water flow in your home is the maximum quantity of water you receive, and this depends on the size of the pipe that connects your home to your water service provider. For example, you will most probably get a satisfactory supply of water from a single tap connected to a small pipe. 
If, however, you connect many taps and appliances to this same small pipe and try to run them at the same time, there would probably not be enough water at each point. This means you would have a 'low water flow' - perhaps just a trickle of water coming out of the taps. In older properties, water pipes are half an inch (approx. 15 mm) in diameter, which at the time was enough to supply water to a group of properties. When modern appliances such as dishwashers, washing machines and power showers are used, the amount of water they draw can cause low-flow problems. The first appliance turned on draws most or all of the water from the pipe, and there will not be enough for any other taps or appliances you may want to use at the same time. This can be a problem for people living in converted houses. Usually the ground floor receives adequate flow, but higher floors sometimes benefit from the installation of additional water pumping arrangements. Properties built more recently usually have bigger pipes - 25 mm (outside diameter). This means the water flow will be a lot higher, so there is enough for numerous appliances to be used at once.
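The two rules of thumb above (one bar of pressure for roughly every 10 metres of head, and flow capacity growing with pipe size) can be sanity-checked with a little arithmetic. Below is a minimal Python sketch; the water density and gravity constants are standard physics values rather than figures from the article, and the pipe comparison treats the quoted 15 mm and 25 mm figures as bore diameters, ignoring wall thickness:

```python
import math

# Hydrostatic pressure: P = rho * g * h.
# Assumed constants (not from the article): fresh water ~1000 kg/m^3, g ~9.81 m/s^2.
RHO = 1000.0            # density of fresh water, kg/m^3
G = 9.81                # gravitational acceleration, m/s^2
PA_PER_BAR = 100_000.0  # pascals in one bar

def head_to_bar(head_m: float) -> float:
    """Pressure in bar exerted by a water column `head_m` metres tall."""
    return RHO * G * head_m / PA_PER_BAR

def pipe_area_mm2(diameter_mm: float) -> float:
    """Cross-sectional area of a pipe bore in mm^2, ignoring wall thickness."""
    return math.pi * (diameter_mm / 2.0) ** 2

if __name__ == "__main__":
    # 10 m of head works out to just under 1 bar, matching the rule of thumb.
    print(f"10 m of head ~= {head_to_bar(10.0):.2f} bar")

    # At the same flow velocity, a pipe carries water in proportion to its
    # cross-sectional area, which grows with the square of the diameter.
    ratio = pipe_area_mm2(25.0) / pipe_area_mm2(15.0)
    print(f"25 mm vs 15 mm pipe: {ratio:.2f}x the cross-sectional area")
```

The quadratic scaling is why a seemingly modest step from a 15 mm to a 25 mm pipe makes such a difference: nearly 2.8 times the cross-sectional area, and hence capacity for several appliances running at once.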
The Teachers.Net Gazette is a collaborative project published by the Teachers.Net community Kathleen Alape Carpenter Editor in Chief Cover Story by LaVerne Hamlin Effective Teaching by Harry & Rosemary Wong Contributors this month: Dr. Marvin Marshall; Cheryl Sigmon; Barbara & Sue Gruber; Marjan Glavac; Dr. Rob Reilly; Barb S. HS/MI; Ron Victoria; Brian Hill; Leah Davies; Hal Portner; Tim Newlin; Barb Gilman; James Wayne; P.R. Guruprasad; Todd Nelson; Addies Gaines; Pat Hensley; Alan Haskvitz; Joy Jones; and YENDOR.

Want your students to develop high-level communication skills? The ability to arrive at informed judgments? The ability to function in a global community? Flexibility, persistence, and resourcefulness? Try Problem-Based Learning. by Hal Portner Regular contributor to the Gazette March 1, 2008

We are continually faced with a series of great opportunities brilliantly disguised as insoluble problems. Problem-Based Learning (PBL) has the potential to help your students acquire these and other skills needed in the 21st century. PBL is a set of instructional strategies and techniques characterized by the use of ‘real world’ problems as a context for students to learn critical thinking and problem-solving skills while acquiring essential concepts of the curriculum.

Here is the PBL process. You present your students with a predicament, dilemma, or similar problem-case. The students, in groups, organize their ideas and previous knowledge related to the problem, and attempt to define its nature. Throughout their discussion, students pose questions to each other and you, their teacher, on aspects of the problem they do not understand. These issues are recorded by the group. You encourage students to define what they know, and more importantly, what they don’t know. Students rank, in order of importance, the issues generated. They decide which questions or issues will be followed up by their whole group. 
They also determine which can be assigned to individuals who will later share with the entire group. You and your students discuss what resources will be needed in order to research the issues and where they could be found. When students reconvene, they summarize and integrate their findings into the context of the problem. They continue to define new issues as they progress through the problem and, in the process, learn that learning is an ongoing process, with new issues to be explored.

What is your role as the teacher in PBL? In PBL, you act as facilitator and mentor. Ideally, you guide, probe and support students’ initiatives, not lecture, direct or provide easy solutions. However, the degree to which you make the process student-directed versus teacher-directed is your decision, based on the size of the class and the maturity of the students. The goal is, of course, to have your students take responsible roles in their own learning.

A critical factor in the success of PBL is the problem itself. In next month’s Gazette, I will discuss the characteristics of good PBL problems and provide some examples. Meanwhile, here are a couple of related web sites you may want to check out.

Hal Portner is a former K-12 teacher and administrator. He was assistant director of the Summer Math Program for High School Women and Their Teachers at Mount Holyoke College, and for 24 years he was a teacher and then administrator in two Connecticut public school districts. From 1985 to 1995, he was a member of the Connecticut State Department of Education’s Bureau of Certification and Professional Development, where, among other responsibilities, he served as coordinator of the Connecticut Institute for Teaching and Learning and worked closely with school districts to develop and carry out professional development and teacher evaluation plans and programs.
Portner writes, develops materials, trains mentors, facilitates the development of new teacher and peer-mentoring programs, and consults for school districts and other educational organizations and institutions. In addition to Mentoring New Teachers, he is the author of Training Mentors Is Not Enough: Everything Else Schools and Districts Need to Do (2001), Being Mentored: A Guide for Protégés (2002), Workshops that Really Work: The ABCs of Designing and Delivering Sensational Presentations (2005), and editor of Teacher Mentoring and Induction: The State of the Art and Beyond (2005) – all published by Corwin Press. He holds an MEd from the University of Michigan and a 6th-year Certificate of Advanced Graduate Study (CAGS) in education administration from the University of Connecticut. For three years, he was with the University of Massachusetts EdD Educational Leadership Program.
This simple and inexpensive art activity will keep your little one entertained on a warm summer day. It is probably best to do this outside though, as it can get rather messy!

It took a while for bub to get going with this activity – I was too keen to get started so the popsicles hadn’t melted yet. I also think the ice was a strange sensation for him. Once he got used to it, however, he was fascinated by the bright coloured wiggly lines that appeared on the paper as he doodled. It wasn’t long before the floor became his canvas and he started drawing here and smudging with his hand there. The funniest moment was when he began trailing the ice cube across his feet, experimenting with the cold. By the time we had finished, it wasn’t just the floor that was multi-coloured! It is likely that they will try to eat the paint so do keep a watchful eye! Have fun 🙂

You will need
– Washable paint or food colouring (we used paint)
– Ice cube tray
– Wooden lolly sticks
– Glitter (optional)
– Paper

Making the paint popsicles
1. Add different colours of paint or food colouring to the ice cube tray. If using food dye and you want the colour to be brighter or more intense, add more drops until you achieve the desired colour.
2. Pour water into the ice cube tray – but don’t overfill!
3. Sprinkle various coloured glitter on top of the mixture, if using.
4. Prepare the wooden lolly sticks by cutting them in half.
5. You may wish to freeze the paint a little bit before placing the wooden sticks in, as this means they will come out straighter. I’m afraid the toddler keeps me on my toes whilst trying to prepare such activities so I just plonked the handles in at any angle!
6. Pop the ice cube tray in the freezer.
7. When you’re ready to use the frozen paint, place the tray in the sun and allow them to melt a little before removing them.
8. I recommend taping the paper to the floor so your little one can focus on painting. Or, if you’re brave enough, forego the paper altogether!
What your little one will learn
– Self-expression with paint
– Hand-eye coordination and control
– Sensory experience of exploring paint, textures and prints
– Experimenting and exploring a new art tool
– Fine motor development; holding the sticks strengthens the fingers and hand muscles
– All the senses are involved: seeing, smelling, hearing, touching and, if you use edible paint, tasting!
Goals and Philosophy

Internet computing has already developed into a vast area that no one individual can hope to understand fully. However, because of its obvious practical importance, many people need to understand enough of Internet computing to be able to function effectively in their work. This need is not addressed by any existing source. Typical books and articles concentrate on narrow topics. Existing sources have the following limitations:
- Those targeted at practitioners tend to discuss specific tools or protocols but lack a discussion of the concepts and how they relate to the subject broadly.
- Those targeted at managers are frequently superficial or concentrated on vendor jargon.
- Those targeted at students cover distinct disciplines corresponding to college courses, but sidestep much of current practice. There is no overarching vision that extends across disciplines.
- Those targeted at researchers are of necessity deep in their specialties, but provide only a limited coverage of real-world applications and of other topics of Internet computing.

For this reason, this handbook was designed to collect definitive knowledge about all major aspects of Internet computing in one place. The topics covered range from important components of current practice to key concepts to major trends. The handbook is an ideal comprehensive reference for each of the above types of reader.
- An exhaustive coverage of the key topics in Internet computing.
- Accessible, self-contained, yet definitive presentations on each topic, emphasizing the concepts behind the jargon.
- Authored by the world's leading experts.

Audience and Needs

Our intended readers are people who need to obtain in-depth, authoritative introductions to the major Internet computing topics:
- Practitioners who need to learn the key concepts involved in developing and deploying Internet computing applications and systems.
This happens often when a project calls for some unfamiliar technology.
- Technical managers who need quick, high-level, but definitive descriptions of a large number of applications and technologies, so as to be able to conceive applications and architectures for their own special business needs and to evaluate alternatives.
- Students who need accurate introductions to important topics that would otherwise fall between the cracks in their course-work, and which might be needed for projects, research, or future study.
- Researchers who need a definitive guide to an unfamiliar area, e.g., to see if the area addresses some of their problems or even to review a scientific paper or proposal that impinges on an area outside their specialty.
This week, New Yorkers learned just how vulnerable their city is to rising ocean levels. Gale force winds fed fires in Breezy Point, burning more than 80 houses to ash. Surging waters and violent winds ripped the Rockaway boardwalk from its moorings and tossed it inland. Water covered Coney Island, Alphabet City and City Island. Wind whipsawed the crane erecting the city's tallest apartment building, leaving its boom suspended 1,000 feet in the air. Salt water surged into every East River subway tunnel between Manhattan and Brooklyn, filling the South Ferry station to the ceiling, corroding equipment, and rendering the system unusable for at least the next several days. Trees blocked emergency vehicles, stormwater trapped cars, electric lines fell into puddles, and all the lights went out downtown.

"It can get a lot worse than this," said Richard Barone, the Regional Planning Association's chief transportation policy planner. "That's where the concern lies. I think that this was a significant event, but there could be worse storms. This is in no way fearmongering. I'm not even sure we should print something like that."

Yet, though the city had seen the effects of Hurricane Irene, and climate change and its effects on sea levels are well known to people who believe in science, thus far New York City has done little to prepare for storms like the one it just endured.

"Irene and now Sandy have posed some really hard questions for us," said Rob Pirani, the Regional Plan Association's vice president for environmental programs. "And in the past, we've been able to duck these questions, and now we've got to come to grips with it."

As my colleague Katharine Jose reported in February, Mayor Michael Bloomberg's administration has made strides in lessening the city's greenhouse gas emissions, but hasn't done much more than any of the administrations before it to prepare for the effects of the climate change that's already underway.
"I would say three or four years ago there was—and this is a general statement, not specific to New York—the emphasis was on mitigation; in other words, reducing our contribution to climate change," said David Bragdon, then the head of the city's Office of Long-Term Planning and Sustainability. "It's only more recently that policy makers are acknowledging what scientists have known, which is that even if we magically stop emissions tomorrow—if we were successful in all these mitigation efforts, there's still effects that are happening already."

Columbia Earth Institute professor Klaus Jacob told Jose that "I think it's not understood how serious the situation will be in coastal areas and what the costs will be to society at large."

In fact, in some ways, New York City has made the problem worse by encouraging taller and denser development in flood-prone places like the Williamsburg and Long Island City waterfronts, parts of which had to be evacuated this week. Ideas exist for how to go about protecting at least part of New York from future storm-related catastrophe, but those ideas come with hefty price tags that cause them to lose out against the city's other budget priorities.

How about, for example, finding a way to better seal the city's older subway stations, the ones that have no centralized ventilation systems and rely instead on vents that open up to city streets? Or, more ambitiously, how about creating what Barone called a "greater redundancy throughout the network," through something like his organization's proposal for a so-called X line connecting the outer boroughs, obviating the need to travel through flood-vulnerable Lower Manhattan?

Along with softer strategies like restoring wetlands, and building oyster reefs and dunes, the city might also consider building surge-mitigating storm barriers, of the sort being used or underway in Stamford, Rotterdam, London, Venice and St. Petersburg.
A SUNY Stony Brook professor has proposed building such barriers near the Verrazano, Arthur Kill and Throgs Neck, for a projected cost of $10 billion, but even that wouldn't make the city invulnerable to another Sandy.

"The idea that somehow we can protect all the shoreline in New York City or in the region is just not possible," said Pirani.

The less-dreamy alternative is even more difficult, politically: to discourage residential development in the city's low-lying coastal areas.

"The perfect use of the Rockaways is the way it used to be used, for summer and seasonal housing, what Jones Beach is used for," said Barone. "It's a barrier island. That's what it is. It's the first natural line of defense for storms."

Pirani said the city should consider "buying people out" who live in particularly flood-prone areas, as New Jersey does with its Blue Acres program.

"I think all these things need to be laid out and considered in light of the damage that we suffered over the last couple of days," said Barone.

"Storms are inevitable," said Pirani. "They were inevitable before climate change, and now it's just gonna get worse."

Bloomberg earlier today wasn't willing to attribute the storm surges to climate change, likely for fear of creating an unnecessary political issue by engaging climate-change deniers. But Governor Andrew Cuomo went right ahead and said it, just about.

"Going forward, I think we do have to anticipate these extreme types of weather patterns," said Cuomo. "And we have to start to think about how do we redesign the system so this doesn't happen again. After what happened, what has been happening in the last few years, I don't think anyone can sit back anymore and say 'Well, I'm shocked at that weather pattern.'

"There is no weather pattern that can shock me at this point. And I think that has to be our attitude. And how do we redesign our system and our infrastructure assuming that?"
Building a Classroom Structure

Building a sturdy structure, such as a house, requires careful consideration of many factors. First there is the foundation, which will support the weight of the house. The walls and bearing wall will support the roof and defend against winds, violent rains, or most things Mother Nature throws against it. Then there are the flooring, roof, pipes, wires, and aesthetic features of the house to consider.

Classroom structure is just as vital to learning as the walls of a house. The educator can have a solid philosophy of instruction and learning, and create well thought out lesson plans, but the environment and students can ruin everything. There will be features that are out of the control of the educator, but there is a myriad of elements within control that should be considered to build a better classroom.

So what aspects should the educator control, and how much control should the educator have over these aspects? This will totally depend on the educator and his preferences for the type of environment he wants to create. What works for one educator may not work for another, so Tesol Class will list areas of concern in regard to classroom structure and briefly talk about why each is important for the overall structure. There will be more detailed articles in the future to discuss the pros and cons of each aspect.

There will always be two categories in classroom structure: things the educator cannot control and aspects the educator has total control over.

Things that can't be controlled

These elements are totally out of the educator's control, so it might be beneficial to have discussions with administration to solve such issues.

Classroom Size: Classrooms are usually too small, but a classroom that is too large can be problematic also.
If the classroom is too large, there are certain strategies to account for this, but one that is too small puts students on top of one another and really makes everyone uncomfortable. Discussions will need to be had with administration to handle this problem appropriately.

Number of Students: There are two issues with the number of students. The first deals with classroom size and the number of students, and this has to be handled with administration. The second is conducting small/large classes and will be covered under the control section.

Classroom Temperature: This is normally not a problem in most classrooms, but some educators may be faced with this enormous problem. This issue can quickly destroy a class, as the students will become focused more on the discomfort than the lesson at hand.

Things that can be controlled

These are issues the educator can control and have dominion over. As always, these areas can be acknowledged or not acknowledged by the educator as being pertinent. These topics will have more detailed articles coming.

Educator Personality: What is the educator's personality in class and towards the students? It's very important to consider how the educator wants to be viewed by the students, as there are always pros and cons with each personality. An educator can be viewed as strict, lenient, outgoing, laid back, friendly, a friend, caring, cold or a mixture of many personalities. These personalities may depend on many different factors, such as which age group is being instructed and the number of students.

Strategies for Class Size: What strategies must be implemented to account for small/large classes? A class of six students can be quite different than a class of forty students. How to handle the latter can be a cause of concern for most teachers, especially if the class emphasizes communication. In addition, a lot depends on the age group being instructed.

Seating Arrangement: How are the students arranged in class?
Are the students in lines, pairs, small groups, large groups or in a U-shape? The manner in which the students are arranged can have far-reaching implications on behavior and productivity. Consideration needs to be made for objectives and the type of class being presented.

Rules: Are there rules in place for the classroom, and do the students know them? There are always rules, but have they been explicitly conveyed to the students? When the educator signs a contract, there is always a section explaining what is expected of the educator and what warrants dismissal. What standards do the students have to live up to, and what repercussions are in place if they do not abide by the rules?

Expectations/Grades: Do students know how they will be graded? Sounds simple at first, but educators place various weights on certain areas of the classroom, and grades reflect these criteria. Some educators put emphasis on tests, some on attending class, others on participation. There are numerous areas to weight classes and the grades, so do the students understand how they will be graded? Are students free to receive grades based on meeting predetermined criteria regardless of relation to other students' grades? Or are students put onto a curve? This requires the educator to create a structure in which to evaluate who deserves what, and students must be aware of expectations.

Student Management: How do you handle the different personalities in the classroom? The varying personalities students bring with them can help or destroy a classroom. Predetermining how to deal with students who are shy, disruptive, too eager, disrespectful, etc. will help educators handle situations as they arise.

Self Rules: What rules has the educator put in place to monitor himself? Many educators rarely think of creating standards for themselves, like they create for the students, but doing so allows the educator to remain consistent with what is expected of him and free from being manipulated emotionally.
Building a Structure

In the near future Tesol Class will feature articles on each of the aspects, but until that time, we encourage the educator to explore and contemplate these elements on his own to find a structure that benefits him. A solid structure will not eliminate all the problems, but will surely make the problems more manageable. Start thinking of a structure today!
<urn:uuid:262669ba-9530-4172-88db-2e5cbd4634c0>
CC-MAIN-2017-26
http://www.tesolclass.com/classroom-management/classroom-structure/
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320263.78/warc/CC-MAIN-20170624133941-20170624153941-00205.warc.gz
en
0.961667
1,209
3.53125
4
A few people must have been very busy yesterday in Indonesia. Officials say that 79 million trees were planted in a single day in an effort to replant lost forest cover and signal how seriously the government views the problem of climate change.

In this archipelago nation, the root cause of carbon emissions is widespread deforestation. Indonesia is losing its forests at a faster rate than any other nation and, a surprise to me, is the world's third largest emitter of greenhouse gases because of it.

Environmentalists applaud the mass tree planting but also warn that it is only a first step and does not address the root of the problem: government permits to clear forests combined with pervasive illegal logging. Indonesian president Susilo Bambang Yudhoyono has declared illegal logging the nation's "biggest enemy", but many criticize the government for not doing enough to stop it. National and international environmental groups are calling for a moratorium on palm oil plantations, the land for which is usually cleared forest, and on all forms of logging.

photo courtesy of Interet-General.info
Speciesism refers to the "prejudice or discrimination based on species, especially against animals". Essentially, it is the idea that humans have greater moral rights than animals, simply for being human. This was not a term I was familiar with before this week's lecture. Walking out of this week's tutorial, where we watched 2013's Blackfish, the term seemed a whole lot more relevant.

Blackfish is a documentary that focuses on SeaWorld's most famous performing orca, Tilikum, who was involved in the deaths of three people: Keltie Byrne, a 21-year-old marine biology student and competitive swimmer; 27-year-old Daniel P. Dukes, a trespasser; and 40-year-old SeaWorld trainer Dawn Brancheau. The film brought into question the moral issue of holding whales captive for our entertainment. The Guardian summed it up perfectly: "We have no business keeping such large, intelligent mammals in such crippling confinement. We too might get a little psychotic, it suggests, if we were imprisoned in a bath for 30 years."

I visited SeaWorld San Diego in 2014, after the documentary was released. At the time, I don't think I was aware of the film, but looking back, I can definitely say that it did feel like the park was on damage control. The park was pretty quiet (I thought it was because of seasonality) and every show, including the orca show, had an emphasis on 'conservation' and 'protection' and such. It almost seemed forced. I remember seeing the tanks where the whales and dolphins were kept and thinking to myself how wrong it was. Having now finally seen Blackfish, it all makes sense.

The documentary delved into a side of SeaWorld that was not meant to be seen by the public, in such detail at least. It showed a world where killer whales were plucked out of the ocean to live out their lives in small pools, forced to live in unnatural social groups and, as a result, kept in constant danger from teeth raking.
If I was ripped away from my home and family, forced to live in an environment that limited all aspects of my life, and I couldn't escape, I'd eventually reach my breaking point. What gives humans the right to interfere with creatures who otherwise can't defend themselves? Yes, we eat animals for food, but that is out of necessity. Orca shows at SeaWorld are not a necessity. Humans have captured and exploited whales for their own entertainment because they simply can.

It's almost not surprising that humanity has this prejudice towards animals. It's because we think we know what is best for them. In the media, animals are anthropomorphised – animals are given human characteristics. The 2005 documentary March of the Penguins was one of my favourites as a child. The film portrayed the life of emperor penguins in the wild. It was a story of family, love and death. The audience could relate to these animals despite being of a completely different species, with these human experiences being projected onto the penguins.

We have an idealised view of animals and this in turn has consequences. Trainers at SeaWorld underestimated the natural instincts and capabilities of Tilikum, resulting in three deaths. An animal doesn't stop being an animal once you think you can control it. Freeman et al. (2011) said humans make the assumption that animals are capable of human feelings and, thus, overestimate the potential of dangerous animal behaviours. de Waal (2001) terms this 'bambification', in which, for entertainment purposes, animal characteristics are replaced with human attributes to appeal to human audiences.

What are your thoughts on Blackfish and the sense of entitlement mankind has over animals?
- DoRazario, RC 2006, ‘The Consequences of Disney Anthropomorphism: Animated, Hyper-Environmental Stakes in Disney Entertainment’, Femspec, vol. 7, no. 1, p. 51-63 - Freeman, C, Leane, E, & Watt, Y 2011, Considering animals : contemporary studies in human-animal relations / edited by Carol Freeman, Elizabeth Leane, and Yvette Watt, Farnham, Surrey, England ; Burlington, VT : Ashgate Pub., c2011. - Kesling, J 2011, Anthropomorphism, double-edge sword, WordPress, weblog post, 21st May, viewed 24th March 2017, <https://responsibledog.net/2011/05/21/anthropomorphism-double-edged-sword/>.
Multiply Mixed Numbers (Grade 5)

Videos and lessons to help Grade 5 students learn to solve real world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem.

Common Core: 5.NF.6

Suggested Learning Targets
- I can solve problems that multiply fractions and mixed numbers.
- I can explain or illustrate my solution using fraction models or equations.

Multiply mixed numbers using pictures - 5.NF.6
In this lesson you will learn how to solve mixed number multiplication problems. You will use pictures and repeated addition to prove that you can compute with the standard algorithm.

5.NF.6 - Multiply Mixed Numbers (Area Model)
This video explains how to multiply mixed numbers using the area model: multiplying a mixed number by a mixed number using an area model.

5.NF.6 - Multiply Mixed Numbers (Distributive Property)
This video explains how to multiply mixed numbers using the distributive property and the meaning of mixed numbers. It uses equations to represent the problem, as is called for in the Common Core Math Standards of 5.NF.6.

5.NF.6 - Multiply Mixed Numbers (Writing as Fractions)
This video shows how to multiply mixed numbers by first writing them as fractions and then multiplying the fractions. This is actually an effective method, known as a "smart" cut, provided you have fluency with multiplying and dividing multi-digit numbers. In this video, you will see two strategies to multiply a fraction by a mixed number: the area model and the multiplication smartcut.

Multiplying Mixed Numbers
Example: Riley's cookie recipe calls for 3 2/5 cups of sugar. She wants to make 4 1/3 batches. How much sugar will Riley need?
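The "writing as fractions" strategy can be sketched in a few lines of Python using the standard-library fractions module. The numbers come from the Riley example above; the helper name mixed_to_fraction is ours, for illustration only:

```python
from fractions import Fraction

def mixed_to_fraction(whole, numerator, denominator):
    """Convert a mixed number such as 3 2/5 into an improper fraction."""
    return Fraction(whole * denominator + numerator, denominator)

sugar_per_batch = mixed_to_fraction(3, 2, 5)  # 3 2/5 -> 17/5
batches = mixed_to_fraction(4, 1, 3)          # 4 1/3 -> 13/3

total = sugar_per_batch * batches             # 17/5 * 13/3 = 221/15

# Convert the improper fraction back to a mixed number for the answer.
whole, remainder = divmod(total.numerator, total.denominator)
print(f"Riley needs {total} = {whole} {remainder}/{total.denominator} cups of sugar")
# -> Riley needs 221/15 = 14 11/15 cups of sugar
```

The distributive-property strategy gives the same result: (3 + 2/5) × (4 + 1/3) = 12 + 1 + 8/5 + 2/15 = 13 + 26/15 = 14 11/15.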
The M1911A1 .45 cal. pistol was the standard personal defense weapon carried by officers of all services during World War I, World War II, and Korea. It has a rich military heritage, was very reliable, and was the weapon of choice for use in close quarters. The M1911A1 pistol has been replaced by the more modern M9 9mm pistol.

The M1911A1 had been the standard handgun issued to Marines for many decades. Selected weapons were modified in the 1980s to meet the requirements of the MEU(SOC) in lieu of arming them with the M9 9mm pistol.

The .45 caliber semiautomatic pistol M1911A1 is a recoil-operated hand weapon. It is a magazine-fed semiautomatic weapon, which fires one round each time the trigger is squeezed once the hammer is cocked by prior action of the slide or thumb. This design is referred to as "single action only." The thumb safety may only be activated once the pistol is cocked. The hammer remains in the fully cocked position once the safety is activated. (Note: More modern pistol designs of the "double action" type will allow the hammer to move forward to an uncocked position when the thumb safety is activated.)

The M1911A1 was widely respected for its reliability and lethality. However, its single-action, cocked-and-locked design required the user to be very familiar and well-trained to allow carrying the pistol in the "ready-to-fire" mode. Consequently, M1911A1s were often prescribed to be carried without a round in the chamber. Even with this restriction on the user, numerous unintentional discharges were documented yearly.

Although commercial pistols were purchased and issued to General Officers, some standard Army issue pistols were specially modified for use by General Officers, including the Pistol, Cal. .45, Semi-automatic, M1911A1, General Officer's.
Primary function: Semiautomatic pistol
Length: 8.625 inches (21.91 centimeters)
Length of barrel: 5.03 inches (12.78 centimeters)
Weight, magazine empty: 2.5 pounds (1.14 kg)
Weight, magazine loaded: 3.0 pounds (1.36 kg)
Bore diameter: .45 caliber
Maximum effective range: 82.02 feet (25 meters)
Muzzle velocity: 830 feet (253 meters) per second
Magazine capacity: 7 rounds
Unit Replacement Cost: $242
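The paired US/metric figures in the fact sheet can be cross-checked with a short script. This is just a sanity-check sketch: the conversion factors are the standard exact definitions, and a ~1% tolerance is used because the sheet rounds its metric values (e.g. 2.5 lb is 1.134 kg, quoted as 1.14 kg):

```python
# Cross-check the metric equivalents quoted in the M1911A1 fact sheet.
IN_TO_CM = 2.54        # exact by definition
FT_TO_M = 0.3048       # exact by definition
LB_TO_KG = 0.45359237  # exact by definition

checks = [
    # (label, US value, conversion factor, quoted metric value, metric unit)
    ("Length",                 8.625, IN_TO_CM, 21.91, "cm"),
    ("Length of barrel",       5.03,  IN_TO_CM, 12.78, "cm"),
    ("Weight, magazine empty", 2.5,   LB_TO_KG, 1.14,  "kg"),
    ("Weight, magazine loaded",3.0,   LB_TO_KG, 1.36,  "kg"),
    ("Max effective range",    82.02, FT_TO_M,  25.0,  "m"),
    ("Muzzle velocity",        830.0, FT_TO_M,  253.0, "m/s"),
]

for label, us_value, factor, quoted_metric, unit in checks:
    computed = us_value * factor
    # Each quoted figure should agree with the computed one to within ~1%.
    assert abs(computed - quoted_metric) <= 0.01 * quoted_metric, label
    print(f"{label}: {computed:.3f} {unit} (sheet says {quoted_metric} {unit})")
```

All six paired figures agree with the standard conversions to within rounding.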
In this series of articles, we will study the stories of heretics, rebels, reformers and revolutionaries who attempted to overthrow organized priestly religions all over the world. In this first article of the series, we will study how the first great revolution against ‘Brahmanic’ religion was launched in Egypt in the 14th century B.C.

Priest-Kings And Temples

In the dawn of ancient civilizations such as Mesopotamia and Egypt, priests were practically the rulers of the land by virtue of their skills in magic, medicine, astronomy, temple architecture and literature, and their knowledge of the various gods they created to represent aspects of nature. Temples were their power bases. They held sway over the people of a given community and established rules of social conduct within that community. They deluded common people into believing that their gods would fulfill their desires and protect them from evil forces. They gave society internal stability and, by means of great personal sacrifices, safely conveyed civilization from one generation to another. In the process they earned the gratitude of the common people, and became wealthy and powerful. Before secular kings came onto the stage of history, there were the priest-kings.

Kings And Palaces

The intellectual priestly class in ancient civilizations suffered from two great weaknesses: 1. They could not subdue their jealousy of other priests in charge of temples dedicated to other gods, which led to chronic conflicts among them. 2. They were not able to effectively fight off barbaric tribes marauding their lands. These two problems required the creation of secular kingships. The fighting men chosen to be kings built armies of able-bodied men who could protect the society from external aggression, and wage war on other lands to increase their wealth, power, and territory. In the language of Brahmanism, they were both Dhananjaya (Conquerors of Wealth) and Paranthapa (Enemy Burner).
However, with the kings came their palaces. Palaces now became the second center of power in ancient societies. As secular kings became more powerful, they began to hold sway over people’s lives by virtue of their muscle power and wealth. However, the priestly class did not totally surrender to the kings. Instead they manipulated kings into believing that their rule must be presented to the public as granted by the grace and will of the gods, who just happened to reside in the priests’ pockets. Hammurabi (1792-1750 B.C.), the founder of the first Babylonian empire, acknowledges the supremacy of Sumerian gods by beginning one of his inscriptions, “When Anu and Bel entrusted me with the rule of Sumer and Akkad…” (H. G. Wells). Conflict between the priestly class and kings is a universal theme in the history of all civilizations. As we read in my earlier articles, the Upanishadic revolution to overthrow Brahmanism was led by Kshatriya sages.

God-Kings Of Egypt

In Egypt, however, the Pharaohs, as the kings were known, with the connivance of the priests declared themselves the earthly manifestations of various Egyptian gods such as Osiris, Hathor and Amun Ra. Due to the enormous power derived from their purported divinity, and the authority over people derived from their prowess in war and public service, they were able to muster enough manpower to build colossal monuments to their glory such as the pyramids, temples and the Sphinx. To avoid diluting their divine blood, they married only immediate relatives of the opposite sex, such as sisters or cousins. Anyone marrying outside the ‘divine clan’ was subject to social ostracism, which, of course, was in the domain of the priestly class. Even though the power and authority of the Pharaohs, living in the relative isolation of their palaces, was seemingly absolute, the priests of great temple-casino complexes such as the ones at Karnak and Luxor held a considerable stranglehold over the Pharaohs as well as the populace.
As long as the Pharaohs toed the line drawn by the priests, their power base was secure. If they crossed that line, they did so at their own peril. Inevitably, such a delicate balance of power was bound to receive a jolt sooner or later.

Seeds Of Revolution

An incident happened during the rule of Amenhotep III, a Pharaoh of the 18th Dynasty who ruled from 1386/88 to 1349/50 B.C. He fell prey to his lust for a beautiful damsel of Syrian/Semitic extraction by the name of Tii (“Tee”), and made her his principal wife. This did not sit well with the priests of the chief god Amun Ra, who did not hide their dislike for Tii or her offspring. Amenhotep III did another thing to offend the priests: he took an obscure sun god known as Aten and elevated it to the position of chief god, while tolerating other gods side by side. (We read in my articles on the Bhagavad Gita how Upanishadic sages took the mysterious spirit Brahman invoked by Brahmins at Yajnas and elevated it into the ‘all-pervading Universal Soul Brahman’; and how Bhagavatas promoted prince Krishna of the Mahabharata epic to the status of Parameshwara.) Thus provoked by Amenhotep III, the priests of Amun Ra became angry and vengeful, and turned on his entire family. They did not treat the offspring of Tii well, particularly her second son, who later came to power as Amenhotep IV. Hate for the priests of Amun Ra grew in the heart of Amenhotep IV, and it is said that Tii further fuelled this fire of hate in her son’s heart. Thus a struggle began between the priests of Amun Ra and the family of Amenhotep III.

Pharaoh Amenhotep IV Launches Monotheism

Amenhotep IV succeeded his father upon his death, following two years of co-regency. He set out to destroy the entire priest-dominated Egyptian polytheistic religion, which had evolved over at least two thousand years.
Realizing that the only way to undermine the power of the priests was to take their gods away from them, he rejected Amun Ra as the supreme god and elevated Aten, the Solar Disc, to the position of the Only Supreme God, as declared in the hymn, “O Sole God beside whom there is none!” This declaration of One Supreme God, monotheism, has echoed through the centuries in the Jewish, Christian and Islamic religions. It has been speculated that Moses got the idea of monotheism from Akhenaten, as evidenced by the First of his Ten Commandments, “You shall have no other gods before Me,” and in Islam’s oft-repeated utterance, “There is no god but God.” In fact, its echo can be heard even in the monotheistic Bhagavata creed as uttered by Krishna in the Bhagavad Gita: “Surrender unto Me alone” (18:66) and “Worship Me alone” (9:22).

Amenhotep IV Becomes Akhenaten And Attacks Old Religion

Amenhotep IV changed the Amun in his name to Aten, his Supreme God, and called himself Akhenaten. He named his son Tutankhaten, whom we now know as Tutankhamen (King Tut). Akhenaten built many huge temples for Aten in Thebes and systematically knocked down old temples dedicated to Amun Ra and other gods. He abolished all the quarrelling sects. Disgusted by the narrow-mindedness and oppressive atmosphere created by the priestly class, which completely dominated his capital city of Thebes, he built a new capital at Amarna, 180 miles north of Thebes, and named it Akhetaten. He banished priests from his capital and banned their ancient religious ceremonies. In his religion, one could relate to Aten directly, without brokers. He dictated that his statues should be as realistic as possible, so that his subjects would see him as he was rather than as the awe-inspiring phony figure dictated by the priestly rules of sculpture.
Defying priestly tradition, he portrayed his wives and children with him in carvings, so that his subjects would see him as having a family life just like theirs. He made sure that the Sun Disc with radiating rays was depicted in all his portraits. Thus he became the first king in history to initiate a revolution to overthrow an ancient polytheistic religion mediated by hordes of corrupt and powerful priests, and to establish a monotheistic religion without priests.

The Priestly Backlash

Akhenaten did not live long. He died around 1334 B.C., after ruling Egypt for seventeen years, and his revolution died immediately thereafter. Like Ashoka the Great, he underestimated the weed-like power of priests rooted in two thousand years of Egyptian history. The priests had merely bent with the wind. As soon as the winds blew away, they came back to power and immediately began to destroy every temple and palace Akhenaten had built so lavishly, using the debris of the demolished buildings as filler material for the foundations of new temples built for Amun Ra. Akhenaten’s successor Tutankhaten was about eight years old when he was put on the throne, and he could not rule the country without the guidance of experienced priests. The priests renamed him Tutankhamen to reflect his renewed allegiance to Amun Ra, and made him a puppet in their hands. As Brahmins did to Ashoka the Great after his death, they wiped out the names of Akhenaten and his family from the history of Egypt. Thanks to their thoroughness, Tutankhamen’s tomb remained intact until Howard Carter discovered it in the early part of the twentieth century. Grave robbers did not know such a king had existed, and so they did not look for his tomb!
Why Akhenaten’s Revolution Failed

Ordinary people, who had been bewildered by the new religion of Atenism, reverted to the comfort of worshiping their old animal-headed gods by means of traditional rituals and festivals conducted by their trusted priests, no different from 21st century Hindus finding solace in worshiping the elephant-headed god Ganesha or the monkey god Hanuman. They could better relate to these gods in their cool stone temples than to the Sun Disc in the burning desert. The concept of a Sun Disc as the Supreme God was too abstract for their simple minds, just as people of the post-Vedic period found it difficult to relate to the concept of an all-pervading, invisible Brahman as a replacement for the various anthropomorphic Vedic gods. Besides, unlike Ashoka the Great, Akhenaten did not appoint a huge cadre of emissaries to spread the message of his new religion far and wide. Ashoka’s incessant effort resulted in Buddhism becoming the dominant religion of India for a thousand years, and one of the great religions of the world to this day. Moreover, unlike Ashoka, Akhenaten did not undertake great community projects such as building wells, tree-lined roads and hospitals to serve the public and enhance his own stature. Some historians say that because of Akhenaten’s preoccupation with his religious revolution, he neglected his kingly duties; he did not wage war against potential enemies as expected of Pharaohs, nor maintain proper diplomatic relationships with his neighbors. Others have provided evidence to contradict these claims by quoting correspondence in the clay tablets unearthed at archeological sites in Amarna. In any case, the truth is that whereas Egypt was very prosperous at the beginning of his rule, by the time he died decline had already set in. Thus ended the first great revolution against the ‘Brahmanism’ of Egypt.

Lessons From Akhenaten’s Failed Revolution

Akhenaten was a revolutionary, but unlike Ashoka the Great, he was not a visionary.
It is clear from all the available evidence that Akhenaten attempted to overthrow the old religion of the Egyptians by brute force rather than by means of a clever set of strategies and tactics. He did not understand the limitations of the power of even God-Kings, or the extent of the power of priests over the minds of common people. He did not understand that to reform or overthrow a well-established priestly religion, he needed to take small steps and carry people with him by means of reasoning, education, sympathy and support. He did not realize that for a new ideology to take root and spread, he would need the services of thousands of dedicated emissaries and selfless volunteers. He underestimated the power over the minds of simple folk that the priests had gained over two thousand years by means of great personal sacrifices. He seemed driven more by hatred for the priests than by a genuine desire to reform Egyptian religion. Besides all this, he failed to understand that a new ideology or religion must be so down-to-earth that even common people can relate to it. Atheists should note that Buddhism, Hinduism, Christianity, and Islam took deep root only because thousands of dedicated missionaries sacrificed their lives to promote them. No one can convert another person to his way of thinking without making great personal sacrifices. Modern-day Atheists dedicated to enlightening common people about the stupidity of religion will do well to take note of the lessons from the story of Akhenaten’s revolution. (To be continued)

Read Dr. Kamath’s complete series on Heretics, Rebels, Reformers and Revolutionaries here. Read Dr. Kamath’s series on The Truth About The Bhagavad Gita here.

Dr. Prabhakar Kamath is a psychiatrist currently practicing in the U.S. He is the author of Servants, Not Masters: A Guide for Consumer Activists in India (1987) and Is Your Balloon About To Pop?: Owner’s Manual for the Stressed Mind.
However, the Romans gave way before the good fortune of the man and accepted the bit, and regarding the monarchy as a respite from the evils of the civil wars, they appointed him dictator for life. This was confessedly a tyranny, since the monarchy, besides the element of irresponsibility, now took on that of permanence. It was Cicero who proposed the first honours for him in the senate, and their magnitude was, after all, not too great for a man; but others added excessive honours and vied with one another in proposing them, thus rendering Caesar odious and obnoxious even to the mildest citizens because of the pretension and extravagance of what was decreed for him. It is thought, too, that the enemies of Caesar no less than his flatterers helped to force these measures through, in order that they might have as many pretexts as possible against him and might be thought to have the best reasons for attempting his life. For in all other ways, at least, after the civil wars were over, he showed himself blameless; and certainly it is thought not inappropriate that the temple of Clemency was decreed as a thank-offering in view of his mildness. For he pardoned many of those who had fought against him, and to some he even gave honours and offices besides, as to Brutus and Cassius, both of whom were now praetors. The statues of Pompey, too, which had been thrown down, he would not suffer to remain so, but set them up again, at which Cicero said that in setting up Pompey's statues Caesar firmly fixed his own.1 When his friends thought it best that he should have a body-guard, and many of them volunteered for this service, he would not consent, saying that it was better to die once for all than to be always expecting death.
And in the effort to surround himself with men's good will as the fairest and at the same time the securest protection, he again courted the people with banquets and distributions of grain, and his soldiers with newly planted colonies, the most conspicuous of which were Carthage and Corinth. The earlier capture of both these cities, as well as their present restoration, chanced to fall at one and the same time.2
thyroxine

The main hormone produced by the thyroid gland, acting to increase metabolic rate and so regulating growth and development.
- ‘The thyroid releases too much of the hormone thyroxine, which increases the person's basal metabolic rate.’
- ‘Human beings require iodine for the production of the thyroid hormones, thyroxine and triiodothyronine.’
- ‘He tested for thyrotrophin releasing hormone in 67 women with menorrhagia who had normal concentrations of thyroxine and thyroid stimulating hormone.’
- ‘When your thyroid gland produces too much of the hormone thyroxine, you develop hyperthyroidism.’
- ‘Hormones that require amino acids for starting materials include thyroxine (the hormone produced by the thyroid gland), and auxin (a hormone produced by plants).’

Origin: Early 20th century: from thyroid + ox- ‘oxygen’ + -in from indole (because of an early misunderstanding of its chemical structure), altered by substitution of -ine.
Children's book author Medearis has bitten off more than she can chew in trying to cover Africa and the Caribbean as well as early and modern African-American cooking. Simple recipes are nothing special: almond-infused warm milk from Morocco is soothing, but hardly worth the hour necessary to prepare it, and an eggplant dip from Nigeria is piquant, although attempts to grind, as instructed, a teaspoon of sesame seeds and a single clove of garlic in a standard blender are bound to fail. The chapter on “Slave Kitchens” provides some of the most interesting fodder for thought with a recipe for fried squirrel. Modern African-American dishes are somewhat characterless in comparison. It is hard to discern any appropriate cultural roots in crab salad with feta dressing and fajitas filled with shellfish. A brief, tacked-on chapter supplies menus and a few dishes for holidays like Juneteenth (June 19, emancipation day in Texas) and Kwanzaa. There are a few cooking faux pas here that simply cannot be ignored: A recipe for black beans and rice calls for undrained canned beans, adding a hefty dose of sodium, and a recipe for Ethiopia's flat injera bread calls for Aunt Jemima's Deluxe Easy Pour Pancake Mix in place of the traditional grain teff; while this may be the way injera is commonly made today, it will strike some readers as a bad ethnic joke. Medearis dots these pages with mostly banal quotes from well-known African-Americans like Booker T. Washington, Oprah Winfrey, and...herself. A multicultural mess.
Are you a transparent leader? Soon after taking office, President Obama issued an executive order calling for agencies to be transparent, participatory and collaborative as a means to strengthen democracy and to make government more efficient and effective. The directive focused on transparency in dealing with the public, but this is neither achievable nor sustainable unless leaders can create it within their organizations. The definition of "transparency" is to share all relevant information in a way that is timely and valid. Being transparent means sharing the reasoning and intent underlying your statements, questions and actions. For example, when you make a decision, you explain your reasoning by saying something like, "Here's what led me to make the decision this way." When you ask someone a question, you follow it by saying something like, "The reason I am asking is because . . ." When you are transparent, you create better results and relationships because others understand your thinking. People are always trying to find the meaning of actions, especially leaders' behaviors. When you fail to be transparent, you increase the chance that others will come up with their own theories about your intentions and motives, theories that often will differ from yours. Share your thinking and you influence others to see things from your perspective while reducing people's need to invent stories about your actions. Transparency includes sharing your strategy for conversations. When preparing for a conversation or meeting, especially a challenging one, people often develop a strategy for that conversation. For example, when you have to give negative feedback to an employee, you might decide to use the sandwich approach. You begin by offering some positive feedback to put the employee at ease, then share the negative feedback, and end on a positive note, so he will feel better about you and himself.
Here is a simple three-step test to determine whether you are being fully transparent. First, identify your strategy. Second, imagine telling the other person your strategy. It might sound like this: "Lee, I want to talk with you because I have some feedback for you. I want to be transparent with you about my strategy for our conversation. I'm going to start by giving you some positive feedback because I think it will put you at ease. Then I'll give you the negative feedback, which is why I really called you in today. I'll end on a positive note, so that you'll feel better about yourself and won't be as angry with me. How will that work for you?" Third, notice your reaction. If you think it would sound absurd to share this strategy, or that sharing it would not work, then you get the point of the test. If you cannot share your strategy without reducing its effectiveness, then you are using a unilaterally controlling strategy, one that must be kept secret to work. The biggest challenge with transparency isn't learning to share what you are thinking; it's learning to productively share what you are thinking. Creating transparent leadership requires changing your mind-set as well as your behavior. It's easy to be transparent about your strategy when the stakes are low. But how transparent are you when the stakes are high, views differ greatly, or you are heavily invested in your solution? The key is whether you are willing to work on changing your thinking, so you can lead your organization to better results and relationships.

Roger Schwarz, an organizational psychologist, is president of the leadership and organization development consulting firm Roger Schwarz & Associates and author of The Skilled Facilitator: A Comprehensive Resource for Consultants, Facilitators, Managers, Trainers and Coaches (Jossey-Bass, 2002).
Gastroenterologists are medical doctors who specialize in the diagnosis and treatment of diseases of the digestive system, such as hepatitis, ulcerative colitis, Crohn's disease, and colon or rectal cancer. Gastroenterologists may perform many specialized tests, such as endoscopy, to diagnose or treat diseases. When necessary, they may consult with surgeons. Gastroenterologists may further specialize in treating people in certain age groups, such as pediatric gastroenterologists, who only treat children. Gastroenterologists can be board-certified by the Board of Internal Medicine, which is recognized by the American Board of Medical Specialties.

Primary Medical Reviewer: Anne C. Poinier, MD - Internal Medicine
Specialist Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine
Current as of: November 20, 2015
WebMD Medical Reference from Healthwise
The 7 December 1941 Japanese raid on Pearl Harbor was one of the great defining moments in history. A single carefully-planned and well-executed stroke removed the United States Navy's battleship force as a possible threat to the Japanese Empire's southward expansion. America, unprepared and now considerably weakened, was abruptly brought into the Second World War as a full combatant. Eighteen months earlier, President Franklin D. Roosevelt had transferred the United States Fleet to Pearl Harbor as a presumed deterrent to Japanese aggression. The Japanese military, deeply engaged in the seemingly endless war it had started against China in mid-1937, badly needed oil and other raw materials. Commercial access to these was gradually curtailed as the conquests continued. In July 1941 the Western powers effectively halted trade with Japan. From then on, as the desperate Japanese schemed to seize the oil and mineral-rich East Indies and Southeast Asia, a Pacific war was virtually inevitable. By late November 1941, with peace negotiations clearly approaching an end, informed U.S. officials (and they were well-informed, they believed, through an ability to read Japan's diplomatic codes) fully expected a Japanese attack into the Indies, Malaya and probably the Philippines. Completely unanticipated was the prospect that Japan would attack east, as well. The Pearl Harbor naval base was recognized by both the Japanese and the United States Navies as a potential target for hostile carrier air power. The U.S. Navy had even explored the issue during some of its interwar "Fleet Problems". However, its distance from Japan and shallow harbor, the certainty that Japan's navy would have many other pressing needs for its aircraft carriers in the event of war, and a belief that intelligence would provide warning persuaded senior U.S. officers that the prospect of an attack on Pearl Harbor could be safely discounted. 
During the interwar period, the Japanese had reached similar conclusions. However, their pressing need for secure flanks during the planned offensive into Southeast Asia and the East Indies spurred the dynamic commander of the Japanese Combined Fleet, Admiral Isoroku Yamamoto, to revisit the issue. His staff found that the assault was feasible, given the greater capabilities of newer aircraft types, modifications to aerial torpedoes, a high level of communications security and a reasonable level of good luck. Japan's feelings of desperation helped Yamamoto persuade the Naval high command and Government to undertake the venture should war become inevitable, as appeared increasingly likely during October and November 1941. All six of Japan's first-line aircraft carriers, Akagi, Kaga, Soryu, Hiryu, Shokaku and Zuikaku, were assigned to the mission. With over 420 embarked planes, these ships constituted by far the most powerful carrier task force ever assembled. Vice Admiral Chuichi Nagumo, an experienced, cautious officer, would command the operation. His Pearl Harbor Striking Force also included fast battleships, cruisers and destroyers, with tankers to fuel the ships during their passage across the Pacific. An Advance Expeditionary Force of large submarines, five of them carrying midget submarines, was sent to scout around Hawaii, dispatch the midgets into Pearl Harbor to attack ships there, and torpedo American warships that might escape to sea. Under the greatest secrecy, Nagumo took his ships to sea on 26 November 1941, with orders to abort the mission if he was discovered, or should diplomacy work an unanticipated miracle. Before dawn on the 7th of December, undiscovered and with diplomatic prospects firmly at an end, the Pearl Harbor Striking Force was less than three hundred miles north of Pearl Harbor.
A first attack wave of over 180 aircraft, including torpedo planes, high-level bombers, dive bombers and fighters, was launched in the darkness and flew off to the south. When the first group had taken off, a second attack wave of similar size, but with more dive bombers and no torpedo planes, was brought up from the carriers' hangar decks and sent off into the emerging morning light. Near Oahu's southern shore, the five midget submarines had already cast loose from their "mother" subs and were trying to make their way into Pearl Harbor's narrow entrance channel. Japanese planes hit just before 8 AM on 7 December. Within a short time five of the eight battleships at Pearl Harbor were sunk or sinking, with the rest damaged. Several other ships and most Hawaii-based combat planes were also knocked out, and over 2,400 Americans were dead. Soon after, Japanese planes eliminated much of the American air force in the Philippines, and a Japanese Army was ashore in Malaya. These great Japanese successes, achieved without prior diplomatic formalities, shocked and enraged the previously divided American people into a level of purposeful unity hardly seen before or since. For the next five months, until the Battle of the Coral Sea in early May, Japan's far-reaching offensives proceeded untroubled by fruitful opposition. American and Allied morale suffered accordingly. Under normal political circumstances, an accommodation might have been considered. However, the memory of the "sneak attack" on Pearl Harbor fueled a determination to fight on. Once the Battle of Midway in early June 1942 had eliminated much of Japan's striking power, that same memory stoked a relentless war to reverse her conquests and remove her, and her German and Italian allies, as future threats to world peace.

Jhesu + Marie,

*A nearly vertical view of Ford Island and the East Loch.
This view shows eight battleships and an aircraft carrier, possibly USS Saratoga or Lexington (judging by the size of the carrier's superstructure). The battleships are in the positions they would occupy during the raid 19 months later. This view also shows the airfield, which maintained seaplanes and the carrier air groups (CAGs) that were landed when the carriers were in port, as well as other ships, including the battle groups' escorts: cruisers, destroyers and refuelers. Naval Historical Command
Aztec Death Whistles Sound like Human Screams and May Have Been Used as Psychological Warfare

When odd, skull-shaped grave items were found by archaeologists decades ago at an Aztec temple in Mexico, they were assumed to be mere toys or ornaments, and were catalogued and stored in warehouses. However, years later, experts discovered they were creepy ‘death whistles’ that made piercing noises resembling a human scream, which the ancient Aztecs may have used during ceremonies, sacrifices, or battles to strike fear into their enemies. Quijas Yxayotl, a musician who plays an array of instruments from traditional Mexican Indian civilizations, demonstrates an Aztec death whistle. Two skull-shaped, hollow whistles were found 20 years ago at the temple of the wind god Ehecatl, in the hands of a sacrificed male skeleton. When the whistles were finally blown, the sounds created were described as terrifying. The whistles make the sounds of “humans howling in pain, spooky gusts of whistling wind or the ‘scream of a thousand corpses’,” writes MailOnline. Quetzalcoatl, the feathered serpent god, combined with the attributes of Ehecatl, deity of the wind. The wind instruments may have been linked with this god. Gwendal Uguen/Flickr. Roberto Velázquez Cabrera, a mechanical engineer and founder of the Mexico-based Instituto Virtual de Investigación Tlapitzcalzin, has spent years recreating the instruments of the pre-Columbians to examine the sounds they make. He writes in MexicoLore that the death whistle in particular was not a common instrument, and was possibly reserved for sacrifices (blown just before a victim was killed in order to guide souls to the afterlife) or for use in battle. “Some historians believe that the Aztecs used to sound the death whistle in order to help the deceased journey into the underworld. Tribes are said to have used the terrifying sounds as psychological warfare, to frighten enemies at the start of battle,” explains Oddity Central.
If the whistle was used during battles, the psychological effect on an enemy of a hundred death whistles screaming in unison might have been great, unhinging them and undermining their resolve.

Illustration of Aztec warriors as found in the Codex Mendoza. Public Domain

Other types of ancient noisemakers have been found made from different materials, such as feathers, sugar cane, clay, and frog skin.

A zoomorphic whistle from Mexico, circa 200 B.C. – A.D. 500. Public Domain

The Los Angeles Times reports that some experts think the ancients used the different tones to send the brain into certain states of consciousness, or even to manage or treat illnesses. Some of the replica whistles created by Cabrera make sounds and tones reaching the top range of human hearing, almost inaudible to us. Arnd Adje Both, an expert in pre-Hispanic music archaeology, told the Los Angeles Times: “My experience is that at least some pre-Hispanic sounds are more destructive than positive, others are highly trance-evocative. Surely, sounds were used in all kind of cults, such as sacrificial ones, but also in healing ceremonies.”

Roberto Velázquez Cabrera notes that although pre-Columbian music has been lost to us in modern times, the sounds of recreated whistles can be used to give us a better understanding of the ancients. He said, “We've been looking at our ancient culture as if they were deaf and mute. But I think all of this is tied closely to what they did, how they thought.”

Featured Image: Aztec ritual human sacrifice portrayed in the Codex Magliabechiano. Public Domain

By Liz Leafloor
Source: http://www.ancient-origins.net/news-mysterious-phenomena/aztec-death-whistles-sound-human-screams-020129
It's getting colder and although there are a few things to celebrate (hurrah! it's opaque tights and cosy pyjamas time!), the prospect of a cough, cold or sore throat during the winter months is enough to make anyone want to stay tucked up under the duvet for the whole season. However, when it comes to fighting winter bugs, it's all about eating the right foods to ensure our immune systems stay robust. Yep, vitamin C is essential, but where can we find it? Founder of Wild Nutrition, Henrietta Norton, shares her winter foods advice below:

Blackberries - High in vitamin C, which contributes to the normal function of the immune system, protection of cells from damage and cellular repair. In addition they also provide bioflavonoids, which support the absorption of vitamin C as well as circulation. Stew with orange zest and cinnamon for an immune-supporting addition to your porridge.

Pumpkin seeds - These seasonal seeds provide magnesium. Inadequate magnesium appears to reduce serotonin levels, and recent research has highlighted the mood-supporting benefits of this fabulous mineral, even in treatment-resistant mild depression. Use as a snack or sprinkle on top of winter soups.

Butternut squash - Particularly dense in key nutrients such as selenium and vitamin C, which contribute to the normal function of the immune system. Also a rich source of fibre for healthy digestion, these colourful wonders make great winter-warming soups.

Live yoghurt - Live plain yoghurt provides bacteria which could contribute to the beneficial flora of the gut. As 70% of the immune system resides in the gut, this natural flora can support your first line of attack against the winter bugs.

Beetroot - Love it or hate it, this powerhouse of a root vegetable is high in beta-cyanin and vulgaxanthin, which promote circulation and offer the antioxidant strength needed for a healthy immune system. Cook with ginger for a warming, antioxidant-rich soup.
Source: http://www.huffingtonpost.co.uk/2014/08/14/five-foods-to-keep-you-healthy-this-winter_n_7356396.html
Barrett, Evans and Campione (2015) “find no compelling evidence for the appearance of protofeathers in the dinosaur common ancestor and scales are usually recovered as the plesiomorphic state, but results are sensitive to the outgroup condition in pterosaurs. Rare occurrences of ornithischian filamentous integument might represent independent acquisitions of novel epidermal structures that are not homologous with theropod feathers.”

However, the Barrett team followed two false traditions with regard to pterosaurs, which gained their epidermal structures independently of dinosaurs. The two clades are not related according to the large reptile tree, which nests pterosaurs in a new clade of lepidosaurs. Based on their false assumption of scaly pterosaurs as an outgroup, their analysis recovered a primitively scaled Dinosauria and Ornithischia. So we're off to a bad start, based on taxon exclusion and false inclusion.

Scales have never been found on pterosaurs. Why didn't they assume filamented pterosaurs? We have evidence for that. So there is a lack of logic here that would have changed their conclusion.

The actual outgroup for dinosaurs is the Crocodylomorpha, in which tiny back scales first appear on the lower back of tiny Scleromochlus and ultimately cover the entire dermal surface in large extinct and extant taxa. Tiny scales may have been present on basal dinosaurs, but more likely they had naked skin, like birds without their feathers. Scales on bird feet are transformed feathers.

The Barrett team database included 24 ornithischians, 6 sauropods and 40 theropods (including Mesozoic birds). All taxa were scored for the presence/absence of epidermal scales, unbranched filaments (protofeathers)/quills, and more complex branched filaments (including feathers). The Barrett team report, “Additional examples of protofeathers would be required from early dinosaur lineages or non-dinosaurian dinosauromorphs to optimize this feature to the base of Dinosauria.
In particular, the ancestral condition in pterosaurs is pivotal in this regard, but currently unknown.” Longtime readers know this is false, based on a cladogram (the large reptile tree) that includes several hundred more taxa. As noted above, scales are unknown in pterosaurs. However, their known outgroup taxa – Longisquama, Sharovipteryx, Cosesaurus and Macrocnemus – all have scales. The former three also have ptero-hairs (pycnofibers) and are the only Triassic fenestrasaurs (including pterosaurs) known to have these epidermal structures.

Based on their appearance and location, dinosaurian ‘quills’ appear to be hyper-elongated primordia without branching.

The Barrett team concluded, “It seems most likely that scaly skin, unadorned by feathers or their precursors, was primitive for Dinosauria and retained in the majority of ornithischians, all sauropodomorphs and some early-diverging theropods (filaments are thus far unknown in ceratosaurians, abelisaurids and allosauroids).” In science, “it seems most likely” is a very weak argument, further weakened by the fact that birds don't have scales, except on their legs, and those are transformed feathers.

The Barrett team provided a cladogram that depicted the extent to which scales, filaments and feathers were present. Notably, they did not also include the extent of naked skin, a fourth possibility not covered by the text or graphic. The possibility exists that all dinosaur scales are transformed primordia (filaments) or transformed feathers. Dinosaur scales could also be novel epidermal structures that appear only on large dinosaurs, just as croc scales are novel epidermal structures.

Birds first develop primordial feathers in the middle of their backs, replaying phylogeny during ontogeny. With current data, that trait may go all the way back to basal archosaurs, like Scleromochlus.
When you play with phylogenetic bracketing, you have to have a valid cladogram.

Reference: Barrett PM, Evans DC, Campione NE 2015. Evolution of dinosaur epidermal structures. Biol. Lett. 11: 20150229. Online.
Source: https://pterosaurheresies.wordpress.com/2015/06/05/evolution-of-dinosaur-epidermal-structures/
sciencenews writes to tell us that a physicist at Stanford has recently published a peer-reviewed website of several physics lectures focusing on a single underlying idea: that "time is not a single dimension of spacetime but rather a local geometric distinction in spacetime." The science is presented quite clearly and originally, using GPS systems as a point of focus. From the article: "Not too long ago, people thought the Earth was flat, which meant they thought that gravity pointed in the same direction everywhere. Today, we think of that as a silly idea, but at the same time, most people today (including most scientists) still think of spacetime as if it were a big box with 3 space dimensions and 1 time dimension. So, like gravity for a flat Earth, the single time dimension for the 'big box universe' points in one direction, from the Big Bang into the future. A lot of lip service is given to the idea of 'curved spacetime', but the simplistic 3+1 'box' remains the dominant concept of what cosmic spacetime is like."
Source: https://science.slashdot.org/story/06/02/05/006254/physicist-claims-time-has-a-geometry
Here’s a photo that I found particularly interesting. I do think it possesses an essence of magic. That might simply be the colours, and that it contains a blurry image of a magical creature – a char.

But what does ”char” really mean? Why weren’t they simply named ”red trout”, just like the ”rainbow trout” or the ”brown trout”? This is a typical thing I like to think about while resting with the dog in the woods.

I noticed recently that I’ve been referring to Pokémon quite a lot in my blog entries. It does make sense, since I wanna catch all fish, without killing them. But it’s not as simple as that this time.

Note the fiery Pokémon ”Charmander”, which evolves into ”Charmeleon”, which in turn evolves into the Pokémon’s final stage – ”Charizard”. This is a flamingly orange little lizard-like creature with a little flame on its tail. Needless to say, this creature breathes fire. This kind of tells me that the word char has something to do with fire, since all three of the Pokémon’s stages combine ”Char” with Salamander, Chameleon and Lizard.

Since I’ve never seen or heard the word ”char” used to describe anything other than the fish before, I found it particularly interesting that this flamingly orange/red Pokémon’s name has got the name of a fish in it – a fish that also happens to be orange/red. Did the creators of Pokémon refer to the fish ”char” while creating this Pokémon? I wouldn’t think so. Char simply refers to the act of turning any substance into coal. It could’ve been named coalify, but it’s not. This procedure usually occurs through fire, and fire is orange/red. Once the *wood* is burned to coal, it’s (char)red. Hence the names Charmander, Charmeleon and Charizard: they spit flames, flames that turn their foes into coal. Not that our precious fish would turn anything into coal, but their name still makes a hell of a lot of sense – and that due to their fiery colouration.

Arctic Char – the submerged embers of the north.
My conclusion: The name ”Char” is related to the fish’s fiery colour.
Source: https://skitenlevererar.wordpress.com/2015/01/01/just-another-char-picture/
We measured for Noah's Ark and learned that it would encompass our entire street and both cul-de-sacs. It was one BIG boat! We learned about floating and tested flat-bottom boats and round-bottom boats. We also floated a boat without walls and added weight to the boats. The children were fascinated! (Notice the bored hands on the cheeks?) We learned about rainbows. We looked through rainbow-colored glasses, made a rainbow appear in milk, refracted rays of light, and observed the mixing of colors. Then we finally collapsed from all that learning.
Source: http://letsfillthevan.blogspot.com/2011/01/
There are many different types of pipes for sewer lines and home drainage systems. Each type of piping system can have its own unique drainage problems, requiring different methods to maintain the lines and clear stoppages. Listed below are different types of piping materials and descriptions of what they are used for.

Clay is one of the most ancient piping materials, with the earliest known example coming from Babylonia. Clay pipe was laid in 2-, 3- and 4-foot lengths for most residential applications. There is an expanded "bell" hub at one end. The regular end of a pipe fits snugly into the bell end of the next pipe, making a joint. These joints were typically packed with a mortar-type material, creating a seal. Clay piping is very strong, but like glass it will crack or break under pressure. The most common issues we find in clay are tree root intrusion and cracked or broken sections of pipe.

Cast Iron is a metal pipe that has been manufactured and used in the United States since the early 1800s. A good-quality Cast Iron pipe, installed under ideal conditions, has a life expectancy of about 50-100 years. As Cast Iron ages it begins to corrode and deteriorate. This deterioration is very slow but exponential, and it eventually affects the structural integrity of the pipe, requiring repair or replacement. In some cases, the beginning signs of deterioration will be evident through small cracks or breaks in the pipe. Tree roots growing into the Cast Iron are also a sign that the pipe has deteriorated to the point that a repair will likely be needed. In more severe cases, entire sections of the pipe may be missing or the pipe may have completely collapsed. Cast Iron was used extensively in single-family homes until the late 1960s to the mid 1970s, when plastic became the material of choice.

PVC (Polyvinyl Chloride) is a plastic material that became popular in the 1960s as a cheaper and easier-to-install alternative to Cast Iron.
PVC is lightweight and very durable, so it became the main material used in sewer line applications by the early 1970s. Properly installed, PVC has a life expectancy of 100+ years. The common issues we see with improperly installed PVC relate to poorly glued connections that have separated or improperly backfilled lines that have been crushed.

ABS is very similar to PVC in terms of cost and ease of installation, but is considered to be slightly less durable. ABS is widely used in some areas of the country but is not nearly as prevalent as PVC. The common issues we see with improperly installed ABS relate to poorly glued connections that have separated or improperly backfilled lines that have been crushed.

Check out our FAQs for more information.
Source: http://thesewerpros.com/different-piping-materials-their-uses/
Freckles and Blemishes

Skin blemishes are discoloration marks found in our skin. We are either born with them or they can appear as we grow older. Acne, scars, birthmarks, age spots and freckles are just some of the many types of blemishes that can be found in human skin. For those who are not happy with freckles and blemishes, there are a number of freckle treatments in Dubai to diminish and prevent this type of skin blemish.

Signs & Symptoms

There are always some signs and symptoms associated with any condition. Mostly, freckles are harmless, but they may be a warning sign of sensitive skin that is highly vulnerable to sunburn and potential skin cancer. Some symptoms of freckles and blemishes include:
- Changes in the color of skin
- Changes in texture of skin
- Blotches on the skin
- Dryness and roughness

Some common causes of freckles and blemishes are:
- Excessive exposure to the sun: You can prevent or fade freckles by reducing sun exposure.
- Increased melanin: Melanin is the pigment in the skin that is produced by special cells. An increase in melanin can cause freckles or age spots and darkening of skin.
- Genetics: Freckles are influenced by genetics as well.
- Hormonal changes: Changes in hormones are also one of the causes of freckles and age spots.
- Fair skin: Fair skin is more prone to freckles. There is less melanin in fair skin, so on exposure to the sun, production of melanin is stimulated to absorb UV light.
- Skin conditions: Certain skin conditions like acne can also cause blemishes.

Treatment options for freckles and blemishes vary widely, from simple topical treatments to more advanced and effective IPL therapy. Some common as well as effective ones are:
- Topical medications: Some over-the-counter skin lightening creams can help lighten the skin color and improve the appearance of freckles and blemishes.
Hydroquinone, kojic acid and tretinoin (vitamin A acid, Retin-A) are the common ingredients used in topical medications.
- Oral medications: There are various oral medications that are very effective for certain kinds of freckles. Usually, oral medications are prescribed to deal with freckles caused by hormonal changes.
- Laser therapy: Laser treatment is a safe and minimally invasive option for improving freckles. Multiple types of lasers can be used for this purpose. This treatment uses laser energy to break down melanin in the skin, thereby lightening skin tone and improving the appearance of freckles and blemishes. The treatment requires multiple sessions to achieve optimal results. Alexandrite and Nd:YAG lasers are mostly used for treating aging spots.
- IPL (Intense Pulsed Light) therapy: IPL therapy is not a laser treatment, but it works in much the same way. It effectively and safely zaps away the pigment in the skin and promotes healthy cell turnover, revealing a clear and revitalized complexion.
- Chemical peels: Chemical peels are also effective for the treatment of freckles and blemishes. They help remove age spots, freckles, blemishes and discoloration, and gradually make skin clear and smooth. The treatment involves applying a chemical solution to the skin to remove the topmost damaged layers, revealing healthy and clear skin.
- Microdermabrasion: Microdermabrasion is also an effective treatment for certain kinds of freckles. It sloughs off the damaged top layers of skin using a stream of tiny particles, revealing healthy, bright and even skin. It is usually performed over the course of several sessions.

Post Treatment Care

You will be able to resume your normal routine immediately after the treatment. However, the treated area will look darker for about 3 to 5 days. Crusting may also occur, but this will also be temporary. Slight redness and swelling are normal.
Following aftercare instructions will help you get rid of these symptoms quickly. Some common aftercare tips are:
- Avoid excessive sun exposure for a few weeks after treatment.
- Wear a strong sunblock with both UVA and UVB protection before going out in the sun.
- Avoid using AHA, salicylic or glycolic acid creams.
- Apply the prescribed topical creams regularly.
- Don't take hot showers.

There are many benefits of IPL therapy. Have a look at some of them:
- IPL therapy is a safe, gentle and minimally invasive treatment.
- It is a precise treatment and does not cause any damage to healthy skin cells.
- It does not cause any severe discomfort or side effects.
- Besides making superficial improvements, it also boosts the production of collagen in the skin.
- It requires no downtime; you will be able to resume your normal routine immediately after treatment.

If you are looking for freckles treatment in Dubai, Dubai Cosmetic Surgery is here to cater to your needs. You can get a free consultation about freckles treatment in Dubai, Abu Dhabi and Sharjah with one of our experts by filling in the online form.

Fill in the form to get a Free Consultation
Source: https://www.dubaicosmeticsurgery.com/laser-treatments/freckles-and-blemishes/