Weak pulse; absent pulse

A weak pulse means you have difficulty feeling a person’s pulse (heartbeat). An absent pulse means you cannot detect a pulse at all.

Reviewed by Linda Vorvick, MD, Family Physician; Seattle Site Coordinator and Lecturer in Pathophysiology, MEDEX Northwest Division of Physician Assistant Studies, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc. – 2/22/2009
Source: http://www.drgreene.com/adam/pulse-weak-or-absent/
Sussex Incident

Sussex Incident (March 24, 1916): the torpedoing of the Sussex, a French cross-Channel passenger steamer, by a German submarine, leaving 80 casualties, including two Americans wounded. The attack prompted a U.S. threat to sever diplomatic relations. The German government responded with the so-called Sussex pledge (May 4, 1916), agreeing to give adequate warning before sinking merchant and passenger ships and to provide for the safety of passengers and crew. The pledge was upheld until February 1917, when unrestricted submarine warfare was resumed.
Source: http://www.britannica.com/EBchecked/topic/575672/Sussex-Incident
The systole is the point of most tension in the heart’s rhythm; the apex of the sinus. It is the beat we hear and feel, what we might identify as the metronome to our every function. It is the pointing baton of the conductor in our electric orchestra. What can distinguish a conductor from a metronome? Both keep time with the beat, both determine tempo, and both guide the musicians. The difference is cardiac, pulse: diastolic. The diastole is where you end up between beats, when the conductor’s arms are waving about and cueing the coming crescendo. Trace the graph of this sine wave from peak down into trough with your finger and you’ll feel a tingling between the crevices in your skin. You’ll start to find something the metronome can’t reproduce: a sound (music?) shape that is altogether dynamic and static. The conductor’s chambers produce a charge of their own between the beats. Strings and horns follow the pointed peaks with each measure to stay on track, but it is the space between each beat where music is made. When a teacher expects only to stay on the beat, to follow the script or the tempo suggested, the heart becomes stressed and the life quickened. A writer concerned only with words, neglecting their phrasing and negative space, is apt only to draft in straight lines rather than composing in beautiful curves and dips. The true author of learning is the heart. Teaching with respect to pace over people is like putting a pendulum in place of Gustavo Dudamel’s podium; the sounds will still play, even keep time, but it will not be music. If we’re to seek a kind of teaching which goes beyond the accountability of a tempo, we must consider our hearts at rest. Teaching that cultivates real learning is about filling the spaces between instruction with meaning befitting each student. A classroom like this at work is not evaluated by recording descriptions of “time on task” and “compliant and quiet.” Those are symptoms of tempo, meter, and order, but not learning.
Deeper learning is described in the language of “curiously seeking” and “exploring and reflecting,” phrases which denote an independent sense of ownership on the part of learners. Composing a classroom of such habits involves conducting with tempo, of course, but not treating tempo as its sole value. The driving force of a learning community must be the sound of conversation between teacher and student, between school and teacher. We may design procedures and protocols all we want for reading, writing, and organizing our class structure, but what do our students ultimately talk about when they describe class? If they talk about what page they’re on, the chapter headings, or the number of required pages they have to write, something is amiss. We need to identify the beat of our classrooms, what kids “work on” and stress over: their systolic tasks. We should do this with a mind not just to help them “rigorously” pursue the next thing, but to teach them to see how to settle into the perigee, that point farthest from what they expect “school” is about. Then, not only will they learn, they will begin to understand how they learn best.
Source: http://stevejmoore.com/2011/11/
The construction of the first episcopal seat, strictly speaking, commenced in 1535 by order of the bishop of Santo Domingo, Don Sebastián Ramírez de Fuenleal, and was concluded in 1544 during the time of bishop Juan López de Zárate. It was a basilica of three naves, with stone walls and columns, each cut from a single stone. The monolithic columns of Mitla probably inspired the cathedral builders, given that this practice had disappeared centuries earlier in Europe. The roof was made from wooden beams, and between 1553 and 1581 the damage caused to the beams by earthquakes was repaired. By that time the wing chapels must have existed, in which the altarpiece sculptor Andrés de la Concha lived and worked between 1574 and 1594; unfortunately, no traces of these paintings remain. Upon occupying the episcopal seat, the bishop Monterroso proposed to the chapter that the naves be extended with two center-to-center extensions up to the end and that the roofs be domed, which ignited a fiery controversy. The works were concluded around 1680, but the earthquakes of 1694 and 1714, the latter being one of the most severe registered in Oaxaca, gravely damaged the edifice. After several years of reconstruction, the temple was finally opened during the Christmas of 1730, though the consecration by the bishop Santiago y Calderón did not occur until 1733. Few changes were made until the second half of the 19th century, but in 1870, due to the then-recent earthquakes, remodeling was begun, gaining momentum as of 1887, when Monsignor Eulogio Gillow took charge of the episcopal seat, promoted in 1891 to Archbishop. Architecturally, the principal layout of the cathedral of Oaxaca is a basilica of three naves with lateral chapels. Its cross-shaped plan is based on the Roman architecture of covered marketplaces and the first Christian temples.
One peculiarity of the plan is that the crossing is not found in the last third of the longitudinal axis, perhaps due to the enlargement of the temple in the 17th century, together with the lack of an apse. Another peculiarity is the succession of three altars, choir stall, and places of the parishioners down the length of the main nave: the altar of Pardon, which precedes the main entrance of the temple, is associated with the popular nature of the rites carried out within. This is not the case concerning the imposing choir stall and spacious presbytery where the main altar is found. The sequence ends with the back wall, where the altar of the Kings was located and where the composition of the altar of the Holy Spirit is found. The frequency of earthquakes in the region explains the prudent 16-meter height of the main nave, the 2-meter section of the pilasters of the Tuscan order, and the similar thickness of the walls and counterforts. Spherical vaults cover the main nave, the arms of the cross, and the aisles. On a lower level, tunnel vaults cover the lateral chapels, which represents a constructive advantage: the dividing walls act as counterforts which support powerful lateral strain. The main facade, of almost square proportions, corresponds to an elaborate Baroque style. Intricately elaborated panels on the doors with diverse carving in relief, recesses carved with figures of saints in the intercolumniations, and decorated columns, plinths, arches, and stripes all contribute to the formidable visual effect of the facade.

Copyright © 2008–2013 ExploringOaxaca.com. All rights reserved.
Source: http://www.exploringoaxaca.com/churches-convents,oaxaca-city,cathedral-of-oaxaca-city/
Slide type: Picture. Slide content: Illustration of an arrow labeled "Safer Needle Devices" pointing to the right, with three arrows coming out of the point, labeled Engineering Controls, Built-in Safety Features, and Risk Reduction. A safer needle device uses engineering controls to prevent needlestick injuries before, during, or after use through built-in safety features. The term "safer needle device" is broad and includes many different types of devices, including those that have a protective shield over the needle and those that do not use needles at all. The common feature of effective safer needle devices is that they reduce the risk of needlestick injuries to health care workers.
Source: http://www.osha.gov/dte/library/bloodborne/saferneedledevices/slide34.html
Plant Invaders of Mid-Atlantic Natural Areas

Swearingen, J., K. Reshetiloff, B. Slattery, and S. Zwicker. 2002. Plant Invaders of Mid-Atlantic Natural Areas.

Louis' swallowwort is native to Europe, and may have been introduced intentionally for ornamental purposes or imported unintentionally on other plants or materials. Louis' swallowwort occurs in the northeastern and mid-Atlantic states to the Midwest, and in California, where it threatens native flora in fields, forest edges, woods, and open disturbed areas. It grows vigorously and densely, blocking light from reaching the plants it scrambles across, often leading to their death. Swallowwort spreads vegetatively and by seeds dispersed by the wind. Because there are many native milkweed species in the United States, correct identification of this plant is imperative.

Prevention and Control

Bargeron, C.T., D.J. Moorhead, G.K. Douce, R.C. Reardon & A.E. Miller (Tech. Coordinators). 2003. Invasive Plants of the Eastern U.S.: Identification and Control. USDA Forest Service, Forest Health Technology Enterprise Team, Morgantown, WV, USA. FHTET-2003-08.
Source: http://www.dnr.state.il.us/Stewardship/cd/midatlantic/cylo.html
Students learn healthy eating habits with computer-based teaching

According to the Centers for Disease Control and Prevention, 9 million young people in America are overweight, making the need to promote nutrition and health a public priority. Teaching children about healthy eating habits is an important part of student health education in public schools. According to a recent study published in the Journal of Nursing Scholarship, technology-based teaching was more effective in increasing adolescents' development of self-efficacy for healthy eating. "Our findings are important in understanding how to help adolescents develop lifelong healthy eating habits," states author Dr. JoAnn Long. The study compared two curricula, a traditional one and an intervention, as they were implemented in separate junior high schools. Results showed that students responded significantly better to computer-based teaching involving interactive, exploratory, and fun modules than to conventional delivery of nutritional information embedded in health, science, and home economics courses. The popularity of technology-based activities, like video games and Internet use, was key in appealing to the social and developmental preferences of the youth in this study. After one month of instruction, questionnaires were administered to assess dietary knowledge, actual decision-making in eating habits, and the potential for sustained positive eating behavior. Participants in the web-based intervention group had "higher self-efficacy for healthy eating, more dietary knowledge and healthier usual food choice scores than did those in the comparison group."

Source: EurekAlert and others. Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009. Published on PsychCentral.com. All rights reserved.
Source: http://psychcentral.com/news/archives/2004-07/bpl-btb071504.html
What causes congestive heart failure?

Various conditions can undermine heart function or, worse, injure the heart itself. In every case the end result is a heart that can no longer perform its duty of pumping blood properly to the rest of the body. Some conditions can go undetected, increasing the risk for CHF. The conditions include:

Abnormal heart rhythm - The heart beats too fast or too slow (arrhythmia). The heart muscle becomes overly strained or weakened, leading to failure.

Coronary artery disease or heart attack - Blood flow is impeded by a buildup of fatty plaque deposits on arterial walls (atherosclerosis), depriving the heart of blood and weakening it. If the plaque causes the artery to rupture, a heart attack occurs.

Heart muscle damage - Many contributing factors can damage the heart muscle (cardiomyopathy), such as chemotherapy, drug or alcohol abuse, thyroid problems, infections, or diseases like lupus that affect the entire body. Undiagnosed causes are also possible.

Faulty heart valves - When one or more of the four valves sustains damage, the others must work even harder to keep blood flowing properly. Eventually, the heart can weaken. Early detection can correct valve problems.

Birth defects - When someone is born with malformed valves or ventricles, the normal components of the heart have to compensate, bearing additional strain. Genetic (congenital) defects such as tetralogy of Fallot, septal defects, or Down syndrome can increase the risk of heart disease leading to CHF.

High blood pressure - High blood pressure (hypertension) restricts blood flow through the body, as more force is needed to push it along, so the heart must pump harder to overcome the pressure in the arteries. To do this, the heart increases its muscle mass like a bodybuilder, resulting in an enlarged heart and eventual weakening or rigidity.

Inflammation - Viruses can cause the heart muscle to become inflamed (myocarditis). Left-sided heart failure is a common result.

Other diseases - Sudden (acute) heart failure can result from allergic reactions, certain medications, blood clots in the lungs, and many illnesses that attack the body as a whole. Chronic conditions can also induce failure, including hypo- and hyperthyroidism, anemia, diabetes, emphysema, hemochromatosis, amyloidosis, and lupus.

Certain risk factors make it more likely that a person develops congestive heart failure, especially age. Heart failure can be brought on by a solitary factor, and additional factors in combination further increase the risk. The most common are:

Age - People 40 and older have a 1 in 5 chance of developing CHF. Older people are affected more often, due to disease or age-related conditions that weaken the heart.

Alcohol abuse - Overuse of alcohol weakens the muscle tissue, which can cause eventual heart failure.

Coronary artery disease (CAD) - Partially blocked arteries keep the heart from getting the oxygen-rich blood necessary to maintain its strength.

Congenital defects - Abnormal heart structures present from birth, such as a deformed valve or defects of the inner wall of the heart chambers (the septum), put people at greater risk of developing CHF later in life.

Diabetes - Diabetes can increase the chances of getting coronary artery disease and high blood pressure.

Heart attack - Muscle damage often results from this type of event, affecting heart function and increasing the risk of CHF.

High blood pressure - The heart works much harder than it should to distribute blood through the system, increasing the risk of heart failure.

Irregular heartbeat - Again, the heart labors under stress to pump efficiently, perhaps even weakening with time.

Kidney problems - These can cause fluid retention and high blood pressure, both harmful to the heart.

Medications - Some diabetes drugs can increase one's risk for heart failure. Sometimes an adjustment of dosage is enough to avoid problems.

Sleep apnea - Abnormal heart rhythms and low blood-oxygen levels can occur, as this condition causes the sleeping person to breathe improperly.

Viruses - Muscle tissue can be injured by viral infections.

The war on heart disease is more effective than it has ever been. But as people live longer, they should also be more vigilant about their bodies, especially where the heart is concerned. What are the signs to look for? How do we identify possible heart failure? The next section, on symptoms of congestive heart failure, outlines what to look for.
Source: http://ehealthforum.com/health/congestive_heart_failure_causes_and__risk_factors-e528.html
Cloud computing is the latest buzz on the Internet these days. What does it mean to us, and where is the future of cloud computing headed? In the mid-90s we had Citrix, with its vision for server-based computing. It worked much like the mainframe idea that came a couple of decades before: you put all your resources on one server, and thin clients connect to receive resources. A couple of years later we had a new buzzword, ASP (Application Service Provider), which according to Wikipedia is a business that provides computer-based services to customers over a network. A few years later, ASP changed its name to SaaS (Software as a Service), also referred to as software on demand. In between, we had VMware, which introduced the world to (at least the most famous implementation of) server virtualization.

What is Cloud Computing?

According to Wikipedia, cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. The idea of cloud computing enables customers to avoid investing money in hardware and network equipment and instead rent usage from a third-party provider. Cloud computing has the following key features:

- Agility improves with users’ ability to rapidly and inexpensively re-provision technological infrastructure resources.
- Cost is claimed to be greatly reduced.
- Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile).
- Multi-tenancy enables sharing of resources and costs across a large pool of users.
- Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
- Scalability via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis near real time, without users having to engineer for peak loads.
- Maintenance: cloud computing applications are easier to maintain, since they don’t have to be installed on each user’s computer.
- Metering: cloud computing resource usage should be measurable and should be metered per client and application on a daily, weekly, monthly, and annual basis.

The confusion point and vision

People tend to confuse companies moving their data centers and applications toward the cloud with actual cloud computing providers. A real cloud computing provider is built from large-scale data centers around the world. Each rack is built from cheap (to manufacture) hot-swappable hardware; it’s time to say goodbye to 1U-4U servers from all the major vendors (HP, IBM, DELL, SUN, etc.). Each blade has many-core CPUs (4-core, 6-core and above) and a lot of memory (as much as the hardware supports). Each blade is connected to a large-scale storage grid. Everything must be redundant: you must be able to add new racks on demand without affecting any customer. Servers, network equipment, and storage devices must be configured in active-active clusters. Data should be replicated on the fly between data centers across the world, in order to provide 24/7 availability. Guest operating systems must be able to move between physical servers transparently, as VMware introduced with its VMotion technology. Server maintenance should be performed on a scheduled basis; since everything is transparent to the customer, firmware upgrades, patch management, and software/application upgrades will not affect any customer. The hardware/network/storage layer should be separated from the application layer, so that current SaaS companies will be able to integrate their existing applications into the cloud era and work transparently with cloud computing infrastructure.

Cloud computing's Achilles heel

The thing that drives most people off the cloud is security. Customers can’t physically protect their hardware, since they don’t own it. Customers have trouble protecting their data, since everything is built on virtual machines connected to shared virtual storage. I hope that in the near future information security professionals will be able to close this gap and offer customers transparent, cheap, and secure solutions.
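The metering feature described above, usage measured per client and per application, can be sketched as a simple roll-up of usage records. Everything here (the client names, the record shape, and CPU-hours as the metered unit) is a hypothetical illustration, not any provider's actual billing API:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage records: (client, application, day, cpu_hours).
records = [
    ("acme",   "web", date(2013, 5, 1), 4.0),
    ("acme",   "web", date(2013, 5, 2), 6.0),
    ("acme",   "db",  date(2013, 5, 1), 2.5),
    ("globex", "web", date(2013, 5, 1), 1.0),
]

def meter(records):
    """Roll usage up per (client, application), as the metering
    feature described above requires."""
    totals = defaultdict(float)
    for client, app, day, cpu_hours in records:
        totals[(client, app)] += cpu_hours
    return dict(totals)

print(meter(records))
# {('acme', 'web'): 10.0, ('acme', 'db'): 2.5, ('globex', 'web'): 1.0}
```

The same aggregation keyed on `(client, app, day.month)` would give the monthly view; the point is only that usage must be recorded at a granularity fine enough to re-aggregate per client and per application.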
Source: http://security-24-7.com/cloud-computing-vision/
What Is HIV?

To understand what HIV is, let’s break it down:

H – Human – This particular virus can only infect human beings.
I – Immunodeficiency – HIV weakens your immune system by destroying important cells that fight disease and infection. A "deficient" immune system can't protect you.
V – Virus – A virus can only reproduce itself by taking over a cell in the body of its host.

Human Immunodeficiency Virus is a lot like other viruses, including those that cause the "flu" or the common cold. But there is an important difference – over time, your immune system can clear most viruses out of your body. That isn't the case with HIV – the human immune system can't seem to get rid of it. Scientists are still trying to figure out why. We know that HIV can hide for long periods of time in the cells of your body and that it attacks a key part of your immune system – your T-cells or CD4 cells. Your body has to have these cells to fight infections and disease, but HIV invades them, uses them to make more copies of itself, and then destroys them. Over time, HIV can destroy so many of your CD4 cells that your body can't fight infections and diseases anymore. When that happens, HIV infection can lead to AIDS.

What Is AIDS?

To understand what AIDS is, let’s break it down:

A – Acquired – AIDS is not something you inherit from your parents. You acquire AIDS after birth.
I – Immuno – Your body's immune system includes all the organs and cells that work to fight off infection or disease.
D – Deficiency – You get AIDS when your immune system is "deficient," or isn't working the way it should.
S – Syndrome – A syndrome is a collection of symptoms and signs of disease. AIDS is a syndrome, rather than a single disease, because it is a complex illness with a wide range of complications and symptoms.

Acquired Immunodeficiency Syndrome is the final stage of HIV infection. People at this stage of HIV disease have badly damaged immune systems, which put them at risk for opportunistic infections (OIs). You will be diagnosed with AIDS if you have one or more specific OIs, certain cancers, or a very low number of CD4 cells. If you have AIDS, you will need medical intervention and treatment to prevent death. For more information, see CDC’s Basic Information About HIV And AIDS.

Where Did HIV Come From?

Scientists believe HIV came from a particular kind of chimpanzee in Western Africa. Humans probably came in contact with HIV when they hunted and ate infected animals. Recent studies indicate that HIV may have jumped from monkeys to humans as far back as the late 1800s. For more information, see CDC's Where Did HIV Come From?

Fact Sheets & Print Materials
- AIDSinfo – HIV And Its Treatment: What You Should Know (PDF)
- National Institute of Allergy and Infectious Diseases – HIV/AIDS
- CDC – Rapid HIV Testing

Frequently Asked Questions

Do all people with HIV have AIDS? No. Being diagnosed with HIV does NOT mean a person will also be diagnosed with AIDS. Healthcare professionals diagnose AIDS only when people with HIV disease begin to get severe opportunistic infections or their CD4 counts fall below a certain level. For more information, see CDC’s Basic Information About HIV And AIDS.

Where did HIV come from? Scientists identified a type of chimpanzee in West Africa as the source of HIV infection in humans. The virus most likely jumped to humans when humans hunted these chimpanzees for meat and came into contact with their infected blood. Over several decades, the virus slowly spread across Africa and later into other parts of the world. For more information, see CDC's Basic Information About HIV And AIDS: Origin Of HIV.

Last revised: 06/06/2012
Source: http://www.aids.gov/hiv-aids-basics/hiv-aids-101/what-is-hiv-aids/index.html
This is a fun game that challenges students to find the factors of a given product. My students are allowed to use their multiplication charts to help them until they are fluent with their multiplication facts. This game can be played as a whole class or as a center activity with a partner. Divide the students into teams and have one student choose a product from a container. Each team works together to find all the factors of that number. All teams are given the opportunity to write down the factors and show them to the teacher. Each team that finds all the factors is allowed to move the number of yards equal to the chosen product.

Aligned to the 3rd Grade Common Core:
3.OA.1. Interpret products of whole numbers, e.g., interpret 5 × 7 as the total number of objects in 5 groups of 7 objects each.
3.OA.4. Determine the unknown whole number in a multiplication or division equation relating three whole numbers.
3.OA.5. Apply properties of operations as strategies to multiply and divide.
3.OA.6. Understand division as an unknown-factor problem.
3.OA.7. Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division.
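For a teacher who wants to check a team's answer sheet quickly, finding every factor of a chosen product is a short computation. A minimal sketch (the function name is ours, not part of the activity; it pairs each divisor up to the square root with its partner):

```python
def factors(n):
    """Return all factors of n, pairing each divisor d <= sqrt(n)
    with its partner n // d."""
    result = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            result.add(d)       # the small divisor
            result.add(n // d)  # its matching large divisor
        d += 1
    return sorted(result)

print(factors(36))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]
```

Checking divisors only up to the square root mirrors the strategy students can use with a multiplication chart: once the two factors in a pair cross, every remaining factor has already been found.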
Source: http://www.teacherspayteachers.com/Product/Factor-Football-Activity-Aligned-to-the-Common-Core-479839
January 2003. VOL.3 ISSUE 1 HAKKA STRONGHOLD IN THE SOUTH By Angelica Montgomery The importance of Hakka heritage to Meinung is obvious if you glance at a tour map of this famous Kaohsiung County town. Numerous cultural centers, shops and other attractions celebrate and exploit the long history of Taiwan's largest ethnic minority. 10 and 15 percent of Taiwan¡¦s people regard themselves as Hakka, but of Meinung's 50,000 residents, nearly 90 percent are Hakka. Their ancestors, drawn by the availability of fertile, well-watered land, began settling in the area during the first year of the reign of Qianlong, the Qing Dynasty emperor who ruled China from 1736 Their history, from umbrellas to tobacco, music to ecology, is depicted in the expansive two-story Meinung Hakka Museum (some English-language pamplets refer to this place as the Kaohsiung County Meei-noog The Hakkas Museum). museum provides a library and access to a database system for Hakka research. It also highlights significant points in Meinung's history. During the early era of settlement, conflicts with Minnan Taiwanese and aborigines were common, and led Meinung citizens to build walls and gates around the women, who would usually face the water when washing clothes in a stream, Hakka women did their laundry with their backs to the river, facing the bank, wary of invaders. Former President Lee Teng-hui is a Hakka; so was the late mainland Chinese leader Deng Xiaoping. Hakka people are widely respected for their work ethic and devotion to education. As part of a traditional regard for scholarship, paper was considered too valuable to be thrown away. Instead it was ritually burned, and a disused stove tower for this purpose still stands in Meinung. In the days of yore, waste paper would be burned in an annual public ceremony. The ashes were then thrown into the river that flows through the town. one point nearly every Meinung home played some role in the labor-intensive tobacco harvest. 
The museum depicts the sheds used to dry the tobacco crop. These clever structures allowed the temperature to rise from 32¢XC to 73¢XC over eight days; the shed temperature could never be allowed to rise more than five degrees per day, or the tobacco would turn black. The people of Meinung harvested tobacco once a year, around the Lunar New Year. During this time, smoke would rise from tobacco sheds near almost every homestead. Hakka traditional song also has a connection to farm life. Songs often took the form of lyrical exchanges between the men and women working the fields. Hakka soceity was notably conservative, so the words passed between the men and women expressed emotion through innuendo and double-meanings. The museum also discusses the impact of the Japanese occupation of Taiwan on Meinung. Japanese police built a gate to the settlement, and used it to monitor and suppress the population. After the Japanese departed at the end of World War II, villagers tore down the Japanese east gate to erase this painful history, and built a new one which stands today. Meinung Hakka Museum is open from 10 am to 4 pm Tuesday to Friday and 9 am to 5 pm on Saturday and Sunday. It is closed on Mondays and national holidays. On weekends and important occasions, musicians perform in the museum courtyard. one of Meinung¡¦s quite roads, the Lei Cha House Tea shop (see page 18) serves a Hakka tea that nobility enjoyed 1,000 years ago. Guests are invited to don traditional blue Hakka frocks and dine on "yen-yeh fan," tobacco-leaf rice (NT$ In recent years Meinung has become synonymous with the making of painted umbrellas. This craft is actually a relatively recent introduction: In 1920 a Hakka businessman from Meinung visited mainland China, and after seeing the beauty of oil-paper umbrellas made there, decided to import the skill to Meinung. 
The Yuan Shiang Yuan Cultural Village, which contains shops selling hand-made pottery, works of art, Paiwan glass beads and other gifts, stocks some of the most celebrated umbrellas in Meinung. Visitors can see the oil-paper umbrellas in the process of being handmade. Each umbrella takes around four-and-a-half hours to make, and sells for between NT$600 and NT$1200, depending on its size. Umbrellas with a hand-painted image of a woman cost more than those with flowers or birds because of the extra detail required. According to Lin Hsiu-man, a third-generation umbrella maker, these works of art can be used for up to eight years, but require careful looking after.

Visitors with their own transportation should try to spend some time outside the town, where the fields are still tilled by Hakka farmers. If anything, Hakka culture and traditional architecture survive even better there than in downtown Meinung, and provide outsiders with an additional perspective on this enduring ethnic community.

Since the opening of Freeway No. 10, which connects Kaohsiung City and Chishan, driving to Meinung has become much easier and quicker. If you do arrive from the Chishan direction, check out the YiMin Temple before heading into the downtown. Other attractions include Chungcheng Lake, close to the Meinung Hakka Museum, and the Shuanghsi Forest Area, several kilometers to the east.
<urn:uuid:3e5fb59f-42dd-43f2-bb39-35001a255c1d>
CC-MAIN-2013-20
http://www.taiwanfun.com/south/kaoping/articles/0301/0301CoverStory.htm
2013-05-19T02:53:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927851
1,256
Abraham Lincoln and John F. Kennedy were two of the greatest U.S. Presidents, and they are forever linked by an incredible series of eerie similarities. 1. Lincoln was elected in 1860. Kennedy was elected in 1960. 2. Lincoln’s assassin was born in 1839. Kennedy’s assassin was born in 1939. 3. Lincoln’s Vice President was a Southerner named Johnson born in 1808. Kennedy’s Vice President was a Southerner named Johnson born in 1908. 4. Lyndon Johnson and Andrew Johnson both died 10 years after their predecessors. 5. Both Presidents were slain on a Friday, as their wives looked on. Both were shot from behind and in the head. 6. John Wilkes Booth shot Lincoln in a theater and ran to a warehouse. Lee Harvey Oswald shot Kennedy from a warehouse and ran to a theater. 7. Both assassins were killed before they could be brought to trial. 8. Presidents Lincoln and Kennedy were both shot in Fords – Lincoln in Ford’s Theatre and Kennedy in a Lincoln Continental, which is made by the Ford Motor Co. 9. President Lincoln’s secretary was named Kennedy and she advised him not to go to the theater where he was killed. President Kennedy’s secretary was named Lincoln and she advised him not to go to Dallas, where he was assassinated. 10. The names Lincoln and Kennedy each have seven letters. The names of their Vice Presidents – Andrew Johnson and Lyndon Johnson – each contain 13 letters.
<urn:uuid:b13c1caa-58ba-40f8-a629-d70b4d176ae4>
CC-MAIN-2013-20
http://weeklyworldnews.com/politics/6733/10-jfk-lincoln-coincidences/?like=1&_wpnonce=2984bf4a5d
2013-05-20T12:00:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989323
318
"There is no greater sorrow on earth than the loss of one's native land." - Euripides, 431 B.C.

A Gallery in Tribute to Heroes of the Political Struggle for Aboriginal Rights: 1900 - 2000

Margaret Tucker, 1904 - 1996
- 1904 born Warrangesda Mission, New South Wales
- 1917 removed from family and sent to Cootamundra Girls Home
- 1930s campaigns with William Cooper, Doug Nicholls and Bill Onus
- 1932 co-founder of Australian Aborigines League
- 1938 one of the Victorian women at the "Day of Mourning" protest in Sydney
- 1963 co-founder of the United Aboriginal & Islander Womens Council
- 1964 first woman appointed to Victorian Aborigines Welfare Board
- 1968 awarded an OBE

Author, Campaigner and Community worker. Born: 1904, Warrangesda Mission, New South Wales. Died: 1996.

Margaret (Lilardia) Elizabeth Tucker was born on Warrangesda Mission and spent her early childhood on the Cummeragunja and Moonaculla Missions in New South Wales. Her father, William Clements, was Wiradjuri and her mother Teresa (Yarmuk) Clements, née Middleton, was Yulupna. At the age of thirteen, Tucker and her sister May were separated from their mother against her wishes and taken to the Cootamundra Girls' Home. Tucker has written of her harrowing experiences under the care and training of the Aborigines Protection Board and in domestic service for white families in Sydney in her 1977 autobiography, If everyone cared.

By the 1930s, Tucker had begun to campaign for Aboriginal rights alongside other legendary Koori campaigners including William Cooper, Bill and Eric Onus, and Doug Nicholls. In 1932, she was co-founder of the Australian Aborigines League and on 26 January 1938 was one of the Victorian representatives observing the first national Day of Mourning. She was also instrumental in founding the United Council of Aboriginal and Islander Women in the 1960s. Tucker was the first Aboriginal woman appointed to the Aborigines Welfare Board (Victoria), 1964, and the Ministry of Aboriginal Affairs, 1968.
Tucker was appointed as a Member of the Order of the British Empire (Civil) on 1 January 1968 for services to the Aboriginal community. Source: National Foundation for Australian Women on Australian Women's Archives Project Web Site
<urn:uuid:78873211-0a27-4900-9a8d-0a4caa49acbc>
CC-MAIN-2013-20
http://www.kooriweb.org/foley/heroes/tucker.html
2013-05-20T22:32:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964131
492
A way to toughen up the latex particles used to make emulsion paints has been developed by UK chemists. The approach involves adding tiny slivers of clay armor to make the particles more hard wearing and fire resistant.

Until now, latex emulsion paints have been made by adding a soap-like surfactant molecule to allow the hydrophobic, or water-hating, polymer ingredients to mix with water. The surfactant stabilizes the paint mixture and allows decorators everywhere the chance to slap on a multitude of colors with a matt or satin finish to walls, ceilings, and other surfaces.

Now, chemists Stefan Bon and Patrick Colver of the University of Warwick have taken a different approach. They have found a simple way to individually coat the prospective paint's polymer particles with disks of Laponite clay just a few billionths of a meter in diameter. These nanodisks, just 1 nanometer thick and 25 nanometers in diameter, create an armored layer on the individual polymer latex particles in the paint. Because the Laponite clay has an ambivalent chemical nature, it can bond to the hydrophobic particles but also sit comfortably in the hydrophilic phase, the water. So, not only does it provide particulate protection, it makes the surfactant additive redundant in emulsion paint.

The Laponite clay disks can be applied using current industrial paint manufacturing equipment and treatment with ultrasound (sonication), say the researchers. Starting materials for the polymers are styrene, lauryl (meth)acrylate, butyl (meth)acrylate, octyl acrylate, and 2-ethyl hexyl acrylate.

The new clay armor is not only about improving home improvements. The team says the same technology can also be used to create highly sensitive materials for sensors. The researchers can take a closely packed sample of the armored polymers and heat it to burn away the polymer cores of the armored particles, leaving just a network of nanoscopic interconnected hollow spheres.
This gives a very large useful surface area in a very small space, which is an ideal material to use in creating compact but highly sensitive sensors.

Bon, S., & Colver, P. (2007). Pickering Miniemulsion Polymerization Using Laponite Clay as a Stabilizer. Langmuir, 23(16), 8316-8322. DOI: 10.1021/la701150q
<urn:uuid:aec69e9d-5dc9-47c3-bf25-b231a20b0dce>
CC-MAIN-2013-20
http://www.reactivereports.com/chemistry-blog/heat-resistant-paint.html
2013-05-22T14:17:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.887859
501
The Earthen Architecture Initiative (EAI) seeks to further the conservation of earthen architecture through international activities and institutional partnerships, including: - seismic stabilization of historic earthen buildings - structural grouting - strategic planning - assessment of ethyl silicate-based consolidants for earthen finishes Our earthen architectural heritage is rich and complex. A ubiquitous form of construction, earthen architecture appears in ancient archaeological sites as well as in modern buildings, in large complexes and historic centers, and in individual structures and decorated surfaces. At microscopic and macroscopic levels—and on physical and social planes—earthen architecture is vastly varied. Thus a range of disciplines in study, research, and practice are associated with its conservation. The field of earthen architecture has grown tremendously in recent decades. This development is reflected in a series of international conferences, the first in 1972 in Iran and the latest in Mali in 2008, devoted to the preservation of earthen architecture. With each conference, the number of participants has increased along with their geographic and professional diversity. Academics, scientists, architects and conservation practitioners, united by their interest in earthen architecture, now convene every few years to discuss chemistry, soil science, seismology, hydrology, structural engineering, archaeology, sociology, sustainability, and more, as they pertain to earthen architectural heritage. As the exchange of ideas within the field has expanded, so have opportunities for collaboration. In 1994 the Getty Conservation Institute joined the Gaia Project (a partnership of CRATerre-EAG and ICCROM) to promote the conservation of earthen architecture through the first Pan-American course on the subject. 
Three years later, capitalizing on their independent and shared experiences in earthen architecture education, research, and field projects, the three institutions formed Project Terra. Although the Project Terra partnership culminated in 2006, its long-term initiatives and goals have continued under the programs of the individual partner institutions. Advancing the discipline of earthen conservation is the organizing principle for all of the Earthen Architecture Initiative's activities—which include model projects that improve the way conservation interventions are carried out in different parts of the world, pursuing research that addresses unanswered questions in the field of earthen conservation, and disseminating information regarding appropriate conservation interventions on historic buildings, settlements, and archaeological sites composed of earthen materials. The GCI supports the field of earthen architecture—as it matures from a special interest topic into a distinct discipline and science—through a vigorous program that includes laboratory research, field projects, training, conferences, and publications focused on earthen architecture conservation. Since 2006 the EAI has organized and participated in a series of meetings with experts in the field, with the ultimate objective of identifying key areas for project research and implementation. Last updated: January 2011
<urn:uuid:6325c3fc-7ce6-42d0-8e4a-aadf6925c99f>
CC-MAIN-2013-20
http://www.getty.edu/conservation/our_projects/field_projects/earthen/
2013-05-23T19:21:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703728865/warc/CC-MAIN-20130516112848-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937798
589
by T. Friedrich, A. Timmermann, A. Abe-Ouchi, N. R. Bates, M. O. Chikamoto, M. J. Church, J. E. Dore, D. K. Gledhill, M. González-Dávila, M. Heinemann, T. Ilyina, J. H. Jungclaus, E. McLeod, A. Mouchet, and J. M. Santana-Casiano Nature Climate Change in press, doi:10.1038/nclimate1372 ABSTRACT: Since the beginning of the Industrial Revolution humans have released ~500 billion metric tons of carbon to the atmosphere through fossil-fuel burning, cement production and land-use changes1, 2. About 30% has been taken up by the oceans3. The oceanic uptake of carbon dioxide leads to changes in marine carbonate chemistry resulting in a decrease of seawater pH and carbonate ion concentration, commonly referred to as ocean acidification. Ocean acidification is considered a major threat to calcifying organisms4, 5, 6. Detecting its magnitude and impacts on regional scales requires accurate knowledge of the level of natural variability of surface ocean carbonate ion concentrations on seasonal to annual timescales and beyond. Ocean observations are severely limited with respect to providing reliable estimates of the signal-to-noise ratio of human-induced trends in carbonate chemistry against natural factors. Using three Earth system models we show that the current anthropogenic trend in ocean acidification already exceeds the level of natural variability by up to 30 times on regional scales. Furthermore, it is demonstrated that the current rates of ocean acidification at monitoring sites in the Atlantic and Pacific oceans exceed those experienced during the last glacial termination by two orders of magnitude.
<urn:uuid:7b9fab78-aac5-4cb8-99c5-2a8a7b23db7d>
CC-MAIN-2013-20
http://newscience.planet3.org/2012/01/25/detecting-regional-anthropogenic-trends-in-ocean-acidification-against-natural-variability/?shared=email&msg=fail
2013-05-23T05:22:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702849682/warc/CC-MAIN-20130516111409-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.875077
368
GNU MIX Development Kit (MDK)

MIX is Donald Knuth's mythical computer as described in his monumental work The Art of Computer Programming. Like any of its real counterparts, the MIX features registers, memory cells, an overflow toggle, comparison flags, input-output devices, and a set of binary instructions executable by its virtual CPU. You can program the MIX using an assembly language called MIXAL, the MIX Assembly Language.

So, what's the use of learning MIXAL? The MIX computer is a simplified version of real CISC computers, and its assembly language closely resembles real ones. You can learn MIX/MIXAL as an introduction to computer architecture and assembly programming: see the MDK documentation for a tutorial on MIX and MIXAL.

MDK (MIX Development Kit) offers an emulation of MIX and MIXAL. The current version of MDK includes the following applications:
- mixasm: A MIXAL compiler, which translates your source files into binary ones, executable by the MIX virtual machine.
- mixvm: A MIX virtual machine which is able to run and debug compiled MIXAL programs, using a command-line interface with readline's line-editing capabilities.
- gmixvm: A MIX virtual machine with a GTK+ GUI which lets you run and debug your MIXAL programs through a nice graphical interface (see screenshots).
- mixguile: A Guile interpreter with an embedded MIX virtual machine, manipulable through a library of Scheme functions.
- mixal-mode.el: An Emacs major mode for editing MIXAL source files, providing syntax highlighting, documentation lookup and invocation of mixvm within Emacs (since version 22, mixal-mode is part of the standard Emacs distribution).
- mixvm.el: An elisp program which allows you to run mixvm within an Emacs GUD window, simultaneously viewing your MIXAL source file in another buffer.
Using the MDK tools, you'll be able to:
- write, compile and execute MIXAL programs,
- set breakpoints and run your programs step by step,
- set conditional breakpoints (register change, memory change, etc.),
- collect execution timing statistics,
- trace executed instructions,
- inspect and modify the MIX registers, flags and memory contents at any step,
- simulate MIX input-output devices using the standard output and your file system.

The user's manual is distributed with the source tarball in texinfo format, which is converted to info files during the installation process. It is also available in a variety of formats in the documentation section.

- Repository: git://git.sv.gnu.org/mdk.git
- Development branch: master
- Online access here

You can get the sources using the following incantation:

    git clone git://git.sv.gnu.org/mdk.git

or, for those of you behind a firewall:

    git clone http://git.sv.gnu.org/r/mdk.git
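To give a flavour of the MIXAL syntax these tools consume, here is a hello-world program in the style of the MDK tutorial. Treat it as an illustrative sketch rather than the manual's verbatim listing: device number 19 is the conventional MIX console device, and the ORIG address and label names are arbitrary choices.

```
* hello.mixal: write a greeting on the MIX console.
TERM    EQU     19          the MIX console device number
        ORIG    3000        start address of the program
START   OUT     MSG(TERM)   output the data block at address MSG
        HLT                 halt execution
MSG     ALF     "MIXAL"     the message, five characters per MIX word
        ALF     " HELL"
        ALF     "O WOR"
        ALF     "LD   "
        END     START       end of the program, entry point START
```

Under the workflow described above, you would compile this with mixasm (e.g. `mixasm hello`) and run the resulting binary in the virtual machine (e.g. `mixvm -r hello`); check the MDK manual or `mixasm --help` for the exact invocation on your installation.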
<urn:uuid:d4ccbbe1-504a-46e6-9d96-c6fc967b3d11>
CC-MAIN-2013-20
http://www.gnu.org/software/mdk/mdk.html?source=navbar
2013-05-22T07:28:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.823474
643
inconsistency

use in predicate calculus
...and not F"; and (3) those true on some specifications and false on others, as with "Something is F and is G." These are, respectively, the tautologous, inconsistent, and contingent sentences of the predicate calculus. Certain tautologous sentence types may be selected as axioms or as the basis for rules for transforming the symbols of the various...
<urn:uuid:fe19fc54-227a-435a-bdee-54eca8fc14e5>
CC-MAIN-2013-20
http://www.britannica.com/EBchecked/topic/284904/inconsistency
2013-05-20T12:23:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.909664
145
Learn something new every day More Info... by email A class B misdemeanor is a classification for a crime that is considered less serious than a felony, and in the mid-range of offenses charged as misdemeanors. Not all jurisdictions have a designation for class B misdemeanors but those that do tend to have class A and class C misdemeanors as well, with A being the most serious and C the least serious. Punishments for class B misdemeanors may rely on legal standards in the jurisdiction as well as the circumstances of the crime. It is important to note that the way in which misdemeanors are charged in different areas will affect the applicable punishments. Something considered a class A misdemeanor in one region may be a class B in another, or may even be upgraded to a felony. Repeated class B misdemeanors, such as multiple convictions of driving under the influence, may bring a higher grade of punishment. To understand the specific designations of misdemeanors in each region, it is important to study local law or consult with local legal professionals. Generally, there are three types of punishments for class B misdemeanors: jail time, fines, and alternative sentences. In some regions, the type of punishment may be at the discretion of the judge, based on the facts of the case and the defendant's criminal record. In other areas, punishments for class B misdemeanors may be based on a set schedule of options that require the judge to assign certain punishments for certain crimes. Some regions also use a combination of these two approaches, providing a sentencing schedule for general use, but also allowing judges to use their own methods to determine an appropriate sentence. Jail time is one of the more common punishments for class B misdemeanors. Typically, the time in jail is 90 days or less, but the defendant may receive credit for time served while awaiting trial. 
Judges may also choose to order a suspended sentence, which means that the defendant may not have to serve time if he or she completes alternative programs, such as drug or alcohol rehabilitation programs. Suspended sentences may also be dependent on the convicted person staying out of trouble and sticking to certain provisional guidelines. Fines are often used as punishments for class B misdemeanors. These typically have an upper limit, such as $1,000 US Dollars (USD), based on a fine schedule for certain crimes. If a person cannot pay a fine, he or she may be subject to property or asset seizure or jail time. Paying a fine is often the quickest way to conclude a case. Alternative sentences are used in cases where a judge feels that a person may benefit and learn more from treatment programs or therapy than from jail time or fines. These sentences may include community service, rehabilitation programs, driver safety classes, anger management courses, or court-ordered therapy. Typically, alternative sentences are used with a suspended jail sentence or fine, to ensure that the convicted person has incentive to complete the program.
<urn:uuid:544e8da5-49c8-4d4d-adcd-c84fab2c3eb3>
CC-MAIN-2013-20
http://www.wisegeek.com/what-is-the-punishment-for-class-b-misdemeanors.htm
2013-05-18T17:28:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952826
602
Tobacco is a legal and widely-used drug, especially among HIV-positive people. In the Futures 5 survey, 47.6 percent of respondents had smoked tobacco in the last 12 months, a figure that is significantly higher than the 23 percent of Australians who smoke. Smoking is highly addictive and its negative health impacts have been well established.

Smoking, by itself, does not make HIV infection worse. In clinical studies (clinical: pertaining to or founded on observation and treatment of participants, as distinguished from theoretical or basic science), people who smoked tobacco did not do any worse (or better!) than those who didn't in terms of HIV disease progression. But smoking has been linked to increased rates of some HIV-related opportunistic infections, and HIV-positive people who smoke may be more likely to suffer smoking-related diseases than HIV-negative smokers.

Several studies have shown that the AIDS-related pneumonia PCP (pneumonia: an inflammation of the lung, usually caused by infection with bacteria or other microorganisms, in which the air sacs of the lung become filled with inflammatory cells which solidify and inhibit breathing) is more common among people who smoke, and that the risk of dying from PCP is higher in smokers. Positive smokers are also more likely to develop oral hairy leukoplakia, oral candidiasis and community-acquired bacterial pneumonia, compared to non-smokers. Compared to HIV-negative smokers, HIV-positive smokers have an increased risk of developing emphysema, a debilitating smoking-related illness which prevents adequate oxygen from entering the bloodstream. There is also increasing evidence that the incidence of lung cancer is higher if you're HIV-positive, regardless of your CD4 count or viral load (a measurement of the quantity of HIV RNA in the blood, expressed as the number of copies of HIV per milliliter of blood plasma).
It's well known that smoking is a major cause of heart and artery disease (an artery is a blood vessel which carries oxygenated blood away from the heart), high blood pressure (a persistently raised, outwardly symptomless condition which carries an increased risk of serious illnesses such as stroke, heart disease and heart attack), and stroke. Living long-term with HIV, and taking some HIV treatments, are also considered risk factors for the development of these diseases, so it's likely that HIV-positive people who smoke will have a significantly increased likelihood of developing them as they get older.

THE BOTTOM LINE: Clinical studies have not shown that smoking tobacco worsens HIV directly, but they have shown that positive people who smoke are more likely to develop smoking-related illnesses and some AIDS-related complications. If you're one of the 47 percent of positive people who smoke, giving up the habit could be the most important health decision you can make.
<urn:uuid:6cfa3014-a1a1-4603-b340-b5cb4cf432fc>
CC-MAIN-2013-20
http://napwa.org.au/print/pl/2008/07/the-bottom-line-smoking
2013-05-25T13:22:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705955434/warc/CC-MAIN-20130516120555-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960724
584
This is an image of Uranus' atmosphere.

An Overview of Motions in Uranus' Atmosphere

Motions in the atmosphere include wind. The major winds in the Uranian atmosphere are the zonal winds, which consist of westward-flowing zones and eastward-flowing belts. The other major means of motion in the atmosphere is diffusion. There is a continual circulation within the atmosphere from the top to the bottom. Ethane diffuses down through the atmosphere and collects at the bottom, where it breaks apart and becomes methane. The methane returns to the top of the atmosphere by diffusion. This constant breakdown and assembly of methane and ethane is part of the evolution of Uranus and affects its weather.
<urn:uuid:a26ef7df-de76-4342-a9a3-d99ab94a56cc>
CC-MAIN-2013-20
http://www.windows2universe.org/uranus/atmosphere/U_atm_motions_overview.html&edu=high
2013-05-23T04:33:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.913936
532
06 August 2012

As we confront a changing global climate there is constant, and well warranted, debate as to what we can do as humanity to promote a healthy climate situation. Central to the debate is the role of trees in reducing the pollution that contributes to climate change. Every so often misleading headlines to the effect that trees increase pollution levels crop up, but trees are certainly an overall pollution fighter, and new research has shown that they might be more powerful in this regard than previously thought.

Plants do many things that theoretically benefit an urban environment. It is estimated that a mature leafy tree produces enough oxygen in one season to sustain ten humans for a year. Plants also fight noise pollution, provide windbreaks, offer shade, and stabilise the soil. There is also evidence that plants reduce air pollution. Plants trap and hold particle pollutants (dust, ash, pollen and smoke) that can damage human lungs. They absorb carbon dioxide (CO2) to the extent that one acre of trees absorbs CO2 equivalent to what is produced by driving your car 42,000 kilometres.

Every now and then it is suggested that trees contribute to air pollution, but that is actually a misunderstanding. What actually happens is that many trees emit reactive molecules known as volatile organic compounds (VOCs), the most common of which is called isoprene. These VOCs are pollutants, but they are so reactive that they quickly get consumed in the atmosphere, and some react with nitrogen oxides emitted from combustion engines to form longer-lived, more stable organic nitrate compounds.

The other issue around plants is whether the amount of particulate pollution that they absorb is significant. Previous research has suggested that trees reduce pollution from particulate matter by only about five per cent. A new study though has suggested that this is an underestimate.
The study looked at the effects of grass, vines, and trees in an urban environment. Their analysis showed that at street level these plants can reduce nitrogen dioxide (NO2) levels by as much as 40 per cent and can reduce particulate matter by 60 per cent. The authors even suggested that building “green billboards” in cities to boost plant levels would be worthwhile. Certainly a green billboard would be more edifying than another ad suggesting that beer drinking will build your circle of friends or that a triple beef burger combined with deep fried potato and a fructose sweetened fizzy drink somehow equals a “happy” dining experience.
<urn:uuid:0136dfae-79bb-4d70-b6a9-139f12a5aaed>
CC-MAIN-2013-20
http://wellbeing.com.au/newsdetail/Green-streets_000722
2013-05-22T00:14:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961737
509
The Revised Stonehenge Site Management Plan 2009 released

The Stonehenge World Heritage Site Management Plan provides a strategy to conserve and manage the Site for present and future generations. It covers the whole World Heritage Site landscape (more than 2,600 hectares around Stonehenge). Based on a detailed assessment of the significance of the Site and of the management issues, the Plan outlines long-term aims, short to medium term policies and a detailed action plan.

The aim of the Management Plan is to sustain for present and future generations the Outstanding Universal Value of the World Heritage Site: its outstanding prehistoric monuments and archaeological landscape. The Plan takes into account other issues such as access, nature conservation, farming, education, research and the needs of the local community.

The first Stonehenge World Heritage Site Management Plan was published in 2000. A revised plan was published in 2009 after extensive consultation with landowners, the local community, statutory bodies and other interested parties. Its preparation was led by English Heritage on behalf of the Stonehenge World Heritage Site Committee, a steering group of stakeholders. The Management Plan is recognised by all parties as the overarching strategy for the sustainable management of the World Heritage Site.
<urn:uuid:04c49841-ee19-4493-abc3-632883526153>
CC-MAIN-2013-20
http://xorshid.com/news/?i=68
2013-06-19T12:33:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.904221
263
Watching your baby sleep peacefully in her crib, breathing gently, surely fills you with love and peace. But especially if this is your first child, you may have some questions and concerns, specifically about whether she is breathing adequately. We are here to answer those questions, which you will overcome with information, practice, and the passage of time. Your baby's lungs begin to develop in the fifth week of pregnancy and continue to grow throughout it. While in the womb, surrounded by amniotic fluid, the baby lives in a liquid medium and receives oxygen through the placenta. At the moment of delivery, separated from mom, her lungs draw in air for the first time. A baby takes her first breath within about 10 seconds of birth. Once she starts breathing, her body will take on a pink color, and normal breathing will settle into a steady rhythm. Babies generally make breathing noises that originate in the nostrils: the nasal passages are very small, and air passing through them produces these sounds. It may also happen that the baby has mucus; in that case, to help her breathe more easily, you should clean the outside of the nose with a soft tissue and the inside with saline solution. You can repeat the cleaning each time the child is congested, and keep her semi-seated rather than completely lying down. Also bear in mind that a baby sleeps in several stages, including periods of deep sleep and quiet moments as well as periods of activity and louder breathing. That is normal, and it evolves over the first few months into regular breathing. Posted in: Babies Tips, Baby Health Care Tips Antibiotics are drugs prescribed to fight infections caused by bacteria. In the case of infants, they may be necessary, for example, for urinary tract infections, throat infections caused by strep, and ear infections. It is important to know that antibiotics should be used in moderation, and always according to your doctor's instructions, since their abuse or misuse can create resistance to them.
Moreover, antibiotics are not only unnecessary for viral illnesses such as the flu - they simply do not work against them. In addition, antibiotics should be administered with caution and under a physician's supervision, since as well as killing the bacteria that cause the infection, they can eliminate the "good bacteria" of the intestinal flora and cause digestive problems such as diarrhea. Beyond the resistance these drugs can create, and the diarrhea, the use of antibiotics in babies under 6 months of age raises by 22% the risk of the baby becoming overweight between 10 months and 3 years old. So suggests a study by a pediatrician at the University of New York, which also clarified that normal weight usually returns by the time the child reaches seven years. To reach that percentage, the pediatricians considered multiple variables, such as the parents' weight, whether the baby was breastfed, the baby's weight at birth, and whether the mother smoked during pregnancy, among others. Posted in: Baby Health Care Tips, Baby Physical Checks When a baby is born, you count the fingers and toes and check that she has two ears, two eyes, an intact mouth, arms and legs. When everything looks well, parents breathe easy. However, there are children's health issues that cannot be determined at a glance. For example, how do you know if your child hears well? There is a variety of medical tests aimed at assessing hearing. The type of test depends, in general, on the age of the child or baby. For very young babies and neonates, objective methods are used that can assess the baby's hearing without requiring her to participate actively. These tests are quick, painless, and performed while the baby sleeps. A brainwave reader records the result, assessing the baby's reaction to sound. As children grow, they can participate by indicating when they hear a certain sound. To find out whether your baby or child hears well, you should also be alert to the signs that can tell you there is a hearing problem.
Take note:
- The infant or child does not react to loud noises.
- The infant or child cannot identify where a particular sound comes from.
- The baby may babble but then stops doing so.
- Babbling does not evolve into understandable, consistent verbal communication.
Posted in: Baby Health Vaccination is one of the most important elements in keeping our baby healthy. During visits to the pediatrician in the first two years of our baby's life, one of the most important tasks will be the application of vaccines to protect our baby from various diseases. As we said, vaccines and their proper application within the appropriate time frames protect our baby from diseases such as polio, hepatitis B, measles, mumps, rubella, diphtheria, pertussis, tetanus and varicella, to name a few, that are very dangerous to our baby's health. Each of these vaccines will be applied by our pediatrician, possibly at each of the visits we make to the office. The importance of vaccines: it is always important to keep parents informed through the various vaccination schedules offered by our hospital or by international bodies such as the Academy of Pediatrics, which updates its schedule annually, so as to know accurately how often our baby should receive a vaccine to prevent different diseases. Posted in: Baby Health, Baby Health Care Tips From the first months of life, it is the human voice that most attracts a baby's attention. We recommend using lullabies to stimulate their curiosity. Hearing is a sense that is connected and running at all times; it has no on-off mechanism, so there is no need to direct it consciously. From the moment of birth it is advisable to talk to your baby face to face. How well your child hears and learns from the environment depends on how early we stimulate the ear. What is auditory stimulation? It means, specifically, talking to your baby and introducing new sounds so that she gradually associates each sound with her surroundings or environment.
At no time should you use baby talk, that is, imitating a child's speech or distorting words. You should talk to the baby properly, at a suitable volume and pace. Posted in: Baby Health, Baby Massage Pre-bedtime routines - a circuit of daily actions repeated in the same order - tell the baby that bedtime is approaching; they also help her relax and predispose her to rest. The routine also reduces the number of times the baby wakes up during the night. A child's broken sleep wreaks havoc on parents, who lose the ability to recharge overnight. The baby's sleep vs. the parents' sleep: through the routine, the baby comes to associate going to sleep with pleasant rituals, which gives her confidence and tranquility. The routine is established by performing the same actions for several days at the same times. We offer a sample routine that can be customized according to how each family manages bedtime schedules: 6 pm can be defined as the baby's bath time, and you can use the bath to play with your child - a good way to tire her out and predispose her to sleep. 7 pm can be marked as the time to breastfeed or take a bottle; this is sacred time for the child and the mother, and the perfect moment to pamper your baby with lullabies or soft music. Posted in: Baby Health, Baby Physical Checks Experts suggest that 25% of cases of obesity and diabetes in children can be prevented by breastfeeding the baby until 24 months of age; this was confirmed by research from the Breastfeeding Committee of the Spanish Association of Pediatrics. Breastfeeding also reduces the baby's risk of suffering some form of cardiovascular disease or leukemia. Infants breastfed during the first year of life obtain higher scores on the cognition and IQ tests applied at school age. Benefits of breastfeeding
Posted in: Baby Health, Baby Health Care Tips The order in which baby teeth appear is generally similar from child to child; the difference lies in which of the first four front teeth make their expected appearance first - the bottom ones or the top ones. This creates expectations among parents, who are anxious about the appearance of their child's first tooth. Care of the baby's first tooth starts the same day it shows through the child's gums. The first baby teeth can appear from 5 to 6 months of age. The eruption of teeth usually causes some upset in the baby: crying from pain in the gums, mild diarrhea, fever, and even colds are associated with infant teething. Parents who notice a delay in their child's teeth should not worry about the amount of calcium the baby has received: this mineral has already been absorbed by the baby from the womb through the first months of life, so calcium supplementation is not the correct response to late teething. Only if there is no sign of the first teeth by the age of 15 months should the situation be discussed with a dentist specializing in pediatrics. Posted in: Babies Tips, Baby Health A milk allergy is a disorder that affects the baby's immune system. The child's body perceives the milk as a harmful substance and responds with immediate symptoms such as hives or breathing problems; reactions range from rashes or hives on the baby's skin all the way to severe anaphylactic shock, which can be fatal. An allergy that is not treated professionally can result in death, since severe adverse reactions can occur. Posted in: baby allergic, Baby Health Another problem with the use of antibiotics arises when parents do not observe the intervals between doses, or stop treatment midway without a pediatrician's supervision because the mother thinks the child is cured.
Specialists explain that between doses, the amount of antibiotic reaching the site of infection falls; that is, the level of antibiotic in the body drops below the point at which it is effective. It is therefore important to follow the dosing interval to maintain an adequate level of antibiotic in the body and ensure its efficiency. It is also important to continue the treatment until the end, to make sure all the bacteria are eliminated and cannot multiply again. The times and days must therefore be strictly followed; only then will the treatment be effective. Pediatricians also emphasize that vaccines and healthy habits help prevent disease, but that treatment should always be evaluated by a doctor. Given the risks that the abuse of antibiotics poses to children's health and to everyone's health, the doctor offers some basic tips:
- Do not pressure your doctor to prescribe antibiotics, or the pharmacist to dispense them without a prescription;
- Only give antibiotics prescribed for each specific situation; do not keep leftovers, and do not give a child antibiotics prescribed for another child;
- Comply with the dose, without changing the dose or the dosing intervals;
- Follow the treatment to the end; do not stop taking the antibiotic, even if the child seems better;
- Schools and other child care providers should be informed about the infection and the ongoing treatment.
Posted in: Baby Health, Baby Physical Checks
<urn:uuid:d3ad6c90-a79f-4a5c-8341-4d78b8ce8a99>
CC-MAIN-2013-20
http://lavishbabies.com/category/baby-health-care
2013-05-20T02:48:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939523
2,282
Climate change: How do we know? The Earth's climate has changed throughout history. Just in the last 650,000 years there have been seven cycles of glacial advance and retreat, with the abrupt end of the last ice age about 7,000 years ago marking the beginning of the modern climate era — and of human civilization. Most of these climate changes are attributed to very small variations in Earth's orbit that change the amount of solar energy our planet receives. "Scientific evidence for warming of the climate system is unequivocal." - Intergovernmental Panel on Climate Change The current warming trend is of particular significance because most of it is very likely human-induced and proceeding at a rate that is unprecedented in the past 1,300 years.1 Earth-orbiting satellites and other technological advances have enabled scientists to see the big picture, collecting many different types of information about our planet and its climate on a global scale. Studying these climate data collected over many years reveals the signals of a changing climate. Certain facts about Earth's climate are not in dispute: - The heat-trapping nature of carbon dioxide and other gases was demonstrated in the mid-19th century.2 Their ability to affect the transfer of infrared energy through the atmosphere is the scientific basis of many JPL-designed instruments, such as AIRS. Increased levels of greenhouse gases must cause the Earth to warm in response. - Ice cores drawn from Greenland, Antarctica, and tropical mountain glaciers show that the Earth's climate responds to changes in solar output, in the Earth's orbit, and in greenhouse gas levels.
They also show that in the past, large changes in climate have happened very quickly, geologically-speaking: in tens of years, not in millions or even thousands.3 Ninety-seven percent of climate scientists agree that climate-warming trends over the past century are very likely due to human activities, and most of the leading scientific organizations worldwide have issued public statements endorsing this position. Click here for a partial list of these public statements and related resources. The evidence for rapid climate change is compelling: Sea level rise Global sea level rose about 17 centimeters (6.7 inches) in the last century. The rate in the last decade, however, is nearly double that of the last century.4 Global temperature rise All three major global surface temperature reconstructions show that Earth has warmed since 1880.5 Most of this warming has occurred since the 1970s, with the 20 warmest years having occurred since 1981 and with all 10 of the warmest years occurring in the past 12 years.6 Even though the 2000s witnessed a solar output decline resulting in an unusually deep solar minimum in 2007-2009, surface temperatures continue to increase.7 The oceans have absorbed much of this increased heat, with the top 700 meters (about 2,300 feet) of ocean showing warming of 0.302 degrees Fahrenheit since 1969.8 Shrinking ice sheets The Greenland and Antarctic ice sheets have decreased in mass. Data from NASA's Gravity Recovery and Climate Experiment show Greenland lost 150 to 250 cubic kilometers (36 to 60 cubic miles) of ice per year between 2002 and 2006, while Antarctica lost about 152 cubic kilometers (36 cubic miles) of ice between 2002 and 2005. 
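The sea-level figures quoted above can be turned into a quick back-of-the-envelope rate comparison. A minimal sketch follows; note that the recent decadal rate of 3.2 mm/yr is our illustrative round number, not a value taken from this text:

```python
# Back-of-the-envelope check of the sea-level claims quoted above.
# Assumption: the 17 cm rise is spread evenly over 100 years; the
# 3.2 mm/yr decadal rate is an assumed illustrative value.
century_rise_cm = 17.0
century_rate_mm_per_yr = century_rise_cm * 10 / 100   # cm -> mm, spread over 100 yr

recent_rate_mm_per_yr = 3.2  # assumed recent decadal-average rate

print(century_rate_mm_per_yr)                           # 1.7
print(recent_rate_mm_per_yr / century_rate_mm_per_yr)   # ~1.9, i.e. "nearly double"
```

The point is only that a ~1.7 mm/yr century average against a ~3 mm/yr decadal rate is consistent with the "nearly double" wording.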
Declining Arctic sea ice Both the extent and thickness of Arctic sea ice have declined rapidly over the last several decades.9 Glaciers are retreating almost everywhere around the world — including in the Alps, Himalayas, Andes, Rockies, Alaska and Africa.10 The number of record high temperature events in the United States has been increasing, while the number of record low temperature events has been decreasing, since 1950. The U.S. has also witnessed increasing numbers of intense rainfall events.11 Since the beginning of the Industrial Revolution, the acidity of surface ocean waters has increased by about 30 percent.12,13 This increase is the result of humans emitting more carbon dioxide into the atmosphere and hence more being absorbed into the oceans. The amount of carbon dioxide absorbed by the upper layer of the oceans is increasing by about 2 billion tons per year.14,15
- IPCC Fourth Assessment Report, Summary for Policymakers, p. 5
- B.D. Santer et al., "A search for human influences on the thermal structure of the atmosphere," Nature, vol. 382, 4 July 1996, 39-46
- Gabriele C. Hegerl, "Detecting Greenhouse-Gas-Induced Climate Change with an Optimal Fingerprint Method," Journal of Climate, v. 9, October 1996, 2281-2306
- V. Ramaswamy et al., "Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling," Science 311 (24 February 2006), 1138-1141
- B.D. Santer et al., "Contributions of Anthropogenic and Natural Forcing to Recent Tropopause Height Changes," Science vol. 301 (25 July 2003), 479-483.
In the 1860s, physicist John Tyndall recognized the Earth's natural greenhouse effect and suggested that slight changes in the atmospheric composition could bring about climatic variations.
In 1896, a seminal paper by Swedish scientist Svante Arrhenius first speculated that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature through the greenhouse effect.
- National Research Council (NRC), 2006. Surface Temperature Reconstructions for the Last 2,000 Years. National Academy Press, Washington, DC.
- Church, J.A. and N.J. White (2006), "A 20th century acceleration in global sea level rise," Geophysical Research Letters, 33, L01602, doi:10.1029/2005GL024826. The global sea level estimate described in this work can be downloaded from the CSIRO website.
- T.C. Peterson et al., "State of the Climate in 2008," Special Supplement to the Bulletin of the American Meteorological Society, v. 90, no. 8, August 2009, pp. S17-S18.
- I. Allison et al., The Copenhagen Diagnosis: Updating the World on the Latest Climate Science, UNSW Climate Change Research Center, Sydney, Australia, 2009, p. 11
- Levitus et al., "Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems," Geophys. Res. Lett. 36, L07608 (2009).
- L. Polyak et al., "History of Sea Ice in the Arctic," in Past Climate Variability and Change in the Arctic and at High Latitudes, U.S. Geological Survey, Climate Change Science Program Synthesis and Assessment Product 1.2, January 2009, chapter 7
- R. Kwok and D.A. Rothrock, "Decline in Arctic sea ice thickness from submarine and ICESAT records: 1958-2008," Geophysical Research Letters, v. 36, paper no. L15501, 2009
- C.L. Sabine et al., "The Oceanic Sink for Anthropogenic CO2," Science vol. 305 (16 July 2004), 367-371
<urn:uuid:331f8eaa-6aca-450d-8335-5a97229ecb68>
CC-MAIN-2013-20
http://climate.nasa.gov/evidence
2013-05-21T10:20:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898682
1,515
10 Comparisons Between Chimps and Humans Chimpanzees are our closest living relatives, and yet they were unknown to most of the world until Charles Darwin wrote about and popularized them in 1859. Much about them has only been discovered recently, and misconceptions abound due to the exaggerations and artistic license used in works of fiction. Nevertheless, our similarities and differences are not what many people think. By learning about our relatives we can better understand ourselves. Chimpanzees are often incorrectly called monkeys, but they are actually in the great ape family just like us. The other great apes are orangutans and gorillas. There is only one species of human alive at present: Homo sapiens. In the past, many scientists tried to argue that there were several species of human, and would often hasten to add that they themselves belonged to the 'superior' species. However, all humans can produce fertile children together and so we are all the same species. Chimpanzees, on the other hand, are actually two species: Pan troglodytes, the common chimpanzee, and Pan paniscus, the gracile chimpanzee or bonobo. These two types of chimpanzee are completely separate species. Humans and both chimpanzee species evolved from a common ancestor, possibly Sahelanthropus tchadensis, between five and seven million years ago. Only fossils of this ancestor remain. It is often said that humans and chimpanzees share 99% of their DNA. Genetic comparison is not simple due to the nature of gene repeats and mutations, but a better estimate is somewhere from 85% to 95%. This figure may still sound impressive, but most DNA is used for basic cellular functions which all living things share. For example, we share about half our DNA with a banana, and yet no one uses this to emphasize how similar bananas are to us! So 95% does not say as much as it first appears to. Chimpanzees have 48 chromosomes, two more than humans.
It is thought that this is because in a human ancestor, two pairs of chromosomes fused into a single pair. Interestingly, humans have some of the least genetic variation of all animals, which is why inbreeding can cause genetic problems. Even two completely unrelated humans are usually genetically more similar than two sibling chimpanzees. The brain of a chimpanzee has a volume of 370mL on average. In contrast, humans have a brain size of 1350mL on average. Brain size alone, however, is not an absolute indicator of intelligence. There have been Nobel Prize winners with brains ranging from below 900mL all the way up to over 2000mL. The structure and organization of the various parts of the brain is a better way of determining intelligence. Human brains have a high surface area because they are much more wrinkled than chimpanzee brains, with greater numbers of connections between many of its parts. These, as well as a relatively larger frontal lobe, allow us much more of the luxury of abstract and logical thought. Chimpanzees spend a great deal of time socializing. Much of their socializing is grooming each other. Juvenile and adolescent chimpanzees will often play with, chase, and tickle each other, as will adults with their offspring. Shows of affection include hugging and kissing, which are done between chimpanzees of any age or gender. Bonobos are especially frisky, and nearly every show of affection is done sexually, regardless of gender. Chimpanzees strengthen friendships by spending extensive time grooming each other. Humans spend a comparable time socializing, albeit more through talking than grooming. Nevertheless, much of the vast amounts of inconsequential chatter we produce is simply a more sophisticated version of chimpanzee grooming – it serves little other purpose than to strengthen our relationship bonds. Humans also demonstrate stronger relationships through physical contact – a pat on the back, a hug, or a friendly shove. 
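The DNA-similarity figures discussed above depend heavily on how the comparison is done. A toy sketch of the naive, position-by-position version follows; the sequences here are made up for illustration:

```python
# Naive position-by-position percent identity between two equal-length
# DNA strings. Real genome comparisons must first align the sequences
# and decide how to count insertions, deletions, and repeats, which is
# why quoted human-chimp figures range from ~85% to 99%.
def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("naive comparison needs equal-length sequences")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

print(percent_identity("ACGTACGT", "ACGTTCGT"))  # 87.5
```

Even this tiny example shows why a single headline percentage hides real methodological choices.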
Primate social group sizes closely reflect their brain sizes. Chimpanzees have about 50 close friends and acquaintances, whereas humans have between 150 and 200. Chimpanzees have complex greetings and communications which depend on the social statuses of the communicating chimps. They communicate verbally using a variety of hoots, grunts, screams, pants, and other vocalizations. Most of their communication, however, is done through gestures and facial expressions. Many of their facial expressions – surprise, grinning, pleading, comforting – are the same as those of humans. However, humans smile by baring their teeth, which is for chimps and many other animals a sign of aggression or danger. A much greater portion of human communication is done through vocalizations. Humans uniquely have complex vocal cords, allowing us a great range of sounds, but preventing us from drinking and breathing simultaneously like chimpanzees can. Moreover, we have very muscular tongues and lips, allowing us accurate manipulations of our voices. This is why we have pointy chins whereas chimps have receding chins – we attach our many lip muscles to the prominent lower chin, but chimpanzees lack many of these muscles and so do not need a protruding chin. Chimpanzees and humans are both omnivorous (eat plants and meat). Humans are more carnivorous than chimpanzees, and have intestines more refined towards the digestion of meat. Chimpanzees will occasionally hunt and kill other mammals, often monkeys, but otherwise restrict themselves to fruit and sometimes insects. Humans are much more dependent on meat – humans can only obtain vitamin B12 naturally through eating animal products. Based on our digestive system and the lifestyles of extant tribes, it is thought that humans have evolved to eat meat at least once every few days. Humans also tend to eat in meals rather than continuously eating throughout the day, another carnivorous trait.
This may be due to meat only being available after a successful hunt, and so being eaten in large quantities but infrequently. Chimpanzees will graze on fruits constantly, whereas most humans will eat no more than three times in a day. Bonobos are renowned for their sexual appetite. Common chimpanzees can become angry or violent, but bonobos defuse any such situation through sexual pleasure. They also greet and show affection to each other through sexual stimulation. Common chimpanzees do not engage in recreational sex, and mating only takes ten or fifteen seconds, often whilst eating or doing something else. Friendships and emotional attachments have no bearing on with whom a common chimpanzee mates, and a female in heat will generally mate with several males, who sometimes patiently wait their turn directly after each other. Humans experience sexual pleasure, like bonobos; however, even sex purely for reproduction takes much longer and requires more effort, and long-term partnerships naturally form as a result. Unlike humans, chimpanzees have no concept of sexual jealousy or competition, as they do not take long-term partners. Both humans and chimpanzees are able to walk bipedally (on two legs). Chimpanzees will often do this to see further ahead, but prefer to move on all fours. Humans walk upright from infancy and have evolved bowl-shaped pelvises to support their internal organs while doing so. Chimpanzees, leaning forward during movement, do not need to support their organs with their pelvis and so have broader hips. This makes childbirth much easier for chimpanzees than for humans, whose bowl-shaped pelvis is in opposition to a large birth canal. Human feet are straight with toes at the front to help push directly ahead when walking, whereas chimpanzee feet have opposable big toes and are more like strong hands than feet. They are used for climbing and crawling, involving sideways, diagonal, or rotating movements.
Humans have white around their irises, whereas chimpanzees usually have a dark brown color. This makes it easier to see where other humans are looking, and there are several theories as to why this is so. It may be an adaption to more complex social situations, where it is an advantage to see whom others are looking at and thinking about. It may help when hunting silently in packs, where eye direction is vital to communication. Or it may simply be a genetic mutation with no purpose – white around the iris is seen in some chimpanzees also. Both humans and chimpanzees can see in color, helping them to choose ripe fruits and plants to eat, and have binocular vision; their eyes point forward in the same direction. This helps see in depth and is crucial to hunting, rather than eyes on the side of the head like rabbits which help avoid being hunted. For many years, humans were considered to be the only tool-using animal. Observation in 1960 of chimpanzees using sharpened twigs to fish for termites has since changed this. Both humans and chimpanzees are able to modify their environment to forge tools to help with daily challenges. Chimpanzees will make spears, use stones as hammers and anvils, and mash leaves into a pulp to use as makeshift sponges. It is thought that as a result of walking upright, our front limbs were much freer to use tools, and we have refined tool use to an art. We live constantly surrounded by the products of this ability, and much of what people consider makes us ‘successful’ is rooted in our tool making.
<urn:uuid:1d43495e-97f8-4e6e-961f-59f8ccbaf314>
CC-MAIN-2013-20
http://listverse.com/2012/02/14/10-comparisons-between-chimps-and-humans/
2013-05-24T22:28:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964388
1,870
1. Focus on teachers. Teachers are the primary change agents in the classroom. Empowering teachers should be the fundamental guideline for any programme of using technology to enhance education. The same technology can work both ways: when interactive whiteboards were introduced in the UK they initially created a cinematic experience: the teacher had to darken the room and drag students through a pre-determined slideshow or movie. This resulted in an impoverished educational experience, taking pedagogy back 40 years. Nowadays, teachers hardly use the boards directly: they send students to present their work or solve problems at the board, while they stay back and manage the whole-class interaction. Teachers have appropriated the tool to their needs, subjecting it to their expertise. This is the kind of process we want to support. 2. Focus on (techno-pedagogic) design. A good teacher does not deliver educational content – she designs an educational experience. There is an abundance of high-quality, open and free digital educational content. The critical resource is the knowledge of how to use it. The Hewlett Foundation invested vast sums in the UK Open University's OpenLearn project, pushing large portions of their excellent curricular resources to the web, using an open and robust platform that allows free use and even customisation and re-mixing of resources. Now Hewlett are funding research into why these rich resources are not being used. Teachers (and educational leaders in general) need to be shifted from the position of consumers to that of designers. A consumer chooses among available immutable goods and makes do. A designer analyses a problem, in its context, and devises a solution. She then implements this solution using available resources. Teachers need learning design and development tools such as LDSE and LAMS, but they also need help in changing their mindset. 3. Open standards, open protocols, open market + guidance, quality control and monitoring.
There are many ways to learn and many ways to teach. Any centralist solution will be good for some and less so for others. Centralised design and implementation choices disempower school leadership and dissolve their responsibility. School leaders need to be able to make their own technological choices, but the system needs to ensure that these choices are sound and that the local overhead is minimised. The way to achieve this is to define a set of techno-pedagogical standards for educational technology – from the generic level of operating systems and office suites to the particular tools used to teach specific subjects to specific age groups. Such standards should define the functions and qualities of the technology and their interfaces with other systems (e.g. handles for assessment), but should not dictate any specific technology. These standards should be supported by a central registration and management extranet. This system would work like the way Cisco manages its ecosystem: it would provide suppliers with clear specifications of the expected standards, allow them to register and certify their services and products, and offer these to school leaders. These, on their side, will have to document their choices and evaluate them. 4. Same stuff, new ways. Some topics don't change, but the way we teach them should. Dr. Yifat Kolikant from the Hebrew University is using internet-based dialogue between Israeli and Palestinian students to teach history in a deep and provocative way. By confronting students with conflicting perspectives, they are driven to engage with and strive to understand their own national narrative. The WebLabs project used computer programming and on-line collaboration to drive students to develop their own mathematical language and explore complex issues in science and mathematics.
Technology enables teachers and students to make the curriculum their own, individually and collectively; it allows them to draw on a multitude of resources and connect their educational experiences to their real life. The only advantage a computer has over a textbook is its ability to connect people and build networks of knowledge. 5. New stuff, new ways. Our children live in a world which is radically different from the one we grew up in. The skills they need to develop did not exist when we went to school. A common mistake is to interpret this statement in a technical, or rather – technophobic – manner. Children need no more help in learning to use PowerPoint than they need in operating their cell-phone. They do need help in understanding what it is they want to tell the world, how to formulate this message, and how to reach their audience. PowerPoint may be a tool they use for this purpose, as may be Facebook or YouTube. Using any tool is easy, but choosing the right one is hard. Our children have more opportunities and confront new dangers. We need to help them leverage the former and manage the latter. 6. Coding, a basic skill. Computer programming is wrongly perceived as an elite technical skill. In fact, good programmers specialise in one thing: solving problems. They analyse problems, creating abstract models of complex situations, and use whatever hardware, software and social conventions they can get their hands on to devise solutions. Programming is an art that combines analytic reasoning with creative innovation. We do not expect every student to become a master artist or novelist or a professional sportsperson, yet we teach art, literature and sports from an early age. We see these subjects as essential to children's well-being and happiness. They provide them with a rich perspective on the world and cognitive tools for dealing with the issues they encounter.
Mathematical thinking, and its practical embodiment in computer programming, should be seen in a similar light. Computer programming is a tool, and it should be integrated as such across the curriculum.
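As a small illustration of the kind of cross-curricular programming described here (a hypothetical classroom exercise, not taken from the WebLabs project), a few lines of code let students test a mathematical conjecture experimentally:

```python
# Conjecture for students to explore: the sum of the first n odd numbers
# is always a perfect square (in fact, exactly n squared).
def sum_of_first_odds(n):
    return sum(2 * k + 1 for k in range(n))

for n in range(1, 6):
    print(n, sum_of_first_odds(n))  # prints 1 1, 2 4, 3 9, 4 16, 5 25
```

Exercises like this treat code as a laboratory for mathematical thinking rather than as an end in itself.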
Source: http://designedforlearning.wordpress.com/tag/learning-systems-policy-teaching-technology/
Newsletter and Technical Publications <International Source Book On Environmentally Sound Technologies for Wastewater and Stormwater Management> Regional Overviews and Information Sources There has been an increased emphasis on urban environmental management to meet the demands of a rapidly growing urban population for safe water supply, solid waste, wastewater and stormwater management services. The demand for these services has been consistently higher than their supply in Africa, resulting in a huge unmet demand. It is now well known that integrated water management is imperative for the stability and sustainable development of cities. Examples from Africa illustrate clearly that unless adequate water supply and solid waste disposal are ensured, not much progress can be made in the provision of high standard wastewater and stormwater management. Table 1.1 illustrates the point more clearly. It shows that the method used and the quantity of water supplied to an area or a house reflect the kind of waste disposal system it receives. A simple water supply system is linked to basic waste removal systems. As water service improves from minimum through basic and intermediate to full service levels, it is accompanied by more sophisticated waste disposal services.
Table 1.1: Relationship between water supply and sanitation systems in Africa

| Water supply system (typical water supply, litres/person/day) | Sanitation system |
| Water vendors (5-50); tanker supply (5-50); water kiosks (5-20); public (communal) standpipes or fountains more than 100 m from house | Pit latrines; bucket/pan sanitation |
| Public (communal) standpipes less than 100 m from house (20-50) | Ventilated Improved Pit (VIP) |
| Yard tank (50); yard tap (50-100) | Aqua-privy with solids-free sewer; septic tank system; intermediate flush toilet |
| House connections (>100) | Full flush sewered sanitation |

We cannot assume that the Drinking Water Supply and Sanitation Decade in Africa has solved all of Africa’s water and waste management problems. Evaluation of the achievements at the end of the decade shows that the percentage of water supply coverage increased to 41%. In the urban sector, the coverage increased from 66% in 1980 to 77% in 1990. In the rural sector, however, the coverage increased from 22% to just 26%. In the sanitation sector, the 1990 increases in coverage are from 22 to 34% for the total African population, from 57 to 80% in the urban sector, but a decrease from 20 to 16% in the rural sector. Unfortunately, the situation, instead of getting better, has actually become worse in a number of the countries in the region.

Table 1.2: Urban populations in selected African countries, showing Urban Population (%) in 1995, Total Population (million) in 1996, and GNP per capita in 1996. Sources: African Development Bank (1998), Statistics South Africa (1998), World Bank (1996).
* DRC has been recently admitted into SADC, thus becoming part of southern Africa economically while remaining geographically in Central Africa.
** The sanitation situation in the city state of Djibouti is unique in Africa. It is a best practice example; the problem in showing it off being the incomparably small scale of its operation.
It is convenient and helpful to divide Africa into sub-regions for the purpose of considering wastewater and stormwater management. The six sub-regions are Southern Africa, the Sudano-Sahelian zone, humid Western Africa, Central Africa, Eastern Africa, and the Islands of the Indian Ocean. Urban transition is under way in Gabon, South Africa and Mauritania, where 50-54% of the total population lives in urban areas, and in the Congo, with 59% urbanization. Table 1.2 shows the percentage of urbanization for other selected countries. It also shows the total populations and gross national product (GNP), which, used in combination, provide at least indicative information about how many people ultimately need to be served as well as the internal resources available for doing so in a sustainable manner. It is also important to look ahead a couple of decades with regard to urbanization in the region. The number of small towns (population <100,000) is expected to grow from 3,000 in 1990 to 9,000 in 2020. Almost one-third of Africa's urban population will live in such towns. The number of medium towns (population 100,000 - 1 million) will likely reach 660 in number by 2020 and house 30% or about 175 million of the region’s urban inhabitants. Large cities (population >1 million) are expected to be about 70 in number by 2020 and will house some 40% of Africa’s urban population.
Source: http://www.unep.or.jp/ietc/Publications/TechPublications/TechPub-15/3-1Africa/1-0.asp
kilter
Pronunciation: kil-têr
Part of Speech: Noun, mass
Meaning: Good condition, health, tune, or spirits.
Notes: Kilter is another word on its last legs that we would like to rescue from the lexical boneyard. We hear it today almost exclusively in the expression "out of kilter". However, the use of this Good Word is by no means limited to a single negative phrase.
In Play: Dr. Goodword is frequently asked, "If something is not out of kilter, is it 'in kilter'?" "Yes," is always the good doctor's answer: "Everyone at the party was in good kilter so the party was a great success." Anything that can be out of kilter can as well be in good kilter: "The choir is sounding better, but their music still isn't in perfect kilter."
Word History: Today's Good Word is a corruption of kelter, but a corruption that began in the middle of the 17th century, so there is little chance of going back now. Kelter is still preserved in some British dialects, but kilter clearly is favored today. The question is, then, where kelter comes from, and the straightforward answer is that no one has the foggiest notion. No semantically related words resembling kelter can be found in English or any other language in its family. That leaves us with little more to say about this word except to thank Perry Dror for his contribution to keeping our Good Word series in high kilter by suggesting fascinating words like today's.
Source: http://www.alphadictionary.com/bb/viewtopic.php?p=35800
On 24 July, 1999 (estimates vary), the human population of Earth reached 6 billion people. This level has been achieved largely by a process known as 'nookie', but there are other significant contributing factors:
- Farming, an activity which involves massively inflating the populations of certain co-existing lifeforms for human food purposes, while eliminating all animals and plants that are not tasty enough.
- Disease control, which is a never-ending struggle for control over other organisms in the champion's league of life.
- Medicine, which keeps humans healthy and fertile for longer and longer.
- The fact that Earth has not been hit by an asteroid or comet in the past few million years.
The Earth's human population doubles every 35 years or so, meaning that in a mere 100 years, the population of our planet will be somewhere in the region of 40 billion souls, about 8 times the current population. This translates into a population of about 520 million for the UK, and roughly 2 billion for the USA. China, the world's most populous country, will have a population of over 11 billion people. And you thought it was crowded now!
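The doubling-time projection quoted here is easy to check (a sketch using the entry's own figures of 6 billion people and a 35-year doubling time):

```python
def projected_population(start, years, doubling_time=35):
    """Exponential growth: the population doubles every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

# 100 years of growth from the 1999 baseline of 6 billion people:
# 2 ** (100 / 35) is roughly 7.25, giving about 43 billion, in line with
# the entry's "somewhere in the region of 40 billion".
print(round(projected_population(6e9, 100) / 1e9, 1))
```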
Source: http://www.h2g2.com/approved_entry/A134713
A rare toad from Tanzania declared “extinct in the wild” three years ago has been restored to its original habitat. This is the first time an amphibian species has been returned home after being classified as “extinct in the wild”. The Kihansi Spray Toad, native to just two hectares of land in south-central Tanzania, became extinct after the installation of a hydroelectric dam dried up the “spray meadows” it relied upon for survival. “Spray meadows” exist at the base of waterfalls, where the fine mist of water produced by the falls helped the Kihansi toad thrive. A successful captive breeding program at the Bronx and Toledo Zoos in the US meant that while the toad was extinct in the wild, 6000 survived in captivity. An initial population of 2500 toads has been rehabilitated in Tanzania. An expansive misting system, designed to recreate the conditions in the habitat before the construction of the dam, has been installed in the area, paid for by the World Bank and the Norwegian government. Australian conservationist Dr Euan Ritchie says cases such as the Kihansi Spray Toad’s can galvanise public sentiment towards environmental causes. “What is interesting about these cases is they inspire the public about what we can do in conservation to make amends for the impact we all have on our environment,” he said. But the Deakin University academic argued high profile cases such as these can also raise thorny questions about which species humans choose to protect, and why. “No one will question that it’s a wonderful achievement, but it raises the whole topic of ‘ecological triage’, and how we best invest our money and effort. Should we prioritise this toad species over many hundreds of thousands of other species which also need our help?” “It’s a very difficult and emotional topic and I don’t think anyone’s got a perfect solution.” Still, Ritchie says the case provides a strong case for rehabilitation of Australia’s endangered species. 
Restoring the endangered bilby to areas where it has gone extinct will have untold benefits for local environments, he argues. La Trobe University’s Head of Environment and Ecology, Dr Susan Lawler, says the costs of rehabilitation can be prohibitive. “Conservation efforts do cost money so this is something that will need continuing funding, because they’ll have to maintain the misting system forever.” She is also concerned about other species that may not receive as much public attention, but remain vital to the health of ecosystems. “It tends to be the invertebrates that suffer. People underestimate the value of things without backbones. An endangered cricket or worm is not going to get the airtime or the funding, but they may be absolutely critical to the environment.” Both Ritchie and Lawler agree that the best way to protect species such as the spray toad is to prevent habitat destruction in the first place. “A bigger question,” Ritchie said, “is what do we as humans learn from success stories such as this; are we really prepared to change our ways sufficiently to prevent further species becoming endangered?”
Source: http://theconversation.com/extinct-toad-first-to-be-rehabilitated-into-the-wild-10480
On this page: This fact sheet provides basic information about cat's claw—common names, what the science says, potential side effects and cautions, and resources for more information. Cat's claw grows wild in many countries of Central and South America, especially in the Amazon rainforest. The use of this woody vine dates back to the Inca civilization. Historically, cat's claw has been used for centuries in South America to prevent and treat disease. More recently, cat's claw has been used as a folk or traditional remedy for a variety of health conditions, including viral infections (such as herpes and HIV), Alzheimer's disease, cancer, and arthritis. Other folk uses include supporting the immune system and promoting kidney health, as well as preventing and aborting pregnancy. The inner bark of cat's claw is used to make liquid extracts, capsules, and teas. Preparations of cat's claw can also be applied to the skin. What the Science Says - There is not enough scientific evidence to determine whether cat's claw works for any health condition. - Small studies in humans have shown a possible benefit of cat's claw in osteoarthritis and rheumatoid arthritis, but no large trials have been done. In laboratory studies, cat's claw stimulates part of the immune system, but it has not been proven to reduce inflammation or boost the immune system in humans. - The National Institute on Aging funded a study that looked at how cat's claw may affect the brain. Findings may point to new avenues for research in Alzheimer's disease treatment. Side Effects and Cautions - Few side effects have been reported for cat's claw when it is taken at recommended dosages. Though rare, side effects may include headaches, dizziness, and vomiting. - Women who are pregnant or trying to become pregnant should avoid using cat's claw because of its past use for preventing and aborting pregnancy. 
- Because cat's claw may stimulate the immune system, it is unclear whether the herb is safe for people with conditions affecting the immune system.
- Cat's claw may interfere with controlling blood pressure during or after surgery.
- Tell all your health care providers about any complementary health practices you use. Give them a full picture of what you do to manage your health. For tips on talking with your health care providers about complementary and alternative medicine, see NCCAM's Time to Talk campaign.
Sources
- Cat’s claw. Natural Medicines Comprehensive Database Web site. Accessed at www.naturaldatabase.com on May 6, 2009.
- Cat’s claw (Uncaria tomentosa, Uncaria guianensis). Natural Standard Database Web site. Accessed at www.naturalstandard.com on May 6, 2009.
For More Information
The NCCAM Clearinghouse provides information on NCCAM and complementary health approaches, including publications and searches of Federal databases of scientific and medical literature. The Clearinghouse does not provide medical advice, treatment recommendations, or referrals to practitioners.
A service of the National Library of Medicine (NLM), PubMed® contains publication information and (in most cases) brief summaries of articles from scientific and medical journals.
Office of Dietary Supplements (ODS), National Institutes of Health (NIH)
ODS seeks to strengthen knowledge and understanding of dietary supplements by evaluating scientific information, supporting research, sharing research results, and educating the public. Its resources include publications (such as Dietary Supplements: What You Need to Know), fact sheets on a variety of specific supplement ingredients and products (such as vitamin D and multivitamin/mineral supplements), and the PubMed Dietary Supplement Subset.
PubMed Dietary Supplement Subset: ods.od.nih.gov/Research/PubMed_Dietary_Supplement_Subset.aspx
This publication is not copyrighted and is in the public domain. Duplication is encouraged.
Source: http://nccam.nih.gov/health/catclaw?nav=rss
When stars are more massive than about 8 times the Sun, they end their lives in a spectacular explosion called a supernova. The outer layers of the star are hurtled out into space at thousands of miles an hour, leaving a debris field of gas and dust. Where the star once was located, a small, incredibly dense object called a neutron star is often found. While only 10 miles or so across, the tightly packed neutrons in such a star contain more mass than the entire Sun. A new X-ray image shows the 2,000-year-old remnant of such a cosmic explosion, known as RCW 103, which occurred about 10,000 light years from Earth. In Chandra's image, the colors of red, green, and blue are mapped to low, medium, and high-energy X-rays. At the center, the bright blue dot is likely the neutron star that astronomers believe formed when the star exploded. For several years astronomers have struggled to understand the behavior of this object, which exhibits unusually large variations in its X-ray emission over a period of years. New evidence from Chandra implies that the neutron star near the center is rotating once every 6.7 hours, confirming recent work from XMM-Newton. This is much slower than a neutron star of its age should be spinning. One possible solution to this mystery is that the massive progenitor star to RCW 103 may not have exploded in isolation. Rather, a low-mass star that is too dim to see directly may be orbiting around the neutron star. Gas flowing from this unseen neighbor onto the neutron star might be powering its X-ray emission, and the interaction of the magnetic field of the two stars could have caused the neutron star to slow its rotation. Explore further: Galaxy's Ring of Fire
Source: http://phys.org/news103387124.html
Boot parameters are Linux kernel parameters which are generally used to make sure that peripherals are dealt with properly. For the most part, the kernel can auto-detect information about your peripherals. However, in some cases you'll have to help the kernel a bit.

If this is the first time you're booting the system, try the default boot parameters (i.e., don't try setting parameters) and see if it works correctly. It probably will. If not, you can reboot later and look for any special parameters that inform the system about your hardware.

Information on many boot parameters can be found in the Linux BootPrompt HOWTO, including tips for obscure hardware. This section contains only a sketch of the most salient parameters. Some common gotchas are included below in Section 5.3, “Troubleshooting the Installation Process”.

When the kernel boots, a message of the form Memory: availk/totalk available should be emitted early in the process. total should match the total amount of RAM, in kilobytes. If this doesn't match the actual amount of RAM you have installed, you need to use the mem=ram parameter, where ram is set to the amount of memory, suffixed with “k” for kilobytes, or “m” for megabytes. For example, both mem=65536k and mem=64m mean 64MB of RAM.

If you are booting with a serial console, generally the kernel will autodetect this. If you have a videocard (framebuffer) and a keyboard also attached to the computer which you wish to boot via serial console, you may have to pass the console=device argument to the kernel, where device is your serial device, which is usually something like ttyS0.

The installation system recognizes a few additional boot parameters which may be useful.

debconf/priority
This parameter sets the lowest priority of messages to be displayed. The default installation uses debconf/priority=high. This means that both high and critical priority messages are shown, but medium and low priority messages are skipped. If problems are encountered, the installer adjusts the priority as needed.
If you add debconf/priority=medium as boot parameter, you will be shown the installation menu and gain more control over the installation. When debconf/priority=low is used, all messages are shown (this is equivalent to the expert boot method). With debconf/priority=critical, the installation system will display only critical messages and try to do the right thing without fuss.

DEBIAN_FRONTEND
This boot parameter controls the type of user interface used for the installer. The current possible parameter settings are: DEBIAN_FRONTEND=noninteractive, DEBIAN_FRONTEND=text, DEBIAN_FRONTEND=newt, DEBIAN_FRONTEND=slang, DEBIAN_FRONTEND=ncurses, DEBIAN_FRONTEND=bogl, DEBIAN_FRONTEND=gtk, and DEBIAN_FRONTEND=corba. The default front end is DEBIAN_FRONTEND=newt. DEBIAN_FRONTEND=text may be preferable for serial console installs. Generally only the newt frontend is available on default install media, so this is not very useful right now.

BOOT_DEBUG
Setting this boot parameter to 2 will cause the installer's boot process to be verbosely logged. Setting it to 3 makes debug shells available at strategic points in the boot process. (Exit the shells to continue the boot process.)
BOOT_DEBUG=0: This is the default.
BOOT_DEBUG=1: More verbose than usual.
BOOT_DEBUG=2: Lots of debugging information.
BOOT_DEBUG=3: Shells are run at various points in the boot process to allow detailed debugging. Exit the shell to continue the boot.

INSTALL_MEDIA_DEV
The value of the parameter is the path to the device to load the Debian installer from. For example, INSTALL_MEDIA_DEV=/dev/floppy/0. The boot floppy, which normally scans all floppies and USB storage devices it can to find the root floppy, can be overridden by this parameter to only look at the one device.

debian-installer/framebuffer
Some architectures use the kernel framebuffer to offer installation in a number of languages. If the framebuffer causes a problem on your system you can disable the feature by the parameter debian-installer/framebuffer=false. Problem symptoms are error messages about bterm or bogl, a blank screen, or a freeze within a few minutes after starting the install.

debian-installer/probe/usb
Set to false to prevent probing for USB on boot, if that causes problems.

netcfg/disable_dhcp
By default, the debian-installer automatically probes for network configuration via DHCP. If the probe succeeds, you won't have a chance to review and change the obtained settings.
You can get to the manual network setup only in case the DHCP probe fails. If you have a DHCP server on your local network, but want to avoid it because e.g. it gives wrong answers, you can use the parameter netcfg/disable_dhcp=true to prevent configuring the network with DHCP and to enter the information manually.

hw-detect/start_pcmcia
Set to false to prevent starting PCMCIA services, if that causes problems. Some laptops are well known for these misbehaviors.

preseed/url
Specify the url to a preconfiguration file to download and use in automating the install. See Section 4.7, “Automatic Installation”.

preseed/file
Specify the path to a preconfiguration file to load for automating the install. See Section 4.7, “Automatic Installation”.

If you are using a 2.2.x kernel, you may need to set the ramdisk_size parameter.

Note that the kernel accepts a maximum of 8 command line options and 8 environment options (including any options added by default for the installer). If these numbers are exceeded, 2.4 kernels will drop any excess options and 2.6 kernels will panic.
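As an illustrative sketch, several of these options can be combined on a single boot prompt line. The boot label and the preseed server address below are hypothetical; the parameters are the ones described in this section:

```
boot: install debconf/priority=medium console=ttyS0 netcfg/disable_dhcp=true preseed/url=http://192.168.1.10/preseed.cfg
```

Keep in mind the 8-option kernel limit noted above when chaining parameters like this.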
Source: http://www.debian.org/releases/sarge/powerpc/ch05s02.html.en
What’s Your Ecological Footprint?
By Katharyn Jeffreys
I am currently taking an environmental policy class in which we discussed ecological footprints. This “footprint” is a measure of each individual’s impact on the environment and sustainability by estimating the total resources used. This includes the footprint on such realms as transportation, utilities, and of course food. The footprint is measured in acres, based on the number of acres of land required to produce all that goes into bringing the conveniences of transportation, food, and utilities to people. This includes not only the land that the food is grown on, but the resources required to transport, process, and package the goods. All else equal, a vegetarian requires 1.4 acres more than a vegan. Someone who eats one meal with animal products a day or a vegetarian diet with many ovo-lacto products requires an additional acre and a half. Eating more meals with meat, or having meals consisting primarily of meat, increases the acreage necessary to more than 7 acres above that of a vegan. To put this in perspective, the average American ecological footprint is about 30 acres. However, the biologically productive space available worldwide is 5.4 acres per person. (This is the number of acres available divided by the population.) Americans, of course, have much larger ecological footprints than people in other countries. As it stands today, humans already exceed the available acreage, meaning that the earth cannot be sustained if people continue to consume at the same rate they do today. This deficit between land available and land necessary also shows which countries can sustain themselves and which rely heavily on imports. The United Arab Emirates, Singapore, and Kuwait all have larger deficits than that of the United States, but these three countries also have populations of less than five million people.
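The arithmetic behind these figures is simple enough to check directly (a sketch using only the numbers quoted in this column):

```python
# Figures quoted above.
average_american_footprint = 30.0  # acres per person
available_biocapacity = 5.4        # biologically productive acres per person worldwide

# How many times the fair per-person share an average American uses.
overshoot = average_american_footprint / available_biocapacity
print(f"{overshoot:.1f}x the available per-person share")  # about 5.6x
```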
Countries such as the Central African Republic, Congo Republic, Papua New Guinea, and Gabon all produce more than they consume and make up some of the deficit. The difference comes from the fact that, to produce a pound of meat, an animal must consume tens of times that weight in grain. Therefore, the land on which the grain is grown is counted in the footprint. This grain must also be shipped and the meat processed, which adds to the footprint. This clearly indicates that reducing one’s use of natural resources and animal products can contribute to the earth’s sustainability. I have always argued, as have many vegetarians, that eating meat is ecologically unsound. Abstaining from animal products, carpooling, and using energy-saving appliances are all ways to reduce your ecological footprint. To learn more about this unique way to determine your impact on the earth and to calculate your own ecological footprint, visit <http://www.rprogress.org/progsum/nip/ef/ef_main.html>.
I haven’t eaten anywhere interesting lately, and am getting quite bored of my usual dishes. If you can recommend any restaurants or recipes, I would love to take your suggestions, and will incorporate them into future columns.
Super Easy Lentils and Spinach
1 lb (2 1/4 cups) green lentils
1 large onion, chopped
3 cloves garlic, crushed
1 pkg frozen chopped spinach, can be thawed
salt and black pepper to taste
2 tbsp olive oil
Saute the onion and garlic in the oil with black pepper. When the onion is clear and becoming slightly golden, add the lentils with enough water to cover plus one inch. Bring to a boil and then turn down heat to simmer. Check the lentils after 30 minutes to ensure that there is sufficient water. After 40 minutes (or 45 if the spinach is thawed), add the spinach. Ten minutes later, stir the spinach thoroughly and then cook 10 minutes more for a total of 60 or 65 minutes. Serve over brown rice. It may need a bit of salt. Serves four. Preparation time: 75 minutes.
Source: http://tech.mit.edu/V121/N7/veggie7.7a.html
Highway Construction Stormwater Information and Guidance The construction industry is a critical participant in our efforts to protect streams, rivers, lakes, and wetlands. As stormwater flows over a construction site, it picks up pollutants like sediment, debris, and chemicals. Preventing soil erosion and sedimentation is an important responsibility at all construction sites. Through the use of Best Management Practices (BMPs), construction site operators are the key defense against erosion and sedimentation. - Soil disturbance associated with a construction site can increase the potential for excess erosion if not properly addressed. - Excess soil erosion from construction projects removes the soil surface layer, rich in nutrients, and transports the sediments into surface waters contributing to sediment loading and pollution transported with the sediments. - The excess sediment collects in reservoirs, lakes, rivers, and streams reducing their water holding capacity and quality; and is detrimental to aquatic life. - While erosion and sedimentation are natural processes that help shape Montana's rivers and valleys, activities such as highway construction can greatly accelerate these natural processes causing serious and costly problems. - The implementation of BMPs to prevent soil erosion and the resulting sedimentation from entering the waterways can significantly reduce serious and costly problems in the future. - High volumes of stormwater can also cause stream bank erosion, and destroy downstream aquatic habitat. - In addition to the environmental impact, uncontrolled erosion can have a considerable financial impact on a construction project. It costs money and time to repair gullies, replace vegetation, clean sediment-clogged storm drains, replace poorly installed BMPs, and mitigate damage to other people's property or to natural resources. Some Soil Erosion Control Tips... 
No BMPs = Dirt In
- Design the site to infiltrate stormwater into the ground and to keep it out of storm drains. Eliminate or minimize the use of stormwater collection and conveyance systems while maximizing the use of stormwater infiltration and bio-retention techniques.
- Minimize the amount of exposed soil on site. To the extent possible, plan the project in stages to minimize the amount of area that is bare and subject to erosion. The less soil exposed, the easier and cheaper it will be to control erosion.
- Vegetate disturbed areas with permanent or temporary seeding immediately upon reaching final grade.
- Vegetate or cover stockpiles that will not be used immediately.
- Reduce the velocity of stormwater both onto and away from the project area.
Good use of BMPs
- Interceptors, diversions, vegetated buffers, and check dams are a few of the BMPs that can be used to slow down stormwater as it travels across and away from the project site.
- Diversion measures can also be used to direct flow away from exposed areas toward stable portions of the site.
- Silt fences and other types of perimeter filters should never be used to reduce the velocity of runoff.
- Protect defined channels immediately with measures adequate to handle the storm flows expected.
- Sod, geotextile, natural fiber, riprap, or other stabilization measures should be used to allow the channels to carry water without causing erosion. Use softer measures like geotextile or vegetation where possible to prevent downstream impacts.
- Place aggregate or stone at construction site vehicle exits to accommodate at least two tire revolutions of large construction vehicles. Much of the dirt on the tires will fall off before the vehicle gets to the street.
- Regular street sweeping at the construction entrance will prevent dirt from entering storm drains.
- Do not hose paved areas.
- Sediment traps and basins are temporary structures and should be used in conjunction with other measures to reduce the amount of erosion.
Erosion control mat and check dams
- Maintaining all BMPs is critical to ensure their effectiveness during the life of the project.
- Regularly remove collected sediment from silt fences, berms, traps, and other BMPs.
- Ensure that geotextiles and mulch remain in place until vegetation is well established.
- Maintain fences that protect sensitive areas, silt fences, diversion structures, and other BMPs.
Other BMPs and Activities to Control Polluted Runoff
You'll need to select other controls to address potential pollutant sources on your site. Construction materials, debris, trash, fuel, paint, and stockpiles become pollution sources when it rains. Basic pollution prevention practices can significantly reduce the amount of pollution leaving construction sites. The following are some simple practices that should be included in the Plan and implemented on site:
Erosion control mats, fiber rolls & silt fence
- Keep potential sources of pollution out of the rain as practicable (e.g., inside a building, covered with plastic or tarps, or sealed tightly in a leak-proof container).
- Clearly identify a protected, lined area for concrete truck washouts. This area should be located away from streams, storm drain inlets, or ditches and should be cleaned out periodically.
- Park, refuel, and maintain vehicles and equipment in one area of the site to minimize the area exposed to possible spills and fuel storage. This area should be well away from streams, storm drain inlets, or ditches. Keep spill kits close by and clean up any spills or leaks immediately, including spills on pavement or earthen surfaces.
- Practice good housekeeping. Keep the construction site free of litter, construction debris, and leaking containers. Keep all waste in one area to minimize cleaning.
<urn:uuid:8e73fee3-2a3d-4994-b24e-45c24f004ff9>
CC-MAIN-2013-20
http://www.mdt.mt.gov/pubinvolve/stormwater/construction.shtml
2013-05-25T05:59:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.905535
1,139
September 14, 2006
Ahoy, mateys! Thar be Jewish pirates!
September 19 is Talk Like A Pirate Day
As of last weekend, Disney had plundered $1 billion worldwide with "Pirates of the Caribbean: Dead Man's Chest," and International Talk Like a Pirate Day -- that's Sept. 19, for you landlubbers -- has gone from an inside joke between two friends to a mock holiday celebrated in more than 40 countries. Yet tales of Jewish piracy, which stretch back thousands of years, aren't in the public's consciousness, and Hollywood even has been known to remove a pirate's Jewish background. As a result, we're stuck with portrayals of pirates as wayward English seamen on a murderous rampage. But now a forthcoming book hopes to change that image by focusing on Ladino-speaking Jews whose piracy grew out of the Inquisition.
"The Jewish pirates were Sephardic. Once they were kicked out of Spain [in 1492], the more adventurous Jews went to the New World," said Ed Kritzler, whose yet-untitled book on Jewish pirates will be published by Doubleday in spring 2007.
Jewish piracy has been around since well before the Barbary pirates first preyed on ships during the Crusades. In the time of the Second Temple, Jewish historian Flavius Josephus records that Hyrcanus accused Aristobulus of "acts of piracy at sea."
Kritzler has studied pirates for 40 years, and said that the public is fascinated with them because they're "rugged individuals in a world of conformity. They carved their own identity, independent of the rules and strictures of society."
But determining the exact number of Jewish pirates is difficult, Kritzler said, because many of them traveled as Conversos, or converts to Christianity, and practiced their Judaism in secret. While some Jews, like Samuel Pallache, took up piracy in part to help make a better life for expelled Spanish Jews, Kritzler said others were motivated by revenge for the Inquisition.
One such pirate was Moses Cohen Henriques, who helped plan one of history's largest heists against Spain. In 1628, Henriques set sail with Dutch West India Co. Admiral Piet Hein, whose own hatred of Spain was fueled by four years spent as a galley slave aboard a Spanish ship. Henriques and Hein boarded Spanish ships off Cuba and seized shipments of New World gold and silver worth in today's dollars about the same as Disney's total box office for "Dead Man's Chest."
Henriques set up his own pirate island off the coast of Brazil afterward, and even though his role in the raid was disclosed during the Spanish Inquisition, he was never caught, Kritzler told The Journal.
Another Sephardic pirate played a pivotal role in American history. In the book "Jews on the Frontier" (Rachelle Simon, 1991), Rabbi I. Harold Sharfman recounts the tale of Sephardic Jewish pirate Jean Lafitte, whose Converso grandmother and mother fled Spain for France in 1765, after his maternal grandfather was put to death by the Inquisition for "Judaizing."
Referred to as The Corsair, Lafitte went on to establish a pirate kingdom in the swamps of New Orleans, and led more than 1,000 men during the War of 1812. After being run out of New Orleans in 1817, Lafitte re-established his kingdom on the island of Galveston, Texas, which was known as Campeche. During Mexico's fight for independence, revolutionaries encouraged Lafitte to attack Spanish ships and keep the booty. But in the 1958 film "The Buccaneer," starring Yul Brynner as Lafitte, any mention of the pirate's Jewish heritage was stripped away.
For more information on Talk Like a Pirate Day, visit www.talklikeapirate.com.
<urn:uuid:17bdf0cb-3c50-4932-acb7-5d91af6bb1ee>
CC-MAIN-2013-20
http://www.jewishjournal.com/up_front/article/ahoy_mateys_thar_be_jewish_pirates_20060915/
2013-05-23T18:41:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967661
818
This animation shows how light energy is collected by the optics system on the Atmospheric Infrared Sounder (AIRS) instrument and digitized. Energy from the scene is directed by the scan mirror into the AIRS telescope. The telescope collects the Earth scene energy and defines the instantaneous field of view (the 1.1 degree spot on the ground, as opposed to the +/- 50 degree earth scan field of view). Filters located at the entrance to the spectrometer separate the energy into 11 different spectral ranges. Optical elements direct this energy to a grating which disperses the energy into a continuum of frequencies. The detector elements then collect discrete spectral ranges (i.e., frequencies) of energy, which define the 2378 AIRS infrared spectral channels. The signals from the detectors are digitized, formatted, etc. and transmitted to the spacecraft, which broadcasts the data to the ground.
The Atmospheric Infrared Sounder (AIRS) in conjunction with the Advanced Microwave Sounding Unit (AMSU) sense emitted infrared and microwave radiation from the Earth to provide a three-dimensional look at Earth's weather and climate. Working in tandem, the two instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity, cloud amounts and heights, greenhouse gas concentrations, and many other atmospheric phenomena. The AIRS and AMSU fly onboard NASA's Aqua spacecraft and are managed by the Jet Propulsion Laboratory, Pasadena, California, under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena. The AIRS Public Web site can be found at http://airs.jpl.nasa.gov.
<urn:uuid:f0e4c49a-7a78-4c0d-869d-ede0c8f3b3fa>
CC-MAIN-2013-20
http://photojournal.jpl.nasa.gov/catalog/PIA11170
2013-05-18T06:26:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.893669
377
- Making a game
- A slightly detailed game design.
- Coding the game

Making a game
This lecture we are going to run through making a game. Or what some people would call a prototype of a game. It will be kind of like that old table top game starving starving rhinoceros (or something like that). Where lots of balls roll around, and you have to get your rhinoceros to eat them up. The person who eats the most balls wins. It is a simple game, but one with a small enough scope that we can finish it fairly quickly. Which are my favourite types of games to finish. It could also be expanded later with more features if we want to.
A slightly detailed game design.
First thing we do is think about what features will be in the first version of our game. To finish the game quickly we want to make the design as simple as possible. Why do we want to finish the game quickly? Evaluating the game quicker. If the game play isn't fun, then we want to know about it as soon as possible to fix it. Or we may even decide that the game is not possible to fix, and stop making it all together, and try out some other ideas. So we want to limit the features to the minimal ones which are required to make the game work.
Limited game play features
There will be two rhinos which eat up the balls. We may want to add more rhinos later, but two will be the minimum number.
The rhinos will be facing in one direction. Maybe in a later version we will allow the rhinos to move. But for now each of them will be in the middle of the game play area, and facing opposite from each other.
Two player game to start with. As coding a fun AI can be a lot of work, we will save that for another version. Two player games are much easier to do than single player games most of the time. By two player we mean two players playing on the same computer, not network play. So one player will use the mouse, and another the keyboard. Or maybe both will use the keyboard.
Balls will bounce off the walls.
So we will need to figure out how to see if a ball hits the wall, and how to make it bounce back.
Balls will also need to move around. To start with we will not take into account acceleration. All balls will move at the same speed.
Show the score for each of the rhinos. In the top left corner, and top right corner the score will be shown. The top left will be for the rhino on the left, the top right will be for the rhino on the right.
Opening and closing the rhinos mouth. When the balls come towards the players rhino, and the mouth is closed, the balls will bounce off. If the mouth is open the balls will go inside the mouth. But the rhino will have to close the mouth to swallow the balls.
Game play uncertainty.
At this stage we have a small design of our game. But a few things are uncertain. The main one is: will this be fun? The part which I am uncertain about is how the mouth closing and opening to eat the balls is going to be fun. Maybe we will need to make the balls hit the back of its mouth, and bounce back? How hard should it be to eat the balls?
Another uncertainty I have is about how the balls will bounce around. I can possibly imagine that the balls could bounce around in such a way that they never go near the players' rhinos' mouths. Perhaps we will have to add a game play element that makes the balls go towards the players' mouths.
These are just two uncertainties I have. You may have different ones, or see how these problems could easily be solved by changing the design. However both of these uncertainties will become clear as we start to develop the game, and when we finish what we have set out to do. Once the game is done, we may find that neither are problems, and that the game is fun! Yah! However we may find that they need fixing. Another more likely possibility is that there will be other problems.
So we make a minimal design, begin implementing that, and then improve the design as we go.
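Before worrying about whether the bouncing will be fun, it helps to see how little code the bounce itself takes. Here is a pygame-free sketch of the idea (the function name and the 640x480 play area are my own choices, not part of the design above): advance the ball by its velocity, and if it crosses a wall, mirror the position back inside and reverse that velocity component.

```python
def step_ball(x, y, vx, vy, width=640, height=480):
    """Advance a ball one frame; reflect its velocity at the walls."""
    x += vx
    y += vy
    if x < 0:
        x, vx = -x, -vx                  # mirror back inside, reverse horizontal speed
    elif x > width:
        x, vx = 2 * width - x, -vx
    if y < 0:
        y, vy = -y, -vy                  # same for the top and bottom walls
    elif y > height:
        y, vy = 2 * height - y, -vy
    return x, y, vx, vy
```

Calling this once per frame for each ball is all the wall-bouncing the design needs; ball-on-rhino collisions are a separate problem.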
When we consider graphical design for a game, it is often best to start with place holder graphics. This allows us to quickly test out the game play ideas. It is also a good idea to draw basically what your game will look like. Stick figures, and boxes are usually enough.
Here are some other factors you may want to take into account when making your graphics.
- The graphical routines you have at your disposal. Do we have a 2d engine, a 3d engine?
- The skills you have. 2d graphics, drawing, 3d modeling, animation.
- What the game play requires.
- Time you have. 2d graphics are generally quicker to do than 3d graphics.
- Your audience. What type of computer do they have?
For our game I don't want to do really advanced graphics to start with. I want to get it done quickly, so I'll choose to use 2d graphics. I also don't want to use any external images to start with, so I will use the pygame.draw graphics routines. These allow you to draw lines, circles, rectangles, and flat polygons. Not using any external files will mean that I will be able to make the game work with one python script. Once the game is complete, if I wish to continue with it, I could improve the graphics.
Drawing the game design
As you can see I spent maybe two minutes on this drawing. You can draw it on paper, or on the computer. Just a rough draft is necessary for the game design. You may want to spend more time on it drawing up different rhinos at different sizes, and different frames of their animation. As we are not going to do much graphics to start with, just concentrate on getting the shapes and sizes of things correct.
No sounds to start with.
Later I may want all manner of sounds. But as the game design so far does not concentrate on sounds as game play, I will not use any sounds. If I were to do sounds, some could be:
- ball rolling. Get a marble and roll it along a desk.
- ball hitting the wall. Roll a marble into something.
- Rhino noises. Hrmmm, these would be harder to make.
Be creative!
User input design.
Some people say the user interface to your game is the most important part. There are a few reasons they say this. One is that if the player can't figure out the controls, they will not play. Also it is what the player will be doing.
Our game will only require one input. That is, all the player can do is say open mouth, or close mouth. That's it! Simple eh? Maybe too simple. But it will do for now. There are lots of things we could add later, but opening, and closing the mouth of the rhino is the main interaction the player will have. So we want this main interaction to also be the easiest to find out how to do.
Clicking the mouse button, and pressing any key will be the way to control our game. Player one will control the first rhino (one on left) with the mouse click. Player two will control its rhino by pressing a key on the keyboard.
The user input to the game may also change as we make the game. So as you make a game, occasionally evaluate how your controls will work. Especially after adding lots more user interaction.
Other input devices
We will not put into our game support for any more types of input. But many of them could be added later quite easily. However then we would need to detect or allow the player to choose which input they are using. We could make our game see if the player is pressing on a joystick and make the joystick control player two. Or perhaps make it control player one. In that way we could avoid making the player manually choose which input they want to use. It will just use whichever input they bash on.
Timing and user input
We may want to limit how fast a person can do things. In our game we will probably want to limit how quickly the player can open and close the mouth of the rhino. The game could be quite different if we make it so that you can only open and close the mouth every two seconds. Compared to if you could open and close the mouth as quickly as you could click.
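The two-second limit described above boils down to remembering when the last accepted toggle happened. A minimal sketch of that rule (the class name and the 2.0-second figure are only illustrative; in the game loop, `now` could come from something like `pygame.time.get_ticks() / 1000.0`):

```python
class MouthCooldown:
    """Only allow the mouth to toggle every `delay` seconds."""

    def __init__(self, delay=2.0):
        self.delay = delay
        self.last_toggle = None   # time of the last accepted toggle

    def try_toggle(self, now):
        """Return True if a toggle at time `now` is allowed, and record it."""
        if self.last_toggle is None or now - self.last_toggle >= self.delay:
            self.last_toggle = now
            return True
        return False
```

Tuning `delay` is exactly the kind of game play experiment the design section talks about: try 0 (click as fast as you can) against 2.0 and see which is more fun.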
Other things player can do.
Two other things most games have are:
- exiting/quitting the game.
- pausing the game.
User input for quitting
There are some conventions in games for quitting. One is the 'q' key. Another is the ESC key. Often the ESC or q keys take the player to the main menu. As we will not have a main menu, those keys will just quit the game. Another convention is to use ALT+F4. But that is not
Quitting with the mouse is another thing we may want to add. Maybe a big quit button somewhere on the screen. In the first version we won't implement it, as it is too much work to test out the game play. Later though it may be important. One problem with that is that the player may accidentally press it. If we make our game go in windowed mode, the player can quit with the mouse by using the quit button on the window.
User input for pausing.
It is generally a good idea to build pausing into the game as early as possible. As it can be a pain to add in later. The convention for pausing a game is usually the 'p' key. Also some games pause the game when you press ESC, whilst they show a menu.
First we need to decide what our audience is. From that we decide which programming language, methods, and libraries to use. In this case the audience is those people reading a python for programming set of lectures/articles. So our language, and library choice is python, and pygame. As we want the game to be easy to run for you, I've decided to make the game in one file. So all our graphics will be done with code, and there will be no sound files with the game. Because we want to finish the game quickly we choose to do a 2d game.
Our game design will probably not require any super speed code. The only performance problems I could see us having is with lots of balls moving around the screen. If we use the pygame.sprite classes this shouldn't be a problem. Also the sprite classes will help us organise the different visual elements of our game.
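The sprite idea mentioned above is just this pattern: each visual element is an object with its own update() method, and a group calls update() on all of its members once per frame. A pygame-free sketch of the pattern (class names here are my own; pygame.sprite.Sprite and pygame.sprite.Group provide the real thing, plus drawing and collision helpers):

```python
class Ball:
    """A moving ball, following the sprite update() convention."""

    def __init__(self, x, vx):
        self.x = x
        self.vx = vx

    def update(self):
        self.x += self.vx


class Group:
    """Calls update() on every sprite it holds, like pygame.sprite.Group."""

    def __init__(self, *sprites):
        self.sprites = list(sprites)

    def update(self):
        for s in self.sprites:
            s.update()
```

With this shape, adding twenty more balls is just adding twenty more objects to the group; the main loop stays one line.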
Different visual elements in the game:
- Score. In the corners of the screen. Will change the number when the rhino scores.
- Rhinos. Two of them in the middle of the screen. These will be animated.
- Balls. There will be many of them. They are always moving, except when eaten.
A slightly hard bit of code identified earlier in game play uncertainty is the ball bouncing code. Luckily there have been lots of games done about balls bouncing around, so the techniques to do so are described on lots of websites. If we have a problem coding it, with a little research we should find the answer.
If the game starts to run a little slowly when we have lots of balls, we could just reduce the number of balls moving around. Also for a bit of variation, we may want to give the players the option of changing the amount of balls in play. Although we won't give them the option in the first version of the game.
Coding the game
Now we dive into the coding of the game. I like to do it in small steps. First get the screen up. Then draw the basic elements to the screen. Then maybe add in some keyboard handling, and some mouse handling. Below I describe the process I take making the game.
Getting something on screen.
I'll make some basic code to get a few things on screen first. So I initialize things, make a main loop, and put in the event handling for quitting.

import pygame
from pygame.locals import *

pygame.init()
display_flags = DOUBLEBUF
width, height = 640, 480
if pygame.display.mode_ok((width, height), display_flags):
    screen = pygame.display.set_mode((width, height), display_flags)

run = 1
clock = pygame.time.Clock()

while run:
    events = pygame.event.get()
    for event in events:
        if event.type == QUIT or (event.type == KEYDOWN and event.key in [K_ESCAPE, K_q]):
            # set run to 0 makes the game quit.
            run = 0

    # add the game play in here later.

    pygame.display.flip()
    # limit the game to about 40fps, or 40 ticks per second.
    clock.tick(40)

Ok.
So now I have a black screen showing in a window, which I can quit from. I limited the frame rate to 40 frames per second so when I do animation, it is smoother. For smooth animation you need as constant a frame rate as possible. Which is one of the reasons why tv, and film run at 24 or 30 frames per second. There is more to it than this. If you want to know more there is a good discussion on the topic at http://ludumdare.com/forums/viewtopic.php?topic=141&forum=2&22
Drawing the balls.
For the balls I am simply going to use the pygame.draw.circle function. To draw a filled in circle. We need two different colored circles, so this little function will draw them to two surfaces for us.

import pygame
from pygame.locals import *

def render_ball_simple(radius, color):
    """ Returns (surf, rect) containing a picture of a circle
        of the radius, and color given.
    """
    size = radius * 2
    surf = pygame.Surface((size, size))
    pygame.draw.circle(surf, color, (radius, radius), radius)
    return surf, surf.get_rect()

def max(x, y):
    """ returns x, unless x > y. If it is it returns y. """
    if x > y:
        return y
    else:
        return x

def render_ball_funky(radius, color):
    """ Returns (surf, rect) containing a picture of a slightly
        shaded ball of the radius, and color given.
    """
    size = radius * 2
    surf = pygame.Surface((size, size))
    # we progressively draw smaller circles of different colors.
    increment = int(radius / 4)
    for x in range(4):
        iradius = radius - (x * increment)
        isize = iradius * 2
        icolor = [0, 0, 0]
        # we increment the color. if it is bigger than 255 we make it 255.
        icolor[0] = max(color[0] + (x * 15), 255)
        icolor[1] = max(color[1] + (x * 15), 255)
        icolor[2] = max(color[2] + (x * 15), 255)
        pygame.draw.circle(surf, icolor, (radius, radius), iradius)
    return surf, surf.get_rect()

def render_ball(radius, color):
    """ Returns (surf, rect) containing a picture of a ball
        of the radius, and color given.
    """
    # we use the kind of funky one...
    return render_ball_funky(radius, color)

def main():
    pygame.init()
    display_flags = DOUBLEBUF
    width, height = 640, 480
    if pygame.display.mode_ok((width, height), display_flags):
        screen = pygame.display.set_mode((width, height), display_flags)

    run = 1
    clock = pygame.time.Clock()

    # draw some graphics to surfaces.
    ball1, ball1_rect = render_ball_funky(10, (50, 200, 200))
    ball2, ball2_rect = render_ball_simple(6, (50, 200, 200))
    # move the simple one towards the center top of the screen.
    ball2_rect.x = 200

    while run:
        events = pygame.event.get()
        for event in events:
            if event.type == QUIT or (event.type == KEYDOWN and event.key in [K_ESCAPE, K_q]):
                # set run to 0 makes the game quit.
                run = 0

        # add the game play in here later.
        screen.blit(ball1, ball1_rect)
        screen.blit(ball2, ball2_rect)

        pygame.display.flip()
        # limit the game to about 40fps, or 40 ticks per second.
        clock.tick(40)

# this runs the main function if this script is called to run.
# If it is imported as a module, we don't run the main function.
if __name__ == "__main__":
    main()

In this code you can see that I have drawn two ball looking things to the screen. Circles really. I got carried away with the draw ball function, and made render_ball_funky(). It draws four circles, progressively lighter, to give a really simple shading effect. This is what we call feature creep. I just did it for fun. Feature creep gets in the way of you finishing the game. So avoid it!
I have also moved the main loop and the initialisation code inside a main function. This is just to make it a bit neater.
Inside the main loop (the while loop), I do two blits. One for each of the balls. The one in the center top of the screen is the simple circle, the one in the top left is the so called funky ball. If you recall, a blit means to draw, or to copy an image. In this case we are blitting directly to the screen.
The max function we define makes sure that the color values don't get above 255. It is what some people call a helper function.
That is a small function made to make your code a bit nicer looking. You should try and run this code as it gets developed. Add in some print functions, and play around with it, so you can get the flow of how it is working.
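As an example of the kind of playing around suggested above, the colour-stepping logic behind the funky ball (including the clamping behaviour of that max() helper) can be checked without opening a window at all. The function names below are mine, not part of the game code:

```python
def clamp_max(x, y):
    """Same logic as the tutorial's max() helper: returns x, capped at y."""
    if x > y:
        return y
    return x

def funky_colors(color, steps=4, increment=15):
    """The progressively lighter colours the funky ball is drawn with."""
    colors = []
    for x in range(steps):
        # brighten each channel a little per step, never exceeding 255.
        colors.append([clamp_max(c + x * increment, 255) for c in color])
    return colors
```

For the ball colour used in main(), `funky_colors((50, 200, 200))` steps each channel up by 15 per ring, so you can see exactly how light the innermost circle will be before drawing anything.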
<urn:uuid:1923bc52-6361-4ce1-bf5e-d3f933c4d57c>
CC-MAIN-2013-20
http://rene.f0o.com/mywiki/LectureSix
2013-05-21T17:31:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934237
3,926
The royal abbey of Saint-Denis, now a suburb of Paris, housed the shrine of the national saint, possessed many of the regalia of the kings of France, and served as their burial site. Under the energetic Abbot Suger (1122–1151), the early abbey was rebuilt in a new style hailed in the Middle Ages as "the French style" and subsequently called Gothic. This column figure of an Old Testament king is the only complete statue to survive from the now destroyed cloister, originally constructed shortly after the death of Abbot Suger. A new pictorial approach to sculpture is evident in this carving: the standing figure is integral to the cylindrical column. The bejeweled crown and nimbus distinguish the royal and saintly nature of the figure. His identity may once have been inscribed upon the scroll that he holds, now broken.
<urn:uuid:f9a479cf-6dc1-4f70-897e-69e1b8822ebd>
CC-MAIN-2013-20
http://metmuseum.org/Collections/search-the-collections/170006227?high=on&rpp=15&pg=1&rndkey=20121006&ft=*&what=Limestone&pos=1
2013-06-18T22:53:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958685
179
A batch process performs a list of commands in sequence. It may be run by a computer's operating system using a script or batch file, or may be executed within a program using a macro or internal scripting tool.
For example, an accountant may create a script to open several financial programs at once, saving him the hassle of opening each program individually. This type of batch process would be executed by the operating system, such as Windows or the Mac OS.
A Photoshop user, on the other hand, might use a batch process to modify several images at one time. For example, she might record an action within Photoshop that resizes and crops an image. Once the action has been recorded, she can batch process a folder of images, which will perform the action on all the images in the folder.
Batch processing can save time and energy by automating repetitive tasks. While it may take a while to write the script or record the repetitive actions, doing it once is certainly better than having to do it many times.
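As a small, generic illustration of the folder-of-files idea (not tied to Photoshop or any product), here is a batch process in Python that applies one recorded "action" — in this toy case, uppercasing text — to every matching file in a folder. The function name and the text-based action are made up for the example:

```python
import pathlib

def batch_process(folder, action, pattern="*.txt"):
    """Apply `action` to the contents of every matching file in `folder`."""
    results = {}
    for path in sorted(pathlib.Path(folder).glob(pattern)):
        results[path.name] = action(path.read_text())
    return results
```

Swapping in a different `action` (resize an image, convert a format, re-encode text) is the whole point: the loop is written once and reused for any repetitive task.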
<urn:uuid:4079ab08-b2ab-485d-83a7-0978d5fd9ea4>
CC-MAIN-2013-20
http://www.techterms.com/print/batchprocess
2013-05-26T02:42:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949305
203
In the early days of space exploration animals were sent into space because scientists didn’t know if humans would survive. Some of our closest cousins have boldly been where many of us may never go. In a year when America has seen its space shuttles go into retirement, Iran’s Space Organisation has been firing on all cylinders. The Iranians had been planning to launch a rocket, Kavoshgar-5, which means Explorer-5 in Farsi, with a 285-kilogram capsule carrying a monkey to an altitude of 120 kilometres (74 miles). But yesterday the country’s top space official, Hamid Fazeli said the space monkey’s mission had been cancelled. He said: “One cannot give a set date for this project and as soon as our nation’s scientists announce the readiness (of the project) it will be announced. “Our scientists are exerting continuous efforts on this project… our colleagues are busy with empirical studies and sub-system testing of this project so it is a success.” The capsule was unveiled in February by President Mahmoud Ahmadinejad along with four new prototypes of home-built satellites. At the time, Fazeli touted the launch of a large animal into space as the first step towards sending a man into orbit, which Tehran says is scheduled for 2020. So far there’s been no explanation as to why the launch was postponed. The monkey may well be breathing a hefty sigh of relief but other species have been less fortunate. Last year Iran sent a menagerie of animals into space – a rodent, turtles and (interestingly!) worms. They were packed aboard a capsule carried by its Kavoshgar-3 rocket. Commenting after last year’s launch, a US defence expert said there was no scientific benefit to sending such animals into space. To test a life-support system of use to humans, “the obvious choice would be to send a monkey,” said James Lewis, senior fellow at Washington-based Centre for Strategic and International Studies. “Worms in space serve no purpose. 
The launch was clearly part of Iran’s effort to advance military technology and assert political dominance in space.” Press TV quoted Iranian space officials saying live video transmission and telemetry allowed the rat or mouse – named Helmz-1 – turtles and worms to be monitored during their space voyage. The Fars news agency reported that the animals had returned to Earth and were being studied by scientists. Whether this was alive or dead, I’m not sure! The Islamic republic has not been afraid of publicising its ambitious space programme. Its first satellite was put into orbit in 2009. But this growing confidence has raised concerns in the west that these ambitions may be linked to developing a ballistic missile capability that could deliver nuclear warheads. Tehran has repeatedly denied that its contentious nuclear and scientific programmes mask military ambitions. Iranian officials have also pointed to America’s use of satellites to monitor conflicts in Afghanistan and Iraq and say they need similar capabilities for their security.
<urn:uuid:d31e9a62-8e4d-4695-aed3-a96c6b1bac70>
CC-MAIN-2013-20
http://whogivesamonkeys.com/2011/10/04/iranian-space-monkey-grounded/
2013-05-20T02:40:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.968127
643
Global ocean levels have risen by 4 to 10 inches over the past 100 years. How much more will they rise in 10 years? What about in 50? This kind of question is critical for planning future coastal development, but taking the measurements necessary to make predictions can be difficult and downright risky for human surveyors, who could be smashed by falling chunks of ice the size of the Empire State Building. So send in a bot, says David Holland, an oceanographer at New York University, who teamed up with the National Research Council of Canada (NRC) to deploy a five-foot-long autonomous submarine beneath an iceberg off the coast of Greenland. Called the Slocum underwater glider, the sub propels itself through water with a single-stroke piston, thereby conserving most of its energy for data collection. Sensors under the port-side wing measure conductivity (to find the salinity of the water), temperature, and depth, sending the data to processors within the sub. Icebergs are difficult to navigate, even for a sophisticated machine like this. In the pitch-black shadow under the iceberg, the Slocum glider has no access to satellite GPS and no visual markers to verify that it is following its intended path. To help the glider get and keep its bearings, the NRC plans to test an acoustic beacon system whose components would be placed underwater at strategic points around an iceberg, allowing the glider to triangulate by sound. By collecting data on how much and how quickly Greenland’s ice is melting, Holland hopes to create a computer model that will simulate and forecast glacial melt—and the future rise in sea levels—around the world.
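The article does not describe the NRC beacon math, but the underlying idea of fixing a position from distances to known beacons can be sketched. Assuming each range comes from an acoustic travel time multiplied by the speed of sound, three beacons in a plane pin down the glider's position; everything below (the function, the coordinates) is purely illustrative, not the NRC system:

```python
def trilaterate(b1, b2, b3, r1, r2, r3):
    """Locate a point in 2-D from distances to three known beacon positions.

    Subtracting the three circle equations pairwise gives two linear
    equations in the unknown (x, y). Fails (det == 0) if the beacons
    are collinear.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y
```

A real underwater system would also have to estimate the sound speed (it varies with temperature and salinity, the very quantities the glider measures) and work in three dimensions, which is part of what makes the navigation problem hard.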
<urn:uuid:a936095d-022f-4127-8334-a3c903fdb6f3>
CC-MAIN-2013-20
http://discovermagazine.com/2008/apr/25-the-robo-sub-that-helps-predict-where-the-oceans-headed
2013-05-18T18:07:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931948
345
Revista de Biología Tropical
Print version ISSN 0034-7744
ALTRICHTER, Mariana; SAENZ, Joel C.; CARRILLO, Eduardo and FULLER, Todd K. Dieta estacional del Tayassu pecari (Artiodactyla: Tayassuidae) en el Parque Nacional Corcovado, Costa Rica. Rev. biol. trop [online]. 2000, vol.48, n.2-3, pp. 689-702. ISSN 0034-7744.
The diet of the white-lipped peccary Tayassu pecari was studied from July 1996 to April 1997 in Corcovado National Park, Costa Rica, through fecal analysis and direct observations. The feces consisted of 61.6% fruits, 37.5% vegetative parts, 0.4% invertebrates and 0.5% unidentified material. These proportions are similar to those reported for the white-lipped peccary diet in South America, but the species consumed were different. In Corcovado, the white-lipped peccary fed on parts of 57 plant species (37 of them fruits). Moraceae was the most represented family. In contrast, the diet of the Peruvian Amazon peccary primarily consists of plant parts (Arecaceae). Costa Rican peccary diet consisted of vegetative parts from Araceae and Heliconiaceae. Direct observation showed that peccaries spent 30% of feeding time rooting. Samples taken from rooting sites suggest that peccaries fed on earthworms. Diet differed between months, seasons and habitats. They ate more fruits in coastal and primary forests and more vegetative parts in secondary forest. In October and November the consumption of vegetative parts exceeded fruit consumption.
Keywords: Seasonal diet; fecal analysis; direct observations; Tayassu pecari; white-lipped peccary; rain forest; Costa Rica.
<urn:uuid:d402b926-68c7-44a1-acc5-ec4cc54f10b1>
CC-MAIN-2013-20
http://www.scielo.sa.cr/scielo.php?script=sci_abstract&pid=S0034-77442000000200042&lng=en&nrm=iso
2013-05-23T04:48:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.868834
416
The story of Dr. Daniel Hale Williams (pictured), the nation's first African-American cardiologist and the first surgeon to perform a successful open-heart procedure, is as inspirational a tale as any. Born in the small town of Hollidaysburg in central Pennsylvania on January 18, 1858, Williams and his family would eventually move to Maryland's capital city, Annapolis. The son of a barber, Williams became adept at the trade and put down roots in Wisconsin, where he graduated from high school and later from an academy by the age of 21. Williams became a medical apprentice under Dr. Henry Palmer, a noted surgeon. All three of Dr. Palmer's apprentices were accepted to a special three-year medical school program affiliated with Northwestern University in Illinois. Williams earned an M.D. degree and began his practice in Chicago, where he was one of only four Black physicians working in the city at the time. Working both as an instructor at Northwestern University and as a practicing physician, Williams began to notice that Black patients faced discrimination, and he sought to combat it by joining the Illinois State Board of Health to help change some of the racist rules. He went on to found the Provident Hospital and Nursing Training School in 1891. On this day in 1893, a stabbing victim entered Provident with wounds to the chest and heart. Dr. Williams repaired the lining of the man's heart, saving his life. Although he was not the first to attempt such surgery, Williams could lay claim to the first successful open-heart operation after the man recovered nearly two months later. Even after expansion and other hardships, Provident Hospital still stands, keeping Williams' legacy intact.
With community effort and government support, the hospital survived and now serves the South Side neighborhood of Chicago. Here’s to you, Dr. Williams!
<urn:uuid:9a57cabc-af80-4437-ad63-c04ef6efbfc8>
CC-MAIN-2013-20
http://urbanpetersburg.com/1359463/black-cardiologist-performs-1st-successful-open-heart-surgery-119-years-ago-today/
2013-06-20T02:07:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972198
441
Two identical trains at the equator start travelling round the world in opposite directions. They start together, run at the same speed, and are on different tracks. Which train will wear out its wheel treads first? Will their weight change? The answer is given as: the train travelling against the spin of the Earth. This train will wear out its wheels more quickly because the centrifugal force is less on this train. How do the forces change when the frame of reference is the same for both trains?
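A quick numerical check (a sketch; the train mass and speed are assumed figures). In an inertial frame both trains circle at the Earth's radius but at different speeds, so they need different centripetal forces and the rails push up less on the faster, eastbound train. In the Earth-fixed rotating frame the same asymmetry appears as the Coriolis (Eötvös) term, which lifts the eastbound train and presses the westbound one down. Either way, the westbound train is effectively heavier and wears its treads faster:

```python
G = 9.81          # m/s^2, local gravitational acceleration
R = 6.378e6       # m, Earth's equatorial radius
OMEGA = 7.292e-5  # rad/s, Earth's rotation rate
M = 4.0e5         # kg, assumed train mass (hypothetical)
V = 30.0          # m/s, assumed train speed, ~108 km/h (hypothetical)

v_ground = OMEGA * R  # speed of the track itself in the inertial frame, ~465 m/s

def apparent_weight(v_inertial):
    """Normal force from the rails: gravity minus the centripetal requirement."""
    return M * (G - v_inertial**2 / R)

w_east = apparent_weight(v_ground + V)  # moving with the spin
w_west = apparent_weight(v_ground - V)  # moving against the spin

print(f"eastbound: {w_east:.0f} N, westbound: {w_west:.0f} N")
print(f"westbound train presses harder by {w_west - w_east:.0f} N")
```

The difference works out to exactly 4·M·v_ground·V/R (about 3,500 N here), a fraction of a percent of the total weight: measurable, but tiny.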
<urn:uuid:bd83ff1c-041e-4ff7-8f28-a0b4725372c3>
CC-MAIN-2013-20
http://physics.stackexchange.com/questions/48248/will-two-trains-running-along-the-equator-in-opposite-direction-experience-same
2013-05-22T07:43:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.953851
101
- Main article: Penology Criminology (from Latin crīmen, "accusation"; and Greek -λογία, -logia) is the social science approach to the study of crime as an individual and social phenomenon. Criminological research areas include the incidence and forms of crime as well as its causes and consequences. They also include social and governmental regulations and reactions to crime. Criminology is an interdisciplinary field in the behavioral sciences, drawing especially on the research of sociologists and psychologists, as well as on writings in law. Quantitative methods are an important way to analyze criminological data. In 1885, Italian law professor Raffaele Garofalo coined the term "criminology" (in Italian, criminologia). The French anthropologist Paul Topinard used it for the first time in French (criminologie) around the same time. Schools of thought In the mid-18th century, criminology arose as social philosophers gave thought to crime and concepts of law. Over time, several schools of thought have developed. The Classical School, which developed in the mid-18th century, was based on utilitarian philosophy. Cesare Beccaria, author of On Crimes and Punishments (1763-64), Jeremy Bentham, inventor of the panopticon, and other classical school philosophers argued that (1) people have free will to choose how to act. (2) Deterrence is based upon the notion of the human being as a 'hedonist' who seeks pleasure and avoids pain, and a 'rational calculator' weighing up the costs and benefits of the consequences of each action; thus it ignores the possibility of irrationality and unconscious drives as motivational factors. (3) Punishment (of sufficient severity) can deter people from crime, as the costs (penalties) outweigh the benefits, and the severity of punishment should be proportionate to the crime.
(4) The more swift and certain the punishment, the more effective it is in deterring criminal behavior. The Classical school of thought came about at a time when major reform in penology occurred, with prisons developed as a form of punishment. This period also saw many legal reforms, the French Revolution, and the development of the legal system in the United States. The Positivist School presumes that criminal behavior is caused by internal and external factors outside of the individual's control. The scientific method was introduced and applied to the study of human behavior. Positivism can be broken up into three segments: biological, psychological and social positivism. Cesare Lombroso, an Italian prison doctor working in the late 19th century and sometimes regarded as the "father" of criminology, was one of the largest contributors to biological positivism and founder of the Italian school of criminology. Lombroso took a scientific approach, insisting on empirical evidence, for studying crime. Considered the founder of criminal anthropology, he suggested that physiological traits such as the measurements of one's cheekbones or hairline, or a cleft palate, considered to be throwbacks to Neanderthal man, were indicative of "atavistic" criminal tendencies. This approach, influenced by the earlier theory of phrenology and by Charles Darwin and his theory of evolution, has been superseded. Enrico Ferri, a student of Lombroso, believed that social as well as biological factors played a role, and held the view that criminals should not be held responsible when the factors causing their criminality were beyond their control. Criminologists have since rejected Lombroso's biological theories, noting that his studies lacked control groups. Lombroso's Italian school was rivaled, in France, by Alexandre Lacassagne and his school of thought, based in Lyon and influential from 1885 to 1914.
The Lacassagne School rejected Lombroso's theory of the "criminal type" and of "born criminals", and stressed the importance of social factors. However, contrary to criminological tendencies influenced by Durkheim's social determinism, it did not reject biological factors. Indeed, Lacassagne created an original synthesis of both tendencies, influenced by positivism, phrenology and hygienism, which alleged a direct influence of the social environment on the brain and compared the social itself to a brain, upholding an organicist position. Furthermore, Lacassagne criticized the inefficiency of prison, insisted on social responsibility toward crime and on political voluntarism as a solution to crime, and thus advocated harsh penalties for those criminals thought to be unredeemable ("recidivists"), for example by supporting the 1895 law on penal colonies or opposing the abolition of the death penalty in 1906. Hans Eysenck (1964, 1977), a British psychologist, claimed that psychological factors such as extraversion and neuroticism made a person more likely to commit criminal acts. He also included a psychoticism dimension comprising traits similar to the psychopathic profile developed by Hervey M. Cleckley and later Robert Hare. He also based his model on early parental socialization of the child; his approach bridges the gap between biological explanations and environmental or social-learning-based approaches (see e.g. social psychologists B.F. Skinner (1938), Albert Bandura (1973), and the topic of "nature vs. nurture"). Sociological positivism postulates that societal factors such as poverty, membership of subcultures, or low levels of education can predispose people to crime. Adolphe Quetelet made use of data and statistical analysis to gain insight into the relationship between crime and sociological factors. He found that age, gender, poverty, education, and alcohol consumption were important factors related to crime. Rawson W.
Rawson utilized crime statistics to suggest a link between population density and crime rates, with crowded cities creating an environment conducive to crime. Joseph Fletcher and John Glyde also presented papers to the Statistical Society of London on their studies of crime and its distribution. Henry Mayhew used empirical methods and an ethnographic approach to address social questions and poverty, and presented his studies in London Labour and the London Poor. Emile Durkheim viewed crime as an inevitable aspect of society, given the uneven distribution of wealth and other differences among people. The Chicago School arose in the early twentieth century through the work of Robert Ezra Park, Ernest Burgess, and other urban sociologists at the University of Chicago. In the 1920s, Park and Burgess identified five concentric zones that often exist as cities grow, including the "zone in transition", which was identified as the most volatile and subject to disorder. In the 1940s, Henry McKay and Clifford R. Shaw focused on juvenile delinquents, finding that they were concentrated in the zone in transition. Chicago School sociologists adopted a social ecology approach to studying cities, and postulated that urban neighborhoods with high levels of poverty often experience breakdown in the social structure and in institutions such as family and schools. This results in social disorganization, which reduces the ability of these institutions to control behavior and creates an environment ripe for deviant behavior. Other researchers suggested an added social-psychological link. Edwin Sutherland suggested that people learn criminal behavior from older, more experienced criminals with whom they associate. Theories of crime Social structure theories Social disorganization (neighborhoods) Social disorganization theory is based on the work of Henry McKay and Clifford R. Shaw of the Chicago School.
Social disorganization theory postulates that neighborhoods plagued with poverty and economic deprivation tend to experience high rates of population turnover. These neighborhoods also tend to have high population heterogeneity. With high turnover, informal social structure often fails to develop, which in turn makes it difficult to maintain social order in a community. Since the 1950s, social ecology studies have built on the social disorganization theories. Many studies have found that crime rates are associated with poverty, disorder, high numbers of abandoned buildings, and other signs of community deterioration. As working- and middle-class people leave deteriorating neighborhoods, the most disadvantaged portions of the population may remain. William Julius Wilson suggested a poverty "concentration effect", which may cause neighborhoods to be isolated from the mainstream of society and become prone to violence. Strain theory (also known as Mertonian anomie), advanced by American sociologist Robert Merton, suggests that mainstream culture, especially in the United States, is saturated with dreams of opportunity, freedom and prosperity; as Merton put it, the American Dream. Most people buy into this dream, and it becomes a powerful cultural and psychological motivator. Merton also used the term anomie, but it meant something slightly different for him than it did for Durkheim. Merton saw the term as meaning a dichotomy between what society expected of its citizens and what those citizens could actually achieve. Therefore, if the social structure of opportunities is unequal and prevents the majority from realizing the dream, some of them will turn to illegitimate means (crime) in order to realize it. Others will retreat or drop out into deviant subcultures (gang members, "hobos": urban homeless drunks and drug abusers). - Main article: subcultural theory Following on from the Chicago School and strain theory, and also drawing on Edwin H.
Sutherland's idea of differential association, subcultural theorists focused on small cultural groups fragmenting away from the mainstream to form their own values and meanings about life. Albert K. Cohen tied anomie theory to Freud's idea of reaction formation, suggesting that delinquency among lower-class youths is a reaction against the social norms of the middle class. Some youth, especially from poorer areas where opportunities are scarce, might adopt social norms specific to those places, which may include "toughness" and disrespect for authority. Criminal acts may result when youths conform to the norms of the deviant subculture. Richard Cloward and Lloyd Ohlin suggested that delinquency can result from differential opportunity for lower-class youth. Such youths may be tempted to take up criminal activities, choosing an illegitimate path that provides more lucrative economic benefits than conventional, legal options such as the minimum-wage jobs available to them. British subcultural theorists focused more heavily on the issue of class, where some criminal activities were seen as 'imaginary solutions' to the problem of belonging to a subordinate class. A further study by the Chicago School looked at gangs and the influence of the interaction of gang leaders under the observation of adults. At the other end of the spectrum, criminologist Lonnie Athens developed a theory about how a process of brutalization by parents or peers, usually occurring in childhood, results in violent crimes in adulthood. Richard Rhodes' Why They Kill describes Athens' observations about domestic and societal violence in the criminals' backgrounds. Both Athens and Rhodes reject genetic-inheritance theories. Another approach is taken by social bond or social control theory. Instead of looking for factors that make people become criminal, these theories try to explain why people do not become criminal.
Travis Hirschi identified four main characteristics: "attachment to others", "belief in the moral validity of rules", "commitment to achievement" and "involvement in conventional activities". The more a person exhibits these characteristics, the less likely he or she is to become deviant (or criminal); conversely, if these factors are not present, it is more likely that the person will become criminal. Hirschi expanded on this theory with the idea that a person with low self-control is more likely to become criminal. A simple example: someone wants to have a big yacht but does not have the means to buy one. If the person cannot exert self-control, he or she might try to get the yacht (or the means for it) in an illegal way, whereas someone with high self-control will (more likely) either wait or deny themselves that want. Social bonds, through peers, parents, and others, can have a countering effect on one's low self-control. For families of low socio-economic status, a factor that distinguishes families with delinquent children from those without is the control exerted by parents, or chaperonage. Symbolic interactionism draws on the phenomenology of Edmund Husserl and George Herbert Mead, as well as subcultural theory and conflict theory. This school of thought focused on the relationship between the powerful state, media and conservative ruling elite on the one hand, and the less powerful groups on the other. The powerful groups had the ability to become the 'significant other' in the less powerful groups' processes of generating meaning. The former could to some extent impose their meanings on the latter, and were therefore able to 'label' minor delinquent youngsters as criminal. These youngsters would often take the label on board, indulge in crime more readily, and become actors in the 'self-fulfilling prophecy' of the powerful groups.
Later developments in this set of theories came from Howard Becker and Edwin Lemert in the mid-20th century. Stanley Cohen developed the concept of "moral panic", describing societal reaction to spectacular, alarming social phenomena such as post-World War Two youth cultures (e.g. the Mods and Rockers in the UK in 1964), AIDS and football hooliganism. Rational choice theory - Main article: Rational choice theory (criminology) Rational choice theory is based on the utilitarian, classical school philosophies of Cesare Beccaria, which were popularized by Jeremy Bentham. They argued that punishment, if certain, swift, and proportionate to the crime, was a deterrent for crime, with risks outweighing the possible benefits to the offender. In Dei delitti e delle pene (On Crimes and Punishments, 1763-1764), Beccaria advocated a rational penology. Beccaria conceived of punishment as the necessary application of the law for a crime: thus, the judge was simply to conform his sentence to the law. Beccaria also distinguished between crime and sin, and advocated against the death penalty, as well as torture and inhumane treatment, as he did not consider them rational deterrents. This philosophy was displaced by the Positivist and Chicago Schools and not revived until the 1970s with the writings of James Q. Wilson, Gary Becker's 1968 article titled "Crime and Punishment" and George Stigler's 1970 article "The Optimum Enforcement of Laws." Rational choice theory argues that criminals, like other people, weigh costs/risks and benefits when deciding whether or not to commit crime, and think in economic terms. They will also try to minimize the risks of crime by considering the time, place, and other situational factors.
Gary Becker, for example, acknowledged that many people operate under a high moral and ethical constraint, but considered that criminals rationally see that the benefits of their crime outweigh the costs, such as the probability of apprehension, conviction, and punishment, as well as their current set of opportunities. From the public policy perspective, since the cost of increasing the fine is marginal compared to the cost of increasing surveillance, one can conclude that the best policy is to maximize the fine and minimize surveillance. With this perspective, crime prevention or reduction measures can be devised to increase the effort required to commit the crime, such as target hardening. Rational choice theories also suggest that increasing the risk of offending and the likelihood of being caught, through added surveillance, police or security guard presence, added street lighting, and other measures, is effective in reducing crime. One of the main differences between this theory and Jeremy Bentham's rational choice theory, which had been abandoned in criminology, is that whereas Bentham considered it possible to completely annihilate crime (through the panopticon), Becker's theory acknowledged that a society could not eradicate crime beneath a certain level. For example, if 25% of a supermarket's products were stolen, it would be very easy to reduce this rate to 15%, quite easy to reduce it to 5%, difficult to reduce it under 3% and nearly impossible to reduce it to zero (a feat which would cost the supermarket so much in surveillance, etc., that it would outweigh the benefits). Routine activity theory Routine activity theory, developed by Marcus Felson and Lawrence Cohen, drew upon control theories and explained crime in terms of crime opportunities that occur in everyday life. A crime opportunity requires that elements converge in time and place, including (1) a motivated offender, (2) a suitable target or victim, and (3) the lack of a capable guardian.
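Becker's cost-benefit framing can be sketched as a simple expected-utility comparison (the numbers are invented for illustration; real economic models of crime are far richer):

```python
def offends(gain, p_caught, fine, moral_cost=0.0):
    """Becker-style rational offender: act only if the expected net benefit is positive."""
    return gain - p_caught * fine - moral_cost > 0

# Low expected penalty: the crime "pays"
print(offends(gain=100, p_caught=0.2, fine=300))   # True: 100 - 60 > 0

# Equal deterrence two ways: heavy surveillance with a modest fine, or light
# surveillance with a heavy fine. The expected penalty (150) is identical, but
# the second option is far cheaper for the enforcer -- Becker's argument for
# maximizing the fine and minimizing surveillance.
print(offends(gain=100, p_caught=0.5, fine=300))   # False
print(offends(gain=100, p_caught=0.1, fine=1500))  # False
```

The `moral_cost` parameter stands in for Becker's observation that many people are deterred by ethical constraints regardless of the legal penalty.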
A guardian at a place, such as a street, could include security guards or even ordinary pedestrians who would witness the criminal act and possibly intervene or report it to police. Routine activity theory was expanded by John Eck, who added a fourth element of "place manager", such as rental property managers who can take nuisance abatement measures. Types and definitions of crime Both the Positivist and Classical Schools take a consensus view of crime — that a crime is an act that violates the basic values and beliefs of society. Those values and beliefs are manifested as laws that society agrees upon. However, there are two types of laws: - Natural laws are rooted in core values shared by many cultures. Natural laws protect against harm to persons (e.g. murder, rape, assault) or property (theft, larceny, robbery), and form the basis of common law systems. - Statutes are enacted by legislatures and reflect current cultural mores, although some laws may be controversial, e.g. laws that prohibit marijuana use and gambling. Marxist criminology, conflict criminology and critical criminology claim that most relationships between state and citizen are non-consensual and, as such, criminal law is not necessarily representative of public beliefs and wishes: it is exercised in the interests of the ruling or dominant class. More right-wing criminologies tend to posit that there is a consensual social contract between state and citizen. Therefore, definitions of crimes will vary from place to place, in accordance with the cultural norms and mores, but may be broadly classified as blue-collar crime, corporate crime, organized crime, political crime, public order crime, state crime, state-corporate crime, and white-collar crime.
Areas of study in criminology include: - Juvenile delinquency - Causes and correlates of crime - Crime prevention - Crime statistics - Criminal behavior - Criminal careers and desistance - Domestic violence - Deviant behavior - Evaluation of criminal justice agencies - Fear of crime - Sociology of law - The International Crime Victims Survey Comparative criminology is the study of the social phenomenon of crime across cultures, to identify differences and similarities in crime patterns. See also - Crime science - Criminal law - Forensic psychology - List of criminology topics - Offender profiling - ↑ Deflem, Mathieu (2006). Sociological Theory and Criminological Research: Views from Europe and the United States, p. 279, Elsevier. ISBN 0762313226. - ↑ Beccaria, Cesare (1764). Richard Davies, translator, On Crimes and Punishments, and Other Writings, p. 64, Cambridge University Press. ISBN 0521402034. - ↑ Siegel, Larry J. (2003). Criminology, 8th edition, p. 7, Thomson-Wadsworth. - ↑ McLennan, Gregor, Jennie Pawson, Mike Fitzgerald (1980). Crime and Society: Readings in History and Theory, p. 311, Routledge. ISBN 0415027551. - ↑ Siegel, Larry J. (2003). Criminology, 8th edition, p. 139, Thomson-Wadsworth. - ↑ 6.0 6.1 6.2 Renneville, Marc. La criminologie perdue d’Alexandre Lacassagne (1843-1924), Criminocorpus, Centre Alexandre Koyré-CRHST, UMR n°8560 of the CNRS, 2005 (French) - ↑ Beirne, Piers (March 1987). Adolphe Quetelet and the Origins of Positivist Criminology. American Journal of Sociology 92(5): pp. 1140–1169. - ↑ Hayward, Keith J. (2004). City Limits: Crime, Consumerism and the Urban Experience, p. 89, Routledge. ISBN 1904385036. - ↑ Garland, David (2002). "Of Crimes and Criminals" Maguire, Mike, Rod Morgan, Robert Reiner The Oxford Handbook of Criminology, 3rd edition, p. 21, Oxford University Press. - ↑ Henry Mayhew: London Labour and the London Poor. Center for Spatially Integrated Social Science. - ↑ Shaw, Clifford R. and McKay, Henry D. (1942).
Juvenile Delinquency and Urban Areas, The University of Chicago Press. - ↑ 12.0 12.1 12.2 Bursik Jr., Robert J. (1988). Social Disorganization and Theories of Crime and Delinquency: Problems and Prospects. Criminology 26: p. 519–539. - ↑ Morenoff, Jeffrey, Robert Sampson, Stephen Raudenbush (2001). Neighborhood Inequality, Collective Efficacy and the Spatial Dynamics of Urban Violence. Criminology 39: p. 517–60. - ↑ Merton, Robert (1957). Social Theory and Social Structure, Free Press. - ↑ Cohen, Albert (1955). Delinquent Boys, Free Press. - ↑ Kornhauser, R. (1978). Social Sources of Delinquency, University of Chicago Press. - ↑ 17.0 17.1 Cloward, Richard, Lloyd Ohlin (1960). Delinquency and Opportunity, Free Press. - ↑ Rhodes, Richard (2000). Why They Kill: The Discoveries of a Maverick Criminologist, Vintage. - ↑ Hirschi, Travis (1969). Causes of Delinquency, Transaction Publishers. - ↑ Gottfredson, M., T. Hirschi (1990). A General Theory of Crime, Stanford University Press. - ↑ Wilson, Harriet (1980). Parental Supervision: A Neglected Aspect of Delinquency. British Journal of Criminology 20. - ↑ Mead, George Herbert (1934). Mind Self and Society, University of Chicago Press. - ↑ Becker, Howard (1963). Outsiders, Free Press. - ↑ Gary Becker, "Crime and Punishment", in Journal of Political Economy, vol. 76 (2), March-April 1968, p.196-217 - ↑ George Stigler, "The Optimum Enforcement of Laws", in Journal of Political Economy, vol.78 (3), May-June 1970, p.526-536 - ↑ 26.0 26.1 Cornish, Derek, and Ronald V. Clarke (1986). The Reasoning Criminal, Springer-Verlag. - ↑ 27.0 27.1 Clarke, Ronald V. (1992). Situational Crime Prevention, Harrow and Heston. - ↑ Felson, Marcus (1994). Crime and Everyday Life, Pine Forge. - ↑ 29.0 29.1 Cohen, Lawrence, and Marcus Felson (1979). Social Change and Crime Rate Trends. American Sociological Review 44: 588. - ↑ Eck, John, and Julie Wartell (1997). 
Reducing Crime and Drug Dealing by Improving Place Management: A Randomized Experiment, National Institute of Justice. - ↑ Barak-Glantz, I.L., E.H. Johnson (1983). Comparative criminology, Sage. - Wikibooks: Introduction to sociology - Cesare Beccaria, Dei delitti e delle pene (1763-1764) - Brantingham, P. J. & Brantingham, P. L. (1991). Environmental criminology. Prospect Heights, IL: Waveland Press. - Barak, Gregg (ed.). (1998). Integrative criminology (International Library of Criminology, Criminal Justice & Penology.). Aldershot: Ashgate/Dartmouth. ISBN 1-84014-008-9 - Pettit, Philip and Braithwaite, John. Not Just Deserts. A Republican Theory of Criminal Justice ISBN13: 9780198240563 (see Republican Criminology and Victim Advocacy: Comment for article concerning the book in Law & Society Review, Vol. 28, No. 4 (1995), pp. 765-776) - National Criminal Justice Reference Service (NCJRS) - American Society of Criminology - Australian Institute of Criminology (AIC) - British Society of Criminology - Criminology Mega-Site — Dr. Tom O'Connor (Associate Professor of Criminal Justice, Austin Peay State University) |This page uses Creative Commons Licensed content from Wikipedia (view authors).|
<urn:uuid:545e1c4f-eb26-45ad-9b6e-5080a84f516a>
CC-MAIN-2013-20
http://psychology.wikia.com/wiki/Criminology?oldid=155180
2013-05-25T12:42:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927417
5,239
Has the grand Roman Pantheon been keeping a secret for nearly 2000 years? An expert in ancient timekeeping thinks so, arguing that it acts as a colossal sundial. The imposing temple in Rome, completed in AD 128, is one of the most impressive buildings that survives from antiquity. It consists of a cylindrical chamber topped by a domed roof with an oculus in the top which lets through a dramatic shaft of sunlight, and it boasts a colonnaded courtyard at the front. When Robert Hannah of the University of Otago in Dunedin, New Zealand, visited the Pantheon in 2005 while researching for a book, he realised that the Pantheon may have been more than just a temple. During the six months of winter, the light of the noon sun traces a path across the inside of the domed roof. During summer, with the sun higher in the sky, the shaft shines onto the lower walls and floor. At the two equinoxes, in March and September, the sunlight coming in through the hole strikes the junction between the roof and wall, above the Pantheon's grand northern doorway. A grille above the door allows a sliver of light through to the front courtyard - the only moment in the year that it sees sunlight if the main doors are closed. Hannah reckons this is no coincidence. A hollowed-out hemisphere with a hole in the top was a type of sundial used in Roman times, albeit on a much smaller scale, to show the time of year. While the Pantheon's dome is quite flat on the outside, it forms a perfect hemisphere inside. "This is quite a deliberate design feature," says Hannah. Pantheon means "all of the gods" and the building's roof represented the dome of the sky, where Romans believed the gods resided. At equinox, the sun is on the celestial equator - where Earth's equator would lie if projected into space - which was seen as the most stable part of the sky, a perfect eternal home for the gods. Hannah thinks that by marking the equinoxes, the Pantheon was intended to elevate emperors who worshipped there into the realm of the gods.
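Hannah's geometry can be checked with a small calculation. Assuming the interior is a perfect sphere and the noon beam passes through the centre of the oculus at the top, a beam entering at solar altitude h strikes the interior at angular distance 2h from the top, i.e. at height cos(2h) in units of the sphere's radius, where 0 marks the dome-wall junction. The latitude and declination figures below are standard modern values, not from the article:

```python
import math

LAT_ROME = 41.9  # degrees north (modern value for Rome; an assumption)

def noon_altitude(lat_deg, decl_deg):
    """Solar altitude at local noon for a northern site: 90 - latitude + declination."""
    return 90.0 - lat_deg + decl_deg

def hit_height(alt_deg):
    """Height at which a noon beam through the oculus lands on the interior sphere.

    A chord from the top of a circle, descending at elevation angle h,
    meets the circle again at angular distance 2h from the top, i.e. at
    height cos(2h) in units of the radius: +1 is the oculus, 0 the
    dome-wall junction, negative values the lower wall and floor."""
    return math.cos(math.radians(2.0 * alt_deg))

for label, decl in [("winter solstice", -23.44), ("equinox", 0.0), ("summer solstice", 23.44)]:
    h = noon_altitude(LAT_ROME, decl)
    print(f"{label}: sun {h:.1f} deg high, beam lands at {hit_height(h):+.2f} R")
```

At Rome's latitude the winter beam stays well up on the dome (about +0.65 R), the summer beam drops far down the wall (about -0.80 R), and the equinox beam lands almost exactly at the dome-wall junction (about -0.11 R), consistent with Hannah's observation.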
James Evans, a historian of astronomy at the University of Puget Sound in Washington state, is intrigued: "The architect of the Pantheon would certainly have been aware of the symbolic connections between the cosmos and the empire, and between the sun and the emperor." He doesn't believe the case is proven, however, as no markingssurvive in...... In October, 2005, a truck pulled up outside the National Archeological Museum in Athens, and workers began unloading an eight-ton X-ray machine that its designer, X-Tek Systems of Great Britain, had dubbed the Bladerunner. Standing just inside the National Museum’s basement was Tony Freeth, a sixty-year-old British mathematician and filmmaker, watching as workers in white T-shirts wrestled the Range Rover-size machine through the door and up the ramp into the museum. Freeth was a member of the A...Did an Ancient Language of Universal Symbols Exist? A computer in antiquity would seem to be an anachronism, like Athena ordering takeout on her cellphone. But a century ago, pieces of a strange mechanism with bronze gears and dials were recovered from an ancient shipwreck off the coast of Greece. Historians of science concluded that this was an instrument that calculated and illustrated astronomical information, particularly phases of the Moon and planetary motions, in the second century B.C.The Antikythera Mechanism, sometimes calledthe world’s...Ancient Nuclear Weapons! it really possible that the Ancient Indians had the capacity to deploy devastating nuclear weapons against their enemies? Moreover is it really possible, as many Ufologists claim, that awesomely powerful nuclear weapons were actually given to the ancient Indian warriors by extraterrestrials, highly advanced spacemen from other planets? well passages from ancient Indian national epics certainly Appear to be evidence of such astonishing claims. 
It i...

Ancient Egyptian Alchemy

Anthony North: In the early years of the 20th century an artifact was recovered from a shipwreck off the Greek island of Antikythera. Dated to about 80BC, it was considered a mere artifact. However in 1971 research on the Antikythera Mechanism showed it to have an intricate arrangement of gears, dials and graded plates. One theory is that it was a computing device to work out the movement of the Sun and planets. If this idea is true, then the ancients had a degree of technology way above previou...

High-Tech Wars Described in Ancient Texts!

Sanskrit texts are filled with references to gods who fought battles in the sky using Vimanas equipped with weapons as deadly as any we can deploy in these more enlightened times. For example, there is a passage in the Ramayana which reads: "The Puspaka car that resembles the Sun ...

Skeleton Shows Ancient Brain Surgery

Archaeologists have unearthed the skull of a young woman in northern Greece who is believed to have undergone head surgery in the third century, Greek news media reported Wednesday. A Greek team discovered the skeleton at an ancient cemetery in Veria, with the skull including an injury that led them to conclude the surgery had been performed. "We think that there was a complex surgical intervention that only an experienced doctor could have performed," said Ioannis Graikos, the ...

An Ancient Greek 'Computer'

by Derek J. de Solla Price. From June 1959 Scientific American, p. 60-7. In 1901 divers working off the isle of Antikythera found the remains of a clocklike mechanism 2,000 years old.
The mechanism now appears to have been a device for calculating the motions of stars and planets. Among the treasures of the Greek National Archaeological Museum in Athens are the remains of the most complex scientific object that has been preserved from antiquity. Corroded and crumblin...
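Hannah's equinox observation can be checked with elementary solar geometry. The sketch below is not from the article; the latitude value and the idealized model (a spherical interior with a point-sized oculus, flat horizon, no atmospheric refraction) are my own assumptions.

```python
def noon_sun_altitude(latitude_deg, declination_deg):
    """Noon altitude of the sun above the horizon (flat-horizon model)."""
    return 90.0 - latitude_deg + declination_deg

def strike_arc_from_oculus(altitude_deg):
    """
    Arc (degrees, measured from the oculus) at which a ray through the centre
    of the oculus meets a spherical interior. A chord drawn from the top of a
    sphere at zenith angle z ends 180 - 2z = 2 * altitude degrees away.
    Beyond 90 degrees the real building is a cylinder, not a sphere, so
    values there only indicate "below the dome/wall junction".
    """
    return 2.0 * altitude_deg

ROME_LAT = 41.9  # assumed latitude of the Pantheon

for label, decl in [("winter solstice", -23.4), ("equinox", 0.0),
                    ("summer solstice", 23.4)]:
    alt = noon_sun_altitude(ROME_LAT, decl)
    arc = strike_arc_from_oculus(alt)
    print(f"{label:15s}: noon altitude {alt:5.1f} deg, "
          f"strike {arc:5.1f} deg from oculus")
# The equinox case gives ~96 deg: just past the 90-deg springing line of the
# dome, i.e. at the junction between roof and wall, as the article describes;
# winter light (< 90 deg) stays on the dome, summer light falls low on the
# walls and floor.
```

The 2 × altitude rule follows from the inscribed-angle theorem applied to a chord drawn from the top of the sphere.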
<urn:uuid:69f995fa-ab7e-4d06-a9b0-eb63a9ae01b7>
CC-MAIN-2013-20
http://www.hotspotsz.com/Is_the_Roman_Pantheon_a_colossal_sundial_(Article-19096).html
2013-05-20T22:24:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955002
1,314
Principal Proposed Uses

The oak tree, respected for millennia as a source of strong, dense wood, also has a considerable tradition of medicinal use. The astringent, tannin-rich bark of the oak tree has been recommended for such diverse conditions as internal hemorrhage, diarrhea, dysentery, cancer, and pneumonia.

What is Oak Bark Used for Today?

Currently, Germany’s Commission E recommends oak bark internally for treatment of diarrhea and topically for sore throat, mouth sores, hemorrhoids, and eczema. However, there is no meaningful scientific evidence that oak bark offers any therapeutic benefit in these or any other conditions. Only double-blind, placebo-controlled studies can prove a treatment effective, and none have been performed on oak bark. (For more information on why such studies are essential, see Why Does This Database Rely on Double-blind Studies?)

Although comprehensive safety testing has not been performed, use of oak bark is not generally associated with any side effects other than the occasional digestive upset or allergic reaction. Safety in young children, pregnant or nursing women, or people with severe liver or kidney disease has not been established.

- Reviewer: EBSCO CAM Review Board
- Review Date: 07/2012
- Update Date: 07/25/2012
<urn:uuid:955dfd4c-cc63-4ddd-91a6-52559499604e>
CC-MAIN-2013-20
http://medtropolis.com/your-health/?/111708/
2013-06-19T18:52:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923753
270
The earlier a family knows their child has an autism spectrum disorder (ASD), the sooner they can get help for the child. There is a new clinical report from the American Academy of Pediatrics that will help pediatricians recognize ASD in their patients earlier and help them assess it. There is also a second report that discusses educational strategies and associated therapies for ASD. The world of ASD is a very complicated one. Autism can affect behavior, speech and language development, sensorimotor skills, visual and auditory processing, and learning skills. This article will discuss the areas that might be affected in a child with an ASD, the effects of those impairments, and how they can impact school bus transportation and evacuations.

Signs of an autism spectrum disorder

Our brain regulates how we interpret all the information that comes to us from the environment. The information can include stimuli such as touch, movement, sight, sound and gravity. Once the brain receives this information, it interprets the information as being too much (hypersensitive), too little (hyposensitive) or just the right amount. Many students with an ASD will have trouble with tactile stimulation. A light touch can feel to some students like a truck is on their shoulders. Other students can have a heavy object on them and they will feel no additional weight. The child will react very differently to a light touch than to a heavy touch. Language is another area that is affected by an ASD. The child may not even babble at 1 year of age. Some children will start to develop language and then stop abruptly. Although their hearing has been checked, they still do not respond to their name. The children also will not point to things to direct people’s attention, and they will avoid eye contact and cuddling. Children with autism may also have vestibular dysfunction. These children may be over-responsive to movement.
We all know these children — the ones afraid of playground equipment, who avoid taking risks and may appear “wimpy.” These students may also be fearful of going up or down stairs or walking on uneven surfaces. The students who are under-responsive to movement are in constant motion; they could spin for hours and never appear to be dizzy. They are “thrill seekers,” always running, jumping, hopping, etc., instead of walking. Other signs of ASD include oral input dysfunction, auditory dysfunction, olfactory dysfunction, visual input dysfunction, and social, emotional, play and self-regulation dysfunction.
<urn:uuid:ca211e60-6ae6-47bc-b7b7-25f346ef7f5e>
CC-MAIN-2013-20
http://www.schoolbusfleet.com/Channel/Special-Needs/Articles/2012/08/Transporting-students-with-autism/Page/1.aspx
2013-05-24T15:49:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960548
512
tambour

tambour, embroidery worked on material that has been stretched taut on a tambour frame, which consists of two wooden hoops, one slightly larger than the other, fitting close together. The embroidery is worked with a needle or a tambour hook. When an expanse of material has to be covered that is too large for a fixed square frame, it is possible to do the work in stages on a tambour frame, stretching different portions of the material at a time. The frame is portable and suitable for carrying work around. Early examples of tambour work come from China, India, Persia, and Turkey. It was popular in Europe and the United States in the 18th and 19th centuries.
<urn:uuid:5fda411f-db4b-44ea-ab2a-c0180c15ec23>
CC-MAIN-2013-20
http://www.britannica.com/EBchecked/topic/581885/tambour
2013-05-24T02:41:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959105
177
Fossils are the remains or traces of living things preserved in the rocks. In Northern Ireland, fossils have been found in sedimentary rocks ranging in age from 500 million years old to a few thousand years old. The Ulster Museum collections contain tens of thousands of fossils, ranging from small fragments of ancient shells to a near complete dinosaur skeleton. Some of the more spectacular fossils are on display. Many other, less exciting, specimens form a valuable scientific resource for geologists.

Pictured above: Xiphactinus, the Bulldog Fish, the largest fossil fish on display anywhere in Britain or Ireland.

Pictured right: Ammonites from Jurassic rocks of the Antrim coast.

Pictured left: Crinoids, stalked relatives of starfish. These are from Triassic rocks in China.
<urn:uuid:fb0e113a-6cc9-4ac4-970a-785bb17c2ef6>
CC-MAIN-2013-20
http://nmni.com/um/Collections/Natural-Sciences---Geology/Fossils
2013-05-20T21:57:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.910921
196
Paragoge is the addition of a sound to the end of a word. Often, this is due to nativization, and it is a logical counterpart of epenthesis, particularly vocalic epenthesis.

Some languages have undergone paragoge as a sound change, so that modern forms are longer than the historical forms they are derived from. Italian sono 'I am', from Latin SUM, is an example.

Paragoge in loanwords

Languages that do not allow words to end in consonants, or do not allow certain consonants to occur word-finally, will add a dummy vowel to the end of loanwords from other languages that include an illegal final consonant. For example, English rack becomes Finnish räkki and Japanese rakku. Similarly, Arabic ‘araq in Modern Greek
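The loanword pattern described above is easy to mechanize. The toy sketch below is my own illustration, not from the article: it only appends the epenthetic vowel, ignoring the consonant remappings and gemination that real Japanese adaptation also applies (as in rack → rakku).

```python
VOWELS = set("aeiou")
LEGAL_FINALS = VOWELS | {"n"}  # Japanese words may end in a vowel or in 'n'

def add_paragogic_vowel(word):
    """
    Toy sketch of vocalic paragoge in loanword nativization: if a word ends
    in a consonant the target language forbids word-finally, append a dummy
    vowel. 'u' is the usual Japanese epenthetic vowel; 'o' follows t/d.
    """
    if word and word[-1] not in LEGAL_FINALS:
        return word + ("o" if word[-1] in "td" else "u")
    return word

print(add_paragogic_vowel("gas"))   # gasu
print(add_paragogic_vowel("tent"))  # tento
print(add_paragogic_vowel("pen"))   # pen (final 'n' is already legal)
```

The three sample outputs happen to match the attested Japanese loanwords gasu, tento and pen, though the function is far from a full nativization model.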
<urn:uuid:2b1d39d6-5eff-487a-a472-b312726bda00>
CC-MAIN-2013-20
http://www.reference.com/browse/Paragoge
2013-05-21T10:42:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.875731
199
Keratosis pilaris is a common genetic skin condition which appears as rough, bumpy, sometimes red skin most often found on upper arms, thighs, and cheeks. These bumps are due to a buildup of keratin, a hard protective protein found in skin, hair, and nails. This built-up keratin forms a plug which blocks the opening of hair follicles. When these plugs, or bumps, become irritated, redness results. Keratosis pilaris usually presents in childhood, often at its worst during puberty, but can continue into adulthood. The condition usually improves in warmer weather, while dry weather seems to exacerbate symptoms. There is no known cure for keratosis pilaris, though steps can be taken to minimize bumps and redness.
<urn:uuid:aa21b69c-110f-43c1-b53b-cc50cf5e46ea>
CC-MAIN-2013-20
http://www.lavera.com/blog/index/list/tag/pilaris/?cat=181&clearance=278&page_id=tag%2Fpilaris%2F
2013-05-22T00:44:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966715
160
As part of the Ark Encounter Project at Answers in Genesis, a research effort has been initiated to provide information necessary for the best possible reconstruction of the animal kinds preserved on the Ark. This initial paper outlines the basic rationale that will be used and the underlying justification for it. The biblical text provides strong evidence for each kind being a reproductive unit. Based on this and biological evidence that reproduction requires significant compatibility, hybridization will be considered the most valuable evidence for inclusion within an “Ark kind.” The cognitum and statistical baraminology are discussed as they are relevant to this venture. Where hybrid data is lacking, we have chosen to use a cognitum method. Using current taxonomic placement as a guide, pictures and/or personal experience with the animals will be used to find obvious groupings. If the grouping seems excessively high taxonomically, the family level may be used as the default level to avoid underestimating the number of kinds on the Ark. Results from statistical baraminology studies and other information will be used where appropriate. It is hoped the result will be a valuable resource for future studies in baraminology. Keywords: Ark, Flood, created kinds, baraminology, cognitum Long before the Ark Encounter project was announced by Answers in Genesis, it was realized that a considerable amount of research would be necessary to allow for a high quality exhibit. How many kinds were there on the Ark? What might they have looked like? How can we even begin to answer these questions? This paper is the first in a series that will attempt to address these questions. At a time when the world was filled with violence, God chose to destroy all land-dwelling, air-breathing life on it by a global Flood (Genesis 7:21–23). 
Noah, a righteous man, was instructed to build an Ark that would protect him, his family, and pairs of animals and birds from this coming destruction (Genesis 6:9–22). God told Noah: Of the birds after their kind, of animals after their kind, and of every creeping thing of the earth after its kind, two of every kind1 will come to you to keep them alive. (Genesis 6:20) This designation of flying and terrestrial creatures preserved on the Ark “after their kind” is repeated in Genesis 7:14 and is reminiscent of how these creatures were created (Genesis 1:21, 24–25).2 Since the Bible does not mention specifically how many kinds there were, nor give us specific physical descriptions of them, any attempt to discern what they were will necessarily include a significant amount of conjecture. Nevertheless, there is information that can be used to make educated guesses about these animals preserved on the Ark. While it is important to recognize that these are informed guesses, and therefore not to be accepted with the level of certainty of Scripture, they can help us gain a general appreciation for what things may have been like on the Ark. A comprehensive understanding of biology should necessarily include the origin of life. While the secular world ignores the Bible and speculates naturalistic origins for life, a Christian should recognize that reliable eyewitnesses are invaluable for establishing historical facts (Numbers 35:30, Deuteronomy 17:6; 19:15, Isaiah 8:2; 43:9–12; 44:6–8, Jeremiah 6:16–18, 32:12, Matthew 18:16, Acts 2:32, 2 Corinthians 13:1, 1 Timothy 5:19). Clearly, in the first few chapters of Genesis, we have a historical account of the creation of the world and life on it from the most reliable eyewitness, God himself. So this is where we will begin. During Creation Week God created plants (Day 3), sea creatures and flying creatures (Day 5), and land animals (Day 6) each “according to its kind” (Genesis 1:11–13, 20–25). 
This phrase is used of all animal life except humans, who were created in the image of God (Genesis 1:26, 27). So it is important to understand what is being conveyed. The underlying Hebrew word for kind here is מין, mîn. It, along with the Hebrew word for create (ברא, bārā’), was used to coin the word baramin, a creationist term for created kind. While the word baramin has strong taxonomic connotations to most creationists, Hebrew scholars have warned against assuming that מין is a technical term (Turner 2009; Williams 1997). Both Williams (1997) and Turner (2009) suggest that מין can be understood to refer to subdivisions within a larger group much like the meaning of the English word kind. So caution needs to be exercised in this area. Plants are described as being created according to their kinds with seed (זרע, zera’), implying they were to reproduce (Genesis 1:11–12). Aquatic and flying creatures, after being created according to their kinds, were blessed and told to reproduce to fill the earth (Genesis 1:22). A similar blessing was pronounced on humans (Genesis 1:28) along with a command for them to rule the earth. Since life was created “according to their kinds” and told to reproduce, it is often assumed that life reproduces according to its kind. While Scripture does not emphatically state that life reproduces only after its own kind, there is a very strong inference given both the biblical text and observations made in the world today. The account of the Flood seems to reinforce this understanding. God told Noah: And of every living thing of all flesh you shall bring two of every sort into the ark, to keep them alive with you; they shall be male and female. Of the birds after their kind, of animals after their kind, and of every creeping thing of the earth after its kind, two of every kind will come to you to keep them alive. (Genesis 6:19–20). Notice verse 19 mentions two of all living things, a male and a female, are to come on the Ark.
The obvious purpose is for reproduction (cf. Genesis 7:2, 3, and 9). This is adjacent to a verse mentioning the preservation of animals according to their kinds, again specifying two of each. A very similar situation is found in the next chapter. they [Noah and family] and every beast after its kind, all cattle after their kind, every creeping thing that creeps on the earth after its kind, and every bird after its kind, every bird of every sort. And they went into the ark to Noah, two by two, of all flesh in which is the breath of life. So those that entered, male and female of all flesh, went in as God had commanded him; and the LORD shut him in. (Genesis 7:14–16) These pairs of animals were brought on the Ark for the purpose of preserving their seed (Genesis 7:3 זרע, zera’). Word-for-word translations render זרע as offspring (for example New American Standard Bible, English Standard Version, New English Translation), clarifying things since the modern English word “seed” has a narrower semantic range than the Hebrew word. The New International Version, which is more of a dynamic equivalence translation, renders the encompassing phrase: “to keep their various kinds alive throughout the earth.” Thus, where מין is used in the Creation or Flood accounts, it seems to be referring to distinct groups of animals and strongly implying that reproduction occurs within these groups (Table 1). Based on the concept that living things reproduce according to their kinds, hybrids between different species of animals have long been considered conclusive evidence that both species belong to the same created kind (baramin). For example, crosses between dogs and wolves, wolves and coyotes, and coyotes and jackals are interpreted to mean that all these species of animals belong to a single baramin. Reproduction is a complex process and sometimes barriers arise that make it more difficult. This can be seen in attempts to form hybrids between different species.
When cattle are crossed with bison, live hybrids are formed. However, the males are sterile. The females can generally reproduce and can be crossed with either parent species. For this reason, cattle and bison are considered to belong to the same baramin, but are not the same species because they cannot consistently produce fertile offspring. Crosses between horses and donkeys produce a mule, which is rarely fertile in either sex. More serious barriers to reproduction can be apparent within a baramin. Sheep and goats were identified as belonging to the same baramin because several live hybrids have been produced between them. However, a live hybrid is not the most common result when these species mate with each other. In one study, when rams were mated with does (female goats) fertilization was fairly common, although not as high as matings within the respective species. The hybrid embryos died within 5 to 10 weeks. When the cross was made the other direction, bucks (male goats) mated with ewes, fertilization did not occur (Kelk et al. 1997). So how much development is necessary for hybridization to be considered successful? Is fertilization enough? The answer to the latter question is clearly no, as human sperm can fertilize hamster eggs in the laboratory.3 Even the first few divisions are under maternal control. For this reason Scherer (1993) stated that embryogenesis must continue until there is coordinated expression of both maternal and paternal morphogenetic genes. Lightner (2007) suggested that the advanced blastocyst stage may be sufficient. This was partially based on a study by Patil and Totey (2003) which showed failure of embryos around the 8 cell stage was associated with a lack of mRNA transcripts. Thus it seemed significant coordinated expression was necessary to advance past this stage, through the morula stage, to a late blastocyst. This brings us to some limitations of hybridization in determining kinds. 
While well-documented hybrids can confirm that two species belong to the same baramin, lack of hybridization data is inconclusive. There are several reasons why hybrid data may be lacking between individuals within the same baramin. First, it is relatively difficult to gather good hybrid data in the wild, and often the opportunity for hybridization is lacking when animals live in different parts of the world. As a result, hybrid data is more complete for animals that are domesticated or held in captivity (for example, in zoos). Second, as described earlier with sheep and goats, even for animals that have produced hybrids, many attempts may be unsuccessful. This may be the result of genetic changes (mutations) that have accumulated in one or both species since the Fall that cause a loss of ability to interbreed. Finally, if an animal is only known from the fossil record there is no opportunity for it to hybridize with animals alive today.

Table 1.

| Subject | Passage | Reproduction Mentioned—at Creation | Reproduction Mentioned—after the Flood |
| Plants | Genesis 1:11–12 | Yes: seed | |
| Sea creatures and flying creatures | Genesis 1:20–22 | Yes: be fruitful and multiply | |
| Humans | Genesis 1:26–28; 9:1 | Yes: be fruitful and multiply | Yes: be fruitful and multiply |
| Land animals (on Ark) | Genesis 6:19–20; 7:14–16; 8:17 | | Yes: a kind is represented on the Ark by a male and its mate; be fruitful and multiply |

A cognitum is a group of organisms that are naturally grouped together through human cognitive senses. A cognitum can be above the level of the baramin (for example, mammals), below the level of the baramin (for example, foxes), or at the level of the baramin. This perception-based concept was proposed by Sanders and Wise (2003) as a separate tool in baraminology. Though not originally proposed as a means to identify baramins, the basic concept could prove useful for our purposes here. Use of this method assumes that created kinds have retained their distinctiveness even as they have diversified. Human cognitive senses influence where animals are placed taxonomically.
To some degree a cognitum approach is used in baraminologic studies, though not always consciously acknowledged. Lightner (2006) used it when proposing that all members of the genera Ovis and Capra belonged to the same baramin. Hybrid data had connected most members across these genera, and the members who had no hybrid data naturally fit in the group based on their physical appearance. They also happened to fit in the same group taxonomically. The cognitum has played a role in determining what is accepted as true hybridization. As discussed previously, fertilization is clearly insufficient evidence of hybridization. When Lightner (2007) found documented evidence that domestic cattle (Bos taurus) had been crossed in vitro with water buffalo (Bubalus bubalis) and a few fertilized eggs survived to the well-developed blastocyst stage, it seemed sufficient coordinated expression of genes had been demonstrated. The fact that water buffalo naturally group with cattle based on anatomy, physiology, and the husbandry practices used with them was an important part of why it was accepted. If a blastocyst could be formed between domestic cattle and a skunk, this criterion would no doubt be reconsidered. From previous work in baraminology, researchers have suggested that the level of the baramin tends to fall at or near the taxonomic level of family (Wood 2006). There is often a strong cognitum at the family level. This suggests that the family is a good initial approximation of the level of the baramin. In some instances a strong cognitum may be above or below this level. For example, pigs (Suidae) and peccaries (Tayassuidae) form a strong cognitum even though they are in separate families. From looking at these animals or pictures of them, they are easily grouped together by human cognitive senses. 
Their division into separate families is based on more subtle details, and most people would not naturally split them into these groupings unless they were familiar with the taxonomy of these animals. So in this case the baramin appears to be at the level of the superfamily (Suoidea). Although developed separately, statistical baraminology has similarities to the cognitum in some ways. It takes a collection of characteristics (character traits) and using several statistical tests attempts to discern significant holistic continuity (similarity) or discontinuity between species (Wood et al. 2003). Like the cognitum, it assumes that baramins retain their distinctiveness today. However, in contrast to the cognitum, it assumes that the baramin is the level where statistical tests will consistently point when a set of character traits are analyzed. Following the introduction of statistical baraminology the definition of the term holobaramin was changed. Essentially, a holobaramin can be thought of as all members of a specific created kind; in other words, the whole baramin. Now, a holobaramin is defined as a group of organisms that share continuity, but are bounded by discontinuity. Continuity is defined as significant, holistic similarity between two different organisms (Wood et al. 2003). A precise definition of holistic and significant has been somewhat elusive, so Wood (2007) has pointed out the importance of drawing tentative conclusions based on these statistical tests. Previously, a holobaramin was only identified after considerable detailed study involving multiple lines of evidence. This meant the term carried a definitive connotation. A group was not called a holobaramin until a substantial amount of supporting evidence was amassed. This is not the case when a holobaramin is identified based on statistical tests from a single dataset, even though a dataset may include many character traits. 
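As a concrete illustration of the character-based testing described above, here is a minimal sketch in the spirit of a baraminic distance correlation analysis. The taxon names and 0/1 character states are invented toy data, and the bootstrapped significance testing used in published analyses is omitted.

```python
from itertools import combinations

def baraminic_distance(a, b):
    """Fraction of mutually scored characters on which two taxa differ."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return sum(x != y for x, y in pairs) / len(pairs)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def distance_correlations(matrix):
    """
    Correlate each pair of taxa's baraminic-distance profiles. A strongly
    positive correlation is read as continuity, a strongly negative one as
    discontinuity between the two taxa.
    """
    taxa = list(matrix)
    profile = {t: [baraminic_distance(matrix[t], matrix[u]) for u in taxa]
               for t in taxa}
    return {(t, u): pearson(profile[t], profile[u])
            for t, u in combinations(taxa, 2)}

# Toy matrix: four taxa scored for six binary characters.
matrix = {"A": [0, 0, 0, 0, 0, 0], "B": [0, 0, 0, 0, 0, 1],
          "C": [1, 1, 1, 1, 1, 0], "D": [1, 1, 1, 1, 1, 1]}
for pair, r in distance_correlations(matrix).items():
    print(pair, round(r, 2))
# A-B and C-D correlate strongly positively (continuity), while every cross
# pair correlates strongly negatively (discontinuity): the test suggests two
# groups, {A, B} and {C, D}.
```

With real morphological matrices the same machinery applies; the contentious part, as the text notes, is deciding which characters go into the matrix and how to judge significance.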
This dramatic shift in the level of certainty associated with the term holobaramin is often not appreciated by creationists who don’t use these statistical methods. There are some clear advantages of statistical baraminology. A suitable matrix of characters is often available together with published cladistic analyses of taxonomic groups. Since someone else has done the work of compiling the data, the baraminologist can enter it into a spreadsheet and run it through the software package available at the Center for Origins Research (CORE) website.4 These advantages have allowed for numerous datasets to be analyzed, adding useful information to the field of baraminology (Wood 2008). Another potential advantage is that statistical baraminology may help identify the placement of animals known only from the fossil record. These methods have not been without their critics. The strongest reactions seem to be when the conclusions are at odds with how other creationists feel creatures naturally group. A dramatic example was when an analysis of craniodental characters placed Australopithecus sediba in the human holobaramin (Wood 2010). This led to numerous articles expressing disagreement about these specific results and the techniques in general (Line 2010; Lubenow 2010; Menton, Habermehl, and DeWitt 2010; Wilson 2010). Important points in the discussion included the significance of specific anatomic features, the inclusion of inference in certain character states of the dataset, and the possibility that statistical analysis may not consistently point to the level of the holobaramin. At the opposite end of the spectrum, there are times where the statistical tests have shown discontinuity between animals connected by hybrid data (Brophy and Kramer 2007; Wood 2008, pp. 57–60). In one case (McConnachie and Brophy 2008) a dataset of 102 mostly osteologic characters was used to evaluate landfowl. Three of the putative holobaramins were connected by hybrid data.
Hybrid data is considered more conclusive than the statistical tests because it requires considerable continuity at the genetic, metabolic, developmental, and immunologic levels. This discrepancy between the hybrid data and statistical results is a concern because datasets involving fossils are generally limited to osteologic characters. The majority of holobaramins identified by statistical tests are not controversial, but they still need confirmation from further study (Wood 2008, p. 230). Given the limitations of other methods, it seems that statistical baraminology is an important tool for creationists to use and to continue to develop. As Wood (2007, p. 9) has stated: “[a]s long as baraminologists recognize the flaws and remember to draw tentative conclusions, baraminology research with these methods will give a good starting place for future generations of creationists.” As we embark on the Ark Kinds research, we have outlined basic principles that will be used to determine probable Ark kinds. We unanimously agree that hybrid data, for both biblical and biological reasons, is the best way to definitively demonstrate that creatures are descendants of the same Ark kind. Due to the high value placed on such hybrid data, our research will include a literature search to identify documented hybrids. Emphasis will be placed on hybrids across higher taxonomic levels (for example, between genera, like the coyote, Canis latrans, and the red fox, Vulpes vulpes) since they are more informative than crosses within a genus. When a hybrid is found that crosses two taxa, all species in both taxa will be considered to be from the same created kind (for example, all Canis species and all Vulpes species). Unfortunately, hybrid data is lacking for many creatures. In these cases, a cognitum approach will be used. More specifically, using the context of where taxonomists place the creatures, morphology will be examined to find where they most naturally group together.
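The grouping rule just described — any hybrid record merges its two taxa, transitively — is exactly a union-find (disjoint-set) computation. A minimal sketch follows; the hybrid pairs listed are only the genera used as examples in this article, not a real dataset.

```python
class UnionFind:
    """Disjoint-set structure for merging taxa linked by hybrid records."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:                        # walk to the root,
            self.parent[x] = self.parent[self.parent[x]]  # halving the path
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Illustrative inter-genus hybrid records drawn from this article's examples.
hybrids = [("Ovis", "Capra"), ("Bos", "Bubalus"), ("Bos", "Bison"),
           ("Canis", "Vulpes")]

uf = UnionFind()
for a, b in hybrids:
    uf.union(a, b)

# Group genera into putative kinds by their union-find root.
kinds = {}
for genus in sorted({g for pair in hybrids for g in pair}):
    kinds.setdefault(uf.find(genus), []).append(genus)
print(sorted(kinds.values()))
# → [['Bison', 'Bos', 'Bubalus'], ['Canis', 'Vulpes'], ['Capra', 'Ovis']]
```

Note how the two cattle records chain Bison, Bos, and Bubalus into one group even though no direct Bison–Bubalus hybrid is listed, mirroring the transitive reasoning used with sheep and goats earlier in the paper.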
In addition to drawing on personal experience and training, published works describing and illustrating various taxa will be used. A valuable resource for this will be the University of Michigan Museum of Zoology’s Animal Diversity Web website (ADW 2008), which contains numerous photographs covering many animal species. When the cognitum is unclear or seems excessively high taxonomically, the family level may be used as the default level for the kind. This should help guard against seriously underestimating the number of kinds represented on the Ark. One reason the cognitum is the preferred method after hybridization is that Adam would have recognized created kinds by sight. Presumably the same would have been true in Noah’s time. Humans are designed to be able to visually detect patterns and have a natural tendency to group according to those patterns. Therefore, when the cognitum is used, emphasis will be placed on traits that affect the overall appearance of the animal over those that represent more obscure anatomical or physiological details. Other data, including results of statistical baraminology analyses as well as protein and DNA sequence data, will be evaluated where it seems appropriate. However, none of these will be given as high a priority as hybrid data or the cognitum. This may seem counterintuitive to some. Sequence data is considered hard, objective data. The cognitum seems so subjective. Certainly, it would seem that it is more scientific to use hard data than the subjective cognitum. Besides, these other methods use such interesting mathematical analyses that they must be better, right? In reality, the really good math masks the fact that conclusions based on these other data have a highly subjective component. Statistical baraminology analyses are based on certain selected character traits, and character selection is not an unbiased process. 
Brophy (2008), in explaining why hybrid data and statistical baraminology results were in conflict, proposed that the purpose for which the dataset was gathered could bias the results. In the case of landfowl (Galliformes), the dataset was intended to divide the birds up for taxonomic purposes. This seems a reasonable explanation for why the statistical tests based on that dataset divided birds that were connected by hybrid data.

To some, using sequence data may seem more objective. Certainly identifying sequences is objective. It is the interpretation that is not. How does one distinguish between sequences that are the same because two creatures are from the same kind and sequences that are the same because God created them the same in two different kinds? Why do differences exist? Are they simply variability God placed in one created kind at Creation? Are they differences that have arisen within a kind since Creation? Are they created differences between different kinds? Are they differences that have arisen between two different created kinds that originally had identical or very similar sequences in a particular region? The bottom line is that we don't have enough understanding of genetics to understand the significance of most sequence data.

Once the modern descendants of the Ark kinds are determined, we need to use this information to infer what the actual pair on the Ark may have looked like. One thing that is evident when looking at animals in the world today is that many have specialized to live in specific niches. There are hares that live in the arctic, others that live in the desert, and others in intermediate climates. There are cattle (for example, the yak) that can withstand high altitudes and cold climates; there are other cattle (for example, zebu) that are adapted to live in hot, arid climates. We also see specialization in domestic animals, where some cattle have been bred for milk production and others have been bred for beef production.
Given these trends, the Ark kinds would be relatively unspecialized animals that fit nicely into the cognitum of the created kind.

Just as building the Ark was a monumental task, so our task to determine the Ark kinds is monumental as well. We clearly recognize that in many ways God has prepared us for this task. Yet we are also keenly aware that to do this task well we need power, strength, wisdom, insight and perseverance that only our awesome, sovereign God can give us. For this, your prayers would be much appreciated. When we are done, we will not have all the answers regarding created kinds, but we hope to have made a substantial contribution to creation research that can serve as a strong resource for future research on created kinds. Beyond this we pray that this information will be used to help people understand that God's Word is trustworthy. May it be used to play a role in many coming to know Christ and living fully for His honor and glory. Soli Deo Gloria!

ADW. 2008. Retrieved from http://animaldiversity.ummz.umich.edu/site/index.html
Brophy, T. R. 2008. A baraminological analysis of the landfowl (Aves: Galliformes). Lecture presented at the 7th BSG conference, Pittsburgh, Pennsylvania.
Brophy, T. R., and P. A. Kramer. 2007. Preliminary results from a baraminological analysis of the mole salamanders (Caudata: Ambystomatidae). Occasional papers of the BSG 10:10–11.
Kelk, D. A., C. J. Gartley, B. C. Buckrell, and W. A. King. 1997. The interbreeding of sheep and goats. The Canadian Veterinary Journal 38, no. 4:235–237.
Lightner, J. K. 2006. Identification of species within the sheep-goat kind (Tsoan monobaramin). Journal of Creation 20, no. 3:61–65.
Lightner, J. K. 2007. Identification of species within the cattle monobaramin (kind). Journal of Creation 21, no. 1:119–122.
Line, P. 2010. Gautengensis vs sediba: A battle for supremacy amongst 'apeman' contenders, but neither descended from Adam. Retrieved from http://creation.com/homo-gautengensis.
Lubenow, M. 2010. The Problem with Australopithecus sediba. Answers in Depth 5:138–140. Retrieved from http://www.answersingenesis.org/articles/aid/v5/n1/problem-with-australopithecus-sediba.
Menton, D. N., A. Habermehl, and D. A. DeWitt. 2010. Baraminological Analysis Places Homo habilis, Homo rudolfensis, and Australopithecus sediba in the Human Holobaramin: Discussion. Answers Research Journal 3:153–158.
McConnachie, M. and T. R. Brophy. 2008. A baraminological analysis of the landfowl (Aves: Galliformes). Occasional papers of the BSG 11:9–10.
Patil, S. and S. Totey. 2003. Developmental failure of hybrid embryos generated by in vitro fertilization of water buffalo (Bubalus bubalis) oocyte with bovine spermatozoa. Molecular Reproduction and Development 64, no. 3:360–368.
Sanders, R. W. and K. P. Wise. 2003. The cognitum: A perception-dependent concept needed in baraminology. In Proceedings of the Fifth International Conference on Creationism, ed. R. L. Ivey, pp. 445–456. Pittsburgh, Pennsylvania: Creation Science Fellowship.
Scherer, S. 1993. Typen des lebens, pp. 11–30. Berlin, Germany: Pascal-Verlag.
Turner, K. J. 2009. The Kind-ness of God: A Theological Reflection of Mîn, "kind." In CORE Issues in Creation no. 5, ed. T. C. Wood and P. A. Garner, pp. 31–64. Eugene, Oregon: Wipf and Stock.
Williams, P. J. 1997. What does min mean? CEN Tech J. 11, no. 3:344–352.
Wilson, G. 2010. Classic Multidimensional Scaling isn't the Sine Qua Non of Baraminology. Answers in Depth 5:173–175. Retrieved from http://www.answersingenesis.org/articles/aid/v5/n1/cmds-baraminology.
Wood, T. C., K. P. Wise, R. Sanders, and N. Doran. 2003. A refined baramin concept. Occasional papers of the BSG 3:1–14.
Wood, T. C. 2006. The current status of baraminology. Creation Research Society Quarterly 43, no. 3:149–158.
Wood, T. C. 2006. Statistical baraminology workbook. Unpublished workbook presented at workshop during the BSG conference June 13, 2007, Liberty University, Lynchburg, Virginia.
Wood, T. C. 2008. Animal and plant baramins. CORE Issues in Creation no. 3. Eugene, Oregon: Wipf and Stock.
Wood, T. C. 2010. Baraminological Analysis Places Homo habilis, Homo rudolfensis, and Australopithecus sediba in the Human Holobaramin. Answers Research Journal 3:71–90.
<urn:uuid:54871de2-4553-4a24-b0f5-197dbde9c128>
CC-MAIN-2013-20
http://www.answersingenesis.org/articles/arj/v4/n1/ark-kinds-flood-baraminology-cognitum
2013-05-19T02:30:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943379
6,119
Dice Play for Toddlers and Preschoolers

Kieran has been fascinated with dice since before he was two years old. He loves the combination of the sounds of clinking dice, the dots on them, and the colors (we have red, green, and white dice). I didn't give him dice as an educational activity, but just for fun, here are a few ways that dice play can benefit toddlers and preschoolers.

Dice Play Can Help Develop Motor Skills

Before toddlers can count (but after they are past the danger of mouthing dice), dice play can be a good way to practice developing fine motor skills. It takes some concentration to pick up, hold, and toss several small dice. These fine motor skills will continue to develop for the next several years, and handling/manipulating small objects like dice is an important part of that development.

Fine Motor Skills Dice Game: Moving Marbles

Items needed: one die (dice), marbles, an ice cube tray, a melon or cookie dough scooper

Advanced play: Take turns throwing the die. Count the dots. Using the scooper, scoop as many marbles out of the ice cube tray as are indicated by the die.

Easier play: For toddlers who can't count yet, let them watch/help you count the dots on the die. You can put the marbles in a bowl if the tray compartments make it too hard to scoop the marbles. For toddlers who are not coordinated enough to scoop yet, let them use their fingers to move the marbles.

Gross Motor Skills Dice Game: Animal Planet

Items needed: one die, paper and pen, (optional) 6 different animal stickers or pictures to affix to the paper

Play: Assign a number from one to six to six different animals. I used an old children's magazine and cut out pictures of a lion, a kangaroo, a fish, a cat, an elephant, and a monkey. Roll the die. The number on the die coordinates to an animal – everyone gets up and acts like that animal for a minute. Next person rolls, repeat.
Optional: instead of using a small die and paper, get a small box and tape the animal pictures to each side of the box. Act like whatever animal is up when they roll. The larger box and the "acting" will both help develop gross motor skills.

Dice Play Can Develop Math Skills

You can count the dots on the side of the dice even before your toddler is counting. And once he learns to count, you can expand your dice play. "Games with dice, or with tokens that progress around a path, help children learn number recognition and make the concrete connection to the meaning of numerical concepts. Children responding to 'Move two spaces' or 'Go back three' are actually learning simple addition and subtraction."

Easy Math Skills Game: Mountain

Items needed: one die, paper and pen (use this to draw a mountain, then write the numbers 1 2 3 4 5 6 up to the top of the mountain – 6 being at the very top – then 5 4 3 2 1 back down the mountain on the other side; or you can just print this printable)

Play: The object of the game is to get up and back down your mountain (each player has her own mountain). Taking turns, roll until you get a one – you can cross off the one or use a game piece to mark the number one. Then roll for a two, three, etc. The site I found this game on talks about "winners" and who can climb their mountain "first"; you can easily do this game without declaring a "winner" by just continuing on until everyone has had enough.

Dice Play Can Develop Language Skills

Using several dice, you can help your little one start learning patterns and sequences. Sequencing and ordering are building blocks of language development (think of how our brains have to arrange letters to form words).

Language Skills Game: Simple Yahtzee

Items needed: five dice, paper and pen (we usually use a large MagnaDoodle when we play – this is Kieran's current favorite game)

Play: Here is a link to the traditional form of Yahtzee. You can modify these rules to your child's level.
We play to get 2 pair, 3 of a kind, 4 of a kind, full house, small and large straights (we used to just look for any kind of straight; now Kieran can differentiate), and Yahtzee. We do not try to get the most number of 1s, 2s, etc. We do the traditional three rolls. We do not play to "win," we just play and see what we can get on each roll. So, for example, in traditional Yahtzee if you don't get anything on your turn, you'd have to mark a "zero" next to one of your boxes. We don't do that. We often have multiple full houses and 3 of a kinds. It doesn't matter – we just play until Kieran gets tired of rolling.

What kind of dice games do you play with your toddler or preschooler?
<urn:uuid:88727987-3ee1-4745-9634-67521fe5e15d>
CC-MAIN-2013-20
http://codenamemama.com/2010/07/15/dice-play/
2013-06-19T14:25:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.910162
1,465
Many years ago we had an Assay Commission. The Commission's only purpose was to examine coinage annually and attest to the fineness of the bullion coins. Although the Assay Commission's job was mainly ceremonial, it was quite prestigious to be named to this commission.

Occasionally the U.S. Assay Commission did find some underweight coins or coins that were below the standard alloy. In one of these rare instances when the Commission found something wrong during one of its annual meetings, the 1881 body discovered that about 3,000 Carson City Mint dollars had been struck in 1880 from an alloy that assayed .892 fine instead of the .900 fineness required. It is unclear from Mint records as to whether these dollars were either recovered or melted down.

The original tolerance for silver dollars was 1.5 grains. The Commission of 1895 reported finding one dollar dated 1884 that was 1.51 grains below the normal weight. Since that time the specifications were changed to allow six grains.

The most unusual job of the 1971 Assay Commission was that for the first time in its history, there were no silver coins to examine. The Commission met on Feb 10, 1971, which was way too early in the year to receive any of the 40 percent silver Eisenhower dollars that were struck that year. The Commission was merely there to check the 1970 coins, which had no silver content.

One of the perks for the Commission members was the Commission medals. As far back as 1969 collectors were offering an average of $145 per medal. Several of the Assay Commission medals depicted coins. For example, the 1965 medal had the 1964 Kennedy half dollar. The 1881 medal depicts Liberty and Justice before a coin press. The first instance of a president appearing on the medals while in office was on the 1880 Assay Commission medal. Jimmy Carter abolished the Assay Commission in 1977. When the government resumed issuing gold and silver coins, there was a movement to revive the Assay Commission.
It was pointed out that there was never any legal requirement for the Commission's existence in the first place. The last of the 1,500 Assay Commission medals were sold in 1978, while others were given to various government officials, and the remainder were melted. The first U.S. Assay Commission medals were issued in 1860. The first Commission to examine and weigh the coins was established in 1823. This ultimately became the Assay Commission, appointed each year to conduct tests and review the production records of the Mint.
<urn:uuid:2e14c742-02fc-4814-9081-793cb63b3f0e>
CC-MAIN-2013-20
http://www.bellaonline.com/ArticlesP/art174562.asp
2013-05-19T18:34:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.983502
509
Computer-aided design (CAD), also known as computer-aided design and drafting (CADD), involves the entire spectrum of drawing with the aid of a computer, from straight lines to custom animation. In practice, CAD refers to software for the design of engineering and architectural solutions, complete with two- and three-dimensional modeling capabilities. Computer-aided manufacturing (CAM) involves the use of computers to aid in any manufacturing process, including flexible manufacturing and robotics. Often outputs from CAD systems serve as inputs to CAM systems. When these two systems work in conjunction, the result is called CADCAM, and becomes part of a firm's computer-integrated manufacturing (CIM) process.

CADCAM systems are intended to assist in many, if not all, of the steps of a typical product life cycle. The product life cycle involves a design phase and an implementation phase. The design phase includes identifying the design needs and specifications; performing a feasibility study, design documentation, evaluation, analysis, and optimization; and completing the design itself. The implementation phase includes process planning, production planning, quality control, packaging, marketing, and shipping. CAD systems can help with most of the design phase processes, while CAM systems can help with most of the implementation processes. The contributions of CAD and CAM systems are described below.

CAD systems are a specialized form of graphics software, and thus must adhere to basic principles of graphics programming. All graphics programs work in the context of a graphics device (e.g., a window on a monitor, a printer, or a plotter). Graphics images are drawn in relation to a 2-D or 3-D coordinate system, of which there are several types. A device coordinate system is 2-D and maps images directly to the points (pixels) of the hardware device.
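To make the coordinate-system idea concrete, the sketch below maps a point from a virtual (window) coordinate system onto a pixel-based device coordinate system. The function name and parameters are illustrative, not drawn from any particular CAD package:

```python
def window_to_viewport(x, y, window, viewport):
    """Map a point from a virtual coordinate system to device (pixel)
    coordinates.

    `window` is (xmin, ymin, xmax, ymax) in virtual units; `viewport` is
    (width, height) in pixels. Device y grows downward, so the y axis is
    flipped during the mapping.
    """
    wx0, wy0, wx1, wy1 = window
    vw, vh = viewport
    u = (x - wx0) / (wx1 - wx0)   # normalize to the range 0..1
    v = (y - wy0) / (wy1 - wy0)
    return (round(u * (vw - 1)), round((1 - v) * (vh - 1)))

# A point at the window's center lands at the viewport's center.
px = window_to_viewport(0.0, 0.0, (-10, -10, 10, 10), (800, 600))  # -> (400, 300)
```

Because the mapping is device-independent, the same model can be drawn on a monitor window, a printer, or a plotter simply by supplying a different viewport.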
In order to facilitate device-independent graphics, a virtual device coordinate system abstracts the 2-D points into a logical framework. Of course, the devices being designed are generally 3-D objects, which also require a world coordinate system for representing the space in which the objects reside, and a model coordinate system for representing each of the objects in that space. CAD software includes algorithms for projecting the 3-D models onto the 2-D device coordinate systems and vice versa. CAD systems include several primitive drawing functions, including lines, polygons, circles and arcs, rectangles, and other simple shapes. From these primitives, 3-D composites can be constructed, and include cubes, pyramids, cones, wedges, cylinders, and spheres. These shapes can be drawn in any color, and filled with solid colors or other patterns (called hatching). In addition, basic shapes can be altered by filleting (rounding) or chamfering (line segmentation). Based on the manipulation of basic shapes, designers construct models of objects. A skeletal wire form model is a 3-D representation that shows all edges and features as lines. A more realistic-looking model is called a solid model, which is a 3-D model of the object being designed as a unitary whole showing no hidden features. The solid model represents a closed volume. It includes surface information and data determining if the closed volume contains other objects or features. Solid modeling involves functions for creating 3-D shapes, combining shapes (via union, intersection, and difference operations), sweeping (translational and rotational) for converting simple shapes into more complex ones, skinning (for creation of surface textures), and various boundary creation functions. Solid modeling also includes parameterization, in which the CAD system maintains a set of relationships between the components of an object so that changes can be propagated to following constructions. 
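The union, intersection, and difference operations on solids can be illustrated with signed distance functions, one common way (though by no means the only one) of representing closed volumes. This is a simplified sketch, not a production solid-modeling kernel:

```python
def sphere(cx, cy, cz, r):
    """Signed distance to a sphere: negative inside, positive outside."""
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

# Constructive solid geometry on signed distance functions:
def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# Two overlapping unit spheres; carve the second out of the first.
s1 = sphere(0, 0, 0, 1.0)
s2 = sphere(1, 0, 0, 1.0)
solid = difference(s1, s2)

inside = solid(-0.5, 0, 0) < 0   # in s1 but outside s2, so inside the result
```

Querying the sign of the combined function at any point answers the "does the closed volume contain this point?" question mentioned above.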
Common shapes are constructed into features (e.g., slots, holes, pockets), which can then be included in a solid model of an object. Feature representation helps the user define parts. It also simplifies CAD software design because features are easier to parameterize than explicit interactions. Objects built from features are called parts. Since a product being designed is composed of several parts, many CAD systems include a useful assembly model, in which the parts are referenced and their geometric and functional relationships are stored. CAD models can be manipulated and viewed in a wide variety of contexts. They can be viewed from any angle and perspective desired, broken apart or sliced, and even put through simulation tests to analyze for strengths and defects of design. Parts can be moved within their coordinate systems via rotation operations, which provide different perspectives of a part, and translation, which allows the part to move to different locations in the view space. In addition, CAD systems provide valuable dimensioning functionality, which assigns size values based on the designer's drawing. The movement of these images is a form of animation. Often, CAD systems include virtual reality technology, which produces animated images that simulate a real-world interaction with the object being designed. For example, if the object is a building, the virtual reality system may allow you to visualize the scene as if you were walking around the inside and the outside of the building, enabling you to dynamically view the building from a multitude of perspectives. In order to produce realistic effects, the system must depict the expected effects of light reflecting on the surface as it moves through the user's view space. This process is called rendering. Rendering technology includes facilities for shading, reflection, and ray tracing. 
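The rotation and translation operations described above amount to simple matrix arithmetic applied to a part's model coordinates. A minimal sketch (function names are illustrative):

```python
import math

def rotate_z(points, angle_deg):
    """Rotate model-coordinate points about the z axis
    (counterclockwise in a right-handed system)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def translate(points, dx, dy, dz):
    """Move a part to a different location in the view space."""
    return [(x + dx, y + dy, z + dz) for x, y, z in points]

# Rotate a corner point of a part by 90 degrees, then move the part.
moved = translate(rotate_z([(1.0, 0.0, 0.0)], 90), 5, 0, 0)
# moved is [(5.0, 1.0, 0.0)] up to floating-point round-off
```

Real CAD systems compose many such transforms (model, assembly, world, view) into a single matrix before drawing, but the arithmetic per point is the same.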
This technique, which is also used in sophisticated video games, provides a realistic image of the object and often helps users make decisions prior to investing money in building construction. Some virtual reality interfaces involve more than just visual stimuli. In fact, they allow the designer to be completely immersed in the virtual environment, experiencing kinesthetic interaction with the designed device. Some CAD systems go beyond assisting in parts design and actually include functionality for testing a product against stresses in the environment. Using a technique called finite element method (FEM), these systems determine stress, deformation, heat transfer, magnetic field distribution, fluid flow, and other continuous field problems. Finite element analysis is not concerned with all design details, so instead of the complete solid model a mesh is used. Mesh generation involves computing a set of simple elements giving a good approximation of the designed part. A good meshing must result in an analytical model of sufficient precision for the FEM computation, but with a minimum number of elements in order to avoid unnecessary complexity. In addition to FEM, some CAD systems provide a variety of optimization techniques, including simulated annealing and genetic algorithms (borrowed from the field of artificial intelligence). These methods help to improve the shape, thickness, and other parameters of a designed object while satisfying user-defined constraints (e.g., allowable stress levels or cost limitations). When a designer uses CAD to develop a product design, this data is stored into a CAD database. CAD systems allow for a design process in which objects are composed of sub-objects, which are composed of smaller components, and so on. Thus CAD databases tend to be object-oriented. 
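As an illustration of how simulated annealing can tune a design parameter, the toy problem below chooses a wall thickness that minimizes material use while penalizing designs whose stress exceeds an allowable limit. The cost function and all its constants are invented for the example:

```python
import math
import random

def anneal(cost, x0, step=0.1, t0=1.0, cooling=0.95, iters=500, seed=1):
    """Minimal simulated annealing: occasionally accept worse designs so
    the search can escape local optima, accepting them less often as the
    'temperature' falls."""
    rng = random.Random(seed)
    x, best, t = x0, x0, t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Hypothetical design problem: pick a wall thickness (mm) minimizing
# material cost, with a penalty when stress exceeds an allowable limit.
def cost(thickness):
    if thickness <= 0:
        return float("inf")
    material = thickness                       # cost grows with thickness
    stress = 50.0 / thickness                  # stress shrinks with thickness
    penalty = max(0.0, stress - 25.0) * 10.0   # allowable stress: 25 units
    return material + penalty

best = anneal(cost, x0=1.0)   # converges near thickness 2.0
```

Here the constraint 50/t <= 25 forces t >= 2, so the optimum thickness is 2.0 mm; the annealer finds it without ever computing that algebra.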
Since CAD designs may need to be used in CAM systems, or shared with other CAD designers using a variety of software packages, most CAD packages ensure that their databases conform to one of the standard CAD data formats. One such standard, developed by the American National Standards Institute (ANSI), is called Initial Graphics Exchange Specification (IGES). Another data format is DXF, which is used by the popular AutoCAD software and is becoming a de facto industry standard. The capability to convert from one file format to another is called data exchange, and is a common feature of many CAD software packages. Modern CAD systems offer a number of advantages to designers and companies. For example, they enable users to save time, money, and other resources by automatically generating standard components of a design, allowing the reuse of previously designed components, and facilitating design modification. Such systems also provide for the verification of designs against specifications, the simulation and testing of designs, and the output of designs and engineering documentation directly to manufacturing facilities. While some designers complain that the limitations of CAD systems sometimes serve to curb their creativity, there is no doubt that they have become an indispensable tool in electrical, mechanical, and architectural design. The manufacturing process includes process planning, production planning (involving tool procurement, materials ordering, and numerical control programming), production, quality control, packaging, marketing, and shipping. CAM systems assist in all but the last two steps of this process. In CAM systems, the computer interfaces directly or indirectly with the plant's production resources. Process planning is a manufacturing function that establishes which processes and parameters are to be used, as well as the machines performing these processes. 
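The flavor of DXF-style data exchange can be seen in a toy reader for LINE entities. ASCII DXF files alternate group-code lines with value lines; group code 0 starts a new entity, and codes 10/20 and 11/21 carry a LINE's start and end x/y. Real files contain many sections and hundreds of codes this sketch ignores:

```python
def read_dxf_lines(text):
    """Extract LINE entities from a simplified ASCII DXF fragment."""
    rows = [r.strip() for r in text.strip().splitlines()]
    pairs = list(zip(rows[0::2], rows[1::2]))   # (group code, value) pairs
    lines, current = [], None
    for code, value in pairs:
        if code == "0":                 # a new entity begins
            if current:
                lines.append(current)
            current = {} if value == "LINE" else None
        elif current is not None and code in ("10", "20", "11", "21"):
            current[code] = float(value)
    if current:
        lines.append(current)
    return [((d["10"], d["20"]), (d["11"], d["21"])) for d in lines]

sample = """0
LINE
10
0.0
20
0.0
11
3.5
21
2.0
0
EOF"""
segments = read_dxf_lines(sample)   # [((0.0, 0.0), (3.5, 2.0))]
```

A converter between formats (IGES, DXF, and others) is essentially a pair of such readers and writers agreeing on a common in-memory model.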
This often involves preparing detailed work instructions to machines for assembling or manufacturing parts. Computer-aided process planning (CAPP) systems help to automate the planning process by developing, based on the family classification of the part being produced, a sequence of operations required for producing this part (sometimes called a routing), together with text descriptions of the work to be done at each step in the sequence. Sometimes these process plans are constructed based on data from the CAD databases. Process planning is a difficult scheduling problem. For a complex manufacturing procedure, there could be a huge number of possible permutations of tasks in a process requiring the use of sophisticated optimization methods to obtain the best process plan. Techniques such as genetic algorithms and heuristic search (based on artificial intelligence) are often employed to solve this problem. The most common CAM application is numerical control (NC), in which programmed instructions control machine tools that grind, cut, mill, punch, or bend raw stock into finished products. Often the NC inputs specifications from a CAD database, together with additional information from the machine tool operator. A typical NC machine tool includes a machine control unit (MCU) and the machine tool itself. The MCU includes a data processing unit (DPU), which reads and decodes instructions from a part program, and a control loop unit (CLU), which converts the instructions into control signals and operates the drive mechanisms of the machine tool. The part program is a set of statements that contain geometric information about the part and motion information about how the cutting tool should move with respect to the workpiece. Cutting speed, feed rate, and other information are also specified to meet the required part tolerances. 
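A part program of the kind described above might be generated like this. The sketch emits common RS-274 (G-code) words, where G0 is a rapid move and G1 a controlled feed move, but dialects differ between machine control units, so treat the details as illustrative rather than as a production post-processor:

```python
def drill_program(holes, safe_z=5.0, depth=-3.0, feed=100):
    """Emit a minimal G-code-style part program that drills one hole
    at each (x, y) position."""
    lines = ["G21 ; millimeter units", "G90 ; absolute coordinates"]
    for x, y in holes:
        lines.append(f"G0 Z{safe_z} ; retract to a safe height")
        lines.append(f"G0 X{x} Y{y} ; rapid to hole position")
        lines.append(f"G1 Z{depth} F{feed} ; feed down to drill")
    lines.append(f"G0 Z{safe_z} ; final retract")
    lines.append("M2 ; end of program")
    return "\n".join(lines)

program = drill_program([(10, 10), (30, 10)])
```

In a CADCAM workflow the hole positions would come straight from the CAD database rather than being typed in, which is exactly the automatic generation of part programs mentioned above.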
Part programming is an entire technical discipline in itself, requiring a sophisticated programming language and coordinate system reference points. Sometimes parts programs can be generated automatically from CAD databases, where the geometric and functional specifications of the CAD design automatically translate into the parts program instructions. Numerical control systems are evolving into a more sophisticated technology called rapid prototyping and manufacturing (RP&M). This technology involves three steps: forming cross sections of the objects to be manufactured, laying cross sections layer by layer, and combining the layers. This is a tool-less approach to manufacturing made possible by the availability of solid modeling CAD systems. RP&M is often used for evaluating designs, verifying functional specifications, and reverse engineering. Of course, machine control systems are often used in conjunction with robotics technology, making use of artificial intelligence and computer controlled humanoid physical capabilities (e.g., dexterity, movement, and vision). These "steel-collar workers" increase productivity and reduce costs by replacing human workers in repetitive, mundane, and hazardous environments. CAM systems often include components for automating the quality control function. This involves evaluating product and process specifications, testing incoming materials and outgoing products, and testing the production process in progress. Quality control systems often measure the products that are coming off the assembly line to ensure that they are meeting the tolerance specifications established in the CAD databases. They produce exception reports for the assembly line managers when products are not meeting specifications. 
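The tolerance-checking side of quality control can be sketched as a comparison of measured dimensions against nominal values drawn from the design database. All part numbers, dimension names, and tolerances below are invented for illustration:

```python
def inspect(measurements, specs):
    """Compare measured dimensions against nominal values and tolerances;
    return an exception report of out-of-spec measurements."""
    exceptions = []
    for part_id, dims in measurements.items():
        for name, measured in dims.items():
            nominal, tol = specs[name]
            if abs(measured - nominal) > tol:
                exceptions.append((part_id, name, measured, nominal, tol))
    return exceptions

# Hypothetical tolerances from the design database (nominal, +/- tolerance).
specs = {"bore_diameter": (25.00, 0.05), "length": (120.0, 0.5)}

measured = {
    "part-001": {"bore_diameter": 25.02, "length": 120.2},
    "part-002": {"bore_diameter": 25.09, "length": 119.8},
}
report = inspect(measured, specs)   # part-002's bore is out of tolerance
```

The resulting exception report is what an assembly line manager would see when products coming off the line fail to meet the CAD-specified tolerances.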
In summary, CAM systems increase manufacturing efficiency by simplifying and automating production processes, improve the utilization of production facilities, reduce investment in production inventories, and ultimately improve customer service by drastically reducing out-of-stock situations.

PUTTING IT ALL TOGETHER: COMPUTER INTEGRATED MANUFACTURING

In a CADCAM system, a part is designed on the computer (via CAD) then transmitted directly to the computer-driven machine tools that manufacture the part via CAM. Within this process, there will be many other computerized steps along the way. The entire realm of design, material handling, manufacturing, and packaging is often referred to as computer-integrated manufacturing (CIM). CIM includes all aspects of CAD and CAM, as well as inventory management. To keep costs down, companies have a strong motivation to minimize stock volumes in their warehouses. Just-in-time (JIT) inventory policies are becoming the norm. To facilitate this, CIM includes material requirements planning (MRP) as part of its overall configuration. MRP systems help to plan the types and quantities of materials that will be needed for the manufacturing process. The merger of MRP with CAM's production scheduling and shop floor control is called manufacturing resource planning (MRPII). Thus, the merger of MRP with CADCAM systems integrates the production and the inventory control functions of an organization.

Today's industries cannot survive unless they can introduce new products with high quality, low cost, and short lead time. CADCAM systems apply computing technology to make these requirements a reality, and promise to exert a major influence on design, engineering, and manufacturing processes for the foreseeable future.
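The MRP "netting" calculation at the heart of the CIM discussion can be sketched as follows; the bill of materials and all quantities are invented for illustration:

```python
def net_requirements(gross, on_hand, scheduled_receipts=0, safety_stock=0):
    """Basic MRP netting: units that must be ordered or produced to meet
    gross requirements, given stock already on hand or on order."""
    available = on_hand + scheduled_receipts - safety_stock
    return max(0, gross - available)

def explode(product_demand, bom, inventory):
    """Explode one level of a bill of materials: each finished unit
    requires bom[part] units of each component part."""
    return {
        part: net_requirements(product_demand * qty_per, inventory.get(part, 0))
        for part, qty_per in bom.items()
    }

bom = {"frame": 1, "wheel": 4, "bolt": 16}   # components per finished cart
orders = explode(product_demand=50, bom=bom,
                 inventory={"wheel": 60, "bolt": 1000})
# orders -> {"frame": 50, "wheel": 140, "bolt": 0}
```

Running this netting against JIT-sized inventories is how an MRP system keeps warehouse stock minimal while still covering the production schedule.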
<urn:uuid:1d27ca7c-9001-47ae-9d36-148b08985b16>
CC-MAIN-2013-20
http://www.enotes.com/computer-aided-design-manufacturing-reference/computer-aided-design-manufacturing
2013-05-22T15:21:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930945
2,937
- The act of laying the first coat of plaster on brickwork or stonework; also, the coat of plaster thus laid on.
- Plastering on the outside of a wall, often lime-washed. (Kenyon, John R. Medieval Fortifications, 211)
- The first coat in plastering, or a general term for most finishes applied to external wall surfaces; may be smooth or rough cast.
- Rendering (or a rendering coat) is a layer of plaster or cement which is often applied to the outside surface of a wall as a protection against the elements.
- A wall covering: internally usually plaster and externally usually cement, but sometimes pebbledash, stucco, or Tyrolean textured finish.
- A coat of stucco applied to a masonry wall.
- Covering a wall with a quarter- to half-inch-thick layer of sand and cement, both externally and internally, before plastering.
- Vertical covering of a wall, either plaster (internally) or cement (externally), sometimes with pebbledash, stucco, or Tyrolean textured finish.
- Coarse material applied to a wall to cover the brick or stonework.
Auburn University Digital Library
Little Known Asian Animals With a Promising Economic Future (source ref: b18ase.htm)
Part II: Wild Bovine Species

The gaur (Bos gaurus) would seem to be an ideal meat-producing animal. It is a large bovine with massive muscular development, and it has already been domesticated (see mithan, chapter 3). Gaurs, which are threatened with extinction, deserve much greater attention. Two subspecies of gaurs are recognized:
· Bos gaurus gaurus (India, Nepal)
· Bos gaurus laosiensis (Burma, Thailand, Laos, Vietnam, Kampuchea, West Malaysia)

Appearance and Size

With its huge head, massive body, and sturdy limbs, the gaur is the embodiment of vigor and strength. It is among the biggest of bovines. Bulls weigh 600-940 kg and stand 1.6-1.9 m tall at the shoulder, but a record bull of 2.2 m and 1,225 kg has been recorded. Cows are only about 10 cm shorter in height, but they are more lightly built and weigh 150 kg less. On their shoulders gaur bulls have a striking muscular ridge that slopes down to the middle of the back, where it ends in an abrupt dip. The horns are crescent shaped, creamy yellow, and taper to a sharp point, which is usually tipped in black. Newborns are a light golden yellow, but soon darken to coffee or reddish brown, the color of young bulls and cows. Old bulls are jet black, their bodies almost hairless. Gaurs have a light-colored forehead and yellowish or white stockings. Their eyes are brown but in certain lights, because of reflection, they appear blue. Gaurs excrete an oily, aromatic sweat, unique to this species and to the mithan. It gives the animals a strong bovine smell and may be an adaptation for keeping away insects. Once common throughout South and Southeast Asia, gaurs now survive only in scattered remnant herds of up to 30 animals in the hill forests of India, Nepal, Burma, Thailand, Laos, Kampuchea, Vietnam, and the Malay Peninsula.
Historically, the largest concentrations have coexisted with farmers in areas of shifting cultivation. The animals adjust to disturbed land, and they also adapt to man's presence if not unduly harassed. For example, gaurs will feed in agricultural fields, along roadsides, and near occupied houses. Herds in national parks feed peacefully while tourists stand by. Gaurs in zoos also become quite tame and manageable. Populations not protected in parks and reserves are in immediate danger of extinction. Even in the remotest hill forests gaurs are harassed by hunting, exposed to the diseases of domestic cattle, and driven from their natural habitat by human invasion. Most herds outside of parks or wildlife reserves are threatened by agricultural development, hydroelectric dam projects, human settlement, or extensive logging. In India, large populations still exist in the larger sanctuaries such as Mudumalai and Kanha Park. In Thailand diseases carried by domestic animals, poaching, and habitat destruction have reduced total gaur numbers to fewer than 500. In Malaysia, the population is estimated to be only 400 animals.

Habitat and Environment

Gaurs typically live on gentle, undulating terrain with natural mineral licks. They inhabit gaps in the forest, such as abandoned clearings, where they can find grasses and shrubs. In the northern portions of their range, they inhabit deciduous and semideciduous hill and mountain forests with light brush and many grassy clearings. In the lowlands they live in open bamboo jungles, grassy plains near forests, or dense forests broken by glades or open meadows. (In the forest they probably depend to some extent on the slash and burn agriculture of hill people.) The animals appear to be adapting to increased human presence. They make use of such man-modified habitats as logged forests and fringe areas of agricultural estates that abound with grasses and early second-growth vegetation. Gaurs are combination grazers and browsers.
They feed on the grasses of forest openings as well as on the young leaves, fruits, twigs, and bark of shrubs and juvenile trees. In one study in Malaysia, grasses comprised 41 percent of their diet, forbs 23 percent, and woody browse 36 percent. Gaurs develop large muscular bodies and maintain excellent condition on relatively low-quality feed. In the Malaysian study, crude protein content of grass species varied from 7.0 to 7.6 percent and phosphorus content varied from 0.11 to 0.17 percent; yet calves reached weights of 300 kg or more during their first year. Birth and survival rates of up to 100 percent have been reported for wild gaur populations. Calves are born at any time of the year. The gestation period is 270 days, a little shorter than for banteng or domestic cattle and longer than for yak or kouprey. Captive gaurs calve first at 2.5 years of age. The gaur interbreeds with the mithan, and both have a diploid chromosome number of 2n = 58. By nature gaurs are shy and timid. As with most wild bovines, their hearing and eyesight seem comparatively poor. Their defense lies in their massive size and acute sense of smell. When a herd with juveniles is threatened by a predator, the adults form a protective circle around the young. Although individuals retreat from danger if they can, they have a unique form of threat: they approach their opponents broadside instead of head on, displaying the huge muscular body and dorsal ridge. In common with other wild bovids, gaurs habitually visit mineral licks, which appear to be necessary to their habitat and influence the herd's movements. Unlike water buffaloes, gaurs do not wallow. They take cover in the forest during the heat of the day and may feed at night and in the early morning during hot weather. In populated areas such as near agricultural estates, they may feed only at night to avoid people.
In the past, gaurs associated in loose herds of up to 400 animals, but today groups of only 5 to 12 animals are normally found. The herds, which are of more stable composition than those of banteng, are separated by sex for most of the year; however, during the rut stronger bulls form a series of "tending bonds" with estrous cows. Gaurs are thought to be interfertile with domestic cattle. If so, their attributes of size, massive muscular development, tolerance of heat and humidity, and resistance to diseases and parasites can contribute to beef production in the tropics. A gaur-cattle hybrid might also have immunity to some cattle diseases; if it retained the mild temperament of the domesticated parent, an extremely powerful beast of burden could be produced. The gaur is a truly majestic animal. Its habit of using grassy forest clearings and salt licks makes it a likely tourist attraction in parks and reserves. In a climate and environment where domestic cattle are susceptible to heat stress and parasite infestation, gaurs thrive and maintain body condition. Further, they are able to develop large muscular bodies and maintain excellent body condition on relatively low-quality forage by feeding on a variety of woody browse, grasses, and forbs. Retaining its wild instincts for survival, the gaur is better able to withstand predator attacks than domestic cattle. This could be an advantage when animals graze in remote areas. Adult gaurs are strong enough to defend themselves against a predator as powerful as a tiger. In addition, they are also very protective of their young. Gaurs have little immunity to some cattle diseases. In many regions of India, cattle driven into the forests to graze infect gaur herds with rinderpest, foot and mouth disease, cattle plague, and other contagious diseases. Severe losses occur. Gaurs also appear very susceptible to malignant catarrhal fever. Gaur numbers are declining throughout their range.
If this trend is not reversed, it could effectively prohibit the use of gaur for domestication or crossbreeding purposes. Gaurs are shy and excitable, making them difficult to catch, but once in captivity the animals calm down. Second generation zoo populations are easily worked and handled. Gaurs on occasion damage cultivated crops such as young rubber trees and cassava. They require sturdy and well-kept fences.

Research and Conservation Needs

In Southeast Asia there is special need to support the efforts of the governments of Malaysia and Thailand to conserve this species and to identify important gaur populations. Similar protective measures are needed in Burma, Laos, Kampuchea, and Vietnam. Research is needed to establish and manage new gaur herds in forest reserves and build up the gaur population in the world's zoos. Techniques have been developed to capture and release wild gaur safely. Fertilized gaur ova have been successfully transferred into a foster Holstein cow. The cow carried the gaur fetus to a successful delivery. This could be the forerunner of an important means of rapidly expanding captive herds by transferring gaur embryos into cattle in different parts of the world. Research is also needed on the basic physiology and production potential of gaur. Crossbreeding experiments should be started immediately to establish the degree of interfertility between the gaur and other bovine species.
The Lactate Threshold

If VO2 max is your aerobic endurance potential, then your lactate threshold plays a significant role in how much of that potential you are tapping. Lactate threshold has been defined as:

The point during exercise of increasing intensity at which blood lactate begins to accumulate above resting levels, where lactate clearance is no longer able to keep up with lactate production. (3)

During low intensity exercise, blood lactate remains at or near to resting levels. As exercise intensity increases there comes a break point where blood lactate levels rise sharply (4,5). Researchers in the past have suggested that this signifies a significant shift from predominantly aerobic metabolism to predominantly anaerobic energy production. Several terms have been used to describe this shift, and many coaches and athletes believe they refer to the same phenomenon:

Onset of blood lactate accumulation (OBLA)
Maximal lactate steady state

Although these terms are used interchangeably, they do not describe the same thing. Lactate accumulation only reflects the balance between lactate production and its clearance and suggests nothing about the availability or lack of oxygen, so the terms aerobic and anaerobic become a bit misleading. The reasons for lactate accumulation are complex and varied and not yet fully understood. For more information on this topic see the lactic acid article. At a slightly higher exercise intensity than lactate threshold, a second increase in lactate accumulation can be seen and is often referred to as the onset of blood lactate accumulation, or OBLA. OBLA generally occurs when the concentration of blood lactate reaches about 4 mmol/L (6,7). The break point that corresponds to lactate threshold can often be hard to pinpoint, so some exercise physiologists prefer using OBLA.
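As a concrete illustration of the 4 mmol/L OBLA convention, the intensity at which blood lactate crosses that concentration can be estimated from incremental-test data by linear interpolation between measured stages. The sample data below are invented for the example; this is a sketch, not a clinical protocol.

```python
# Estimate the intensity (e.g. treadmill speed in km/h) at which blood
# lactate crosses the 4 mmol/L OBLA convention, by linear interpolation
# between measured test stages. Stage data are invented.

def obla_intensity(stages, threshold=4.0):
    """stages: list of (intensity, lactate in mmol/L), increasing intensity."""
    for (x0, y0), (x1, y1) in zip(stages, stages[1:]):
        if y0 <= threshold <= y1:
            # linear interpolation between the two bracketing stages
            return x0 + (x1 - x0) * (threshold - y0) / (y1 - y0)
    return None  # threshold never crossed within the test range

ramp = [(8, 1.1), (10, 1.4), (12, 2.0), (14, 3.1), (16, 5.3)]
print(round(obla_intensity(ramp), 1))  # 14.8
```

Real lactate curves are noisy and nonlinear, which is exactly why the text notes that the break point can be hard to pinpoint; interpolation at a fixed concentration is simply the most reproducible convention.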
Maximal Lactate Steady State

Maximal lactate steady state is defined as the exercise intensity at which maximal lactate clearance is equal to maximal lactate production (8). Maximal lactate steady state is considered one of the best indicators of performance, perhaps even more reliable than lactate threshold (8,9).

Lactate Threshold as a Percentage of VO2 Max

The lactate threshold is normally expressed as a percentage of an individual's VO2 max. For example, if VO2 max occurs at 24 km/h on a treadmill test and a sharp rise in blood lactate concentration above resting levels is seen at 12 km/h, then the lactate threshold is said to be 50% VO2 max. In theory, an individual could exercise at any intensity up to their VO2 max indefinitely. However, this is not the case even amongst elite athletes. As the exercise intensity draws closer to that at VO2 max, a sharp increase in blood lactate accumulation and subsequent fatigue occurs: the lactate threshold has been broken. In world-class athletes lactate threshold typically occurs at 70-80% VO2 max. In untrained individuals it occurs much sooner, at 50-60% VO2 max (10,11). Generally, in two people with the same VO2 max, the one with a higher lactate threshold will perform better in continuous-type endurance events. Consider two athletes: although both Athlete 1 and Athlete 2 reach VO2 max at a similar running speed, Athlete 1 has a lactate threshold at 70% and Athlete 2 has a lactate threshold at 60%. Theoretically, Athlete 1 can maintain a pace of about 7.5 mph (12 km/h) compared to Athlete 2's pace of about 6.5 mph (10.5 km/h). VO2 max has been used to predict performance in endurance events such as distance running and cycling, but the lactate threshold is much more reliable. Race pace has been closely associated with lactate threshold (11). There are several non-invasive methods used to determine the lactate or anaerobic threshold. For more information see How to Determine Your Anaerobic Threshold.
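The worked comparison above can be reproduced numerically. Given the speed at which VO2 max occurs and the lactate threshold expressed as a fraction of VO2 max, the sustainable pace is simply their product; this assumes a roughly linear speed/oxygen-cost relationship, which is a simplification for illustration rather than a physiological law.

```python
# Sustainable (threshold) pace from the speed at VO2 max and the
# lactate threshold expressed as a fraction of VO2 max. Assumes a
# linear speed/oxygen-cost relationship (an illustrative simplification).

def threshold_speed(speed_at_vo2max_kmh, threshold_fraction):
    return speed_at_vo2max_kmh * threshold_fraction

# The article's example: VO2 max reached at 24 km/h, lactate threshold
# at 50% of VO2 max -> a sustainable pace of 12 km/h.
print(threshold_speed(24, 0.50))  # 12.0

# A trained athlete with the same VO2 max speed but a 70% threshold
# sustains a much faster pace despite identical aerobic capacity:
print(round(threshold_speed(24, 0.70), 1))  # 16.8
```

This is the quantitative content of the Athlete 1 versus Athlete 2 comparison: raising the threshold fraction improves race pace even when VO2 max is unchanged.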
Lactate Threshold and Training

With training, lactate threshold as a percentage of VO2 max can be increased. Even if there are no improvements in maximal oxygen uptake, increasing the relative intensity or speed at which lactate threshold occurs will improve performance. In effect, proper training can shift the lactate curve to the right. Following training, the reductions in lactate concentration at any given intensity may be due to a decrease in lactate production and an increase in lactate clearance (12). However, Donovan and Brooks (13) suggest that endurance training affects only lactate clearance rather than production. Blood lactate levels after an intense exercise bout are also lower following training. For example, immediately after a 200 m swim at a fixed pace, blood lactate may be as high as 13-14 mmol/L. Following 7 months of training these levels can decrease to under 4 mmol/L (14). Before training, a swim leading to such high levels of lactate would force the swimmer to slow down dramatically or stop after the 200 m. But following training, lactate levels of under 4 mmol/L would probably allow the swimmer to continue after 200 m, at the same pace, indefinitely. Studies have shown that training at or slightly above the lactate threshold can increase the relative intensity at which it occurs (4,13). For more information and sample training sessions for improving lactate threshold see the lactate threshold training article.

References

1) Baechle TR and Earle RW. (2000) Essentials of Strength Training and Conditioning: 2nd Edition. Champaign, IL: Human Kinetics.
2) McArdle WD, Katch FI and Katch VL. (2000) Essentials of Exercise Physiology: 2nd Edition. Philadelphia, PA: Lippincott Williams & Wilkins.
3) Wilmore JH and Costill DL. (2005) Physiology of Sport and Exercise: 3rd Edition. Champaign, IL: Human Kinetics.
4) Davis JA, Frank MH, Whipp BJ, Wasserman K. Anaerobic threshold alterations caused by endurance training in middle-aged men. J Appl Physiol.
1979 Jun;46(6):1039-46.
5) Kindermann W, Simon G, Keul J. The significance of the aerobic-anaerobic transition for the determination of work load intensities during endurance training. Eur J Appl Physiol Occup Physiol. 1979 Sep;42(1):25-34.
6) Sjodin B, Jacobs I. Onset of blood lactate accumulation and marathon running performance. Int J Sports Med. 1981 Feb;2(1):23-6.
7) Tanaka K, Matsuura Y, Kumagai S, Matsuzaka A, Hirakoba K, Asano K. Relationships of anaerobic threshold and onset of blood lactate accumulation with endurance performance. Eur J Appl Physiol Occup Physiol. 1983;52(1):51-6.
8) Beneke R. Anaerobic threshold, individual anaerobic threshold, and maximal lactate steady state in rowing. Med Sci Sports Exerc. 1995 Jun;27(6):863-
9) Foxdal P, Sjodin B, Sjodin A, Ostman B. The validity and accuracy of blood lactate measurements for prediction of maximal endurance running capacity. Dependency of analyzed blood media in combination with different designs of the exercise test. Int J Sports Med. 1994 Feb;15(2):89-95.
10) Cerretelli P, Ambrosoli G, Fumagalli M. Anaerobic recovery in man. Eur J Appl Physiol Occup Physiol. 1975 Aug 15;34(3):141-8.
11) Farrell PA, Wilmore JH, Coyle EF, Billing JE, Costill DL. Plasma lactate accumulation and distance running performance. Med Sci Sports. 1979 Winter;11(4):338-44.
12) Bergman BC, Wolfel EE, Butterfield GE, Lopaschuk GD, Casazza GA, Horning MA, Brooks GA. Active muscle and whole body lactate kinetics after endurance training in men. J Appl Physiol. 1999 Nov;87(5):1684-96.
13) Donovan CM, Brooks GA. Endurance training affects lactate clearance, not lactate production. Am J Physiol. 1983 Jan;244(1):E83-92.
14) Costill DL, Thomas R, Robergs RA, Pascoe D, Lambert C, Barr S, Fink WJ. Adaptations to swimming training: influence of training volume. Med Sci Sports Exerc.
1991 Mar;23(3):371-7.
Memory in Shakespeare's Histories: Stages of Forgetting in Early Modern England
Published December 22nd 2011 by Routledge – 208 pages
Series: Routledge Studies in Shakespeare

A distinguishing feature of Shakespeare’s later histories is the prominent role he assigns to the need to forget. This book explores the ways in which Shakespeare expanded the role of forgetting in histories from King John to Henry V, as England contended with what were perceived to be traumatic breaks in its history and in the fashioning of a sense of nationhood. For plays ostensibly designed to recover the past and make it available to the present, they devote remarkable attention to the ways in which states and individuals alike passively neglect or actively suppress the past and rewrite history. Two broad and related historical developments caused remembering and forgetting to occupy increasingly prominent and equivocal positions in Shakespeare’s history plays: an emergent nationalism and the Protestant Reformation. A growth in England’s sense of national identity, constructed largely in opposition to international Catholicism, caused historical memory to appear a threat as well as a support to the sense of unity. The Reformation caused many Elizabethans to experience a rupture between their present and their Catholic past, a condition that is reflected repeatedly in the history plays, where the desire to forget becomes implicated with traumatic loss. Both of these historical shifts resulted in considerable fluidity and uncertainty in the values attached to historical memory and forgetting. Shakespeare’s histories, in short, become increasingly equivocal about the value of their own acts of recovery and recollection.

Introduction: Be Our Ghost
1. Birth of a Nation from the Spirit of Tragedy: The Historical Sublime in Richard II
2. All Is Truancy: Rebellious Uses of the Past in 1 Henry IV
3. "Washed in Lethe": Laundering the Past in 2 Henry IV
4. Wars of Memory in Henry V
5.
Coda: The History Play as Palimpsest in King John

Jonathan Baldo is Associate Professor of English at the University of Rochester, USA.
Although consuming more than the recommended amount of fat is often associated with obesity, what many people fail to recognize is that fat is not in and of itself the cause of obesity. Fat doesn't necessarily make people fat; excess calories do. But fat is calorie-dense: one gram of fat contains 9 calories, whereas 1 gram of carbohydrate or protein contains only 4 calories. It is therefore quite easy to consume a great many calories in just a few bites when eating foods that are high in fat. Two particular types of fats have received a great deal of media coverage. The first of these is known as trans fat. When liquid oils are made into margarines or shortenings during a process known as hydrogenation, additional hydrogen atoms are forced to bond with the liquid unsaturated fats, effectively increasing their saturation levels and causing them to become more solid at room temperature. This process results in the formation of trans fats (trace amounts of trans fats also occur naturally in some foods). Until recently, trans fats were thought to be the lesser of two evils when compared to saturated fats in terms of their effect on serum cholesterol levels. The most current research, however, seems to indicate that trans fat is more detrimental than originally thought. It raises blood cholesterol levels and may be carcinogenic. However, Americans generally tend to consume much less trans fat than saturated fat, so current dietary advice places more of an emphasis on reducing saturated fat in the diet. Commercially baked goods, margarines, and foods fried in or containing shortening that is solid at room temperature are the main sources of trans fats in the American diet ... Omega-3 fatty acids have also been in the nutrition spotlight. These polyunsaturated fatty acids occur in fatty fish, dark green leafy vegetables such as spinach and broccoli, and certain nuts and oils such as walnuts and canola oil.
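The calorie arithmetic above (9 kcal per gram of fat versus 4 kcal per gram of carbohydrate or protein) can be illustrated with a quick calculation; the food values in the example are invented.

```python
# Calories contributed by each macronutrient, using the standard
# Atwater-style factors cited in the text: 9 kcal/g for fat,
# 4 kcal/g for carbohydrate and protein.
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4}

def calories(grams_by_macro):
    return sum(KCAL_PER_GRAM[m] * g for m, g in grams_by_macro.items())

# An invented snack: note how fat dominates the calorie total even
# though it is not the largest component by weight.
snack = {"fat": 20, "carbohydrate": 25, "protein": 5}
print(calories(snack))  # 300 (180 kcal from fat, 120 from carbs and protein)
```

This is the point of the "calorie-dense" remark: the 20 g of fat here supplies more calories than the 30 g of carbohydrate and protein combined.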
They have been shown to be quite effective in reducing the risk of heart disease by lowering the amount of cholesterol manufactured in the liver and reducing the likelihood of blood clot formation around deposits of arterial plaque. Omega-3 fatty acids may also slow or prevent tumor growth, stimulate the immune system, and lower blood pressure. The Professional Chef, 7th Edition by The Culinary Institute of America
(Based on Ralph Winter, The Kingdom Strikes Back: Ten Epochs of Redemptive History in Perspectives on the World Christian Movement)

Phase One: 1 – 400 Romans

Possibly Paul's work in Galatia established contacts with Gauls in the West and with other peoples in the northwest of Europe. The earliest Irish mission compounds followed a ground plan derived from the Christian centers in Egypt, not from Roman centers with their central chapel. And the earliest language of Christians in Gaul was Greek, not Latin. Thus the spread of Christianity was not only by formal, systematic expansion from a Christianized Rome, but spontaneously through natural connections, for example, of trade and extended family. By 312 there were enough Christians in the Roman Empire (in spite of extended and terrible persecutions) that it was politically feasible and wise for Constantine to reverse his own commitments and the policy of the state. He declares himself a Christian. There was a need of cohesiveness in the Empire, and Christianity alone of all the religions had no nationalism at its root. It had no geographic center. It was not racially specific. By 375 Christianity was the official religion of the Roman Empire. But there was no great push to evangelize the northern portions of Europe, even though they knew that these peoples were without the gospel.

Phase Two: 400 – 800 Barbarians

During the 100 years of peace for Christianity (310 to 410) there was little official church effort to evangelize the Barbarian nations to the north. Instead, the nominalism and ease of official Christianity did little to stem the tide of inner corruption in Rome, and the Empire gave way to decay and invasion from the Visigoths, the Ostrogoths, the Vandals, etc. But the upshot of this was that the Romans lost the Western half of the Empire while the barbarians, in the real sense, gained a Christian faith.
During the 400 years after the fall of Rome, the Benedictine Christian order established 1,000 mission compounds all over the Western Empire. Traveling evangelists like Colomban (Irish) and Boniface (German) should not necessarily be judged along with the worldly and legalistic monks of Luther's day. Toward the end of the period, Charlemagne arose as a kind of second Constantine. He espoused Christian ideals, but did not reach out in earnest missionary efforts to the frontiers of the north—the Scandinavians, the Vikings.

Phase Three: Vikings

The unevangelized peoples to the north invaded the comfortable, but non-evangelizing, Empire to the south. They were seafaring Vikings and took numerous island and coastland Christian centers. Unlike the partially evangelized Barbarians who invaded Rome, these raiders were totally unreached and destroyed churches, libraries and believers. "The Northmen cease not to slay and carry into captivity the Christian people, to destroy the churches and to burn the towns. Everywhere, there is nothing but dead bodies—clergy and laymen, nobles and common people, women and children. There is no road or place where the ground is not covered with corpses. We live in distress and anguish before this spectacle of the destruction of the Christian people." (Christopher Dawson, Religion and the Rise of Western Culture, p. 87) But once again the power of Christianity showed itself. The conquerors became the conquered. Often it was the monks sold as slaves or the Christian girls forced to be their wives and mistresses who eventually won these savages of the north. "In God's eyes, their redemption must have been more important than the harrowing tragedy of this new invasion of barbarian violence and evil which fell upon God's own people whom He loved." (Winter, p. 148) The churches and monasteries had become opulent in the second phase, and this is why the Vikings were so attracted to them.
So there was a refinement that came to the churches as the devastation spread. The faith spread back to Scandinavia. The phase came to an end with another very powerful Christian man, Innocent III, but there was no missions thrust to the peoples beyond Europe.

Phase Four: 1200 – 1600 Crusades

The friars were a new evangelistic force, but the tragedy was the repeated efforts to take the Holy Land by force—the Crusades. This was a carry-over of the Viking spirit into the church—all the crusades were led by Viking descendants. Francis of Assisi and Raymond Lull were bright exceptions to the Crusader spirit. Judgment came this time on the empire not by human invaders, but in 1346 from the Bubonic Plague, which lasted for forty years. One-third to one-half the population of Europe died, and the hardest hit were the best (120,000 Franciscans in Germany alone), but not the Crusaders themselves. Winter suggests that the reason is that judgment was the removal of the best messengers of truth. This was a greater judgment on those left behind than on the good who died (p. 152.1)! The recovery led into the Reformation and a final phase that sent the gospel around the world with the ships of trade and conquest. See p. 151 for a good summary of how the four phases of expansion were judged at the end because of sitting on their blessings and not energetically sharing them with the unreached peoples of the world.

Phase Five: 1600 – 2000 To the Ends of the Earth

See the three eras of modern missions.
“If liberty means anything at all, it means the right to tell people what they do not want to hear.”

Adopted in 1791, the First Amendment states that “Congress shall make no law abridging the freedom of speech, or of the press or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” (Pilon, p. 13) The freedom of speech guaranteed by the First Amendment is not only a constitutional protection, but also an indispensable part of democratic government and independence, which are essential values in our society. “Censorship,” according to Justice Oliver Wendell Holmes, “is an almost irresistible impulse when you know you are right” (Sunstein, p. 25). That is why the American citizen’s right to free speech should be held as the highest virtue: censorship of speech should not be allowed, and the right itself should always be respected. Freedom of speech is an essential part of democratic government, because the only way truth can emerge is through an open competition of ideas. However, there is strong support for censorship when people voice extremely offensive opinions. Should the freedom of speech be limited in this case? The answer is “No”. “If liberty means anything at all,” writes George Orwell, “it means the right to tell people what they do not want to hear.” (Cox, p. 36) If we want to enjoy freedom fully, full protection should be given to the freedom of speech; there are no compromises about it. Freedom of speech protected by the First Amendment is not just a right that can be declared or abolished. According to the “liberty theory” proposed by some legal scholars, freedom of speech is an essential part of the liberty of every person who pursues individual self-determination and self-realization (Cox, 1981). Thus, freedom of speech is also a part of a more global right to freedom of personal development and self-expression.
Another theoretical ground to support the freedom of speech is called “tolerance theory”. It holds that the ability to teach and promote tolerance is one of the most important assets of freedom of speech (Cox, 1981). From this perspective, freedom of speech itself excludes any type of intolerance, which sometimes appears in a threatening form (religious intolerance, racial intolerance). The “tolerance theory” implies self-restraint, which is the only appropriate response to any ideas, even those that we may personally dislike or hate. The “tolerance theory” provides a broader context for exercising tolerance in a conflict-ridden democratic society. In legal practice there are certain restrictions on freedom of speech imposed by the Supreme Court. They define a few categories of speech which are considered to be not fully protected by the First Amendment. These categories include defamation, advocacy of imminent illegal conduct, obscenity, and fraudulent misrepresentation (Farber, 1998). However, if the speech doesn’t fall within one of these categories, there are no grounds for the government to argue that freedom of speech should be restricted because of its harmful content. One of the common bases for partial censorship is proof that the speech causes imminent illegal action. The Supreme Court has already drawn a careful line between general abstract theories and political dissent on one hand and incitement of particular illegal acts on the other. This line is drawn by the definition of the “clear and present danger” test (Farber, 1998). The government cannot punish speech on the basis of its tendency or potential to incite illegal conduct. Before any speech is punished on the grounds of incitement, an obligatory three-part criterion must be met. First, the speech must directly incite lawless action. Second, the context of the speech must imply imminent breaking of the law, rather than a call for illegal conduct at some indefinite future time.
Finally, there must be a strong intention to produce such conduct (Farber, 1998). This “clear and present danger” test determines the level of probability of the threat posed by the speech in question. However, the actual evil that the government tries to prevent by outlawing the advocacy does not outweigh the harm of outlawing free speech. Only when the imposed danger becomes evident may the freedom of speech be questioned. But we must be aware that the price for preventing several cases of openly declared illegal conduct may be paid by restricting one of the most essential rights that constitute freedom for the entire nation. For public well-being and safety, the Supreme Court has imposed certain regulations on the freedom of speech, not because of its content but because of the time, place, and manner in which the speech is expressed (Farber, 1998). These rules do not limit the actual freedom of speech and are not upheld if there is no public need for them. Content-neutral regulation, however, raises many controversial issues. It requires a very careful distinction and may therefore sometimes be misinterpreted. There is a rising concern that such regulation may weaken people’s right to participate, especially if the government puts too many restrictions on how ideas may be voiced. Thus, by analyzing the current issues concerning the First Amendment right to free speech in the United States, I wanted to show the prospect of outlawing this right and the negative consequences that such outlawing may involve. Freedom of speech has served a crucial role for the right to dissent and for the entire principle of democracy in our society. This law was developed during the course of American history, and only after numerous struggles was it achieved.
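The three-part criterion works as a strict conjunction: speech loses protection only when every prong is satisfied, and failing any single prong leaves it protected. As a purely illustrative toy formalization (the function and its boolean framing are my own device, not anything drawn from constitutional doctrine), the test can be sketched as:

```python
def punishable_as_incitement(directly_incites: bool,
                             imminent: bool,
                             intended: bool) -> bool:
    """Toy model of the three-part incitement criterion: speech may be
    punished only if it (1) directly incites lawless action, (2) the
    lawlessness is imminent rather than at some indefinite future time,
    and (3) the speaker intends to produce that conduct."""
    # All three prongs must hold; failing any one keeps the speech protected.
    return directly_incites and imminent and intended
```

For example, advocacy of illegal conduct at some indefinite future time fails the imminence prong, so `punishable_as_incitement(True, False, True)` returns `False` and the speech remains protected.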
The evolution of this law is still in progress; however, limiting the basic right to free speech may likewise limit our freedom and democracy, and the right should therefore be respected and protected. Cox, A. Freedom of Expression. Cambridge: Harvard Univ. Press, 1981. Farber, D. The First Amendment. New York: Foundation Press, 1998. Pilon, R. (Preface). The Declaration of Independence and the Constitution of the United States. Cato Institute, 2000. Sunstein, C. Democracy and the Problem of Free Speech. New York: Free Press, 1993.
Keep Kids Safe from Bugs
Lyme disease. Rocky Mountain spotted fever. West Nile virus. Flying fiends and crawling critters can spread such diseases with a bite. Few cases put kids' lives at risk, say experts from the American Academy of Pediatrics (AAP). Still, some insects can threaten children's health, and you'd be wise to take precautions. Many products seek to prevent bug bites, but one that can be applied to skin is very effective: DEET (usually listed on labels as N,N-diethyl-m-toluamide). The AAP recommends using products with no more than 30 percent DEET on children 2 months of age and older who will be exposed to insects that might cause diseases. The AAP says that DEET seems as safe in concentrations of 30 percent as in concentrations of 10 percent. Products containing more DEET provide longer, but not better, protection. Products that contain about 10 percent DEET are effective for about two hours, the AAP says. Products that contain about 24 percent DEET protect, on average, for about five hours. Products that contain more than 30 percent DEET do not offer much added benefit and are not recommended for children. One prudent approach, the AAP suggests, would be to select the lowest concentration effective for the amount of time your children will spend outdoors. The CDC also recommends picaridin and oil of lemon eucalyptus. These repellents offer protection similar to that of low concentrations of DEET, when used in similar concentrations. As repellents, DEET, picaridin, and oil of lemon eucalyptus repel some types of ticks, but permethrin kills ticks on contact, so it may be helpful to spray permethrin on clothes when playing or working in an area with lots of ticks. Permethrin is used as a spray for clothing only, not for the skin.
Banish the bugs: For mosquitoes, use an insect repellent when needed. DEET, picaridin, and oil of lemon eucalyptus are recommended by the CDC. Read and follow the directions with care.
Don't let children apply repellents to themselves. Get rid of standing water where mosquitoes and other insects can breed. Have kids avoid insect-prone areas in the early morning and late evening. Dress children in long sleeves and long pants when appropriate. Have them wear a hat and keep long hair pulled back. Dress children in light-colored clothes. Make sure window screens are in good repair. When hiking, stay on cleared trails to avoid ticks. Check for ticks after you or your child has been outdoors. Do a thorough search for ticks, looking in particular behind the ears and along the hairline. It can take a tick up to 48 hours to pass on an infection, so the sooner a tick is found, the better your chances of avoiding illness.
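The duration figures above (roughly two hours of protection at about 10 percent DEET, five hours at about 24 percent, and a 30 percent ceiling for children) can be turned into a rough picker for the "lowest effective concentration" the AAP suggests. The sketch below is only an illustration: the linear interpolation between the two published data points is my own assumption, not an AAP formula, and reapplying or covering up remains the fallback for longer outings.

```python
def lowest_effective_deet(hours_outdoors):
    """Rough estimate of the lowest DEET concentration (percent) covering
    a planned time outdoors, interpolating between the two data points in
    the text: ~10% lasts ~2 hours, ~24% lasts ~5 hours.  Returns None when
    no concentration at or below the 30% ceiling for children would last."""
    slope = (24 - 10) / (5 - 2)      # about 4.7 percentage points per extra hour
    needed = 10 + slope * (hours_outdoors - 2)
    needed = max(needed, 10)         # treat ~10% as a practical floor
    if needed > 30:                  # AAP: no more than 30% for children
        return None                  # plan to reapply or cover up instead
    return round(needed)
```

A two-hour outing maps to about 10 percent and a five-hour outing to about 24 percent; an eight-hour day would require more than 30 percent, so the function returns `None` rather than recommend an over-strength product.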
In the 1960s there was increasing agitation in Panama to achieve greater Panamanian control over the canal, resulting in the negotiation of a new treaty (1967), which failed, however, to gain ratification by the Panamanian government. In 1977 negotiations were successful, and a new treaty was signed. It returned the Panama Canal Zone to Panama while setting up joint U.S.-Panamanian control of the canal until the end of 1999, when Panama gained full control. A separate treaty (1979) guarantees the permanent neutrality of the canal. In Oct., 2006, Panamanian voters approved expanding the canal by adding a third series of locks paralleling the existing locks; the new locks, whose construction was inaugurated in 2007, will be larger, enabling wider, longer vessels with deeper drafts to transit the Isthmus. The plan will utilize the work begun but abandoned on a third series of locks prior to World War II. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Readers Question: … So, basically we have the Fed using its money to liquidate the funds of other people, and those other people are lending, or a portion of them are lending, the funds that they now have because of the Fed. Wouldn’t it make more sense for the Fed, instead of purchasing bonds so that those people that held the bonds may become debtors, to rather just lend the money directly to firms at a zero percent interest rate?
Central banks are responsible for:
- Maintaining monetary stability (low inflation, positive economic growth)
- Financial stability (e.g. acting as lender of last resort should commercial banks be short of funds)
Usually, central banks have been able to manage their objectives with relatively minimal activity. For example, the Fed managed the US economy mainly through changing the base interest rate. However, the great recession of 2008 created a particularly deep slump in which conventional monetary policy failed to overcome the recession. Therefore, to promote economic activity, central banks pursued quantitative easing. This involved creating money to buy bonds from commercial banks. It was hoped this would:
- Increase the liquidity of commercial banks
- Reduce bond yields, encouraging banks to lend and firms to borrow
Quantitative easing had some impact (see: Impact of quantitative easing). The recession would probably have been worse without it. But a significant part of the extra money was kept by banks to improve their balance sheets. Therefore, the impact on economic activity was limited.
Should Central Banks Lend Directly?
Generally, central banks try to avoid moving into the field of directly lending to firms. The problem is that central banks are supposed to have a narrow remit. If they started lending to firms, it might be difficult to know how, when, and to which firms to lend. For a central bank to start lending directly would require a change of constitution and give it even more political influence.
There is also a fear that central banks will lose their credibility should they start acting as ordinary banks as well. Given their political importance and influence, there would be little appetite for making a central bank a key lender in the economy. For example, on the issue of direct lending by the Bank of England, Mervyn King stated: “It is not sensible for us to get engaged in making judgments about which sectors of the economy to extend credit to, judgments which are inherently fiscal or political in nature.” (Inflation Report) However, in exceptional circumstances, others have argued there is a need for the government or central bank to exceed its usual role and directly encourage lending, because commercial banks are not doing their job. Japan, which suffered a lost decade of stagnant growth, did see its central bank step outside its traditional role and lend directly. The central bank (of Japan) will provide loans for no less than a year to “support strengthening the foundation for economic growth” in areas including research and development, setting up new businesses, energy and the environment, healthcare, childcare, housing and disaster prevention. The move is the first of its kind for the central bank, although it has provided guarantees for loans to small and medium enterprises in the past. The BoJ said the lending programme, which was announced at the end of April, would have a maximum size of ¥3 trillion ($32.8 billion), with no more than ¥1 trillion to be disbursed per quarter. “It seems to be subsidised direct lending to the enterprise sector, rather than a monetary measure, as there is no indication that this is going to be adding to the monetary base – the total size of it is less than 1% of GDP.” (Central Banking.com) The scale is relatively small, but it is an example of a central bank trying to make a commitment to growth – hoping this will change expectations and improve growth prospects.
Another option is for a central bank to fund a government intermediary bank to improve lending to small and medium-sized businesses. For example, Adam Posen has criticised the poor infrastructure of UK banks for lending to business. If commercial banks fail to do the job, there is scope for improved private-sector or government lending initiatives.
Goodpasture’s syndrome is a rare condition characterized by rapid destruction of your kidneys and hemorrhaging of your lungs. Although several diseases can display these symptoms, the name Goodpasture’s syndrome is usually reserved for the autoimmune disease produced when your immune system attacks cells bearing the Goodpasture antigen (a type II hypersensitivity reaction). These cells are found in your kidneys and lungs. This attack by your immune system causes damage to these organs. An autoimmune disease is one in which, for some unknown reason, your immune system attacks your own body cells and tissues. When your immune system is working properly, it creates antibodies to fight off germs. In Goodpasture’s syndrome, your immune system makes antibodies that attack your lungs and kidneys. Goodpasture’s syndrome is named after the American pathologist Dr. Ernest Goodpasture, who described the condition in 1919; this is thought to be the first report of the disorder’s existence. Goodpasture’s syndrome is known by several other names: Goodpasture’s disease, anti-glomerular basement membrane antibody disease, rapidly progressive glomerulonephritis with pulmonary hemorrhage, pulmonary renal syndrome, and glomerulonephritis-pulmonary hemorrhage. There are several ways in which you may be affected by Goodpasture’s syndrome. It can cause a burning sensation when urinating, or coughing up blood. However, the first indications you have may be vague: difficulty breathing, paleness, nausea, or fatigue. These effects are usually followed by kidney involvement, typically protein and small amounts of blood in your urine. Other indications that you may have are:
- Dark-colored urine
- Foamy urine
- Decrease in the amount of urine
- Chest pain
You or a loved one may suffer with Goodpasture’s syndrome. This may be what keeps you or your loved one from being able to work.
Goodpasture’s syndrome and/or conditions associated with or resulting from this disorder may be the reason you or your loved one is disabled. If this is the case, you may need help. You may need financial assistance. Who can you turn to for help? Where will the financial assistance that you need come from? Have you or your loved one thought about applying for Social Security disability benefits or disability benefits from the Social Security Administration because of the disability caused by Goodpasture’s syndrome and/or conditions associated with or resulting from this disorder? Have you already done this and been denied? You or your loved one may be planning to appeal the denial by the Social Security Administration. If you decide to do this, here is something for you to think about. You are going to need a smart disability lawyer like the one you will find at disabilitycasereview.com to assist you in this process. This is true because people who are represented by a skilled disability attorney are approved more often than those people without a lawyer. Do not hesitate. Contact the wise disability attorney at disabilitycasereview.com, today.
Yosemite National Park Announces the Public Comment Period for the Invasive Plant Management Plan Environmental Assessment Invasive plants are one of the greatest threats to the integrity of National Park Service lands. Non-native plants invade an estimated 4,600 acres of federal land in the United States every day, and already infest millions of acres in the national parks. Fortunately, Yosemite National Park is at the early stages of invasion, with less than 1% of the landmass within the Park contaminated with invasive plants. Unfortunately, 177 non-native plant taxa have already established within Park borders, many with the potential to spread rapidly. The purpose of this Invasive Plant Management Plan Environmental Assessment for Yosemite National Park is to evaluate a range of alternatives to prevent the establishment and spread of invasive plants into uninfested areas of the park, and quickly and effectively eradicate new infestations. The public comment period for the Invasive Plant Management Plan Environmental Assessment will open on Friday, June 13, 2008 and will run through Sunday, July 13, 2008. The plan is available on the park's website at http://www.nps.gov/yose/parkmgmt/invasive.htm. A public meeting will take place on Wednesday, June 25, 2008 from 1:00 p.m. to 5:00 p.m. during the monthly Open House in the Yosemite Valley Visitor Center Auditorium. Additionally, park representatives will be available at the El Portal Planning Advisory Meeting on Tuesday, July 8, 2008 at 7pm in the Clark Community Hall. Written scoping comments should be postmarked no later than July 13, 2008. Comments can be submitted at public meetings, by mail, fax, email, and through the Planning, Environment, and Public Comment (PEPC) commenting system. To request a hard copy or CD ROM version of the EA and to submit written comments: Mail: Superintendent, Yosemite National Park For more information on park planning efforts, visit the website at www.nps.gov/yose/parkmgmt.htm. 
Did You Know? In Yosemite Valley, dropping over 594-foot Nevada Fall and then 317-foot Vernal Fall, the Merced River creates what is known as the “Giant Staircase.” Such exemplary stair-step river morphology is characterized by a large variability in river movement and flow, from quiet pools to the dramatic drops of the waterfalls themselves.
Curriculum Standards for Social Studies
National Council for the Social Studies
Theme I: Culture
- Standard C - The student explains and gives examples of how language, literature, the arts, architecture, other artifacts, traditions, beliefs, values, and behaviors contribute to the development and transmission of culture.
- Standard D - The student explains why individuals and groups respond differently to their physical and social environments and/or changes to them on the basis of shared assumptions, values, and beliefs.
Theme II: Time, Continuity and Change
- Standard B - The student identifies and uses key concepts such as chronology, causality, change, conflict, and complexity to explain, analyze, and show connections among patterns of historical change and continuity.
- Standard C - The student identifies and describes selected historical periods and patterns of change within and across cultures, such as the rise of civilizations, the development of transportation systems, the growth and breakdown of colonial systems, and others.
- Standard D - The student identifies and uses processes important to reconstructing and reinterpreting the past, such as using a variety of sources; providing, validating, and weighing evidence for claims; checking the credibility of sources; and searching for causality.
- Standard E - The student develops critical sensitivities such as empathy and skepticism regarding attitudes, values, and behaviors of people in different historical contexts.
Theme III: People, Places and Environments
- Standard B - The student creates, interprets, uses, and distinguishes various representations of the earth, such as maps, globes, and photographs.
- Standard D - The student estimates distance, calculates scale, and distinguishes other geographic relationships such as population density and spatial distribution patterns.
- Standard G - The student describes how people create places that reflect cultural values and ideals as they build neighborhoods, parks, shopping centers, and the like.
- Standard H - The student examines, interprets, and analyzes physical and cultural patterns and their interactions, such as land uses, settlement patterns, cultural transmission of customs and ideas, and ecosystem changes.
- Standard I - The student describes ways that historical events have been influenced by, and have influenced, physical and human geographic factors in local, regional, national, and global settings.
Theme V: Individuals, Groups, and Institutions
- Standard A - The student demonstrates an understanding of concepts such as role, status, and social class in describing the interactions of individuals and social groups.
- Standard B - The student analyzes group and institutional influences on people, events, and elements of culture.
- Standard D - The student identifies and analyzes examples of tensions between expressions of individuality and group or institutional efforts to promote social conformity.
- Standard F - The student describes the role of institutions in furthering both continuity and change.
Scientists at Cancer Research UK’s London Research Institute (LRI) and the Technical University of Denmark have developed an RNAi-based approach to determine paclitaxel response. Focusing on estrogen-receptor (ER)/progesterone-receptor (PR)/human epidermal growth factor receptor 2 (HER2; ERBB2)-negative (triple-negative) disease, they sequentially deleted 829 genes involved in cells’ response to the chemotherapy to see which missing or faulty genes would prevent the drug from working. Their results identified six genes which, if faulty, impacted on the effectiveness of the drug in cancer cells in vitro. The team then confirmed that the same six genes in patients’ breast cancer cells could be used to predict which individuals would derive the most benefit from paclitaxel. “Our research shows it is now possible to rapidly pinpoint genes that prevent cancer cells from being destroyed by anticancer drugs and use these same genes to predict which patients will benefit from specific types of treatment,” states Charles Swanton, M.D., head of translational cancer therapeutics at the LRI. “Now the challenge is to apply these methods to other drugs in cancer medicine and to help identify new drugs within clinical trials that might benefit patients who are predicted to be unresponsive to treatment.” Details appear in The Lancet Oncology in a paper titled “Assessment of an RNA interference screen-derived mitotic and ceramide pathway metagene as a predictor of response to neoadjuvant paclitaxel for primary triple-negative breast cancer: a retrospective analysis of five clinical trials.”
A country full of wild beauty, Colombia has never formally adopted a national flower by law, although the Christmas orchid, or Cattleya trianae, is widely regarded as such. This lovely flower was first discovered in Colombia in the 1840s, and after it was recommended by the Colombian Academy of History in 1936, it was adopted by the country as its state flower. To date there is no official law that marks the Christmas orchid as the official flower of Colombia.
The Christmas Orchid
The stunningly beautiful flower is an enigma: it is lovely and yet has a fetid smell. The large blossoms of the orchid are said to be among the most beautiful in the world, and they grow in the wilds of Colombia, most often along mountain streams. Naturally occurring in Colombia, the Christmas orchid can reach up to eight inches in size and has a darker pink center and lighter petals, although the orchid can bloom in up to fourteen different shades. The single leaf of the orchid is long and dark green. The Christmas orchid has a darker center and then lighter petals with a slightly ruffled edge. The orchid is large, with a stem reaching up to a foot. The stem can produce between three and fourteen flowers, making an elaborate statement among the rocks of the mountains.
Growing Christmas Orchids
The Christmas orchids are stunning but require very temperate zones for growth. The flowers bloom between December and March and need intermediate or hot weather. New growth appears directly after the first round of blooms. The flower can be tricky to grow at home without ideal conditions, but ordering Christmas orchids is common, as these have been one of the most popular winter cut flowers since the early 20th century.
The boats were usually built long and pointed for the sake of speed, and had seats for thirty rowers. Besides the rowers, the long-boats could hold from sixty to one hundred and fifty sailors. Harald Fairhair was one of the foremost of the kings of Norway. He was so brave a Northman that he became king over the whole of Norway. In eight hundred and sixty-one, when he began to reign, Norway was divided into thirty-one little kingdoms, over each of which ruled a little king. Harald Fairhair began his reign by being one of these little kings. Harald was only a boy, ten years of age, when he succeeded his father; but as he grew up he became a very strong and handsome man, as well as a very wise and prudent one. Indeed he grew so strong that he fought with and vanquished five great kings in one battle. After this victory, Harald sent, so the old chronicles of the kings of Norway say, some of his men to a princess named Gyda, bidding them tell her that he wished to make her his queen. But Gyda wished to marry a king who ruled over a whole country, rather than one who owned but a small part of Norway, and this was the message she sent back to Harald: “Tell Harald,” said the maiden, “that I will agree to be his wife if he will first, for my sake, subdue all Norway to himself, for only thus methinks can he be called the king of a people.” The messengers thought Gyda’s words too bold, but when King Harald heard them, he said, “It is wonderful that I did not think of this before. And now I make a solemn vow and take God to witness, who made me and rules over all things, that never shall I clip or comb my hair until I have subdued the whole of Norway with scat [land taxes], and duties, and domains.” Then, without delay, Harald assembled a great force and prepared to conquer all the other little kings who were ruling over the different parts of Norway. In many districts the kings had no warning of Harald’s approach, and before they could collect an army they were vanquished. 
When their ruler was defeated, many of his subjects fled from the country, manned their ships and sailed away on viking expeditions. Others made peace with King Harald and became his men. Over each district, as he conquered it, Harald placed a jarl or earl, that he might judge and do justice, and also that he might collect the scat and fines which Harald had imposed upon the conquered people. As the earls were given a third part of the money they thus collected, they were well pleased to take service with King Harald. And indeed they grew richer, and more powerful too, than they had ever been before. It took King Harald ten long years to do as he had vowed, and make all Norway his own. During these years a great many new bands of vikings were formed, and led by their chief or king they left the country, not choosing to become King Harald’s men.
CHICAGO – The importance of genetic factors in an elderly individual's propensity to bone fractures depends on the individual's age and the type of fracture, according to a study in the September 12 issue of Archives of Internal Medicine, one of the JAMA/Archives journals. Bone fractures resulting from osteoporosis have a profound impact on quality of life, with only one third of patients regaining their pre-fracture level of function and a substantial risk of death following fracture, according to background information in the article. The authors suggest that twin studies provide one of the most natural study populations for evaluating genetic risk (the relative contribution of genes versus environment). If heritable factors contribute to fractures, monozygotic twins (who have all the same genes, commonly called identical twins) are more likely to have similar rates of fracture than dizygotic twins (who share about half the same genes, commonly called fraternal twins). Karl Michaëlsson, M.D., Ph.D., of the Uppsala University Hospital, Uppsala, Sweden, and colleagues used the Swedish Twin Registry, the Swedish Inpatient Registry and telephone interviews to evaluate the genetic liability to fracture of the elderly. From the registry of Swedish twins born between 1896 and 1944 (3,724 identical twins, 6,314 fraternal same-sex twins and 5,736 fraternal different-sex twins), the researchers were able to identify 6,021 twins with any fracture, with a higher proportion among women (23 percent) than men (14 percent). More than half the cases (3,599) were classified as osteoporotic fractures. The most important osteoporotic fracture, hip fracture, was recorded for 1,055 twins. Genetic variation in liability to fracture differed considerably by type of fracture and age, the authors report. Less than 20 percent of the overall age-adjusted fracture variance was explained by genetic variation.
Heritability was considerably greater for first hip fracture before the age of 69 years and between 69 and 79 years than for hip fractures after 79 years of age. "We conclude that the genetic influence on susceptibility to fractures is dependent on type of fracture and age at fracture event," the authors write. "The heritability of osteoporotic fractures is stronger than has been previously estimated, especially for early-occurring osteoporotic fractures. A search for genes and gene-environmental interactions that affect early osteoporotic fracture risk is likely to be fruitful, but fracture-prevention efforts at older ages should be focused on lifestyle habits." Source: EurekAlert and others. Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009. Published on PsychCentral.com. All rights reserved.
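The MZ/DZ comparison described above is the basis of classical twin-study heritability estimates: if a trait is heritable, identical twins should resemble each other more than fraternal twins do. A minimal sketch using Falconer's formula, H² = 2(r_MZ − r_DZ), is below; the correlation values are hypothetical placeholders, and the study itself relied on more elaborate variance-component modeling than this back-of-the-envelope rule.

```python
def falconer_heritability(r_mz, r_dz):
    """Classical broad heritability estimate H^2 = 2 * (r_MZ - r_DZ),
    where r_mz and r_dz are the trait correlations observed in identical
    (monozygotic) and fraternal (dizygotic) twin pairs, respectively."""
    h2 = 2.0 * (r_mz - r_dz)
    # Sampling noise can push the raw estimate outside [0, 1]; clamp it.
    return max(0.0, min(1.0, h2))
```

With hypothetical correlations of 0.45 for identical pairs and 0.30 for fraternal pairs, the estimate is 2 × (0.45 − 0.30) = 0.30, i.e. about 30 percent of the trait variance attributed to genes.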
In the Gambia, about 16% of the population are Wolof. Here they are a minority; the Mandinka are the plurality, with 42% of the population, yet Wolof language and culture have a disproportionate influence because of their prevalence in Banjul, the Gambian capital, where a majority of the population is Wolof. In Mauritania, about 8% of the population are Wolof. They live largely in the southern coastal region of the country. In older French publications the spelling "Ouolof" is often used instead of "Wolof". In some English publications, predominantly those referring to Gambian Wolof, the spelling "Wollof" is used, because this spelling leads native English speakers to pronounce the name as a Wolof speaker would. In publications of the 19th century and before, the spellings "Volof" and "Olof" can also be found; the spellings "Jolof", "Jollof" and "Dyolof" are rarely used. The term "Wolof" itself may also refer to the Wolof language or to things originating from Wolof culture or tradition. It is thought the Wolof people originated from a Baffouri population in the Sahara before it became hostile to farming due to desertification. As the environment deteriorated, some of them drifted into the Senegalese areas of Futa Toro and modern-day southeastern Mauritania. With the Arab conquests of around 640 AD they were forced to move into north and east Senegal, where over time villages developed into autonomous states such as Baol, Kayor, Saloum, Dimar, Walo and Sine, with the overall ruling state being Jolof; these states came together voluntarily to form the Jolof Empire.
Just before matters developed into violence, a mysterious person called Ndyadyane Ndyaye (Njanjan Njie) arose from the lake, shared out the firewood fairly among the men and promptly vanished, much to their bafflement. The decision was made to try and catch him, so they feigned another argument, and when he appeared he was caught. When Mansa Wali Jon, the ruler of Sine, who was himself endowed with supernatural powers, heard about the strange goings-on in Mengen he shouted "Ndyadyane Ndyaye", which is an expression of utter amazement. This name was given to the strange visitor (actual name: Amadu Bubakar Ibn Muhammed). He became the first ruler of the new empire with the title Burba Jolof, and other states voluntarily pledged allegiance to him.

Historical state

The Wolof Empire was a medieval West African state that ruled parts of Senegal and the Gambia from approximately 1350 to 1890. While only ever consolidated into a single state structure for part of this time, the tradition of governance, caste, and culture of the Wolof dominates the history of north-central Senegal for much of the last 800 years. Its final demise at the hands of French colonial forces in the 1870s-1890s also marks the beginning of the formation of Senegal as a unified state. By the end of the 15th century, the Wolof states of Jolof, Kayor, Baol and Walo had become united in a federation with Jolof as the metropolitan power. The position of king was held by the Burba Wolof and the rulers of the other component states owed loyalty to him while being allowed local sovereignty in internal state matters. Saloum and Sine were later brought within the union. Before they became involved in trading with the Portuguese merchants on the coast, the Wolof people enjoyed the benefits of long established trading and cultural ties with the Western Sudanese empires and had also benefited from trading with Futa Toro and the Berbers from North Africa.
Through these early trading links and organisation the Wolof states grew wealthy and had formidable strength. The Wolof people’s traditional culture and practices have survived the colonial era and are a strong element of Senegalese culture.

"Wolof" is the name of the native language of the Wolof people. At least 50% of Senegal's population are native speakers of Wolof. Members of neighboring groups are often bilingual and can understand Wolof. Wolof culture and language have an enormous influence, especially in urban areas. Wolof is strongly linked to Serer and Fulani in structure with minor Arabic influence.

The vast majority of the Wolof people are Sufi Muslims. The Senegalese Sufi Muslim brotherhoods, appearing in Wolof communities in the 19th century, grew tremendously in the 20th. Their leaders, or marabouts, exercise a huge cultural and political influence amongst most Muslim communities, most notably the leader of the Mouride brotherhood, Serigne Cheikh Maty Leye Mbacké. The Islam of the Wolof is very tolerant and puts an emphasis on meditation and spirituality.

Wolof ceremonial traditions

Ceremonies such as weddings, funerals, and baptisms, while not unique, have traditional elements distinctive to the Wolof. Many aspects of these traditional ceremonies have merged and been modified through the 20th century. Prior to traditional Wolof wedding ceremonies, the parents of the groom-to-be send elders to the girl's parents with kola nuts and money to ask for her hand in marriage. The girl's parents consult their daughter and either consent to or reject the proposal. If accepted, the parents of the bride-to-be distribute the kola nuts among the family and neighbours. This distribution is an informal way of announcing the impending wedding. In more traditional practices, the family of the groom-to-be paid the girl's bride price in the form of money. This tradition, where it survives, has been modernized, and the dowry is paid in money, cars or even houses.
After the completion of the groom's obligations, the two families set a wedding day. Before the wedding day, the groom's family gives a party to welcome their daughter-in-law and to prepare her to live with her new family. The imam and elders advise the groom in the presence of some representatives of the bride's parents. Weddings traditionally take place at the groom's home. Parents receive guests with food and drink (but not alcohol), while guests bring gifts of money, rice, drinks, sheep, sugar, or spices. After the ceremony people feast and dance, with guests hiring a griot (praise-singer) and giving further gifts to the groom's parents.
<urn:uuid:6951820d-dcf2-400e-8364-ffc2339c6740>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Wolof_people
2013-06-19T12:55:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898437
1,803
There is absolutely no difference between reading data from a file and reading data from a terminal. Likewise, if a program's output consists entirely of alphanumeric characters and punctuation, there is no difference between writing to a file, writing to a terminal, and writing to the input of another program (as in a pipe).

The standard I/O facility provides some simple defaults for managing Input/Output. There are three default I/O streams: standard input, standard output, and standard error. By convention, standard output (abbreviated stdout) consists of all "normal" output from your program, while standard error (stderr) consists of error messages. It is often a convenience to be able to handle error messages and standard output separately. If you don't do anything special, programs will read standard input from your keyboard, and they will send standard output and standard error to your terminal's display.

Standard input (stdin) normally comes from your keyboard. Many programs ignore stdin; you name files directly on their command line -- for instance, the command cat file1 file2 never reads its standard input; it reads the files directly. But, without filenames on the command line, UNIX commands that need input will usually read stdin.

Standard input normally comes from your keyboard, but the shell can redirect stdin from a file. This is handy for UNIX commands that can't open files directly -- for instance, mail. To mail a file to joan, use < filename to tell the shell to attach the file, instead of your keyboard, to mail's standard input:

    mail joan < filename

The real virtue of standard I/O is that it allows you to redirect input or output away from your terminal to a file. UNIX is file-based. Because terminals and other I/O devices are treated as files, a program doesn't care or even know if it is sending its output to a terminal or to a file.
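This file-versus-terminal-versus-pipe equivalence is easy to verify for yourself. The sketch below (the file names such as fruits.txt are invented for illustration) feeds the same data to sort three different ways; the program cannot tell which one you used:

```shell
# Sample data in an ordinary file (name is arbitrary).
printf 'cherry\napple\nbanana\n' > fruits.txt

# Three ways to hand the same data to sort:
sort fruits.txt       > by_arg.txt    # sort opens the file itself
sort < fruits.txt     > by_stdin.txt  # the shell attaches the file to stdin
cat fruits.txt | sort > by_pipe.txt   # stdin comes from another process

# All three results are byte-for-byte identical:
cmp by_arg.txt by_stdin.txt && cmp by_arg.txt by_pipe.txt && echo identical
```

In each case sort simply reads a stream of bytes and writes a stream of bytes; whether those streams are wired to a file, a keyboard, or another process is the shell's business, not the program's.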
For example, if you want to run the command cat file1 file2, but you want to place the output in file3 rather than sending it to your terminal, give the command:

    cat file1 file2 > file3

This is called redirecting standard output to file3. If you give this command and look at file3 afterward, you will find the contents of file1, followed by file2 -- exactly what you would have seen on your screen if you omitted the > file3 modifier.

One of the best-known forms of redirection in UNIX is the pipe. The shell's vertical bar (|) operator makes a pipe. For example, to send both file1 and file2 together in a mail message for joan, type:

    cat file1 file2 | mail joan

The pipe says "connect the standard output of the process at the left (cat) to the standard input of the process at the right (mail)." Table 1 shows the most common ways of redirecting standard I/O, for both the C shell and the Bourne shell.

Table 1: Common Standard I/O Redirections

|Function|C shell|Bourne shell|
|Send stdout to file|prog > file|prog > file|
|Send stderr to file| |prog 2> file|
|Send stdout and stderr to file|prog >& file|prog > file 2>&1|
|Take stdin from file|prog < file|prog < file|
|Send stdout to end of file|prog >> file|prog >> file|
|Send stderr to end of file| |prog 2>> file|
|Send stdout and stderr to end of file|prog >>& file|prog >> file 2>&1|
|Read stdin from keyboard until c|prog <<c|prog <<c|
|Pipe stdout to prog2|prog | prog2|prog | prog2|
|Pipe stdout and stderr to prog2|prog |& prog2|prog 2>&1 | prog2|

Be aware that: While standard I/O is a basic feature of UNIX, the syntax used to redirect standard I/O depends on the shell you are using. Bourne shell syntax and C shell syntax differ, particularly when you get into the less commonly used features. The Korn shell and bash are the same as the Bourne shell, but with a few twists of their own. You can redirect standard input and standard output in the same command line.
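The Bourne-shell rows of Table 1 can be exercised with a short, self-contained sketch (the msg function and the .log file names are invented for illustration):

```shell
# A tiny command that writes one line to stdout and one line to stderr.
msg() {
    echo "normal output"
    echo "error message" >&2
}

msg >  out.log  2>  err.log   # send the two streams to separate files
msg >> out.log  2>> err.log   # append a second run to the end of each file
msg >  both.log 2>&1          # combine stdout and stderr in one file

grep -c "normal output" out.log   # two runs went to out.log; prints: 2
```

After this runs, out.log holds only the stdout lines, err.log only the stderr lines, and both.log one copy of each. The C shell forms (>&, >>&) follow the same pattern but, as the table shows, cannot separate stderr on its own.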
For example, to read from the file input and write to the file output, give the command:

    prog < input > output

The Bourne shell will let you go further and write stderr to a third file:

    prog < input > output 2> errors

The C shell doesn't give you an easy way to redirect standard output without redirecting standard error. A simple trick will help you do this. To put standard output and standard error in different files, give a command like:

    ( prog > output ) >& errors

Many implementations of both shells don't care what order the redirections appear in, or even where they appear on the command line. For example, SunOS lets you type < input > output prog. However, clarity is always a virtue that computer users have never appreciated enough. It will be easiest to understand what you are doing if you type the command name first -- then redirect standard input, followed by standard output, followed by standard error.

Of course, programs aren't restricted to standard I/O. They can open other files, define their own special-purpose pipes, and write directly to the terminal. But standard I/O is the glue that allows you to make big programs out of smaller ones, and is therefore a crucial part of the operating system. Most UNIX utilities read their data from standard input and write their output to standard output, allowing you to combine them easily. A program that creates its own special-purpose pipe may be very useful, but it cannot be used in combination with standard utilities.

Some UNIX systems, and utilities such as gawk, support special filenames like /dev/stdin, /dev/stdout, and /dev/stderr. You can use these just as you'd use other files. For instance, to have any ordinary command read from the file afile, then standard input (from the keyboard, for example), then the file bfile:

    prog afile /dev/stdin bfile

In the same way, a process can write to its standard output through /dev/stdout and the standard error via /dev/stderr. (Note: "no difference" between file and terminal input holds as long as a program's input consists entirely of alphanumeric characters and punctuation -- i.e., ASCII data or international (non-English) characters.)
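On a system that provides /dev/stdin (most Linux systems do; the file names below are invented), the "afile, then standard input, then bfile" idea looks like this, with a pipe standing in for the keyboard:

```shell
printf 'from afile\n' > afile
printf 'from bfile\n' > bfile

# Read afile, then whatever arrives on standard input, then bfile.
# (Where /dev/stdin is missing, many utilities accept "-" for the same purpose.)
echo 'from the pipe' | cat afile /dev/stdin bfile > combined.txt

cat combined.txt   # prints the three lines in order: afile, pipe, bfile
```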
A program can find out whether it is reading from or writing to a terminal rather than a file (for example, with the standard isatty() library call).

More Unix Power Tools. Copyright © 2009 O'Reilly Media, Inc.
<urn:uuid:e1b6a225-b680-4605-ad28-ed308757936a>
CC-MAIN-2013-20
http://www.linuxdevcenter.com/lpt/a/87
2013-05-23T18:51:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.881496
1,393
Taxonomic name: Cecropia peltata L.
Synonyms: Ambaiba pelata Kuntze, Coilotapalus peltata Britton
Common names: bois cannon (French), faux ricin (French), guarumo (Spanish), papyrus géant, parasolier (French), pisse-roux (French), pop-a-gun (English), snakewood tree (English), Trompetenbaum (German), trumpet tree (English), trumpet wood (English), yagrumo hembra (Spanish)
Organism type: tree

Cecropia peltata is a fast-growing, short-lived tree that grows in neotropical regions. It is light-demanding and rapidly invades disturbed areas, such as forest canopy gaps, roadsides, lava flows, agricultural sites, urban locations, and other disturbed areas. It naturally occurs in tropical Central and South America, as well as some Caribbean islands, and has been introduced to Malaysia, Africa, and Pacific Islands. It may be replacing, or competing with, other native pioneer species in some locations.

Cecropia peltata is a neotropical tree that reaches heights of 20 m or more. Its stems are hollow, partitioned at the nodes, and bear U-shaped leaf scars. Its leaves are alternate, long-lobed, ovate, somewhat pointed, about 10-50 cm wide, dark green and scabrous above and densely white-tomentose underneath. Its staminate inflorescence is an umbellate cluster of spikes 3-5.5 cm long, consisting of many individual tubular calyces with paired stamens, while its pistillate spikes are yellowish and 2-5.5 cm long, thick, and succulent when in fruit (PIER, 2009).

agricultural areas, natural forests, planted forests, ruderal/disturbed, urban areas

Cecropia peltata typically inhabits forest gaps and disturbed sites (PIER, 2009), such as along roadsides, agricultural sites, lava flows, and urban locations (Binggeli, 1999). It is a fast-growing, high light-demanding pioneer species that colonizes tree fall gaps in its native range and is capable of establishing dense stands (PIER, 2009). It is known from altitudes of 50-2700 m (Hurtado & Alson, 1995). C. peltata requires much rainfall and may be found in environments with 990 mm to over 3,810 mm of annual precipitation. It grows in alluvial, colluvial, and residual soils neutral to acidic in nature. Soil texture may range from heavy clay to sandy, but a clay-loam soil is optimal. C. peltata is also generally found in warm climates ranging from montane to tropical with mean annual temperatures of 12-24°C (Silander & Lugo, undated).

Cecropia peltata forms dense stands that may compete with or displace native pioneer species and reduce species richness (Binggeli, 1999; Dumont et al., 1990). Evidence suggests it competes with and may displace the tropical African pioneer species Musanga cecropioides (Binggeli, 1999). Cecropia peltata is popularly cultivated as an ornamental species (Bodkin 1990 in Csurhes, 2008).

Cecropia peltata L. was distinguished from C. schreberiana Miq. in 1988. Whereas Cecropia peltata occurs in Mexico and Central America, C. schreberiana occurs in the Antilles and northern South America (Howard, 1988; ISTF, 1997 in Brokaw, 1998; Csurhes, 2008). However, ITIS does not distinguish between the species and, in fact, states Cecropia schreberiana as the valid name for the species and indicates C. peltata as a synonym for C. schreberiana.

Native range: Belize, Colombia, Costa Rica, Guatemala, Guyana, Honduras, Jamaica, Mexico, Panama, Nicaragua, Suriname, Trinidad and Tobago, Venezuela
Known introduced range: Cameroon, Cote d`Ivoire (Ivory Coast), French Polynesia (Polynésie Française), Malaysia, New Caledonia, Zaire

Introduction pathways to new locations
For ornamental purposes:

Local dispersal methods
Consumption/excretion: Bats and birds eat large quantities of its succulent fruits and are the main seed disperser.
In some locations fruits are consumed during the day, mainly by monkeys, and at night by bats and arboreal mammals (Binggeli, 1999).
For ornamental purposes (local):

Preventative measures: A Risk Assessment of Cecropia peltata for Hawai‘i and other Pacific islands was prepared by Dr. Curtis Daehler (UH Botany) with funding from the Kaulunani Urban Forestry Program and US Forest Service. The alien plant screening system is derived from Pheloung et al. (1999) with minor modifications for use in Pacific islands (Daehler et al. 2004). The result is a score of 9 and a recommendation of: "Likely to cause significant ecological or economic harm in Hawai‘i and on other Pacific Islands as determined by a high WRA score, which is based on published sources describing species biology and behavior in Hawai‘i and/or other parts of the world."

Physical: Hand pulling or digging out seedlings and young trees is recommended (PIER, 2009).
Chemical: Larger trees should be cut and their stumps should be treated with herbicide (PIER, 2009).
Biological control: C. peltata has been found to be attacked by Historis spp. and various moth species and is sometimes extensively defoliated (Binggeli et al., 1998).

Cecropia peltata is dioecious and becomes sexually mature in 3 to 5 years. Its tiny flowers are clustered on 5 to 10 cm long spikes and are wind-pollinated. On female spikes the minute one-seeded fruits form large fruit clusters which appear to take around a month to mature. A spike contains around 800 viable seeds which are about 1.9 mm long and weigh 1.6 mg. Bats and birds eat large quantities of the succulent fruits and are the main seed disperser. In Costa Rica a similar amount of fruits is consumed during the day, mainly by monkeys, and at night by bats and arboreal mammals. A large and persistent seedbank is formed in the forest soil (Binggeli, 1999).
In some locations flowering and fruiting occur year-round, and in others it is seasonal with a peak in either the wet or the dry season depending on location (Silander & Lugo, undated; Binggeli, 1999). C. peltata is highly productive and seed production is estimated to be as high as 1 million seeds per year (Silander & Lugo, undated).

Seeds of Cecropia peltata require full sunlight for successful germination, which under those conditions may be as high as 80-90%. Seedling leaves are pubescent on both sides, lanceolate, unlobed, and finely toothed. Seedlings are also very light-demanding and seedling mortality in natural conditions is typically very high. It has been found that 99% of seedlings in forest openings die in the first year. C. peltata grows rapidly, reaching 10-15 cm in height in 10 weeks and up to about 2 m in the first year. Reproductive maturity is reached by pistillate trees in 3-4 years and by staminate trees in 4-5 years. Maturation is dependent on allocation of resources for rapid initial height growth and factors such as the height of and proximity to surrounding vegetation, with trees in open environments maturing faster than those in forest gaps. C. peltata usually reaches canopy height in about 10 years and its estimated life span is 30 years (Silander & Lugo, undated).

This species has been nominated as among 100 of the "World's Worst" invaders.

Principal sources:
Binggeli, Pierre, 1999. Cecropia peltata L. (Cecropiaceae).
Pacific Ecosystems at Risk (PIER), 2009. Cecropia peltata Regel, Cecropiaceae.
Csurhes, Steve, 2008. Cecropia, Cecropia spp. Pest Plant Risk Assessment. Biosecurity Queensland Department of Primary Industries and Fisheries, Queensland.
Silander, Susan R. and Ariel E. Lugo, undated. Cecropia peltata L. Yagrumo Hembra, Trumpet-Tree.

Compiled by: National Biological Information Infrastructure (NBII) & IUCN/SSC Invasive Species Specialist Group (ISSG)
Last Modified: Wednesday, 23 February 2011
<urn:uuid:ce4bc44d-4a58-454e-bf14-99915f103e3e>
CC-MAIN-2013-20
http://www.issg.org/database/species/ecology.asp?si=116&fr=1&sts=sss
2013-05-24T02:32:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.899721
1,881
That is the title of this essay by Dr. Walter Starck. If you are just beginning to look into the issues surrounding purported anthropogenic global warming, Dr. Starck’s essay is a good place to start. He notes that claims of precision in describing the Earth’s climate history are bogus: The average temperature for the Earth, or any region or even any specific place, is very difficult to determine with any accuracy. At any given time surface air temperatures around the world range over about 100°C. Even in the same place they can vary by nearly that much seasonally and as much as 30°C or more in a day. Weather stations are relatively few and located very irregularly. Well-maintained stations with good records going back a century or more can be counted on one’s fingers. Even then only maximum and minimum temperatures or ones at a few particular times of day are usually available. Maintenance, siting, and surrounding land use also all have influences on the temperatures recorded. The purported 0.7°C of average global warming over the past century is highly uncertain. It is in fact less than the margin of error in our ability to determine the average temperature anywhere, much less globally. What portion of any such warming might be due to anthropogenic CO2 emissions is even less certain. There are, however, numerous phenomena which are affected by temperature and which can provide good evidence of relative warming or cooling and, in some cases, even actual temperatures. These include growth rings in trees, corals and stalactites, borehole temperature profiles and the isotopic and biologic signatures in core samples from sediments or glaciers. In addition, historical accounts of crops grown, harvest times, freezes, sea ice, river levels, glacial advances or retreats and other such records provide clear indication of warming and cooling.
The temperature record everywhere shows evidence of warming and cooling in accord with cycles on many different time scales from daily to annual, decadal, centennial, millennial and even longer. Many of these seem to correlate with various cycles of solar activity and the Earth’s own orbital mechanics. The temperature record is also marked by seemingly random events which appear to follow no discernible pattern. Over the past 3000 years there is evidence from hundreds of independent proxy studies, as well as historical records, for a Minoan Warm Period around 1000 BC, a Roman Warm Period about 2000 years ago, a Medieval Warm Period (MWP) about 1000 years ago and a Modern Warm Period now developing. In between were markedly colder periods in the Dark Ages and another between the 16th and 19th centuries which is now known as the Little Ice Age (LIA). The warmer periods were times of bountiful crops, increasing population and a general flourishing of human societies. The cold periods were times of droughts, famines, epidemics, wars and population declines. Clearly life has been much better in the times of warmer climate, and there is nothing to indicate that the apparent mild warming of the past century is anything other than a return of this millennial scale warming cycle. Starck goes on to document the sheer irrationality of the climate alarmists. He describes the sad history of the “hockey stick.” Not only have they ignored and dismissed the hundreds of studies indicating the global existence of a Medieval Warm Period and the Little Ice Age, they have set out to fabricate an alternate reality in the form of a graph purporting to represent the global temperature for the past thousand years. It portrays a near straight line wiggling up and down only a fraction of a degree for centuries until it begins an exponential rise gradually starting at the beginning of the 20th century and then shooting steeply up in the latter part of that century.
This hockey stick-shaped graph was then heavily promoted as the icon of AGW. It appeared on the cover of the third climate assessment report of the IPCC published in 2003 and was reproduced at various places in the report itself. Among the emails between leading climate researchers released in the Climategate affair were a number which revealed a concerted effort to come up with some means to deny the existence of the MWP. The implement chosen to do this became known as the Hockey Stick Graph. The methodology used to construct the graph involved the use of estimates of temperatures from a very small sample of tree growth rings from the Yamal Peninsula in far northern Siberia and ancient stunted pine trees from near the tree line in the High Sierras of California. This data was then subjected to a statistical treatment later shown by critics to produce a hockey stick form of graph even when random numbers were used as raw input data. To make matters even worse, the same tree ring data also indicated a significant decline in temperature for the 20th century, but this was hidden by burying it in a much larger number of data points from instrument measurements. The resulting study was published in the prestigious scientific journal, Nature in 1998. Remarkably, this very small, highly selected and deceptively manipulated graph was proclaimed to be an accurate representation of global temperatures and the extensive body of contrary evidence was simply ignored. Then came Climategate… It is all well worth reading. We are rapidly approaching the time, I think, when global warming alarmism will be generally recognized as the greatest fraud in human history.
<urn:uuid:7ddf69f5-3940-4e53-9237-58c8c0d2c003>
CC-MAIN-2013-20
http://www.powerlineblog.com/archives/2012/11/speak-loudly-and-carry-a-busted-hockey-stick.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+powerlineblog%2Flivefeed+%28Power+Line%29
2013-05-18T07:13:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961049
1,084
The Kanalkampf comprised a series of running fights over convoys in the English Channel and occasional attacks on the convoys by Stuka dive bombers. It was launched partly because Kesselring and Sperrle were not sure about what else to do, and partly because it gave German aircrews some training and a chance to probe the British defences. In general, these battles off the coast tended to favour the Germans, whose bomber escorts massively outnumbered the convoy patrols. The need for constant patrols over the convoys put a severe strain on RAF pilots and machines, wasting fuel, engine hours and exhausting the pilots, but eventually the number of ship sinkings became so great the British Admiralty cancelled all further convoys through the Channel. However, these early combat encounters provided both sides with experience. They also gave the first indications some of the aircraft, such as the Defiant and Bf 110, were not up to the intense dog-fighting that would characterise the battle.
<urn:uuid:14805b64-c5ad-4078-b895-08abe042892c>
CC-MAIN-2013-20
http://worldhistoryproject.org/1940/7/10/germany-begins-air-assaults-on-convoys-in-the-english-channel
2013-05-25T06:05:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.975571
199
The Endangered Species Act is a program that aims to protect, but some say it can cause more harm than good. A small herd of Wood Bison are waiting to be reintroduced into the wild at the Alaska Wildlife Conservation. Wood Bison have been extinct in the United States for over 100 years, and the Alaska Department of Fish and Game is hoping to change that. But stakeholders worry that their reintroduction would give them full protection under the Endangered Species Act, and cause problems in Interior Alaska. "In essence you may have to designate critical habitat for those animals," said Doug Vincent-Lang, Endangered Species Coordinator for the Alaska Department of Fish and Game. The state fears they wouldn't be able to properly manage the herd once released, including restrictions on hunting because of federal regulations. "We've been working with the state on this issue for some years now and our aim is to work with the state and find a way to allow this reintroduction to go forward as quickly, and as smoothly as possible," said Bruce Woods, spokesperson for the U.S. Fish and Wildlife Service. Critics of the Endangered Species Act say it doesn't have a very good track record of helping a species recover. Many point to the recent controversy in the Northern Rockies over grey wolves. In the mid 1990s the federal government reintroduced wolves into Yellowstone National Park. By 2010, state departments estimated there were over 1,000 roaming the Northern Rockies. Montana, Idaho and Wyoming wanted to take management into their own hands, but U.S. District Court Judge Donald Molloy ruled that Wyoming didn't have proper management tools in place, and the wolves remained on the list. In April 2011, Congress stepped in and in an unprecedented move delisted wolves using a rider on the budget bill. "In the U.S. 
this is the first time that we've ever seen Congress without worrying about the science just delist a species and take away the public's right to have any input on the listing of the species," said Rebecca Noblin, with the Center for Biological Diversity. Some environmental groups worry this could set a bad precedent. "The Endangered Species Act is supposed to be on science and in this case it was a political reason wolves were taken off and that defies everything that I think the law should stand for," said Carole Holley, Co-Program Director for Alaska with Pacific Environment. Rep. Don Young has introduced legislation to delist Polar Bears. "I got the support to pass it out of the House but I doubt if we can get it out of the Senate because they are so locked into, ‘Oh, we are saving the world, they are not saving the world,’" Young said in a phone interview. Once a species is on the list it's tough to get it off, according to the Alaska Department of Fish and Game. "We are setting these recovery objects to get back to recovery levels that aren't necessarily to prevent their extinction when they are no longer threatened with extinction," said Vincent-Lang. He said a perfect example is the Steller sea lion population in the Aleutians. "The recovery objective for Steller sea lions in the western Aleutians right now is sitting someplace in the order of 110,000 animals. Right now we have over 75,000 animals," Vincent-Lang said. But even with those numbers, the National Marine Fisheries Service shut down fisheries near Adak. "The process that has brought them to the point where they need protection has been long, it's been a long process and it's taken a long time," said Woods. He says there are plenty of success stories in Alaska, including the Cackling Goose, Peregrine Falcon and Arctic Peregrine Falcon, which were all previously listed and have now been removed.
<urn:uuid:34e6d6b1-1b22-48a5-ac54-d8ee17489f9f>
CC-MAIN-2013-20
http://www.ktuu.com/news/ktuu-endangered-species-pt-3-20110512,0,3712530.story
2013-05-22T14:33:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967178
789
Written as a resource for children, the text includes an introduction for parents/guardians, but the body of the book is written for children to read themselves. Or, an adult or older sibling can read the book to a younger child. The revision of this classic resource (first published in 1986) is intended to make it more obvious that the book is for children. The typographic cover has been changed to a full-color illustration and the text design reworked to make it more appropriate for children. Pastors will find it helpful to keep a quantity on hand to give to people during grief experiences and in grief counseling.

Did you know...

Grief is the normal response of sorrow, emotion, and confusion that comes from losing someone important to you. The word "grief" comes from the same root as "grave." Although many times focused only on emotional responses to loss, grief also has physical, cognitive, behavioral, social, and philosophical dimensions. Grief responses are influenced by personality, family, culture, and spiritual and religious beliefs and practices. While many who grieve may be able to work through their grief independently, accepting additional support from counseling and support groups may promote the process of healing. The Book of Ecclesiastes reminds us: "For everything there is a season... a time to weep, and a time to laugh; a time to mourn, and a time to dance..."
<urn:uuid:c7aa3719-9fa1-4194-9cf4-5c416c62a8ab>
CC-MAIN-2013-20
http://www.cokesbury.com/forms/ProductDetail.aspx?pid=445771
2013-05-24T02:48:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957851
313
From the basic guarantees of the right to life and freedom from physical harm, it is not much of a stretch to arrive at the notion that individuals should have “control over what happens to their bodies.” As Human Rights Watch points out,

Millions of women and girls are forced to marry and have sex with men they do not desire. Women are unable to depend on the government to protect them from physical violence in the home…Women in state custody face sexual assault by their jailers. Women are punished for having sex outside of marriage or with a person of their choosing…Husbands and other male family members obstruct or dictate women’s access to reproductive health care.150

The right of women to control over their bodies is implicitly recognized by human rights protections ranging from the right to health and freedom from discrimination to the right to privacy and freedom from torture.151 Reproductive rights are a very delicate subject but one that receives a lot of attention in public debates in developed as much as in developing countries. In the United States, for example, the issue of abortion has been one of the hottest areas of contention in the culture wars and in presidential campaigns. Advocates of a “woman’s right to choose” argue that women should have complete control over their bodies, including developing fetuses, and therefore the right to decide to terminate a pregnancy. Although having an abortion can be a devastating and traumatic experience, many believe women should have the option to take this course if they deem it necessary. Opponents of abortion, who would classify themselves as “pro-life,” assert that fetuses should be treated as independent persons even before birth, and that abortion is equivalent to a form of murder – one that should not be permitted in any, or perhaps only the most extreme, circumstances, as when the life of the mother is in imminent danger. They object to the U.S. Supreme Court’s 1973 decision in Roe v.
Wade, which allowed for abortions to continue to be performed.

Female Genital Mutilation (FGM)

In other parts of the world, notably in Africa (in 28 countries), FGM is performed by a number of different cultures for a number of different reasons. Sometimes the goal is to limit and control female sexual desire, sometimes it is seen as a rite of passage for girls into womanhood. Some cultures point to health grounds, claiming it is for hygiene. In a few Muslim communities, it is tied to an interpretation of Islam. Whatever the justification, FGM is “usually performed by a traditional practitioner with crude instruments and without anesthetic,” posing a serious risk to those who undergo the procedure. In addition to the physical side-effects, the mental and psychological harm can be permanent: it “may leave a lasting mark on the life and mind of the woman who has undergone it. In the longer term, women may suffer feelings of incompleteness, anxiety and depression.”153 Does FGM constitute torture or an acceptable cultural practice whose deep-rooted traditions justify its existence? Is it a violation of privacy or a proper exercise of familial authority? These questions are important to ponder in trying to understand the scope of a woman’s human rights. To see videos about FGM, please watch http://www.youtube.com/watch?v=Gh4fWUVcBN4 (a woman discusses her experience after FGM) and http://www.youtube.com/watch?v=TMSQPDd1B2g&feature=player_embedded (please note there is some tribal nudity). For more on women and health issues, see the “Health” section of the “Women and Globalization” Issue in Depth.
<urn:uuid:496fb2d9-f9f8-4c30-b473-b93c651c7598>
CC-MAIN-2013-20
http://www.globalization101.org/reproductive-rights-and-sexual-autonomy/
2013-05-25T05:38:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958307
791
Researchers from the Georgia Institute of Technology have made single-crystal zinc oxide (ZnO) nanobelts that spontaneously rolled into helical structures. The nanohelixes, or nanosprings, have piezoelectric properties and show promise for biomedical applications and in microsystems. Piezoelectric semiconductors are natural resonators and therefore don't need all the circuitry normal semiconductors need to make them process and emit signals. Physical stimulation causes piezoelectric materials to naturally oscillate at a known frequency. It might therefore be possible to treat the material to attract a protein from a cancer cell, and then even a single molecule of that protein could be detected with just one nanospring. Currently working to exploit this is a collaborative group of multidisciplinary specialists looking to devise a micron-sized "pill" that disperses millions of such nanosprings through the entire body and radios through the skin if cancer cells are detected. A prototype of this "pill" is promised by the end of the year. The nanosprings are also compatible with traditional photolithographic techniques of chip-making, and by being grown on-chip they can be used as highly sensitive transducers. The nanobelts were grown by a solid-vapour process: high-purity zinc oxide powder was evaporated in vacuum at 1,350 degrees C, at which point an argon gas flow was introduced into the furnace. The ring shapes deposited on a cooler 400- to 500-degree C alumina substrate, with the thinnest of them forming spiraled nanosprings. To get the greatest piezoelectric effect, the ZnO nanostructures need a large area of polarized (0001) zinc- and oxygen-terminated surfaces. However, the (0001) surface has a high surface energy, so its growth is energetically unfavourable. With careful control of the experimental conditions, the Georgia team produced structures in which more than 90% of the nanobelts had flat top surfaces consisting of polar ±(0001) facets.
<urn:uuid:e9867a49-ed70-4b4e-83f2-685875069e2d>
CC-MAIN-2013-20
http://www.azonano.com/article.aspx?ArticleID=98
2013-05-25T06:26:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936729
439
Urbanization can be the engine for progress of cities and provide great opportunities for individuals and families to prosper. However, urbanization can bring many difficult health challenges. Rapid, unplanned urbanization, when not managed properly, can give rise to urban poverty, growth of slums, health problems and widening inequities. Urban health equity is a pressing concern, especially for the poor. The urban poor are often socially excluded, and face the following challenges: lack of social support systems; unsafe living and working conditions; discrimination; isolation; powerlessness; and inability to pay for goods and services. The poorer a person is, the worse is his or her health. The urban poor suffer from unfavorable living conditions. Oftentimes, they cannot afford the prohibitively high costs of health services. They face illness and premature death from preventable causes, due to lack of safe drinking water, proper sanitation, health facilities, safety, security and health information. The urban poor are constantly exposed to the social and economic determinants of poor health status and other outcomes. It is often asked: “Why do we keep treating people for illness, only to send them back to the conditions that created the illness in the first place?” Addressing social determinants will help provide the answer to this question.
<urn:uuid:a47335df-2a48-4d37-836c-a9c2802de538>
CC-MAIN-2013-20
http://www.wpro.who.int/philippines/areas/urban_health/continuation_urban_health/en/index.html
2013-05-24T22:30:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954572
261
The Middle Creek National Battlefield (January 10, 1862) is the site of the largest and most significant Civil War battle in Eastern Kentucky. Union forces were led by Col. James A. Garfield and Confederate troops by Brig. Gen. Humphrey Marshall. Stroll along two distinct walking paths that are enhanced with informative interpretive panels. These superb panels explain the strategic offensive and defensive measures that were implemented by the Confederate and Union troops. Fully interpretive. Two walking trails: Confederate and Union loop trails. Free (donations accepted). Intersection of Ky 114 & Ky 404.
<urn:uuid:313ff75c-7e59-4f57-b77a-e92746af5c63>
CC-MAIN-2013-20
http://www.kentuckytourism.com/things_to_do/middle-creek-national-battlefield/1384/
2013-05-24T01:30:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.882881
196
The Mozambique Ridge (MOZR) and the Agulhas Plateau (AP) are prominent bathymetrically elevated rises off south-eastern Africa, connected by a rise of lesser bathymetric expression. Intuitively, this observation would imply that the plateaus and rises underwent a related crustal formation. Deep crustal ocean-bottom seismometer data and a multichannel seismic reflection profile from the southern MOZR show evidence for its predominantly oceanic crustal origin, with excessive volcanic eruption and magmatic accretion phases. The lower two-thirds of the crustal column exhibit P-wave velocities of more than 7.0 km/s, increasing to 7.5-7.6 km/s at the crustal base. These velocities suggest that the lower crust was accreted from large volumes of mantle-derived material to form an over-thickened equivalent of an oceanic layer 3. When comparing the velocity-depth model and the observations of the seismic reflection data with those of the AP, a resemblance can be established, which suggests that a greater Southeast African Large Igneous Province (LIP) must have formed between 140 and 95 Ma in phases of highly varying magmatic and volcanic activity. The timing, size and formation history of the Southeast African LIP are almost analogous to those of the Kerguelen-Heard Plateau, which provokes speculation about related processes of periodic magma generation at that time. Helmholtz Research Programs > PACES I (2009-2013) > TOPIC 3: Lessons from the Past > WP 3.2: Tectonic, Climate and Biosphere Development from Greenhouse to Icehouse
<urn:uuid:c8576fea-0708-4010-b0f4-759df41c5da4>
CC-MAIN-2013-20
http://epic.awi.de/20924/
2013-05-18T18:06:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.904375
347
This ICT tip could be applied to the following subjects: Quick overview: Two Canadian websites that aim to educate adults about everyday financial concerns. Both sites contain loads of reputable information that can be used within a learning situation or courses dealing with finances. Available in both English and French. How the Money Belt website can be used in the classroom: The Money Belt website aims to teach financial life skills in easy “down-to-earth” language. The website is primarily intended for young Canadian adults but is also useful for adult learners of all ages. On this site your students can test what they know (and don’t know) about managing their money. Some topics include avoiding high levels of debt, avoiding fraud, the choices that exist when choosing credit cards, loans and bank accounts, and general financial knowledge. The Money Belt website is maintained by the Financial Consumer Agency of Canada (FCAC), a federal government agency that aims to protect and educate Canadian consumers of financial services. How the CBA website can be used in the classroom: The CBA (Canadian Bankers Association) website contains a large amount of information regarding Canadian banking and financial services. In particular, the consumer information section contains information on banking basics, financial rights and responsibilities, saving, investing, and identity theft. There is also a very useful glossary (the link can be found on the top right hand corner of the CBA webpage) containing definitions in relation to finances.
(Source of websites: Nancy Sher at the CDC Vimont Adult Centre, SWLSB) - The Money Belt in English at (www.tinyurl.com/moneybelt-english) - The Money Belt in French at (www.tinyurl.com/moneybelt-french) - The CBA website in English at (www.cba.ca) - The CBA website in French at (www.tinyurl.com/CBA-french) Note: Sections of the Money Belt description have been quoted from the about section of the Money Belt website.
<urn:uuid:07aa1f05-3e73-4b60-a7ac-1ec21d8d1fe0>
CC-MAIN-2013-20
http://avispector.wordpress.com/tag/financial/
2013-05-18T05:55:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926087
426
Constitution of Malta

The current Constitution of Malta was adopted as a legal order on September 21, 1964, and is the self-declared supreme law of the land. Therefore, any law or action in violation of the Constitution is null and void. Being a rigid constitution, it has a three-tier entrenchment basis, so that amendments can take place only through special procedures.

Constitutional Development since Independence

The Constitution has been amended twenty-four times, most recently in 2007 with the entrenchment of the office of the Ombudsman. The constitution is typically called the Constitution of Malta and replaced the 1961 Constitution, dating from October 24, 1961. George Borg Olivier was its main instigator and negotiator. Under its 1964 constitution, Malta became a parliamentary democracy within the British Commonwealth. Queen Elizabeth II was sovereign of Malta, and a governor general exercised executive authority on her behalf, while the actual direction and control of the government and the nation's affairs were in the hands of the cabinet under the leadership of a Maltese prime minister. On December 13, 1974, the constitution was revised, and Malta became a republic within the Commonwealth, with executive authority vested in a Maltese president. The president is appointed by parliament. In turn, the president appoints as prime minister the leader of the party that wins a majority of parliamentary seats in a general election for the unicameral House of Representatives. The president also nominally appoints, upon recommendation of the prime minister, the individual ministers to head each of the government departments. The cabinet is selected from among the members of the House of Representatives. The Constitution provides for general elections to be held at least every five years. Candidates are elected by the Single Transferable Vote system.
The entire territory is divided into thirteen electoral districts, each returning five MPs, for a total of 65. Since 1987, if a party obtains an absolute majority of votes without achieving a parliamentary majority, a mechanism in the Constitution provides that party with additional seats so that it achieves a parliamentary majority (Act IV of 1987). To date this mechanism, intended to counteract gerrymandering, has come into effect twice: for the Sixth and the Eighth Parliaments. A similar mechanism was introduced in 1996 so that, when only two parties achieve parliamentary representation, additional seats are given to the party obtaining a relative majority of votes but not a parliamentary majority. This mechanism was first applied in the 2008 general election.

The Nature of the Constitution

The Independence Constitution of Malta of 1964 established Malta as a liberal parliamentary democracy. It safeguarded the fundamental human rights of citizens, and forced a separation between the executive, judicial and legislative powers, with regular elections based on universal suffrage. This constitution was developed through constitutional history and its evolution. The constitutions of Malta fell under three main categories. These were:
- those over which the British possessed total power;
- the intermediate genre of constitutions (1921-1947), where Malta had self-government (the 1961 constitution was very similar to these constitutions);
- the Independence Constitution of 1964.

On July 27, 1960, the Secretary of State for the Colonies declared to the British House of Commons the wish of Her Majesty’s Government to reinstate representative government in Malta, and declared that it was now time to work out a new constitution under which elections could be held as soon as it was established.
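The seat arithmetic described earlier in this article (thirteen five-seat districts elected by Single Transferable Vote) can be sketched in code. This is an illustrative sketch, not part of the Constitution's text: STV counts such as Malta's conventionally use the Droop quota as the election threshold, and the vote total below is an invented example figure.

```python
# Hedged illustration of the STV seat arithmetic described in the article.
# The district magnitude (5) and district count (13) come from the text;
# the 24,000-vote total is invented for the example.

def droop_quota(valid_votes: int, seats: int) -> int:
    """Droop quota: the smallest whole number of votes such that at most
    `seats` candidates can each reach it."""
    return valid_votes // (seats + 1) + 1

DISTRICTS = 13
SEATS_PER_DISTRICT = 5

# Total House size, matching the article's figure of 65 MPs:
total_seats = DISTRICTS * SEATS_PER_DISTRICT

# In a hypothetical five-seat district with 24,000 valid votes, a candidate
# is elected on reaching the quota:
quota = droop_quota(24_000, SEATS_PER_DISTRICT)

print(total_seats)  # 65
print(quota)        # 4001
```

Surplus transfers and eliminations, the remainder of an STV count, are omitted here; only the quota and the seat totals are shown.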
The Secretary, Iain Macleod, also notified the House of the appointment of a Constitutional Commission, under the chairmanship of Sir Hilary Blood, to devise thorough constitutional schemes after consultation with representatives of the Maltese people and local interests. The Commissioners presented their report on December 5, 1960. The report was published on March 8, 1961. That same day, the Secretary of State declared to the House of Commons that Her Majesty’s Government had taken a decision: the Commissioners’ constitutional recommendations were to be granted as the basis for the subsequent Malta constitution. The 1961 Constitution was also known as the Blood Constitution. It was enclosed in the Malta Constitution Order in Council 1961 and was completed on 24 October of that same year. The statement that the Order makes provision for a new constitution under which Malta is given self-government is found on the final page of the Order in Council. The 1961 Constitution provided the backbone for the Independence Constitution. A date was provided to guarantee this legal continuity. An indispensable characteristic of this constitution was the substitution of the diarchic system, which was no longer practicable, by a system of only one Government, the Government of Malta, with full legislative and executive powers. At that time Malta was still a colony, and responsibility for defence and external affairs was reserved to Her Majesty’s Government. There was a clear indication that the road towards independence continued and was now at a highly developed stage. It is imperative to recognise that the 1961 Constitution established most of the features of the 1964 Constitution. The British recognised Malta as a State. Another important characteristic of this constitution was the innovative introduction of a chapter covering the safeguarding of Fundamental Rights and Freedoms of the Individual.
This is fairly significant because Fundamental Human Rights are a protection for the individual against the State. In the 1961 Constitution, Fundamental Human Rights and Freedoms are found in Chapter IV. The protection of freedom of movement was introduced only in the 1964 Constitution. The declaration of rights of the inhabitants of the islands of Malta and Gozo, dated June 15, 1802, gives a collective declaration of rights. The 1961 Constitution gave birth to what was recognised as a Parliament in the 1964 Independence Constitution. The Cabinet had the general direction and management of the Government of Malta. It consisted of the Prime Minister and not more than seven other ministers, who were members of the Legislative Assembly and collectively responsible to it. The Prime Minister alone might summon the Cabinet, and it was this office which presided over it. This was one of the first attempts to restate some of the more important British Constitutional Conventions in the constitution. In the exercise of his powers, the Governor was to act on the advice of the Cabinet, except where he was directed to act in his discretion or on the recommendation or advice of a person other than the Cabinet. Three elections followed the promulgation of the 1961 Constitution. This constitution included the presence of a Cabinet for the first time in Malta. The legislature was unicameral. The Legislative Assembly’s normal life span was four years. It consisted of fifty members, elected by universal suffrage from ten electoral divisions on the system of proportional representation by the single transferable vote. The 1961 Constitution constructed a firm foundation for the future achievement of Independence. When in 1964 Malta did in fact become independent, because the Government chose to avoid breaking all ties with the United Kingdom, there was legal continuity of the legislation, as a result of which Parliament remained functional.
To a certain extent the same situation existed as regards legislation by the British Parliament for Malta. The Malta Independence Order itself became the subject of an entrenchment, since here it is declared that it became an extension of the 1961 Constitution, even in the sense of an amendment. Even though Malta acquired independence, there were ongoing elements of continuity. One of them is the monarchy, which continued both before and after 1964. The Malta Independence Order 1964 was subject to the Malta Independence Act of that same year, and it is a document that holds the chief regulations that govern the constitution of a state. This document is supreme over each and every other document, and all legislation is subject to it. Throughout Malta's constitutional history, the nation acquired its own constitution, and to a certain extent the Independence Constitution is made up of certain principles that arose for the first time in previous constitutions. It can be said that the Independence Constitution has evolved from the constitution which preceded it. But one must not ignore the fact that changes have taken place in this process of evolution. The statement that the 1964 constitution is in fact a replica of the 1961 constitution with sovereignty added might be criticised by saying that some factors differ between the two constitutions. The 1964 constitution is not merely what can be defined as an improvement. It is more like another stepping-stone in constitutional history, being the final step in a long series of constitutions. In fact, even though it may seem that some provisions were altered from the 1961 constitution to the 1964 constitution, some of those provisions remained unchanged until the amendments of the 1964 constitution were made.
The Malta Independence Order, 1964, as amended by Acts:
- XLI of 1965, XXXVII of 1966
- IX of 1967
- XXVI of 1970
- XLVII of 1972
- LVII, LVIII of 1974
- XXXVIII of 1976
- X of 1977
- XXIX of 1979
- IV of 1987
- XXIII of 1989
- Proclamations Nos. II and VI of 1990
- Acts XIX of 1991
- IX of 1994
- Proclamations IV of 1995 and III of 1996
- Acts: XI of 1996, XVI of 1997
- Acts: III of 2000 and XIII of 2001

Past constitutions

Malta has had numerous past constitutions.
- The 1813 Constitution
- The 1835 Constitution
- The 1849 Constitution
- The 1887 Constitution
- The 1903 Constitution
- The 1921 Constitution
- The 1936 Constitution
- The 1939 Constitution
- The 1947 Constitution
- The 1959 Constitution
- The 1961 Constitution

Further reading
- Frendo, Henry, The Origins of Maltese Statehood - A Case Study of Decolonization in the Mediterranean - Malta: PEG Publications, ISBN 99932-0-015-8.

See also
- Supplement of the Malta Government Gazette, No. 11688 of September 18, 1964
- Supplement of the Government Gazette 31 October 1961 No. 11,346
- Section 2: 1961 Constitution – “The State of Malta”
- Articles 5-17: 1961 Constitution
- Article 45: 1961 Constitution
- Article 50: Malta Independence Order
- J.J. Cremona - THE MALTESE CONSTITUTION AND CONSTITUTIONAL HISTORY SINCE 1813 (Publishers Enterprises Group Ltd (PEG) – 1994) ISBN 99909-0-086-8
- Royal Instructions of July 16, 1813, (C.O. 159/4) as supplemented by despatch at pp 124-125, infra
- Cremona, J.J, The Malta Constitution of 1835 and its Historical Background (Malta, 1959), (Appendix)
- Ordinances and other Official Acts published by the Government of Malta and its Dependencies, Malta, 1853, Vol X, pp 70-77
- Law, Letters Patent and other Papers in relation to the Constitution of the Council of Government of Malta, Malta, G.P.O., 1889, pp 113-132
- Malta Government Gazette No. 4603, June 22, 1903, pp 614-621
- Malta Government Gazette No. 6389, May 4, 1921, pp 326-366
- Malta Government Gazette No. 8206, September 2, 1936, pp 804-812
- Malta Government Gazette No. 8534, February 25, 1939, pp 244-257
- The Malta Constitution 1947, Malta, G.P.O. 1947
- The Malta (Constitution) Order in Council 1959, Malta, Department of Information, 1959
- The Malta Constitution 1961, Malta, Department of Information, 1961
- Il-Kostituzzjoni tar-Repubblika Maltija Ministeru tal-Ġustizzja u l-Intern. (Maltese)
- The Constitution of the Republic of Malta Ministry for Justice and Home Affairs. (English)
<urn:uuid:777ae305-51f7-4113-a954-dd8ef6bc4704>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Constitution_of_Malta
2013-05-27T02:58:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946242
2,459
The 2010 midterm elections and the resulting battles over redistricting will shape the future of both political parties. A case challenging the constitutionality of the Voting Rights Act (VRA) is being offered to the Supreme Court, highlighting these political stakes. And President-elect Barack Obama’s Justice Department is about to take center stage in this fight. The Constitution requires legislative districts be redrawn after each decennial national census. The 2010 midterm elections will determine the makeup of all 50 state legislatures. With few exceptions, these legislatures will then draw new lines of all congressional districts, as well as many state legislative districts, for the 2012 election and beyond. A major factor in this redistricting is the Voting Rights Act. A number of VRA’s provisions apply nationwide, originally designed to protect the right of African Americans to vote. But another provision of the law has been challenged in a case that has now been offered to the U.S. Supreme Court. Section Five of the VRA requires certain jurisdictions with a history of egregious racism to go through a special process before they can make any changes affecting voting. Under Section Five, these jurisdictions must get pre-clearance from the U.S. Justice Department before they can redistrict or make any other changes to their election laws or procedures, or get a three-judge panel of the federal district court in D.C. to sign off on the jurisdiction’s plan. The law requires the Justice Department or the federal court to determine whether the changes would have either the purpose or the effect of abridging the right to vote. But as racial barriers to voting have fallen in America, and the African American vote has become increasingly reliable as a voting bloc for Democrats, VRA lawsuits have increasingly been used to advantage the Democrats in partisan wrangling against Republicans. The VRA was originally passed in 1965. 
This was the height of the civil rights movement, when powerful forces were arrayed to prevent black voters from casting ballots, and where a number of states saw widespread voter intimidation. Today America is a different place. For years now, African Americans have served in Congress, served as federal judges, on the Supreme Court, and in the Cabinet. And now an African American has been elected president of the United States. America is a far different place today than it was in 1965. Yet, Mr. Obama has nominated Eric Holder as attorney general of the United States. Mr. Holder has liberal views of what constitutes “civil rights,” including election law issues and redistricting issues subject to VRA. How will Mr. Holder deal with these VRA issues? Does America still need draconian laws that were passed to combat endemic racism and overt hostility? Will Mr. Holder use the power of Section Five to strike down redistricting plans, hoping to force districting lines that will be more favorable to Democrats? Additionally, Mr. Obama has promised to nominate activist judges to our nation’s courts, and he has a Democrat-controlled Senate that will likely confirm anyone he nominates. Mr. Holder will be one of his top advisors on whom to nominate. How much damage could result from a liberal attorney general pushing partisan advantage in redistricting processes, with any legal challenges to those actions being decided by equally liberal, Obama-appointed judges? These are critical questions. Republicans are on the ropes. Democrats control majorities in both houses of Congress, and now the White House. And a liberal Democrat will be deciding VRA issues at the Obama Justice Department. There will be a strong push by some to use these levers of power to get redistricting plans that will only increase Democrat control for the next decade. This country has clearly progressed beyond yesterday’s racism. 
And the law should not give one party an overwhelming advantage to invoke the mistakes of the past for partisan advantage. Ken Blackwell is the senior fellow for Family Empowerment at the Family Research Council, and a former chairman of the U.S. Census Monitoring Board.
<urn:uuid:430b88cd-a839-41c8-9d41-bcf0823c1459>
CC-MAIN-2013-20
http://www.washingtontimes.com/news/2008/dec/31/obama-justice/
2013-05-18T18:24:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950823
956
An ecosystem (or ecological system) is a collection of communities of organisms and the environment in which they live. Ecosystems can vary greatly in size. Some examples of small ecosystems are tidal pools, a home garden, or the stomach of an individual cow. Larger ecosystems might encompass lakes, agricultural fields, or stands of forests. Landscape-scale ecosystems encompass larger regions, and may include different terrestrial (land) and aquatic (water) communities. Ultimately, all of Earth's life and its physical environment could be considered to represent an entire ecosystem, known as the biosphere. Ecologists often invent boundaries for ecosystems, depending on the particular needs of their work. (Ecologists are scientists who study the relationships of organisms with their living and nonliving environments.) For example, depending on the specific interests of an ecologist, an ecosystem might be defined as the shoreline vegetation around a lake, or the entire lake itself, or the lake plus all the land around it. Because all of these units consist of organisms and their environment, they can properly be considered to be ecosystems. The raw materials of an ecosystem All ecosystems have a few basic characteristics in common. They use energy (usually provided by sunlight) to build complex chemical compounds out of simple materials. At the level of plants, for example, carbon dioxide and water vapor are combined with the energy of sunlight to produce complex carbohydrates, such as starches (this process is known as photosynthesis). As plants (producers) are consumed by other organisms, more complex substances are manufactured in their bodies, and energy is passed upward through the food web. The flow of energy in an ecosystem occurs in only one direction: it is always consumed by higher levels of organisms in a food web. As a result, each level of a food web contains less energy than the levels below it. 
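The one-way, diminishing flow of energy through a food web described above can be sketched numerically. This is an illustrative sketch, not from the source text: the 10% transfer efficiency is a common textbook rule of thumb, and the starting energy figure is invented.

```python
# Hedged sketch of energy attenuation across trophic levels.
# Assumption (not from the article): roughly 10% of energy passes from
# one level to the next, the classic "ten percent rule" of ecology texts.

def energy_by_level(producer_energy: float, levels: int, efficiency: float = 0.10):
    """Return the energy available at each trophic level, producers first."""
    energies = [producer_energy]
    for _ in range(levels - 1):
        energies.append(energies[-1] * efficiency)
    return energies

# 10,000 kcal fixed by producers, passed up through four trophic levels:
for level, kcal in enumerate(energy_by_level(10_000, 4), start=1):
    print(f"trophic level {level}: {kcal:g} kcal")
# Each level holds less energy than the one below it, and the flow is
# one-directional; unlike nutrients, the energy is never recycled.
```

The monotone decrease in the printed figures mirrors the text's point that energy flows in only one direction, while nutrients (next paragraph) cycle back through decomposers.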
By contrast, nutrients can flow in any direction in an ecosystem. When plants and animals die, the compounds of which they are formed are decomposed by microorganisms (decomposers), returned to the environment, and are recycled for use again by other organisms. One of the greatest challenges facing humans and their civilization is to develop an understanding of the fundamentals of ecosystem organization, how they function and how they are structured. This knowledge is absolutely necessary if humans are to design systems that allow for the continued use of the products and services of ecosystems. Humans are sustained by ecosystems, and no alternative to this relationship exists.
<urn:uuid:bccc2f01-9287-4f5f-976b-4d01ee3ebcd2>
CC-MAIN-2013-20
http://www.scienceclarified.com/Di-El/Ecosystem.html
2013-05-24T02:04:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947493
497
Heel Bursitis is another type of heel pain. The sufferer of this kind of heel pain experiences pain at the back of the heel when moving the ankle joint. In heel bursitis there is swelling on the sides of the Achilles tendon, and the sufferer may experience pain in the heel when the foot hits the ground. Heel bruises, also referred to as heel bumps, are usually caused by improper shoes and the constant rubbing of the shoes against the heel.

What is bursitis?

Bursitis is the inflammation of a bursa. Normally, the bursa provides a slippery surface that has almost no friction. A problem arises when a bursa becomes inflamed: the bursa loses its gliding capabilities, and becomes more and more irritated when it is moved. When bursitis occurs, the normally slippery bursa becomes swollen and inflamed. The added bulk of the swollen bursa causes more friction within an already confined space, and the smooth gliding bursa becomes gritty and rough. Movement of an inflamed bursa is painful and irritating. "-Itis" usually refers to inflammation of a part of the body, so bursitis refers to the constant irritation of the natural cushion that supports the heel of the foot (the bursa). Bursitis is often associated with Plantar Fasciitis, which affects the arch and heel of the foot.

What causes bursitis?

- Bursitis and Plantar Fasciitis can occur when a person increases their level of physical activity, or when the heel's fat pad becomes thinner, providing less protection to the foot.
- Ill-fitting shoes.
- Biomechanical problems (e.g. mal-alignment of the foot, including over-pronation).
- Rheumatoid arthritis.

Bursitis usually results from a repetitive movement or from prolonged and excessive pressure.
Patients who rest on their elbows for long periods, or those who bend their elbows frequently and repetitively (for example, a custodian using a vacuum for hours at a time), can develop elbow bursitis, also called olecranon bursitis. Similarly, in other parts of the body, repetitive use or frequent pressure can irritate a bursa and cause inflammation.

Another cause of bursitis is a traumatic injury. Following trauma, such as a car accident or fall, a patient may develop bursitis. Usually a contusion causes swelling within the bursa. The bursa, which had functioned normally up until that point, now begins to develop inflammation, and bursitis results. Once the bursa is inflamed, normal movements and activities can become painful.

Systemic inflammatory conditions, such as rheumatoid arthritis, may also lead to bursitis. These types of conditions can make patients susceptible to developing bursitis.

Common treatments include:
- Cold presses or ice packs.
- Anti-inflammatory tablets.
- Cushioning products.
- Massaging the foot / muscle stimulation.
- Stretching exercises.
- Insoles or orthotics.
<urn:uuid:76a4c6bc-e63f-4d77-8492-c24be187795c>
CC-MAIN-2013-20
http://sugarlandheelpain.com/?page_id=78
2013-05-19T02:16:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.902648
667
Ringing In One Ear – Unilateral Pulsatile Tinnitus
Article by Gene Frank

If you are suffering from ringing in one ear, then you most likely have a condition referred to as Unilateral Tinnitus. This form of Tinnitus affects only one ear, whereas Bilateral Tinnitus causes ringing in both ears. To get even more specific, if the ringing in your ear is accompanied by a pulsating noise or thumping sound that is in rhythm with your heartbeat, then there's a high possibility that you have Unilateral Pulsatile Tinnitus.

Like Unilateral Tinnitus, Unilateral Pulsatile Tinnitus affects only one ear, and a person would typically hear pulsating noises (whooshing, popping, buzzing, ringing, etc.) that are normally in unison with one's heartbeat. In some cases, even the sound of your voice or breathing pattern can leave a resonance in one's ear.

There are two different categories of "ringing in one ear" symptoms – objective Tinnitus and subjective Tinnitus. When a doctor is able to hear the pulsating sounds in the patient's ear by using a physician's tool, this is considered objective tinnitus, whereas subjective tinnitus noises can only be heard by the patient themselves.

Although Pulsating Tinnitus is somewhat uncommon, it can be associated with serious ailments such as:

Middle Ear Effusion – This ailment typically affects middle-aged people. When the Eustachian tubes become inflamed, excessive fluid builds up in the middle ear, which in turn causes infection.

Glomus tumor – Arteries as well as tissues become entangled in the middle ear or surrounding area, causing tinnitus.

Atherosclerotic Carotid Artery Disease – Excessive cholesterol levels can accumulate in the arteries and blood vessels, causing them to narrow. This consequently produces an uneven blood flow to the head and neck region, creating the pulsating sounds in the inner ear.
This condition largely affects elderly patients with a history of diabetes and high blood pressure.

Meniere's Disease – Triggered by irregular inner ear fluid pressure and related to a vast majority of inner ear problems, this is one of the most common causes of tinnitus.

Unfortunately, there are no miracle drugs or over-the-counter medications to cure this mind-boggling ringing in one ear. Nonetheless, there are many touted products on the market that would persuade you to believe otherwise. Highly promoted herbal products such as Ginkgo or Cohosh, and homeopathy programs, look very promising for curing ringing in one ear. But realistically, most of these treatments merely give you minor relief and in most instances produce negative side effects that outweigh the advantages.

The great news is that permanent relief from ringing in one ear is possible, not just short-term relief. Pulsating tinnitus CAN be cured successfully without depending on surgery, psychiatric treatment, or prescription drugs. By concentrating on the underlying causes of the noise, you can once and for all cure tinnitus. Previously, tinnitus sufferers were hoping to find the most effective methods to cope with ringing in one ear; today it's a matter of how to permanently stop it. In most cases, serious Pulsatile Tinnitus symptoms will not disappear by themselves. If ringing in one ear happens on a continual basis, this is not something to take lightly. Pulsatile Tinnitus treatment is closer than you think as long as you take action and stick to the appropriate guidelines. If you've had enough of dealing with continuous ringing in one ear regardless of how you have attempted to remedy it, here is my #1 recommendation for curing Pulsatile Tinnitus. Stop the agony caused by this condition and discover a real solution by Clicking Right Here.
<urn:uuid:4844992a-2843-4cf3-a53f-3e99ba30d7e3>
CC-MAIN-2013-20
http://haveaheartfoundation.net/ear-hearing/ringing-in-one-ear-unilateral-pulsatile-tinnitus/
2013-05-22T07:14:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934305
795
Friday, August 17, 2012 - 19:30 in Astronomy & Space
The sun is the roundest natural object ever precisely measured, a discovery that may solve past climatic mysteries, new observations show.
<urn:uuid:5ab2b289-b815-4c33-beb4-6e9f94e7e52b>
CC-MAIN-2013-20
http://esciencenews.com/sources/national.geographic/2012/08/17/sun.is.roundest.natural.object.known
2013-05-19T02:54:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.819439
167
In his right hand, Don Sada clutches a simple kitchen sieve; in his left, he holds a Tupperware container. As I look on, the 58-year-old ecologist from Reno's Desert Research Institute plunges into a thick stand of watercress that obscures the headwaters of Big Springs Creek, an exuberant stream that issues from multiple springs at the southern end of Snake Valley, along the flanks of the Snake Range in east-central Nevada. "Let's see what's here," he says, stooping to part the watercress and drag his sieve through the stream's pebble-strewn bottom. "I've got springsnails," he shouts. Peering into the container, I see about a dozen dots that appear as animate as baby peppercorns. The dots are snails, so small that the whorls that mark their shells are all but invisible. These diminutive gill-breathers belong to a species -- Pyrgulopsis anguina -- found near the source of just three springs, all of them in Snake Valley. The snails are part of an ancient assemblage of aquatic organisms found here and in other Great Basin valleys. Fifteen thousand years ago, agile minnows now confined to spring-fed pools and streams swam through the shallows of great lakes and rivers. Springsnails, and the type of habitat they occupy, may have existed here for some 5 to 6 million years, ever since the end of the Miocene, the geological epoch during which Nevada's corrugated basin-and-range began to form. But now many of these little spring dwellers are in trouble, due largely to us, the brash newcomers who, barely two centuries ago, began pushing into the territory west of the 100th meridian. Between the late 1800s and the start of the 21st century, Sada says, habitat destruction and the introduction of non-native species caused the extinction of a dozen genetically unique Great Basin fishes along with at least three mollusks. Still other extinctions have been but narrowly averted. 
Of some 4,000 springs Sada and his colleagues have examined, barely 60 can be considered remotely pristine. The rest have been subjected to unremitting abuse, notably by cattle and wild horses, which have trampled riparian margins, and by ranchers and farmers, who've canalized spring brooks and diverted their water. "This spring looks pretty healthy," Sada says of Big Springs, "but if you look, you can see it's been disturbed. All those grasses over there are non-native, as is this clover. And over there, it looks like it's been dug out." Not far from Big Springs is Needle Point Spring, which used to spill into a trough and pond used by cattle and wild horses. Its flow faltered in 2001, shortly after nearby wells started withdrawing groundwater for irrigation. Now, Snake Valley's springs face a new threat: the Southern Nevada Water Authority's controversial plan to pump groundwater from Snake and other remote valleys and ship it south, to the Las Vegas metropolitan area. A decade or so from now, a 285-mile-long pipeline could carry more than 100,000 acre-feet of water south each year -- more than enough to flood a city the size of Las Vegas to the depth of one foot. (See sidebar: Vegas forges ahead with pipeline plan). In another era, a project this stunning in scale might have been hailed as smart, imaginative, even visionary. But that was before the environmental consequences of extracting large amounts of water from arid Western lands became so apparent. Across the region nowadays, rivers are in trouble, as are many aquifers. In extreme cases, water tables have dropped by several hundred feet, causing streams to dwindle, spring flow to wane, trees and shrubs to wither. Many rural and urban areas now suffer from land subsidence. As groundwater is removed, surrounding sediments can compact and slump, undermining buildings and highways. 
Parts of the Las Vegas Valley have sunk as much as six feet, and areas in Arizona and California have dropped anywhere from 15 to 30 feet. This dismal track record casts a long shadow over the planned water diversion and lends credibility to those who question its eventual costs. The concern is further magnified by the size of the planned withdrawal. According to hydrological models submitted to the Nevada state engineer by project opponents, the utility's pumps will likely cause a severe water table drop across a very large area, extending well beyond the targeted valleys. And yet the full picture of the impacts may not emerge for decades, even centuries. Where springs are concerned, what worries Sada most is the potential for harmful synergy -- the cumulative impact of all the strain being placed on the small, vulnerable ecosystems he has spent the past quarter-century studying.
<urn:uuid:b8541e87-fac3-4a25-a051-ba352ac72659>
CC-MAIN-2013-20
http://www.hcn.org/issues/41.17/silenced-springs/
2013-05-21T11:02:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962497
1,005
Often they are treated separately in different segments of a course. In fact, the principles governing the organization of three-dimensional structure are common to all of them, so we will consider them together.

We will begin with the monomer units. We will describe the features of representative monomers, and see how the monomers join to form a polymer. We will then look at the monomers in each major type of macromolecule to see what specific structural contributions come from each. The three-dimensional structure of each type of macromolecule will then be considered at several levels of organization. We will investigate macromolecular interactions and how structural complementarity plays a role in them. The stories for proteins, monosaccharides and nucleotides are just variations on the same theme, so you'll need to learn only one pattern, then apply that pattern to the other systems. We will conclude this section of the course with a consideration of denaturation and renaturation -- the forces involved in loss of a macromolecule's native structure (that is, its normal 3-dimensional structure), and how that structure, once lost, can be regained.

The main point of the first segment of this material is this: THE MONOMER UNITS OF BIOLOGICAL MACROMOLECULES HAVE HEADS AND TAILS. WHEN THEY POLYMERIZE IN A HEAD-TO-TAIL FASHION, THE RESULTING POLYMERS ALSO HAVE HEADS AND TAILS. These macromolecules are polar [polar: having different ends] because they are formed by head-to-tail condensation of polar monomers.

Let's look at the three major classes of macromolecules to see how this works, and let's begin with carbohydrates. Glucose is a typical monosaccharide. It has two important types of functional group: a carbonyl group (an aldehyde in glucose; some other sugars have a ketone group instead) and hydroxyl groups on the other carbons. This is what you need to know about glucose, not its detailed structure. Glucose exists mostly in ring structures.
(The 5-OH adds across the carbonyl C=O double bond.) This is a so-called internal hemiacetal. The ring can close in either of two ways, giving rise to anomeric forms: -OH down (the alpha-form) and -OH up (the beta-form). The anomeric carbon (the carbon to which this -OH is attached) differs significantly from the other carbons. (Note: it's easy to pick out because it is the only carbon with TWO oxygens -- ring and hydroxyl -- attached.) Free anomeric carbons have the chemical reactivity of carbonyl carbons because they spend part of their time in the open chain form. They can reduce alkaline solutions of cupric salts. Sugars with free anomeric carbons are therefore called reducing sugars. The rest of the carbohydrate consists of ordinary carbons and ordinary -OH groups. The point is, a monosaccharide can therefore be thought of as having polarity, with one end consisting of the anomeric carbon, and the other end consisting of the rest of the molecule.

If two anomeric hydroxyl groups react (head-to-head condensation) the product has no reducing end (no free anomeric carbon). This is the case with sucrose. If the anomeric hydroxyl reacts with a non-anomeric hydroxyl of another sugar, the product has ends with different properties. Since most monosaccharides have more than one hydroxyl, branches are possible, and are common. Branches result in a more compact molecule. If the branch ends are the reactive sites, more branches provide more reactive sites per molecule.

Let's now turn to nucleotides and nucleic acids. There are four dominant bases: the purines adenine and guanine, and the pyrimidines cytosine and uracil (in RNA) or thymine (in DNA). Be aware that uracil and thymine are very similar; they differ only by a methyl group. You need to know which are purines and which are pyrimidines, and whether it is the purines or the pyrimidines that have one ring. The reasons for knowing these points relate to the way purines and pyrimidines interact in nucleic acids, which we'll cover shortly.
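The purine/pyrimidine bookkeeping above is easy to capture in code. A minimal Python sketch (an illustration added to these notes, not part of them):

```python
# Base classification the notes ask you to know:
# purines (A, G) have two rings; pyrimidines (C, T, U) have one.
BASE_CLASS = {
    "A": ("purine", 2), "G": ("purine", 2),
    "C": ("pyrimidine", 1), "T": ("pyrimidine", 1), "U": ("pyrimidine", 1),
}

def classify_base(base):
    """Return (kind, number_of_rings) for a one-letter base code."""
    return BASE_CLASS[base.upper()]

print(classify_base("a"))  # ('purine', 2)
print(classify_base("U"))  # ('pyrimidine', 1)
```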
A 3'->5' phosphodiester bond is thereby formed. The product has ends with different properties.

Let's look at the conventions for writing sequences of nucleotides in nucleic acids. Bases are abbreviated by their initials: A, C, G and U or T. U is normally found only in RNA, and T is normally found only in DNA. So the presence of U vs. T distinguishes between RNA and DNA in a written sequence. Sequences are written with the 5' end to the left and the 3' end to the right unless specifically designated otherwise. Phosphate groups are usually not shown unless the writer wants to draw attention to them. The following representations are all equivalent:

    uracil   adenine  cytosine  guanine
      |         |         |        |
    P-ribose-P-ribose-P-ribose-P-ribose-OH
    5'    3' 5'    3' 5'    3' 5'    3'

    pUpApCpG

    UACG

    3' GCAU 5'

(Note that in the last line the sequence is written in reverse order, but the ends are appropriately designated.)

Branches are possible in RNA but not in DNA. RNA has a 2'-OH, at which branching could occur, while DNA does not. Branching is very unusual; it is known to occur only during RNA modification [the "lariat"], but not in any finished RNA species.

The naturally occurring amino acids are optically active, as they have four different groups attached to one carbon (glycine is an exception, having two hydrogens), and have the L-configuration. The R-groups of the amino acids provide a basis for classifying amino acids. There are many ways of classifying amino acids, but one very useful way is on the basis of how well or poorly the R-group interacts with water.

The product has ends with different properties. Conventions for writing sequences of amino acids: abbreviations for the amino acids are usually used; most of the three-letter abbreviations are self-evident, such as gly for glycine, asp for aspartate, etc. There is also a one-letter abbreviation system; it is becoming more common.
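The one-letter system can be summarized with the standard three-letter-to-one-letter table. A Python sketch added for illustration (the table itself is the standard code, not something specific to these notes):

```python
# Standard three-letter -> one-letter amino acid codes.
THREE_TO_ONE = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}

def to_one_letter(residues):
    """Convert a list of three-letter codes (N-terminal first, by
    convention) into the one-letter sequence string."""
    return "".join(THREE_TO_ONE[r] for r in residues)

print(to_one_letter(["Gly", "Ala", "Gln"]))  # GAQ
```

Running the converter on a sequence written N-terminal first gives the one-letter string in the same orientation.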
Many of the one-letter abbreviations are straightforward; others require a little imagination to justify, and still others are rather difficult to justify. Question: what do you suppose "Q" represents? You should be aware this system is becoming more and more commonly used, and you should have the mindset of picking it up as you are exposed to it, rather than resisting. Sequences are written with the N-terminal to the left and the C-terminal to the right. Although R-groups of some amino acids contain amino and carboxyl groups, branched polypeptides or proteins do not occur.

The sequence of monomer units in a macromolecule is called the PRIMARY STRUCTURE of that macromolecule. Each specific macromolecule has a unique primary structure. This concludes our consideration of the relationship between the structures of biological polymers and their monomer subunits. Biosynthesis of these macromolecules will be covered in subsequent lectures.

Let's now begin to investigate the three-dimensional shapes of these macromolecules in solution and the forces responsible for these shapes. It turns out that THE REGULAR REPEAT OF MONOMER UNITS HAVING THE SAME SIZE AND THE SAME BOND ANGLES LEADS TO HELICAL (SPIRAL) POLYMERS. IF THESE HELICES CAN BE STABILIZED BY SUITABLE INTRA- OR INTERMOLECULAR INTERACTIONS, THEY WILL PERSIST IN SOLUTION, AND WILL BE AVAILABLE AS ELEMENTS OF MORE COMPLICATED MACROMOLECULAR STRUCTURES.

Just what is a helix? A helical structure consists of repeating units that lie on the wall of a cylinder such that the structure is superimposable upon itself if moved along the cylinder axis. A helix looks like a spiral or a screw. A zig-zag is a degenerate helix. Helices can be right-handed or left-handed; right-handed helices or screws advance (move away) if turned clockwise. Helical organization is an example of secondary structure.
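The cylinder-wall definition of a helix can be made concrete numerically. The sketch below is illustrative only; the ~100-degree twist and 1.5 Å rise per repeating unit are the alpha-helix's approximate values, used here just as defaults. It generates points on a cylinder and verifies the screw symmetry: rotating by one twist and advancing one rise superimposes the structure on itself.

```python
import math

def helix_points(n_units, radius=1.0, twist_deg=100.0, rise=1.5):
    """Points on the wall of a cylinder: each repeating unit is the
    previous one rotated by twist_deg about the cylinder axis and
    translated by rise along it."""
    pts = []
    for n in range(n_units):
        a = math.radians(twist_deg * n)
        pts.append((radius * math.cos(a), radius * math.sin(a), rise * n))
    return pts

# Screw symmetry check: rotate each point by the twist angle and advance
# it by one rise; it should land exactly on the next point.
pts = helix_points(10)
angle = math.radians(100.0)
for (x0, y0, z0), (x1, y1, z1) in zip(pts, pts[1:]):
    xr = x0 * math.cos(angle) - y0 * math.sin(angle)
    yr = x0 * math.sin(angle) + y0 * math.cos(angle)
    assert abs(xr - x1) < 1e-9 and abs(yr - y1) < 1e-9
    assert abs(z0 + 1.5 - z1) < 1e-9
print("screw symmetry holds")
```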
These helical conformations of macromolecules persist in solution only if they are stabilized. What might carry out this stabilization?

Alpha (1 -> 4) sequences of glucose favor helical structures. Starch (amylose) exemplifies this structure. The starch helix is not very stable in the absence of other interactions (iodine, which forms a purple complex with starch, stabilizes the starch helix), and it commonly adopts a random coil conformation in solution. In contrast, beta (1 -> 4) sequences favor linear structures. Cellulose exemplifies this structure. Cellulose is a degenerate helix consisting of glucose units in alternating orientation stabilized by intrachain hydrogen bonds. Cellulose chains lying side by side can form sheets stabilized by interchain hydrogen bonds.

The purine and pyrimidine bases of the nucleic acids are aromatic rings. These rings tend to stack like pancakes, but slightly offset so as to follow the helix. The stacks of bases are in turn stabilized by hydrophobic interactions and by van der Waals forces between the pi-clouds of electrons above and below the aromatic rings. In these helices the bases are oriented inward, toward the helix axis, and the sugar phosphates are oriented outward, away from the helix axis.

Two lengths of nucleic acid chain can form a double helix stabilized by hydrogen bonding between the bases and by base stacking. Each base pair consists of one purine and one pyrimidine; base pairs of this size fit perfectly into a double helix. This is the so-called Watson-Crick base pairing pattern. Double helices rich in GC pairs are more stable than those rich in AT (or AU) pairs because GC pairs have more hydrogen bonds (three, versus two for AT or AU).

Now, specific AT (or AU) and GC base pairing can occur only if the lengths of nucleic acid in the double helix consist of complementary sequences of bases. A must always be opposite T (or U); G must always be opposite C. Here's a sample of two complementary sequences. Most DNA and some sequences of RNA have this complementarity, and form the double helix. It is important to note, though, that the complementary sequences forming a double helix have opposite polarity.
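The GC-stability point lends itself to a quick calculation. The sketch below computes GC content and adds a rough duplex melting-temperature estimate by the Wallace rule, Tm = 2(A+T) + 4(G+C), which applies only to short oligonucleotides; the rule is not from these notes and is included purely to illustrate that GC pairs stabilize the helix.

```python
def gc_fraction(seq):
    """Fraction of G+C bases; GC-rich duplexes are more stable because
    each G:C pair contributes three hydrogen bonds vs. two for A:T."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature (deg C) for a short DNA oligo by the
    Wallace rule: Tm = 2*(A+T) + 4*(G+C)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(gc_fraction("GGCC"), wallace_tm("GGCC"))  # 1.0 16
print(gc_fraction("AATT"), wallace_tm("AATT"))  # 0.0 8
```

Note that the all-GC oligo has the higher predicted melting temperature, in line with the notes' point about hydrogen-bond count.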
The two chains run in opposite directions:

    5' ...ATCCGAGTG... 3'
    3' ...TAGGCTCAC... 5'

This is described as an antiparallel arrangement. This arrangement allows the two chains to fit together better than if they ran in the same direction (parallel arrangement).

Consequences of complementarity. In any double helical structure the amount of A equals the amount of T (or U), and the amount of G equals the amount of C -- count the A's, T's, G's and C's in this or any arbitrary paired sequence to prove this to yourself. Because DNA is usually double stranded, while RNA is not, in DNA A=T and G=C, while in RNA A does not equal U and G does not equal C.

Three major types of double helix occur in nucleic acids. These three structures are strikingly and obviously different in appearance. You could see the difference if it were out of focus, and you could feel the differences in the dark. This is critically important, because SO CAN AN ENZYME! Such as the enzymes that control the expression of genetic information.

DNA usually exists in the form of a B-helix: a right-handed helix with about 10 base pairs per turn and a wide major groove. Double-stranded RNA and DNA-RNA hybrids (also DNA in low humidity) exist in the form of an A-helix: also right-handed, but shorter and wider, with about 11 base pairs per turn and the bases tilted relative to the helix axis. RNA is incompatible with a B-helix because the 2'-OH of RNA would be sterically hindered. (There is no 2'-OH in DNA.) This is a stabilizing factor you should know. DNA segments consisting of alternating pairs of purine and pyrimidine (PuPy)n can form a Z-helix: a left-handed helix with a zigzag backbone. The link between the deoxyribose and the purine has a different conformation in Z-DNA as compared to A-DNA or B-DNA. Z-DNA is stabilized if it contains modified (methylated) cytosine residues. These occur naturally.

The detailed shape of the helix determines the interactions in which it can engage. The geometry of the grooves is important in allowing or preventing access to the bases.
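Complementarity plus opposite polarity means the partner of a strand is its reverse complement. A short Python check, using the sample duplex above (added for illustration; not part of the original notes):

```python
DNA_PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the strand that pairs with seq in an antiparallel
    double helix, written 5'->3' as usual."""
    return "".join(DNA_PAIR[b] for b in reversed(seq.upper()))

seq = "ATCCGAGTG"
partner = reverse_complement(seq)
print(partner)  # CACTCGGAT  (i.e. TAGGCTCAC read 3'->5')

# Consequence of complementarity: in the duplex, A = T and G = C.
duplex = seq + partner
assert duplex.count("A") == duplex.count("T")
assert duplex.count("G") == duplex.count("C")
```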
The surface topography of the helix forms attachment sites for various enzymes sensitive to the differences among the helix types. We'll see some detailed examples of this later.

The DNA triplex (triple helix): start by imagining a B-DNA helix. It is possible under certain circumstances to add a third strand, fitting it into the major groove. A triplex can form ONLY if one strand of the original B-helix is all purines (A and G) [why you need to know purines from pyrimidines] and the corresponding region of the other strand is all pyrimidines. Regions of DNA with these characteristics are found in control regions for genes, and triplex formation PREVENTS EXPRESSION OF THE GENE. The triplex is stabilized by H-bonds in the unusual Hoogsteen base-pairing pattern shown in the slide (along with standard Watson-Crick base pairing). The existence of this structure was known for 20 years, but no one knew what to make of it. Now, recognizing that it occurs naturally in gene control regions, it is getting a great deal of attention in the research literature. Currently artificial oligonucleotide drugs are being synthesized that form triplexes with specific natural DNA sequences. Other drugs are being developed that stabilize naturally occurring or artificial triplexes. These are showing promise as antitumor and antibacterial agents, as well as potential agents to modify enzyme activity by controlling enzyme synthesis. It's too new to be in even the most modern text, but you will be seeing more and more of this in the near future. Be aware of this structure, know where it is found in the gene (at control regions) and its effect on gene expression, and that it is the subject of promising clinical investigations.

As a result of having double bond character, the peptide bond is planar and rigid, with no free rotation about the C-N bond. These characteristics restrict the three-dimensional shapes of proteins because they must be accommodated by any stable structure.
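Returning to the triplex for a moment: the requirement that one strand of the duplex be all purines is simple to screen for. A sketch (the minimum run length is an arbitrary threshold chosen for the demo, not a biological constant):

```python
PURINES = set("AG")  # two-ring bases; pyrimidines C and T have one ring

def purine_runs(strand, min_len=10):
    """Find stretches of the given strand that are all purines.
    Opposite such a stretch the complementary strand is all
    pyrimidines -- the condition the notes give for a third strand
    to bind in the major groove and form a triplex."""
    runs, start = [], None
    for i, b in enumerate(strand.upper()):
        if b in PURINES:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(strand) - start >= min_len:
        runs.append((start, len(strand)))
    return runs

print(purine_runs("CCTAGGGAGAGAGAGATTC", min_len=8))  # [(3, 16)]
```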
The second major property of the peptide bond is that the atoms of the peptide bond can form hydrogen bonds.

Now let's look at some of the structures that accommodate the restrictions imposed by the peptide bond. The first is the alpha-helix, a major structural component of proteins. Occurrence of the alpha-helix: alpha-keratin has high tensile strength, as first observed by Rapunzel. It is found in hair, feathers, horn; the physical strength and elasticity of hair make it useful in ballistas, onagers, etc.

The beta-pleated sheet is a second major structural component of proteins. The beta-pleated sheet resembles cellulose in that both consist of extended chains -- degenerate helices -- lying side by side and hydrogen bonded to one another. The polypeptide chains of a beta-pleated sheet can be arranged in two ways: parallel (running in the same direction) or antiparallel (running in opposite directions). An edge-on view shows the pleats. Sheets can stack one upon the other, with interdigitating R-groups of the amino acids. Occurrence of the beta-pleated sheet: silk fibroin is a classic example.

Collagen has an unusual structure. It consists of three polypeptide chains in a triple helix. The stability of the collagen triple helix is due to its unusual amino acid composition and sequence. One third of the amino acid residues is glycine, and the glycyl residues are evenly spaced: (Gly X Y)n, where X and Y are other amino acids, is the amino acid sequence of collagen. This places a glycyl residue at each position where the chain is in the interior of the triple helix. There would be no room for a bulky R-group in this position (glycine's R-group is H). The high glycine content (with its small R-group) would otherwise permit too much conformational freedom and favor a random coil. Proline and hydroxyproline together comprise about one third of the total amino acid residues, and Gly Pro Hypro is a common sequence.
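The (Gly X Y)n rule is easy to test mechanically. A sketch (the sequences below are made-up illustrations, not real collagen fragments):

```python
def looks_like_collagen(seq):
    """Check the collagen repeat described in the notes: glycine at
    every third position -- (Gly X Y)n -- so the residue facing the
    crowded interior of the triple helix is always the one with the
    smallest R-group (just H)."""
    return all(seq[i] == "G" for i in range(0, len(seq), 3))

print(looks_like_collagen("GPPGPAGPK"))  # True: G at positions 0, 3, 6
print(looks_like_collagen("GPPAPAGPK"))  # False: position 3 is A
```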
The relative inflexibility of the prolyl and hydroxyprolyl residues stiffens the chains. The high (proline & hydroxyproline) content prevents formation of an alpha-helix. Collagen occurs in tough, inelastic tissues, like tendon. The collagen helix is already fully extended. Unlike the alpha-helix, it cannot stretch; tendon ought not to stretch under heavy load. Collagen is the single most abundant protein in the body; fortunately collagen defects are rare.

The next level of macromolecular organization is tertiary structure: the three-dimensional arrangement of helical and nonhelical regions of macromolecules.

Let's look first at the tertiary structure of DNA. Superhelicity introduces strain into the molecule. (Think of holding a coil spring by the two ends and twisting it to unwind it; it takes effort to introduce this strain.) The strain of superhelicity can be relieved by forming a supercoil. The identical phenomenon occurs in retractable telephone headset cords when they get twisted. The twisted circular DNA is said to be supercoiled. The supercoil is more compact. It is poised to be unwound, a necessary step in DNA and RNA synthesis.

RNA -- most RNA is single stranded, but contains regions of self-complementarity. This is exemplified by yeast tRNA. There are four regions in which the strand is complementary to another sequence within itself. These regions are antiparallel, fulfilling the conditions for stable double helix formation. X-ray crystallography shows that the three-dimensional structure of tRNA contains the expected double helical regions. Large RNA molecules have extensive regions of self-complementarity, and are presumed to form complex three-dimensional structures spontaneously.

In proteins, hydrophobic R-groups, as in leucine and phenylalanine, normally orient inwardly, away from water or polar solutes. Polar or ionized R-groups, as in glutamine or arginine, orient outwardly to contact the aqueous environment.
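The tRNA-style self-complementarity described above -- complementary, antiparallel regions within one strand -- can be found with a naive search. A sketch (brute-force O(n^2) over fixed-length windows; real RNA secondary-structure prediction uses dynamic programming):

```python
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def find_stems(seq, stem_len=4):
    """Find pairs of non-overlapping windows where one window equals
    the reverse complement of the other, i.e. the two regions are
    complementary AND antiparallel -- the notes' conditions for a
    stable double helix within a single strand."""
    seq = seq.upper()
    rc = lambda s: "".join(RNA_PAIR[b] for b in reversed(s))
    stems = []
    for i in range(len(seq) - stem_len + 1):
        for j in range(i + stem_len, len(seq) - stem_len + 1):
            if seq[j:j + stem_len] == rc(seq[i:i + stem_len]):
                stems.append((i, j))
    return stems

# GGAC ... GUCC can fold back on itself to form a 4-bp stem (hairpin).
print(find_stems("GGACUUUUGUCC"))  # [(0, 8)]
```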
Some amino acids, such as glycine, can be accommodated by aqueous or nonaqueous environments. The rules of solubility and the tendency for secondary structure formation determine how the chain spontaneously folds into its final structure.

The -SH groups of two cysteine side chains can be oxidized to form a disulfide bridge:

R-CH2-SH + R'-CH2-SH + O2 = R-CH2-S-S-CH2-R' + H2O2

(Under reducing conditions a disulfide bridge can be cleaved to regenerate the -SH groups.) The disulfide bridge is a covalent bond. It strongly links regions of the polypeptide chain that could be distant in the primary sequence. It forms after tertiary folding has occurred, so it stabilizes, but does not determine, tertiary structure.

Globular proteins are typically organized into one or more compact patterns called domains. This concept of domains is important. In general it refers to a region of a protein. But it turns out that in looking at protein after protein, certain structural themes repeat themselves -- often, but not always, in proteins that have similar biological functions. This phenomenon of repeating structures is consistent with the notion that the proteins are genetically related, and that they arose from one another or from a common ancestor. In looking at the amino acid sequences, sometimes there are obvious homologies, and you could predict that the 3-dimensional structures would be similar. But sometimes virtually identical 3-dimensional structures have no sequence similarities at all!

The four-helix bundle domain is a common pattern in globular proteins. Helices lying side by side can interact favorably if the properties of the contact points are complementary. Hydrophobic amino acids (like leucine) at the contact points and oppositely charged amino acids along the edges will favor interaction. If the helix axes are inclined slightly (18 degrees), the R-groups will interdigitate perfectly along 6 turns of the helix. Sets of four helices yield stable structures with symmetrical, equivalent interactions.
Interestingly, four-helix bundles diverge at one end, providing a cavity in which ions may bind.

All-beta structures comprise domains in many globular proteins. Beta-pleated sheets fold back on themselves to form barrel-like structures. Part of the immunoglobulin molecule exemplifies this. The interiors of beta-barrels serve in some proteins as binding sites for hydrophobic molecules such as retinol, a vitamin A derivative. What keeps these proteins from forming infinitely large beta-sheets is not clear.

Now let's look at combined alpha/beta structures. Beta/alpha8 domains are found in a variety of proteins which have no obvious functional relationship. They consist of a beta-barrel surrounded by a wheel of alpha-helices. Beta-sheet surrounded by alpha-helices also occurs. This is a variation on the theme of beta-structure inside and alpha-helix outside.

Now that we are familiar with the structures of single chain macromolecules, we are in a position to look at some of the interactions of macromolecules with other macromolecules and with smaller molecules. Quaternary structure is the association of two or more polypeptide chains (subunits) into a single molecule through noncovalent interactions. If covalent links exist (such as disulfide bridges) then the structure is not considered quaternary. In proteins with quaternary structure the deaggregated subunits alone are generally biologically inactive. Quaternary structure in proteins is the most intricate degree of organization considered to be a single molecule. Higher levels of organization are multimolecular complexes.

Many different kinds of compound are found in conjugated proteins; examples include metal ions, lipids, and carbohydrates. Nomenclature: the word "conjugated" is from the Latin, cum = with and jugum = yoke. The protein and nonprotein moieties are yoked with one another (like oxen) to work together. Sometimes other organic or inorganic compounds share metals with proteins.

Lipoproteins resemble micelles in some respects. The structure of lipoproteins typically includes the following features.
Their outer surface is coated with polar lipids, with protein intermingled. Their interior is a region of randomly oriented neutral lipid. Lipoproteins are usually much larger than two molecules across. The role of the polar lipid and protein on the surface is to solubilize the neutral lipid interior.

Protein interacts with the lipid of lipoproteins through amphipathic helices. Alpha-helical regions of apolipoproteins have polar amino acids on one surface, and nonpolar ones on the opposite surface. The helix lies on the surface of the structure, with the polar groups oriented outward toward the water, and the nonpolar groups buried in the lipid. (Recall the four-helix bundle domains of proteins, in which contacts between helices involved hydrophobic residues at the contact points.) Consequence of charged surface: (not unlike many proteins) a tendency to stick to things.

Membrane proteins are lipoprotein-like in that they have nonpolar amino acids in strategic locations to permit interaction with the membrane lipid. Proteins of the membrane surface may be structured like the apoproteins of lipoproteins, with amphipathic helices. Some membrane proteins traverse the membrane. The region of the protein that is completely immersed in membrane should consist entirely of hydrophobic amino acids. A common structural motif to accomplish this is an alpha-helix consisting of at least 22 hydrophobic amino acyl groups. This makes an alpha-helix long enough to span a membrane. In arrays of membrane-spanning helices, helices in the interior of the array could be shorter.

The problem of proline in transmembrane "helices": mostly you find hydrophobic residues in transmembrane helices, and their length is about right, around 24 residues. You also find PROLINE. This is very common. Does it violate the prohibition against proline in the helix? Probably not.
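The "22 hydrophobic residues" criterion can be turned into a simple scan. An alpha-helix rises about 1.5 Angstroms per residue, so 22 residues span roughly 33 Angstroms, about the thickness of a bilayer's hydrocarbon core. A sliding-window hydropathy average (Kyte-Doolittle scale) is the classic way to flag candidate membrane-spanning segments; the sequence and threshold below are illustrative:

```python
# Sketch: sliding-window hydropathy scan to flag candidate
# transmembrane segments. Kyte-Doolittle hydropathy values; the
# test sequence is hypothetical.

KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
      "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
      "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
      "K": -3.9, "R": -4.5}

def hydropathy_hits(seq, window=19, threshold=1.6):
    """Start indices of windows whose mean hydropathy exceeds threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        mean = sum(KD[aa] for aa in seq[i:i + window]) / window
        if mean > threshold:
            hits.append(i)
    return hits

# hydrophilic flanks around a 19-residue hydrophobic stretch (hypothetical)
seq = "MKT" + "LVILAVLAIFLLVALGILA" + "KRQED"
print(hydropathy_hits(seq))
```

Window 19 and a threshold near 1.6 follow the classic Kyte-Doolittle heuristic for membrane-spanning regions; real predictors add topology modeling on top of this.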
The current opinion of qualified protein chemists is that when we eventually determine the exact structures of these membrane proteins, we will find the expected kink in the helix at each proline residue, and that it will prove to be important in the biological function of the protein.

Glycoproteins have two major types of functions. The first is recognition: carbohydrate prosthetic groups serve as antigenic sites (e.g., blood group substances are carbohydrate prosthetic groups), intracellular sorting signals (mannose 6-phosphate bound to a newly synthesized protein sends it to the lysosomes), etc. Or they may be structural components of the organism, e.g., the proteoglycans of cartilage. The central core is a polysaccharide called hyaluronic acid. Many glycoprotein branches are attached to the hyaluronic acid noncovalently. Each branch is a glycoprotein (core protein) with many carbohydrate chains -- chondroitin sulfate (alternating galactosamine and glucuronic acid) and keratan sulfate (alternating glucosamine and galactose) -- attached covalently (xylose beta-> O-ser). The attachment of the core protein to the hyaluronic acid is mediated by a protein called link protein.

We've now seen interactions between protein and metal ions, lipid and carbohydrate. Let's now turn to interactions between proteins and nucleic acids. In zinc finger domains, Zn complexed to His and/or Cys maintains the structure of the domain. Other amino acyl residues in the loop are involved in binding to specific nucleotides of the nucleic acid or helping to maintain the folded structure of the domain. Zinc fingers occur in tandem arrays. They are joined to nearby zinc fingers by short linking regions of peptide. They are spaced to fit into the major groove of DNA, with the bases of the alpha-helices down in the grooves, and the beta-loops touching the double helix.

Some regulatory regions of DNA are symmetric. A protein designed to bind at such a site might also be symmetric; this could be accomplished if the protein were a head-to-head dimer.
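A tandem array of zinc fingers can be searched for with a simplified Cys2His2 consensus, C-x(2,4)-C-x(12)-H-x(3,5)-H. Both the pattern and the sequence below are illustrative; curated motif databases use more elaborate patterns:

```python
import re

# Sketch: scan a protein sequence for a simplified Cys2His2 zinc
# finger consensus. Pattern and sequence are illustrative only.

FINGER = re.compile(r"C.{2,4}C.{12}H.{3,5}H")

def find_fingers(seq):
    return [(m.start(), m.group()) for m in FINGER.finditer(seq)]

# hypothetical sequence with one embedded finger-like motif:
# two Cys separated by 3 residues, 12 spacer residues, two His
seq = "AAA" + "C" + "AAK" + "C" + "A" * 12 + "H" + "AAA" + "H" + "GGG"
print(find_fingers(seq))  # one hit, starting at index 3
```

`finditer` returns non-overlapping matches, so tandem fingers separated by short linkers would each be reported in turn.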
A class of DNA binding proteins appears to form such head-to-head dimers through alpha-helices having regularly spaced leucyl residues along one edge. Interaction between the protein monomer units is thought to be through leucyl residues along the edges of the amphipathic helices, sort of like the 4-helix bundle, but with just two helices. Originally it was thought that the leucyl residues interdigitated (hence the name, "leucine zipper"), but it is now believed that they face each other (reality in the form of x-ray crystallography strikes again). In any case, the symmetric dimer binds to the symmetric region of the DNA through special binding domains. A dimeric protein can have a helix-turn-helix motif in each subunit, and if the monomer units are identical it can thereby recognize and bind to symmetric DNA structures.

Denaturation is the loss of a protein's or DNA's three dimensional structure. The "normal" three dimensional structure is called the native state. Denaturation is physiological -- structures ought not to be too stable. Loss of native structure must involve disruption of the factors responsible for its stabilization. These factors include hydrogen bonds, hydrophobic interactions, electrostatic interactions, and (in some proteins) disulfide bridges. Note that no break in the polymer chain (disruption of primary structure) is involved in denaturation. Denaturing agents disrupt stabilizing factors.

Heat -- thermal agitation (vibration, etc.) -- will denature proteins or nucleic acids. Heat denaturation of DNA is called melting because the transition from native to denatured state occurs over a narrow temperature range. As the purine and pyrimidine bases become unstacked during denaturation they absorb light of 260 nanometers wavelength more strongly. The abnormally low absorption in the stacked state is called the hypochromic effect.

Urea and guanidinium chloride work by competition. These compounds contain functional groups that can accept or donate hydrogen atoms in hydrogen bonding.
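The unstacking described above is what makes melting easy to follow in a spectrophotometer. Assuming a simple two-state model (all stacked vs. all unstacked), the fraction denatured can be read off the 260 nm absorbance; the values below are illustrative:

```python
# Sketch: fraction of denatured (unstacked) DNA from absorbance at
# 260 nm, assuming a two-state model. Absorbance values are illustrative.

def fraction_denatured(a_obs, a_native, a_denatured):
    """Linear interpolation between the fully stacked (hypochromic)
    and fully unstacked absorbances."""
    return (a_obs - a_native) / (a_denatured - a_native)

a_native = 1.00      # fully base-paired: abnormally LOW absorbance
a_denatured = 1.37   # fully melted: hyperchromic shift of ~37% (typical order)
f = fraction_denatured(1.185, a_native, a_denatured)
print(round(f, 3))  # → 0.5
```

The temperature at which this fraction reaches 0.5 is the melting temperature, Tm.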
(Figure: the structures of urea and guanidinium chloride.)

At high concentration (8 to 10 M for urea, and 6 to 8 M for guanidinium chloride) these compounds compete favorably for the hydrogen bonds of the native structure. Hydrogen bonds of the alpha-helix will be replaced by hydrogen bonds to urea, for example, and the helix will unwind.

Organic solvents, such as acetone or ethanol, dissolve nonpolar groups. Detergents also dissolve nonpolar groups.

Cold increases the solubility of nonpolar groups in water. When a hydrophobic group contacts water, the water dipoles must solvate it by forming an orderly array around it. The array is called an "iceberg," because it is an ordered water structure, but not true ice. The ordering of water in an "iceberg" decreases the randomness (entropy) of the system, and is energetically unfavorable. If hydrophobic groups cluster together, contact with water is minimized, and less water must become ordered. This is the driving force behind hydrophobic interaction. (The clustering together of hydrophobic groups is also entropically unfavorable, but not as much so as "iceberg" formation.) At low temperatures, solvation of hydrophobic groups by water dipoles is more favorable. The water molecules have less thermal energy. They can "sit still" to form a solvation "iceberg" more easily. The significance of cold denaturation is that cold is not a stabilizing factor for all proteins. Cold denaturation is important in proteins that are highly dependent on hydrophobic interaction to maintain their native structure.

pH extremes -- most macromolecules are electrically charged. Ionizable groups of the macromolecule contribute to its net charge (sum of positive and negative charges). Bound ions also contribute to its net charge. Electric charges of the same sign repel one another. If the net charge of a macromolecule is zero or near zero, electrostatic repulsion will be minimized. The substance will be minimally soluble, because intermolecular repulsion will be minimal.
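The dependence of net charge on pH, and the zero-crossing that defines the isoelectric point discussed below, can be sketched with the Henderson-Hasselbalch equation. The group counts describe a hypothetical protein, and the model pKa values are approximate (real proteins shift them):

```python
# Sketch: net charge of a hypothetical protein vs. pH, using
# Henderson-Hasselbalch with approximate model pKa values. The group
# counts are invented for illustration.

ACIDIC = {"C-term": (3.1, 1), "Asp": (3.9, 4), "Glu": (4.1, 6),
          "Cys": (8.3, 2), "Tyr": (10.9, 2)}          # (pKa, count)
BASIC  = {"N-term": (8.0, 1), "His": (6.0, 3),
          "Lys": (10.5, 5), "Arg": (12.5, 4)}

def net_charge(ph):
    neg = sum(n / (1 + 10 ** (pka - ph)) for pka, n in ACIDIC.values())
    pos = sum(n / (1 + 10 ** (ph - pka)) for pka, n in BASIC.values())
    return pos - neg

def isoelectric_point(lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(mid) > 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)

print(net_charge(2.0) > 0, net_charge(12.0) < 0)  # → True True
print(isoelectric_point())
```

Bisection works here because net charge decreases monotonically with pH.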
A compact three-dimensional structure will be favored, because repulsion between parts of the same molecule will be minimal. The pH at which the net charge of a molecule is zero is called the isoelectric pH (or isoelectric point). pH extremes result in large net charges on most macromolecules. Most macromolecules contain many weakly acidic groups. At low pH all the acidic groups will be in the associated state (with a zero or positive charge), so the net charge on the protein will be positive. At high pH all the acidic groups will be dissociated (with a zero or negative charge), so the net charge on the protein will be negative. Intramolecular electrostatic repulsion from a large net charge will favor an extended conformation rather than a compact one.

Agents with free sulfhydryl groups will reduce (and thereby cleave) disulfide bridges:

2 HO-CH2-CH2-SH + R1-S-S-R2 = R1-SH + HS-R2 + HO-CH2-CH2-S-S-CH2-CH2-OH

Some proteins are stabilized by numerous disulfide bridges; cleaving them renders these proteins more susceptible to denaturation by other forces.

Renaturation is the regeneration of the native structure of a protein or nucleic acid. Renaturation requires removal of the denaturing conditions and restoration of conditions favorable to the native structure. This includes removal of the denaturing agent and return to an appropriate temperature, pH, and ionic environment. Usually considerable skill and art are required to accomplish renaturation. The fact that renaturation is feasible demonstrates that the information necessary for forming the correct three-dimensional structure of a protein or nucleic acid is encoded in its primary structure, the sequence of monomer units. But this folding may be slow; what happens in the cell during protein synthesis? Guidance may be needed for it to occur correctly and rapidly.

Molecular chaperones are intracellular proteins which guide the folding of proteins, preventing incorrect molecular interactions. They do NOT appear as components of the final structures.
Chaperones are widespread, and chaperone defects are believed to be the etiology of some diseases. Medical applications of chaperones may be expected in the future.

Last modified 1/5/95
Arius (AD 256-336) was an early Christian theologian who taught Arianism, a theology that rejected the Trinity and taught that Jesus was lesser than God the Father. All of the writings of Arius were systematically destroyed by the Church, but historians have reconstructed his thoughts from the arguments used against them by orthodox trinitarians.

Arius was born in Libya (or Alexandria) in 256. He was tall, lean, learned, morally exemplary, a fine orator, and inclined to be disputatious. He was educated in the theological school of Antioch under the distinguished scholar Lucian. This school was noted for its emphasis upon (1) the historical and inductive method of religious investigation and (2) the unity and transcendence of the Godhead. Combined with these was a tendency to regard Christ as a created being, subordinate to the Father, a view that affected Arius.

Arius became a presbyter (minor priest) in Alexandria. In 318 he disagreed with his bishop on the nature of Christ, and was condemned with his associates at a synod at Alexandria in 320 or 321 and banished. He left the city and found refuge with two powerful churchmen, Eusebius of Caesarea and Eusebius of Nicomedia, who sympathized with his view. The debate spread across the region.

The controversy became so heated that the Emperor Constantine, now military master of the East and the West, called the first General Council of the Church at Nicaea to resolve the issue in May 325. Three parties emerged at the Council: the Arian, led by Eusebius of Nicomedia; the Alexandrian; and the moderate, led by Eusebius of Caesarea. The Council banished Arius to Illyria, condemned Arianism, and affirmed that Christ was "begotten, not made," "of one essence with the Father." The unity of the Church seemed achieved.

However, controversy continued. Arius returned from banishment through the favor of the Empress Constantia and presented a new creed to the Emperor, which seemed like a retraction of his heretical views.
The Emperor commanded that Arius be restored to his position in Alexandria, which Athanasius, who was then bishop of Alexandria, refused to do. Charged with insubordination, Athanasius was banished to Gaul in 335. The opposition to Arianism seemed broken, and the bishops decided to restore Arius to the fellowship of the Church through a formal ceremony. The aged Arius died in Constantinople in 336, before the ceremony took place, perhaps because the emotional stress was too great. His friends thought he had been poisoned, but his enemies regarded his death as the act of a vengeful Providence.

The principal work of Arius is Thalia ("The Banquet"), in which he defends his doctrine in prose and poetry. The document is lost and the knowledge of his writings comes through his critics.

Arianism continued in the Church for many years, particularly in the Christianity of the Germanic tribes; Ulfilas, "the apostle to the Goths," was their Arian missionary.

- Gonzalez, Justo L. A History of Christian Thought, Volume 1: From the Beginnings to the Council of Chalcedon (2nd ed. 1987)
- Williams, Rowan. Arius: Heresy and Tradition (2002)
- New Schaff-Herzog Encyclopedia of Religious Knowledge (1911), Vol. 1: Aachen-Basilians; a major source of older scholarly articles from a mainline Protestant perspective
Physical Activity & Sport in the Lives of Girls

THE PRESIDENT'S COUNCIL ON PHYSICAL FITNESS AND SPORTS (PCPFS) serves as a catalyst to promote, encourage and motivate the development of physical activity, fitness and sport participation for all Americans. This report expresses the PCPFS's mission to inform the general public of the importance of developing and maintaining physical activity and fitness in our daily lives, and to heighten awareness of the links that exist between regular physical activity and good health.

In the past, involvement in sport and physical activity has been primarily associated with males. Over the past two decades, however, girls' and women's involvement in such activity has increased dramatically. This is in large part due to the impact of Title IX, federal legislation passed in 1972 designed to prohibit sex discrimination in educational settings. For example, prior to Title IX, 300,000 young women participated in interscholastic athletics nationwide; today, that figure has leaped to approximately 2.25 million participants.

In the wake of this participation explosion, scholars and educators have begun to explore its impact on girls and women. Physical Activity and Sport in the Lives of Girls: Physical and Mental Health Dimensions from an Interdisciplinary Approach was created in order to highlight relevant research and draw on expert opinion regarding girls' involvement in physical activity and sport. This is the first report that brings together research findings -- and practical suggestions for implementing these findings -- from three interdisciplinary bodies of knowledge: physiological, psychological and sociological.

The report focuses on girls and not boys (other than for comparison where appropriate) for several reasons. First, with respect to sport and physical activity, girls have been neglected by researchers in the biomedical sciences, education, physical education and the social sciences.
Second, though girls and boys share common experiences, girls also exhibit unique physiological, emotional and social outcomes that merit special investigation. Next, scholars need to keep pace with the aforementioned explosion and diversification of girls' involvement with sport and physical activity in the wake of Title IX. And finally, researchers increasingly recognize that the social world of physical activity and sport is not a one-dimensional universe, but a highly complex set of institutions populated by two genders with diverse racial and ethnic backgrounds, cultural values, physical abilities and sexual orientations.

Public apathy about physical education, and the glitzy distractions of commercialized sports in mass media, sometimes hide the basic fact that physical activity is a public health resource for millions of American girls as well as their families and communities. In order to advance knowledge regarding the real and potential contributions of physical activity and sport in the lives of millions of girls, several areas for future research are highlighted by the authors at the end of each section. Finally, a set of policy recommendations is also included in order to encourage responsible action on the part of parents, coaches, educators, sport leaders and elected officials. With such a "teamwork" approach, we can make a difference in the lives of girls.

Reprinted with the permission of the President's Council on Physical Fitness and Sports.
Incorporating geothermal energy involves tapping into the near-constant temperature of the earth below the frost line. Below the earth's crust is a layer of hot molten rock called magma, where heat is constantly produced. The amount of heat within 10,000 meters (about 33,000 feet) of Earth's surface contains 50,000 times more energy than all the oil and natural gas resources in the world. (Fact taken from: http://www.ucsusa.org/)

This type of heating and cooling system lessens our dependence on fossil fuels, heats and cools buildings cost-effectively, and runs clean.

What does all this have to do with timber frame homes? As we build more and more homes, we're starting to witness a greater interest from our clients in the use of geothermal heating. Davis Frame clients come to us with the understanding that tapping geothermal energy is an affordable and sustainable solution to reducing their dependence on fossil fuels, and the global warming and public health risks that result from their use.

- Costs: typically, a geothermal system costs twice the amount of a conventional system. However, one can expect his or her electric bill to decrease by approximately 25 percent.
- Most geo-exchange systems will be able to provide a significant source for the home's hot water needs, but it's still a good idea to have a backup heater (in cooler locations).
- Many closed-loop systems rely on an antifreeze solution to keep the loop water from freezing in cold temperature conditions.
- Open-loop systems require a large supply of clean water in order to be cost effective. Because of this, water supply can be a limiting factor for use outside of coastal areas or sites adjacent to lakes, streams, or rivers. Additionally, an acceptable method of recycling the used water to the environment must be in place. This may be limited not only by environmental factors (such as no place to dump that much water), but also by local and state regulations.
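The two cost figures in the first bullet suggest a back-of-the-envelope payback estimate. Only the "twice the cost" and "25 percent savings" figures come from this post; the dollar amounts below are hypothetical:

```python
# Sketch: rough payback period for a geothermal system, using the
# post's figures (system cost ~2x conventional, electric bill down
# ~25%). All dollar amounts are hypothetical.

def payback_years(conventional_cost, geo_multiplier, annual_electric_bill, savings_rate):
    extra_upfront = conventional_cost * (geo_multiplier - 1)
    annual_savings = annual_electric_bill * savings_rate
    return extra_upfront / annual_savings

years = payback_years(
    conventional_cost=15_000,    # hypothetical conventional HVAC install
    geo_multiplier=2.0,          # "twice the amount of a conventional system"
    annual_electric_bill=3_000,  # hypothetical household usage
    savings_rate=0.25,           # "electric bill to decrease by approximately 25 percent"
)
print(round(years, 1))  # → 20.0
```

Tax credits (see the link at the end of this post) would shorten the payback considerably; this sketch ignores them, along with energy price inflation and maintenance.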
Regardless, a geothermal (geo-exchange) heating and cooling system is a great solution to consider when building a timber frame home. Combined with solar, SIPs, and reclaimed or standing-dead timber, a new timber frame or post and beam home can be an extremely energy-efficient, green way to build!

For more information on geothermal, please visit http://www.ucsusa.org/ or feel free to give us a call at 800.636.0993!

Tax Credit Information: http://www.energystar.gov/index.cfm?c=tax_credits.tx_index#c6