Dataset fields: text (string, 199 to 648k characters); id (string, 47 characters); dump (string, 1 class); url (string, 14 to 419 characters); file_path (string, 139 to 140 characters); language (string, 1 class); language_score (float64, 0.65 to 1); token_count (int64, 50 to 235k); score (float64, 2.52 to 5.34); int_score (int64, 3 to 5).
One person, one vote: this principle is the bedrock of American democracy. The intent is to ensure that each citizen has as much say as any other, regardless of social or economic status. Consistent with that aim, campaign finance laws have sought to limit the exorbitant power that wealthy individuals and corporations can exert on the political process. Campaign finance rules have been seriously eroded of late, however, and they were further diluted by the Supreme Court’s 5-4 decision to strike down aggregate limits on how much money donors can contribute in total to political candidates, political parties, and political action committees. Prior to the court’s April ruling in McCutcheon v. Federal Election Commission, donors were limited to a combined maximum of $48,600 in contributions to federal candidates during a two-year election cycle. The court declared that limit to be a violation of donors’ free-speech rights under the First Amendment.
<urn:uuid:aaee95b0-5e7f-40e5-87ec-b9ae7f3c33fc>
CC-MAIN-2016-26
http://christiancentury.org/article/2014-04/money-talks
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00019-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970024
186
2.546875
3
While performing these exercises, make sure your movements are controlled and slow. Avoid quick, jerking movements. Stretch until a gentle pull is felt in your muscle. Hold each stretch without bouncing or causing pain for 20 to 30 seconds. Do not hold your breath during these exercises.
- Hamstring stretch. While standing, place one foot on a stool or chair, while holding onto a wall or sturdy object (such as a table). Choose a comfortable height that allows you to keep your knee straight. Slowly lean forward, keeping your back straight, and reach one hand down your shin until you feel a stretch in the back of your thigh. Relax, and then repeat with your other leg.
- Quadriceps stretch. Stand facing a wall, placing one hand against the wall for support. Bend one knee, grasping your ankle and pulling your leg behind you. Try to touch your heel to your buttocks. Relax, and then repeat with your other leg.
- Calf stretch against wall. Stand facing the wall with your hands against the wall for support. Put one foot about 12 inches in front of the other. Bend your front knee, and keep your other leg straight. (Keep both heels on the floor.) To prevent injury, do not let your bent knee extend forward past your toes. Slowly lean forward until you feel a mild stretch in the calf of your straight leg. Relax, and then repeat with your other leg.
- Calf stretch on stairs. Stand on the stairs, holding a handrail or placing your hand on the wall for support. Place the ball of one foot on the stair. Lower your heel down toward the step below, until you feel a gentle pull in your calf. Switch legs.
- Knee pull. Lie on your back and flatten the small of your back onto the floor. Bend one knee and pull your bent leg toward your chest, until you feel a pull in your lower back. Try to keep your head on the floor, but do not strain yourself. Gently lower your leg, and then repeat with your other leg.
- Groin stretch. Lie on your back with your knees bent and the soles of your feet together. Slowly lower your knees to the floor until you feel a gentle pull in your groin and inner thighs.
- Overhead arm pull. Lock your fingers together, with your palms facing out (or hold onto a towel so your hands are shoulder width apart). Extend your arms out in front of you with your elbows straight. Lift your arms to shoulder height. Raise your arms overhead until you feel a gentle pull in your chest or shoulders.
- Behind back arm raise. At waist level, put your hands behind your back, locking your fingers together (or hold onto a towel so your hands are shoulder width apart). Straighten your elbows and raise your arms upward until you feel a gentle pull in your chest or shoulders.
- Side bends. Stand straight with your legs about shoulder width apart. Reach over your head with one arm, elbow bent, sliding the opposite arm and hand down your thigh, toward your knee. Hold the stretch until you feel a gentle pull at your side. Repeat with the other side.
- Double shoulder circles. While bending your elbows, put your fingertips on your shoulders. Rotate your shoulders and elbows clockwise, then counterclockwise, as if drawing large circles with both elbows. Repeat in each direction.
- Leg circles. Hold onto a chair or other sturdy object for balance. Lift one leg straight behind you, keeping both knees straight. Rotate your leg clockwise, then counterclockwise, as if drawing small circles with your foot. (You should feel the movement at your hip joint.) Repeat in each direction, with each leg.
<urn:uuid:234b5e1f-6acf-42c4-a6e1-310b2b601136>
CC-MAIN-2016-26
http://www.medicinenet.com/fitness_exercise_for_a_healthy_heart/page4.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00162-ip-10-164-35-72.ec2.internal.warc.gz
en
0.891001
943
2.71875
3
by Savannah Williams
Four years after the creation of the ‘conflict mineral’ section of the Dodd-Frank Act, what has been accomplished? In 2010, Congress passed the Dodd-Frank Act as a response to the financial meltdown of 2008, in an effort to promote financial stability, accountability, and transparency in the United States. Section 1502, however, has seen much debate around its effectiveness in promoting those goals. This section requires U.S. companies to disclose their use of conflict minerals, including tantalum, tin, tungsten, and gold. The concern is that profits from these minerals have been fueling violent conflict in the Democratic Republic of the Congo (DRC) for years and that measures need to be taken to bring an end to the ongoing humanitarian crisis. It wasn’t until 2012 that the SEC issued the final rule on the actual requirements for implementation, and the law did not come into effect until 2013. There was much opposition to the law for two main reasons. The first was the cost U.S. companies would incur for the required investigation and due diligence. According to the Securities and Exchange Commission (SEC), the cost of implementation was estimated to be between $3 billion and $4 billion for the initial startup and between $207 million and $609 million annually thereafter. The second reason was that the SEC had no background or expertise in these matters. Many argued that there was no way to be sure that the bill would in fact help the situation in the DRC, and that it could potentially make things worse.
Was it Effective?
While Section 1502 was intended to combat the human rights abuses in the DRC, it is a long way from a solution. Four years later, while progress arguably has been made, violence and gross violations of human rights persist. Due to the complexity of determining the origins and involvement of conflict resources and the fear of being pinned as a supporter of the violence, many companies and consumers have resorted to a general boycott of such minerals from the DRC region. While rebels in the area are known to use slaves in the mining of these minerals, there is still a large population involved in artisanal mining as a source of income. This de facto embargo has led to a loss of livelihood for thousands of families. Immediate aid is needed for these families, who are no longer able to pay for their children to go to school, are unable to provide food, and are still facing violence from rebel groups. Additionally, violent groups have managed to find funding from other resources such as charcoal and marijuana, and many former miners have been forced to join armed groups because they have no other means of income. Policies that are meant to help should not be created without substantial research and significant input from those who will be affected. In the case of Section 1502, policies were passed with little regard for the lack of infrastructure, and as a result impractical expectations became requirements. Many of the problems that developed might have been avoided if there had been a dialogue that included a wider range of Congolese involvement. The idea that halting revenue from conflict minerals will end the violence is unrealistic. In fact, there have been no lasting, significant decreases in violence that can be attributed to this policy. Arguably, it has not even had much success in reducing the conflict mineral trade.
A coalition of Congolese and Congo experts wrote in an open letter regarding the Dodd-Frank Act that “only a small fraction of the hundreds of mining sites in the eastern DRC have been reached by traceability or certification efforts.” We must separate the two (cutting off funding to armed groups and resolving the conflict itself) in the dialogue and policy creation process. The deep political issues and inequalities need to be addressed rather than simply taking away a portion of funding from violent groups. We must look at the Dodd-Frank provisions as our responsibility to promote transparency and accountability, but not as a solution to end the conflict in the DRC or anywhere else that natural resources are used to fund violent groups. As a global power, the United States has the responsibility to abstain from the illicit and violent conflict mineral trade, regardless of whether or not that will solve the problem in the DRC, and we should hold the government and corporations accountable. However, we must find a way to do so without hurting innocent people. The confusing and complex nature of Section 1502 has arguably caused more damage on the ground than it has helped. We must now look for ways to mend the damage, such as providing jobs in mining communities that assist in the traceability process and spreading awareness of the unintentional but devastating effects of the de facto boycott of minerals from the DRC. Savannah Williams is a master's student and McCormack Scholar in the Department of Conflict Resolution, Human Security and Global Governance at UMass Boston. She has a BA in Psychology and Human Rights from the University of Connecticut.
<urn:uuid:447d2dbf-16ee-456a-8fcb-2dc12650a6c0>
CC-MAIN-2016-26
http://blogs.umb.edu/paxblog/category/conflict-minerals/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968165
976
2.734375
3
Henry Gray (1825–1861). Anatomy of the Human Body. 1918. aspect of the central canal. The fibers run forward through the medulla oblongata, and emerge in the antero-lateral sulcus between the pyramid and the olive. The rootlets of this nerve are collected into two bundles, which perforate the dura mater separately, opposite the hypoglossal canal in the occipital bone, and unite together after their passage through it; in some cases the canal is divided into two by a small bony spicule. The nerve descends almost vertically to a point corresponding with the angle of the mandible. It is at first deeply seated beneath the internal carotid artery and internal jugular vein, and intimately connected with the vagus nerve; it then passes forward between the vein and artery, and lower down in the neck becomes superficial below the Digastricus. The nerve then loops around the occipital artery, and crosses the external carotid and lingual arteries below the tendon of the Digastricus. It passes beneath the tendon of the Digastricus, the Stylohyoideus, and the Mylohyoideus, lying between the last-named muscle and the Hyoglossus, and communicates at the anterior border of the Hyoglossus with the lingual nerve; it is then continued forward in the fibers of the Genioglossus as far as the tip of the tongue, distributing branches to its muscular substance.
<urn:uuid:9c26163e-984f-41b5-a6b6-43bb2c41999d>
CC-MAIN-2016-26
http://www.bartleby.com/107/pages/page915.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00030-ip-10-164-35-72.ec2.internal.warc.gz
en
0.903153
315
3.484375
3
Posted by Donni on Monday, November 15, 2010 at 7:07pm. i have very little information in my math book on the following questions. can you either tell me the answer or a link where i can find my own answer. thanks How do you know if a quadratic equation will have one, two, or no solutions? How do you find a quadratic equation if you are only given the solution? - algebra - Henry, Thursday, November 18, 2010 at 5:34pm 1. b^2 < 4ac: 2 imaginary solutions. b^2 = 4ac: 1 real solution. b^2 > 4ac: 2 real solutions. 2. Let's take a given Eq and solution sets and use the solutions to derive the Eq. Eq: y = 2x^2 + 5x + 2. Solution set: x = -1/2, and x = -2. x = -1/2, x + 1/2 = 0. x = -2, x + 2 = 0. (x + 1/2) (x + 2) = 0, Multiply the 2 binomials: x^2 + 2x + x/2 + 1 = 0, x^2 + 5x/2 + 1 = 0, Multiply both sides by 2 to eliminate the fraction and get: 2x^2 + 5x + 2 = 0. The derived Eq is equivalent to the given Eq.
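Henry's two answers can be sanity-checked with a short script. This is a minimal sketch, assuming the standard form ax^2 + bx + c = 0; the function names are illustrative and not part of the original thread.

```python
def count_real_solutions(a, b, c):
    """Classify ax^2 + bx + c = 0 by its discriminant b^2 - 4ac."""
    d = b * b - 4 * a * c
    if d > 0:
        return 2   # two distinct real solutions (b^2 > 4ac)
    if d == 0:
        return 1   # one repeated real solution (b^2 = 4ac)
    return 0       # no real solutions, two complex ones (b^2 < 4ac)

def quadratic_from_roots(r1, r2, scale=1):
    """Expand scale*(x - r1)*(x - r2) into coefficients (a, b, c)."""
    return (scale, -scale * (r1 + r2), scale * r1 * r2)

# Worked example from the answer: roots -1/2 and -2, scaled by 2
# to clear the fraction, recover 2x^2 + 5x + 2 = 0.
print(quadratic_from_roots(-0.5, -2, scale=2))  # (2, 5.0, 2.0)
print(count_real_solutions(2, 5, 2))            # 2
```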
<urn:uuid:d0ca7032-370c-4f76-8a94-a045742df6f2>
CC-MAIN-2016-26
http://www.jiskha.com/display.cgi?id=1289866020
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00026-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917281
536
2.859375
3
Discover how UV inks differ from other formulations and what it takes to print and cure them successfully. UV technology gives screen printers the capability to print faster, more efficiently, and more accurately on a wide range of materials. But to realize the full benefits of UV, users must understand the nature of UV inks and the equipment required to cure them. Resins give UV ink its major characteristics, such as adhesion and flexibility. Monomers are selected to dissolve the resins and pigments in the formula to a workable viscosity. They’re also selected to complement the resin in achieving the desired performance characteristics of cured or dry ink film. Additives contained in UV inks include pigments (for color), flow agents, catalysts, and others. In UV inks, these catalysts are called photoinitiators. They absorb UV energy at certain wavelengths, creating free radicals that connect with the molecules of the resins and monomers and, in turn, cross-link with each other, forming chains of molecules we recognize as the cured ink film. Chemists call this cross-linking reaction polymerization. UV inks are considered 100% solids because almost everything in them is used up in the polymerization process. One of the major advantages of UV over conventional inks is that no volatile organic compounds (VOCs) are released into the air during the curing process. In addition, UV curing relies on polymerization rather than evaporation, which means UV inks can be cured much more quickly and in less space than solvent-based inks. Finally, the lack of solvent in UV inks allows them to be used with higher mesh counts and support finer detail and higher print resolutions. The function of the curing unit is to deliver the UV energy that sets off the photoinitiators and starts the polymerization process. However, before we explore just how this energy is delivered, it might be beneficial to review the nature of electromagnetic energy. The UV range of the electromagnetic spectrum occurs at approximately 10-400 nm. The photoinitiators used in UV inks typically react to specific wavelengths within the 200- to 400-nm range. However, the wavelengths that drive the curing reaction vary for different ink systems, which is why curing systems support different lamp types that deliver specific frequencies of UV energy.
<urn:uuid:f0ef394a-3fff-4144-a486-8aa71658d356>
CC-MAIN-2016-26
http://www.screenweb.com/content/an-overview-uv-curing?page=0%2C0
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00125-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950072
494
3.15625
3
The overwrought cliche "a picture is worth a thousand words" can be applied literally to artist Anatol Knotek. Knotek is a text collage artist who pieces together intricate portraits from words. From afar, the images look sketched, but upon closer examination you can see the text that makes up these handwritten pictures. "The basic idea is to make portraits of people in their own words which are mixed with mine -- a kind of communication paired with the gesture of writing and painting," he said in an email interview with The Huffington Post. Knotek is an Austrian artist and visual poet. His process is step-by-step, he said. He begins with pencil, sketching the shadows and details with small text and basic words. He then moves on to larger letters and words, paying attention to the overall impression and appearance. "The style varied through the years -- from concrete poetry to handwritten portraits and pictures," he said. This artistry is a poetry of sorts; a menagerie of meaningful words strung together to create a whole. "The written pictures are just a small part of my work which has the intention to blur the borders of poetry/literature and fine arts," Knotek said.
<urn:uuid:7bd20f59-70b4-4d44-a44b-bafac5859579>
CC-MAIN-2016-26
http://www.huffingtonpost.com/2012/03/16/anatol-knotek_n_1300268.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00075-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948869
302
2.71875
3
- Simplifies complex math concepts.
- Explains concepts using real-life situations and familiar objects.
- Clear linkage between text and photos.
- Words You Know section to reinforce text.
- Index
- Full-color photographs
Curriculum Standards:
Grades K-4 Science Standards, Science as Inquiry
- Employ simple equipment to gather data
NCTM Pre-K-Grade 2 National Math Standards, Number and Operations Standard
- Count with understanding and recognize how many in sets of objects
- Use a variety of methods and tools to compute, including objects, mental computation, estimation, paper and pencil, and calculators
- Understand the effects of adding and subtracting whole numbers
Problem Solving Standard
- Build new mathematical knowledge through problem solving
Farmer's Market Rounding, 1st edition, by Julie Dalton. Published by Scholastic Library Publishing.
<urn:uuid:4ef0fdc4-2434-4733-a822-b890ce9fa6e4>
CC-MAIN-2016-26
http://www.chegg.com/textbooks/farmer-s-market-rounding-1st-edition-9780516254241-0516254243
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00013-ip-10-164-35-72.ec2.internal.warc.gz
en
0.870371
222
4.28125
4
New Survey Finds Many Households Can Use Electric Vehicles 4 out of 10 Households Can Switch With Little or No Change to Driving Habits WASHINGTON (Dec. 11, 2013) — Four out of 10 households could use an electric vehicle with little or no change to their driving habits and vehicle needs, according to a national survey released by the Union of Concerned Scientists and Consumers Union. While less than 1 percent of the country are driving electric vehicles (EVs) today, the survey found 42 percent of respondents with cars — equivalent to 45 million households when applied nationally — meet the basic criteria for using plug-in hybrid electric vehicles like the Chevy Volt. Over half of those households are also able to use a battery-electric vehicle (BEV) like the Nissan LEAF. “Consumers who might be shopping for a new vehicle this holiday season may be surprised to learn that an electric vehicle could be a good fit for their household,” said Josh Goldman, policy analyst for the UCS Clean Vehicles Program. “Drivers may have preconceptions about whether electric vehicles can meet their driving needs and habits, and this survey shows that for many, they can.” While plug-in hybrid EVs have similar driving range to gasoline-only vehicles, the current range of BEVs on the market today can also meet many drivers’ needs. The survey found that almost 70 percent of drivers drive less than 60 miles on a weekday, which is within the range of almost every BEV on the market today. “This new survey shows today’s EVs can be practical for many car buyers,” said Shannon Baker-Branstetter, policy counsel for Consumers Union. “It demonstrates that these vehicles could be a viable option for tens of millions of American households that want lower fuel costs and cleaner air without compromising their driving needs.” If everyone who could switch to driving on electricity did so today, the nation would:
- Save 15 billion gallons of gasoline each year, more than all the gasoline consumed last year by the entire state of California;
- Avoid 89 million metric tons of greenhouse gas emissions each year, equivalent to removing 14 million of today’s gasoline cars from the road for a year; and
- Save $33 billion on fuel each year — based on gas prices of $3.60 per gallon and electricity costs of 12 cents per kilowatt hour.
Survey respondents met the basic criteria for using a typical plug-in hybrid EV available today if they have access to parking and an electrical outlet at home or work, need to carry fewer than 5 occupants, and do not need hauling or towing capability. A BEV was considered suitable when these criteria were met, maximum weekday driving distance did not exceed 60 miles, and, in the case where weekend driving frequently exceeded current BEV vehicle range, other household vehicles were available. The results of the survey not only indicate that millions of households could utilize an EV today, but also show how that figure could grow in the future. The survey found that 33 percent of respondents did not have access to parking with an electrical outlet, but met the other basic criteria for owning a plug-in hybrid electric vehicle. In addition, more than a third, 37 percent, agreed that having access to charging at the workplace would increase the likelihood of considering an EV in their next vehicle purchase.
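As a rough illustration of where a fuel-savings figure like the one above can come from, the sketch below combines the release's stated prices ($3.60 per gallon of gasoline, 12 cents per kilowatt-hour of electricity) with assumed fleet efficiencies. The 30 mpg and 0.3 kWh-per-mile figures are placeholders chosen for this example, not numbers from the survey, so the result only lands in the general neighborhood of the reported $33 billion.

```python
# Back-of-envelope check of the annual fuel-savings claim.
# Prices come from the press release; the efficiency figures
# below are assumptions for illustration only.
GALLONS_DISPLACED = 15e9      # gallons/year, from the release
GAS_PRICE = 3.60              # $/gallon, from the release
ELEC_PRICE = 0.12             # $/kWh, from the release

MPG_ASSUMED = 30.0            # assumed average gasoline-car efficiency
KWH_PER_MILE_ASSUMED = 0.30   # assumed average EV consumption

miles_driven = GALLONS_DISPLACED * MPG_ASSUMED
gas_cost = GALLONS_DISPLACED * GAS_PRICE
electric_cost = miles_driven * KWH_PER_MILE_ASSUMED * ELEC_PRICE
savings = gas_cost - electric_cost

print(f"Gasoline cost avoided:  ${gas_cost / 1e9:.1f}B")
print(f"Electricity cost added: ${electric_cost / 1e9:.1f}B")
print(f"Net annual savings:     ${savings / 1e9:.1f}B")  # about $38B with these assumptions
```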
While incentives which lower the cost of purchasing an electric vehicle will continue to be a key strategy in supporting a growing market for electric vehicles, these findings suggest that efforts to increase access to vehicle charging options at home and workplace can also help make EVs a more likely choice for vehicle buyers. The survey found 65 percent of Americans think electric vehicles are an essential part of our nation's transportation future for reducing oil use and global warming pollution, with 60 percent saying they would consider owning one themselves. State policymakers are taking notice of consumer interest and the potential for increased use of EVs, with eight governors recently announcing a joint plan to put 3.3 million zero-emission vehicles on America’s roads by 2025. “There is a huge potential to continue expanding the market for electric vehicles, a key solution for tackling climate change and cutting our nation’s projected oil use in half over the next 20 years,” said David Reichmuth, senior engineer for the UCS Clean Vehicles Program. “Americans recognize that we need to reduce our oil use, and electric vehicles offer a great opportunity for drivers to do just that.” The telephone survey was conducted among 1,004 randomly selected adults over 18 years of age and carried out over September 26 to September 30. Of all the respondents, 914 had at least one vehicle and were surveyed on driving and parking behaviors. All respondents were asked about attitudes toward electric vehicles. The margin of error is +/- 3.1 percentage points at a 95 percent confidence level for questions asked of all respondents and +/- 3.2 percentage points for questions applied to vehicle owners. The Union of Concerned Scientists puts rigorous, independent science to work to solve our planet’s most pressing problems. Joining with citizens across the country, we combine technical analysis and effective advocacy to create innovative, practical solutions for a healthy, safe and sustainable future. For more information, go to www.ucsusa.org. Consumers Union is the public policy and advocacy division of Consumer Reports. Consumers Union works for telecommunications reform, health reform, food and product safety, financial reform, and other consumer issues. Consumer Reports is the world’s largest independent product-testing organization. Using its more than 50 labs, auto test center, and survey research center, the nonprofit rates thousands of products and services annually. Founded in 1936, Consumer Reports has over 8 million subscribers to its magazine, website, and other publications.
<urn:uuid:5d6a1aa3-4d77-4749-b7e3-29d4a6adeb39>
CC-MAIN-2016-26
http://pressroom.consumerreports.org/pressroom/2013/12/new-survey-finds-many-households-can-use-electric-vehicles.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00100-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954484
1,201
2.78125
3
Excerpted from: Myths and Realities of Public Land Leasing: Canberra and Hong Kong (Land Lines Article)
Hong Kong's government...provides public officials with generous remuneration and fringe benefits to reduce the temptation of corruption. This demonstrates that, in designing a public leasehold system, a government must consider... ...the need for a system of checks and balances to prevent opportunism or political maneuvering. [The] issue of competition is particularly important for developing economies where local governments are eager to attract investment. They may be willing to compromise by collecting a smaller amount of land premiums and rent from both domestic and foreign land investors. <--- exemptions, favoritism, Economic Development Zones. The use of land as a source of public funds may require some level of inter- or intra-regional cooperation to prevent developers from playing one government against another.
LAND SPECULATION AND LAND VALUE INCREASES (INCREASED REVENUES) DUE TO ARTIFICIAL SCARCITY
In Hong Kong the government's reliance on land revenues as a source of public funds presents another problem: ...its financial interest in land conflicts with its public role in stabilizing land prices. The government has relied heavily on initial land premiums because demanding premiums from lessees during lease renewals has proven to be politically difficult. In addition, the assembly of land rights for land redevelopment involves high negotiation costs because most land leases in Hong Kong have multiple leaseholders. These high costs deter private developers from undertaking land redevelopment by acquiring lease rights and modifying contract conditions. As a result, the government is unable to utilize this method fully to recoup land value. As for the land rent, before 1997 the amount of annual rent paid by lessees was fixed and bore no relationship with increases in land value. These difficulties have encouraged the government to retain land value at the beginning of the lease. <-- Value fixing. Yet, this method can work only if officials lease land slowly to private developers. <--- Control of supply. A rapid disposition of land when its value is low would impede the government's ability to recoup land value in the future. Restrictions on land supply, however, have encouraged private land banking and property speculation, leading to high land and property prices and making Hong Kong one of the world's most expensive cities. Officials of other countries could avoid this problem by relying more on lease renewals, contract modifications and the annual land rent than on the initial assignment of leases to capture land value. The plausibility of doing so, however, remains an empirical question. The experiences of Hong Kong suggest that such an attempt could encounter strong public resistance and high negotiation costs.
ZONING LAWS AND MANDATORY DEVELOPMENT PROVISIONS (not just natural "encouragement" by virtue of the tax)
In principle, public leasehold systems allow the government to manage urban growth by incorporating land use regulations into land leases. If lessees do not develop their land according to the lease provisions, the government has the right to take back the land, a contractual right not available to the government when land is privately owned. To take full advantage of this special land right, the government must be capable of enforcing the contractual agreements.
Despite having the ability to repossess land, there is no evidence to show that enforcement costs under public leasehold systems are lower than those found under freehold systems. In Hong Kong...the government incorporates land use regulations into land contracts as conditions at the beginning of the lease. Unless lessees initiate a lease modification, these conditions will remain until the lease expires, which could be as long as 50 years in Hong Kong (and 99 years in Canberra). The difficulties that Canberra and Hong Kong face in leasing public land show that... ...leasehold systems in and of themselves do not resolve land management problems. This does not mean, however, that leasing is not a viable means to manage land. In Hong Kong, the government retains a large portion of increased land value for public infrastructure investment. Canberra's public leasehold system enables the government to obtain low-cost land for building the Australian capital. The important lesson is that... ...policymakers should not set unrealistic expectations on what public leasehold systems can achieve. Failure to deliver their promises could frustrate a well-intended reform and bring the effort to a halt. Because no land tenure system is perfect, ...the debate should not focus on the choice between leasehold and freehold systems. They are not mutually exclusive. Instead, future research should concentrate on designing specific institutions according to... ...different political, economic and social contexts... ...to minimize problems associated with both systems.
<urn:uuid:0d5e7a76-bf7d-40f4-83a7-c1bff0bd18d8>
CC-MAIN-2016-26
http://www.ronpaulforums.com/showthread.php?366312-The-Single-Tax-Land-Value-Tax-(LVT)/page46
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00071-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921679
977
2.6875
3
A map of Lebanon will demonstrate the country's role as a link in the ancient world. The International Coastal Highway stretched from Egypt, along the coast of Palestine, through Lebanon, to Ugarit. This was one of the major veins of travel in the ancient Near East. Lebanon linked Mesopotamia and Turkey with Palestine, as well as all of the ancient Near East with the cultures of the Mediterranean. Strong trading ties have been shown with Egypt, the Aegean, Israel, Mesopotamia and other ancient civilizations. A quick glance at any map of Lebanon will show the country sat amongst many different cultures and people.
LEBANON
A map of Lebanon and its political boundaries as they stand today. The country of Lebanon has a population of 3,779,000 people. The capital is Beirut, a city of 1.79 million people. A variety of languages are spoken in Lebanon, including Arabic, French, English, and Armenian. Major ports and cities developed along the coast of Lebanon. Byblos was a major shipping port with Egypt. Sidon was an island rock fortress. Ugarit, just up the coast in what is now Syria, is one of the most ancient cities in the region. Tyre became Lebanon's chief city. In the Bible Jesus is pictured traveling to Tyre & Sidon. Its southern border is shared with Israel's northern border. During the reigns of David & Solomon the two countries were allies. King Hiram of Tyre sent builders and cedars of Lebanon to Solomon for the construction of Yahweh's Temple. M. J. Strazzulla, Professor of Greek & Roman Antiquities at the University of Foggia, Italy, presents a unique view of the geography and history of Lebanon. Sixteen important monuments, at eight archaeological sites, including Baalbek, one of the wonders of the world, are analyzed and discussed in this illuminating and fascinating book! The history of Lebanon is shaped in large part by the country's geography. The mountains squeezed against the sea; thus, the country developed a diverse identity. Any map of Lebanon will make plain the necessity of some turning to the sea. Nature encouraged this with her many natural harbors, located along the Lebanese coastline. Another segment sought refuge and protection in the high altitudes of Lebanon's two mountain ranges. These two ranges, Mount Lebanon in the west, and the Anti-Lebanese in the east, are two of the most rugged ranges in that part of the world. They remain snow-capped throughout the year. The famous Mount Hermon is the Anti-Lebanese mountains' highest peak, at just over 9,200 feet. Banking is Lebanon's chief industry. Other industries include food processing, jewelry, cement, textiles, and mineral and chemical products. The Coastal Plain produces abundant amounts of citrus, grapes, tomatoes, apples, and other fruits and vegetables. Sheep raising is also a major part of the country's agriculture. Lebanon's chief exports include foodstuffs, tobacco, textiles, chemicals, precious stones, and metal products. The country is two-thirds Muslim and one-third Christian. Civil war broke out between the two religions in 1975, and lasted until 1991. Democracy was then restored, with government positions given out based on religion. Israel and Syria had both sent troops into Lebanon during the civil war. Israel withdrew its troops in 2000, with Syria following suit five years later.
MOUNT HERMON
Rising over 9,200 feet, Mt. Hermon is the largest peak in the Anti-Lebanese Mountain Range. It is approximately 1,000 feet shorter than the tallest peak on Mount Lebanon.
Mount Hermon has been called by many names over the ages. It is best known as Ba'al Hermon, Senir, Sirion, and Sion. Arabs call it "Jabel A-talg" today. Og, King of Bashan, was said to have ruled over Mt. Hermon in the Old Testament Book of Joshua. Joshua 12:4-5 makes note that Og was the remnant of the Rephaim. The Old Testament associates the Rephaim with the Nephilim. Mt. Hermon, according to Enoch, is where the original Watchers, the fathers of the Nephilim, descended and touched down from Heaven. Interestingly enough, over twenty ancient Temples have been found on the mountain and in its vicinity. Mt. Hermon is undoubtedly one of the most mysterious and holy places in the world.
LEBANON & ISRAEL
Though Israel never pushed fully into Lebanon during the Conquest, the map of Lebanon below demonstrates they did pursue their Canaanite foes all the way from Merom to Sidon, in southern Lebanon. Lebanon stayed out of Israel's possession through the reign of Saul. David, however, conquered the lands from Mount Lebanon, across the Beqa Valley, and eastward past the Anti-Lebanese Mountain Range. Tadmor was the eastern and northernmost limit of David's united Israel. Tadmor is located 160 miles northeast of Mount Hermon. Solomon extended David's borders north to Tiphsah, in Beth-Eden. Tiphsah lies approximately 95 miles north of Tadmor. However, the extent of Israel's occupation of Lebanon remained the same. Phoenicia maintained its identity throughout the United Monarchy. The two countries shared friendly relations, for the most part, and often exchanged goods. Solomon extended Israel's boundary north of Hamath, as well. Thus, all of the map of Lebanon bordered Solomon's kingdom in the north. Solomon maintained very cordial relations with Hiram, king of Tyre. His most profitable business enterprises were in conjunction with the Phoenicians. Solomon was simply extending his father's policy of friendly relations with their northern neighbors. Solomon used the famous "cedars of Lebanon" in many of his construction projects, both in Jerusalem and throughout the country. He built a fleet for Israel, supplied by Hiram with the craftsmen and sailors needed to maintain such a fleet. In fact, Solomon's fleet sailed to many ports previously visited by the Phoenician merchant ships.
A MODERN DAY MAP OF LEBANON, ISRAEL & SYRIA. NOTICE ALL THREE COUNTRIES SHARE A BORDER IN ISRAEL'S NORTHEAST.
<urn:uuid:be60cddf-2416-4d8a-bfb5-51b453a2039f>
CC-MAIN-2016-26
http://www.israel-a-history-of.com/map-of-lebanon.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00090-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958005
1,895
3.46875
3
Even though many curricula include some elements of effective instruction, this does not mean they are necessarily effective. Direct Instruction (capitalized), the programs developed by Siegfried Engelmann and colleagues, incorporates all of these elements. Other direct instruction (no capitals) curricula incorporate only some of the effective elements. Siegfried Engelmann, the founder of Direct Instruction, and Geoff Colvin, Ph.D., a longtime collaborator, have developed a rubric for identifying authentic DI programs - ones that conform to all of the elements of effective instruction. See Rubric for Identifying Authentic DI Programs A key element of effective instruction is ensuring that students master the material they are studying. This is often called "mastery learning." In a 1999 article, Engelmann describes how teaching to mastery involves aligning student needs and the content of a curriculum and shows how DI programs accomplish this. See Student-Program Alignment and Teaching to Mastery An article by Cheryl Schieffer and colleagues, published in 2002, shows how one DI program, Reading Mastery, embodies the elements of effective instruction. Schieffer et al, JODI, vol. 2, no. 2, pp. 87-119
<urn:uuid:7eb9d76c-1530-4bd4-a4bc-a743c74dbdc4>
CC-MAIN-2016-26
http://www.nifdi.org/documents-library/cat_view/92-journal-of-direct-instruction-jodi/93-volume-1-no-1-winter-2001
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00122-ip-10-164-35-72.ec2.internal.warc.gz
en
0.918367
242
3.375
3
Virgil copied Homer. Michelangelo copied Donatello. Elvis copied Bo Diddley. Traditionally, artists learn by mimicking masters. It follows then: If you want to be a better copywriter, copy great writers. I’m not talking here about using swipe files or reworking hit headlines. I mean copying word-for-word. Copy work—the practice of exactly copying another writer’s words—is an easy, almost effortless way to improve your own writing. Copying helps you think more clearly, write more precisely and produce fresher, more original words. Don’t believe me?
The honorable tradition of copycat writing
In a digital era flooded with pixelated content, it’s easy to forget pedagogy that pre-dates the printing press. Before there were textbooks, copy work—along with an oral tradition and rote memorization—formed the foundation of a classical education. Greek and Egyptian schoolboys copied their culture’s great works onto clay tablets. Nineteenth-century American children copied literature, poetry and lessons onto slates. Hunter Thompson typed the entire text of Hemingway’s For Whom the Bell Tolls and Fitzgerald’s The Great Gatsby. Today Buddhist monks still handwrite sutras. A well-known copywriting course—the granddad of you-too-can-be-a-copywriter info products—requires students to exactly copy the world’s best-pulling sales letters. Why? In an age that venerates individuality, why mimic others? What’s the benefit to being a copycat?
7 ways copy work helps your writing
First of all, let me make one thing clear: I’m not suggesting plagiary. You won’t publish your copied text or try to fob it off as your own. Copy work is for your eyes alone. It’s a writing exercise. Copy work improves your writing by helping you…
- Absorb structure and style of great works, soaking up the work subliminally.
- Immerse yourself in different literary forms and styles by writing, not just reading.
- Open a window into great writers’ minds. Copy work gives you insights into the writer’s intentions and choices. It makes you pause to ask why Fitzgerald imagined a stairway to the sky before the moment when Gatsby kisses Daisy. Or notice how Hemingway’s absence of words evokes more powerful emotion than lesser writers’ explanations and descriptions.
- Identify bad writing habits—such as passive voice, weak verbs and stale metaphors—by absorbing great writers’ good habits.
- Practice the mechanics of good punctuation and grammar, again, by writing instead of just reading.
- Improve your spelling. My spelling has slid to hell on a sled over the last twenty years—concurrent with my use of Spell Check. Copy work lets my hand, eye and mind work together to re-learn how to spell.
- Clarify your thinking. Precise writing is about precise thought. The slow, methodical work of copying allows your brain to slow down long enough to take stuff in.
How to get started with copy work
I recently copied, word-for-word, George Orwell’s Politics and the English Language. The exercise taught me a few things about copy work I hope will be helpful to you. To make the most out of copying a great writer’s text:
Choose a writer you love or feel inspired by. I loved Orwell’s essay so much I wanted to memorize it. And after copying it for two weeks, I almost have…
Set aside time to do your copy work. I gave my copying half an hour a day—and used an hourglass to mark the time. As mentioned, it took me more than two weeks to copy Orwell’s essay. But then speed isn’t the point.
Handwrite the copy.
Recent studies support what my kids’ Waldorf teachers have asserted for years: handwriting produces concrete cognitive benefits. I’m also a fan of cursive writing’s esthetic and sensual properties. Writing can, after all, be an art as well as a craft.
Use quality paper and pen. See esthetic notes above. I love the Lamy Safari fountain pen—it’s totally dependable and costs less than $25.
Select a reasonably-sized chunk of text. A friend of mine, a former staff writer for Conan O’Brien, once began copying John Kennedy Toole’s A Confederacy of Dunces. (How’s that going, Guy?) A little ambitious for me. On the other hand, you probably want to choose something longer than a haiku. Think about selecting a passage you can copy over a few days.
Don’t worry if your copy starts to sound like Nabokov. Or Toni Morrison. Or David Foster Wallace. Well, like maybe you kind of do want to worry if it, you know, like starts sounding like DFW. But don’t worry too much. You’ll shake off the mimicry quickly as the copy increases your consciousness of stylistic nuance.
Ready to be a copycat? What do you think? Could copy work be a useful practice for you? Please share your thoughts in comments.
<urn:uuid:aa8cebd5-5572-4ebf-990b-b4a65d957167>
CC-MAIN-2016-26
http://marketcopywriterblog.com/2012/03/14/want-to-dramatically-improve-your-content-copy-other-writers-shamelessly/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.887033
1,133
2.5625
3
Did you assess gait as part of the geriatric evaluation? [Margaret (Meega) Wells, PhD, RN, NP] Nurse practitioners in the primary care setting in both rural and urban areas are in ideal positions to provide quality care to older adult patients to help them maintain independence and function. According to the U.S. Census Bureau (2010), it is estimated that 13% of the U.S. population is 65 years and older. Most of these older adults reside in the community and receive healthcare in outpatient settings. Older adult patients are often complicated to manage, and nurse practitioners must be systematic and thorough in providing care to this group. A comprehensive geriatric assessment (CGA) should be performed once a year along with periodic focused assessments for management of chronic illnesses. Ideally, performing a CGA on a regular basis will allow the nurse practitioner to detect subtle changes in older adults and intervene before major problems occur. Components of the comprehensive geriatric assessment should include medical, psychosocial, cognitive, and functional assessments. Assessing the functional ability of older adults using physical performance measures will be the focus of this article. The functional assessment should include self-reported measures as well as an objective physical performance measure. First, it is necessary to ask patients if they have any difficulties with activities of daily living (ADL). ADL include eating, dressing, ambulating, and toileting. Next, patients should be asked about their ability to perform instrumental activities of daily living (IADL). IADL include shopping, managing finances, housekeeping, taking medications, using the telephone, and driving or using public transportation, and this requires higher executive functioning. Inability to perform IADL is also associated with cognitive impairment. (Reppermund et al., 2011) It is important that nurse practitioners watch their patients walk, and this is often omitted from the geriatric assessment. Patients are usually put in the exam room and assisted to the exam table prior to the nurse practitioner’s arrival to the room. Nurse practitioners may not have the opportunity to observe patients walk unless they incorporate it into the physical exam. There are several physical performance measures that can be used to objectively measure physical function. Gait speed and Timed Up and Go (TUG) are two measures that will be discussed in this article, and both of these measures include observation of gait. In many cohort studies, gait speed, or the rate at which one walks, has been found to be associated with survival in older adults. (Cesari et al., 2005; Cesari et al., 2009; Ostir, Kuo, Berges, Markides, & Ottenbacher, 2007; Rolland et al., 2006; Rosano et al., 2008) Walking requires energy, movement control, and support, which put demand on multiple organ systems, and this is why gait speed is thought to predict survival. (Abellan van Kan et al., 2009) Gait speed is usually calculated using time in seconds to walk 4 meters and is reported in meters per second. Gait speeds of 1 m/s or faster suggest healthier aging, while gait speeds of 0.6 m/s or slower increase the likelihood of poor health and function. (Viccaro, Perera, & Studenski, 2011)
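A minimal sketch of how the gait-speed screen described above might be computed, assuming a 4-meter course and the 1.0 m/s and 0.6 m/s cut-points; the function names and category labels are illustrative, not a validated clinical instrument.

```python
# Illustrative gait-speed screen over a 4-meter course, using the
# cut-points cited in this article. Not a validated clinical tool.
COURSE_METERS = 4.0

def gait_speed(seconds_to_walk_course, course_m=COURSE_METERS):
    """Return usual gait speed in meters per second."""
    return course_m / seconds_to_walk_course

def screen(speed_m_per_s):
    """Map a gait speed onto the screening categories described above."""
    if speed_m_per_s >= 1.0:
        return "suggests healthier aging"
    if speed_m_per_s <= 0.6:
        return "higher likelihood of poor health and function"
    return "intermediate; consider follow-up and the trend over time"

# Example: 5 seconds to cover 4 meters -> 0.8 m/s
speed = gait_speed(5.0)
print(f"{speed:.2f} m/s: {screen(speed)}")
```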
In a systematic review of 9 cohort studies for a total of 34,485 community-dwelling older adults who had gait speed measured at baseline, and survival monitored for at least 5 years, the overall 5-year survival rate for participants was 84% and the 10-year survival rate was 59.7%. (Abellan van Kan et al., 2009) The mean gait speed of the participants was 0.92 meters per second (m/s), and it was associated with survival in all studies using hazard ratios. Survival increased as gait speed increased in 0.1 m/s increments. Gait speed measurement has a place in the clinical setting because it may help identify older adults who have a high probability of living for 5 or 10 years and would benefit from more intensive preventative interventions. Further, gait speed may be used to help stratify risks of the patient for surgery or chemotherapy. Gait speed is relatively easy to measure and only requires a stopwatch and a 4-meter course. Patients are instructed to walk at usual pace, as if walking down the street, with no further encouragement or instructions. A gait speed less than 1 m/s or a declining gait speed over time may indicate a new health problem that requires evaluation. A physical therapy referral may be needed at this time. A recent systematic review of frail older adults found that an exercise intervention improved gait speed and performance on ADLs; however, the type of exercise was not specified, nor was the effect on mortality. (Chou, Hwang, & Wu, 2012) Hardy and colleagues found that improved gait speed significantly reduced mortality in a sample of community-dwelling adults 65 years and older. (Hardy, Perera, Roumani, Chandler, & Studenski, 2007) The TUG test can also be used to assess balance and gait. The TUG measures some aspects of balance such as rising, walking, turning, and sitting and is correlated with functional mobility. (Podsiadlo & Richardson, 1991) The TUG is quick, requires no special equipment, and can be done in about 1-2 minutes during an office visit. The TUG is the time it takes a patient to rise from a standard-height chair with arms, walk 10 feet (3 meters), turn around, walk back to the chair and sit down. Patients may use their arms or an assistive device when rising from the chair; however, another person may not assist them. This screening test is timed, and using assistive devices or the arms of the chair to rise may slow down the time it takes to complete the task. An independently mobile adult should be able to complete the TUG in less than 10 seconds. A TUG time of 15 seconds or greater requires further evaluation to determine the cause of the mobility impairment. If a musculoskeletal problem is identified, a referral to physical therapy may be appropriate. Gait speed and TUG both were found to predict health decline, ADL difficulty, and falls in older adults living in the community. (Viccaro et al., 2011) However, both tests were found to be more useful in predicting recurrent falls rather than first-time falls. (Viccaro et al., 2011) According to the Panel on Prevention of Falls in Older Persons of the American Geriatrics Society and British Geriatrics Society (2011),
TUG is recommended to evaluate gait and balance in patients with a positive fall screen or those at risk for falling. Evidence exists that supports using gait speed or the TUG to screen for mobility impairment; however, more research is needed to support interventions that can improve or maintain physical function. It is essential that nurse practitioners perform a comprehensive geriatric assessment at least once a year on all older adult patients. The four components of this evaluation include medical, psychosocial, cognitive, and functional assessments. In the primary care setting, many nurse practitioners do not always objectively evaluate the physical functional ability of patients. Either gait speed or TUG can be used to measure functional ability in older adults. Both are easy to perform and do not require special equipment other than a chair, a measured distance, and a stopwatch. Both tests require patients to follow directions to carry out the task. The nurse practitioner can obtain much information from watching the patient perform these tasks. Maintaining independence is important to older adults, and detecting subtle changes in functional ability can help nurse practitioners to manage the care of older adult patients more effectively. The nurse practitioner should perform a comprehensive geriatric assessment that includes either gait speed or the TUG. If either is found to be slow or declined from the previous year, careful evaluation of the patient is needed, and an intervention such as physical therapy or an individualized exercise program may be an appropriate addition to the treatment plan prescribed by the nurse practitioner.
Abellan van Kan, G., Rolland, Y., Andrieu, S., Bauer, J., Beauchet, O., Bonnefoy, M., et al. (2009). Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people: An International Academy on Nutrition and Aging (IANA) task force. Journal of Nutrition, Health & Aging, 13(10), 881-889.
Cesari, M., Kritchevsky, S. B., Penninx, B. W., Nicklas, B. J., Simonsick, E. M., Newman, A. B., et al. (2005). Prognostic value of usual gait speed in well-functioning older people: Results from the Health, Aging and Body Composition Study. Journal of the American Geriatrics Society, 53(10), 1675-1680.
Cesari, M., Pahor, M., Marzetti, E., Zamboni, V., Colloca, G., Tosato, M., et al. (2009). Self-assessed health status, walking speed and mortality in older Mexican-Americans. Gerontology, 55(2), 194-201.
Chou, C., Hwang, C., & Wu, Y. (2012). Effect of exercise on physical function, daily living activities, and quality of life in the frail older adults: A meta-analysis. Archives of Physical Medicine and Rehabilitation, 93(2), 237-244.
Hardy, S. E., Perera, S., Roumani, Y. F., Chandler, J. M., & Studenski, S. A. (2007). Improvement in usual gait speed predicts better survival in older adults. Journal of the American Geriatrics Society, 55(11), 1727-1734.
Ostir, G. V., Kuo, Y. F., Berges, I. M., Markides, K. S., & Ottenbacher, K. J. (2007). Measures of lower body function and risk of mortality over 7 years of follow-up. American Journal of Epidemiology, 166(5), 599-605.
Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society. (2011). Summary of the updated American Geriatrics Society/British Geriatrics Society clinical practice guideline for prevention of falls in older persons. Journal of the American Geriatrics Society, 59(1), 148-157.
Podsiadlo, D., & Richardson, S. (1991).
The timed "up & go": A test of basic functional mobility for frail elderly persons. Journal of the American Geriatrics Society, 39(2), 142-148. Reppermund, S., Sachdev, P. S., Crawford, J., Kochan, N. A., Slavin, M. J., Kang, K., et al. (2011). The relationship of neuropsychological function to instrumental activities of daily living in mild cognitive impairment. International Journal of Geriatric Psychiatry, 26(8), 843-852. Rolland, Y., Lauwers-Cances, V., Cesari, M., Vellas, B., Pahor, M., & Grandjean, H. (2006). Physical performance measures as predictors of mortality in a cohort of community-dwelling older french women. European Journal of Epidemiology, 21(2), 113-122. Rosano, C., Aizenstein, H., Brach, J., Longenberger, A., Studenski, S., & Newman, A. B. (2008). Special article: Gait measures indicate underlying focal gray matter atrophy in the brain of older adults. Journals of Gerontology Series A-Biological Sciences & Medical Sciences, 63(12), 1380-1388. U.S, C. B. (2010). USA QuickFacts. Retrieved December 29, 2011, from http://quickfacts.census.gov/qfd/states/00000.html Viccaro, L. J., Perera, S., & Studenski, S. A. (2011). Is timed up and go better than gait speed in predicting health, function, and falls in older adults?. Journal of the American Geriatrics Society, 59(5), 887-892. Rural Veterans: What Rural Nurses Need to Know. [Angeline Bushy, PhD, RN, FAAN ,U.S. Army, Col. (Retired)] Since the founding of our country, rural Americans have always responded when our Nation has gone to war. In the American Revolution, rural Americans left their homes and their families to fight the threat of loss to their families and their lands. During the American Civil War, rural Americans again responded to fight the threat of loss to their way of life, and to protect their families. However, during the Civil War the United States government instituted the first-ever military draft. Again, motivated by tradition and values, rural Americans responded. According to an Issue Paper published by the National Rural Health Association Rural (2007) people respond to such needs because they maintain value structures that are reflective of service to others and service to their country, volunteerism, care of home, and a sense of place. They also respond for economic concerns and certainly through patriotism. Whether motivated by their values, patriotism, and/or economic concerns, the picture has not changed much in 200 years. More than 44 percent of U.S. military recruits come from rural areas, Pentagon figures show. In contrast, 14 percent come from major cities. Youths living in the most sparsely populated Zip Codes are 22 percent more likely to join the Army, with an opposite trend in cities. Regionally, most enlistees come from the South (40 percent) and West (24 percent) (NRHA, 2007). In the last two decades the United States has been involved with a number of military conflicts predominately in the Mideast (US Department of Veterans Affairs (VA), 2013), resulting in the deployment of numerous military personnel. Moreover, it is not unusual for a soldier to be deployed multiple times within a 3 year period. Since the U.S. has an all-volunteer military (army, air force, navy, marines) the majority are in the reserve component or the National Guard. Again, a disproportionate number of returning veterans have rural origins, and are returning to their home communities having physical and emotional health care needs. 
Compared to urban veterans, rural veterans have a higher prevalence of physical illness, lower health-related quality of life, and greater health care needs. Despite their greater need, rural veterans are less likely than urban veterans to use VA or private sector health care services. The disparity in use of health care may be due in part to longer driving distances to VA medical facilities experienced by many rural veterans relative to their urban counterparts. VA primary care is available within a 30-minute drive for 91% of urban veterans, 38% of rural veterans, and 22% of highly rural veterans. Fewer than half (49%) of highly rural veterans live within 60 minutes of VA primary care.

The Department of Veterans Affairs (VA) is statutorily required to provide VA-enrolled veterans with access to timely and quality medical care. It does so through the nation's largest integrated health care delivery system, with more than 150 VA medical centers (VAMCs), 800 community-based outpatient clinics (CBOCs), and a range of other types of facilities (e.g., nursing homes) that provide care to more than 5.5 million patients. Despite this, Congress remains concerned that veterans, in particular rural veterans, may not be able to access VA health services. Among veterans enrolled in VA health care, 41% reside in rural or highly rural areas. Rural-enrolled veterans share certain characteristics that influence access to and the need for care.

Congress has demonstrated continuing interest in modifying VA delivery of care to expand access for rural veterans. Such interest has been demonstrated through report language, statutory mandates, appropriation of funds, and authorization of demonstration projects. In particular, Congress has encouraged the VA to collaborate with federally qualified health centers (FQHCs)—facilities that receive federal grants and are required to be located in areas where there are few providers, particularly rural areas. The VA is generally a provider—rather than a financer—of health care services; however, the VA has statutory authority to reimburse non-VA providers for services that are not readily available within the VA's integrated health care delivery system. VA facilities may consider contracting with outside providers to provide services to rural veterans. One type of facility that the VA has contracted with in the past is the FQHC. Although FQHCs are only one type of facility that the VA can collaborate with, they are strong candidates for VA collaboration because, as a condition of receiving a federal grant, they must meet certain requirements that include providing specific types of services, maintaining certain records, and meeting certain quality standards. These requirements, and the leverage that the federal government may have as a funding source, may facilitate VA-FQHC collaboration to provide care to veterans in rural areas.

Some considerations that may arise during attempts to increase VA-FQHC collaboration include the costs of care to an FQHC, the VA, and veterans; the capacity of an FQHC to serve veterans in addition to its existing patients; and the compatibility of the VA and an FQHC in terms of the services available, quality initiatives, accreditation, and use of electronic health records. To address these considerations and encourage VA-FQHC collaboration, there are a number of policy levers that Congress might use. These include oversight, an incentive fund, directed spending, statutory mandates, and watchful waiting.
Congress may also consider a combination of these levers.

Table I: VA-Enrolled Veterans*

* Not all veterans are eligible to enroll in the VA. In general, eligibility for enrollment in VA health care operates through a system of eight priority groups, based on veteran status, presence of service-connected disabilities or exposures, income, and/or other factors, such as status as a former prisoner of war or receipt of a Purple Heart. Once enrolled in the VA health care system, a veteran remains enrolled and does not have to reapply, even if the veteran's priority group changes (due, for example, to a change in income). Veteran status is established by active-duty status in the U.S. Armed Forces and an honorable discharge or release from active military service. Generally, persons enlisting in one of the armed forces after September 7, 1980, and officers commissioned after October 16, 1981, must have completed two years of active duty or the full period of their initial service obligation to be eligible for VA health care benefits. Service members discharged at any time because of service-connected disabilities are not held to this requirement. Veterans returning from combat operations are eligible to enroll for five years from the date of discharge without having to satisfy a means test or demonstrate a service-connected disability. A service-connected disability is a disability that was incurred or aggravated in the line of duty in the U.S. Armed Forces (38 U.S.C. §101(16)). The VA determines whether veterans have service-connected disabilities and, for those with such disabilities, assigns ratings from 0% to 100% based on the severity of the disability (38 C.F.R. §§4.1-4.31). Veterans who are eligible on the basis of exposure include those who may have been exposed to Agent Orange during the Vietnam War or who may have diseases potentially related to service in the Gulf War.

*Source: CRS Report R42747, Health Care for Veterans: Answers to Frequently Asked Questions, by Sidath Viranga Panangala and Erin Bagalman.

Congressional Research Service. (2013, April 3). Health Care for Rural Veterans: The Example of Federally Qualified Health Centers. Accessed on May 21, 2013 from http://www.himss.org/files/HIMSSorg/Content/files/20130418-CRS-RptHealthCareRuralVeterans.pdf
National Rural Health Association Issue Paper. (2007). Rural Veterans: A Special Concern for Rural Health Advocates. Accessed May 21, 2013 from: http://www.ruralhealthweb.org/go/rural-health-topics/veterans-health
US Department of Veterans Affairs Website. Accessed on May 21, 2013 from: http://www.va.gov
US Department of Veterans Affairs. (2013). Rural Health Exchange Information. Accessed on May 21, 2013 from: http://www.va.gov/health/NewsFeatures/20110421a.asp
Atrioventricular canal defect is caused by a poorly formed central area of the heart. There is typically a large hole between the upper chambers of the heart and an additional hole between the lower chambers. Instead of two separate valves allowing flow into the heart, there is a large common valve, which may be malformed. Atrioventricular canal defect is often seen in patients with Down syndrome.
Buried, Residual Oil is Still Affecting Wildlife Decades After a Spill
Scientists Find Crab Behavior is Altered by Polluted Sediments Below the Surface

Nearly four decades after a fuel oil spill polluted the beaches of Cape Cod, researchers have found the first compelling evidence for lingering, chronic biological effects on a marsh that otherwise appears to have recovered. Through a series of field observations and laboratory experiments with salt marsh fiddler crabs (Uca pugnax), doctoral student Jennifer Culbertson and colleagues found that burrowing behavior, escape response, feeding rate, and population abundance are significantly altered when the crabs are exposed to leftover oil compounds from a 1969 spill.

The study builds on previous work by researchers from the Woods Hole Oceanographic Institution (WHOI), which showed that oil compounds from the 1969 wreck of the barge Florida are still lingering in the sediments 8 to 20 centimeters below the surface of Wild Harbor in Falmouth, Mass. Burrowing fiddler crabs in the marsh still won't dig more than a few centimeters into the sediments in the areas most affected by the spill.

Culbertson, a graduate student from the Boston University Marine Program (BUMP) and a guest student at WHOI, conducted the research in collaboration with WHOI marine chemist Chris Reddy, ecologist Ivan Valiela of the Marine Biological Laboratory, and several student colleagues from WHOI and BUMP. The findings were published in the online version of Marine Pollution Bulletin on April 19, 2007, and will appear later this spring in a printed edition.

Culbertson's experiments and field work were conducted in the summers of 2005 and 2006 in the Great Sippewissett and Wild Harbor marshes of Falmouth. On the surface, these neighboring marshes look quite similar, with common plants and animals, sediment types, and geologic histories. The difference is that WHOI researchers have detected residues of No. 2 fuel oil buried in the sediments of Wild Harbor, while Great Sippewissett has no detectable residues of the 1969 spill.

"There are outward signs that the marsh in Wild Harbor has recovered," said Reddy, whose lab group has been studying Cape Cod oil spills for nearly a decade. "But there is still chemical warfare going on just a few centimeters beneath the surface."

To study the burrowing behavior of Uca pugnax, which digs burrows for shelter while aerating the soil, Culbertson and colleagues poured Plaster of Paris into 31 burrows in the two marshes. They later removed the casts of the burrows from the marsh mud and measured dimensions and shape. Crabs that burrowed into the relatively pristine marsh of Great Sippewissett made holes that were straight and stretched an average of 14.8 centimeters (the longest was 18 cm). In Wild Harbor, the burrows averaged 6.8 cm (none were deeper than 14) and showed erratic shapes as the fiddler crabs halted or turned laterally. The locations of the stunted, twisted burrows mapped closely with the location of residual oil in the sediments.

Researchers also observed the escape response of the crabs, both in the marsh and in the lab. After catching fiddler crabs from both marshes, the scientists fed them sediments from either the oiled or clean marsh and used a visual stimulus (a 5 by 5 cm weighted black square, swinging in front of the crab) to test how long they took to move away from it. Crabs fed with oiled sediments were significantly slower to respond, which matched what Culbertson observed in the wild.
“It was shocking that you could bend over and poke the crabs, even flip them over, and they were slow to get up,” said Culbertson of the fiddler crabs who are difficult to observe, no less catch, when healthy. “It was as if they were drunk.” Culbertson and colleagues also examined how quickly crabs consumed food when exposed to oil (much more slowly) and counted the numbers of crabs in each marsh (there were half as many in the oil-tainted marsh). "It has been difficult to demonstrate the biological effects of oil spilled long ago,” said Valiela. “This work provides clear evidence. Jen was able to establish a link between residual oil contents and the onset of biological effects, which will help establish guidelines for management actions after future oil spills." The research by Culbertson builds upon similar Plaster of Paris studies conducted by WHOI graduate student Kathy Burns and BUMP student Charles Krebs in the 1970s. It also continues research by Reddy, his lab mates, and students, who first showed in 2002 that residues of the oil from the 1969 spill are still present in marsh sediments. Most recently, graduate student Emily Peacock has mapped and modeled the concentrations of oil at various locations in Wild Harbor; Culbertson relied on Peacock’s assessment of oil “hot spots” for the new research. Funding for this work was provided by the WHOI Sea Grant Program, under grants from the National Oceanic and Atmospheric Administration; by the Research Experience for Undergraduates program of the National Science Foundation; an Environmental Protection Agency Science to Achieve Results Graduate Fellowship; and the Young Investigator Program of the U.S. Office of Naval Research. The Woods Hole Oceanographic Institution is a private, independent organization in Falmouth, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with the Earth as a whole, and to communicate a basic understanding of the ocean's role in the changing global environment.
Recently while grocery shopping, I came across a box of tasty-looking breakfast bars. The front of the box promised they’d give me nutritious, sustained energy, but the food label on the back told me the real story. The bars had less fiber and protein and more sugar and sodium (salt) than my usual breakfast cereal. Yikes! They were really just cookies in disguise. It wasn’t the first time that food labels have saved me from a bad impulse purchase, and I know I’m not alone in that. More of us are stopping to read the label. At least half of all consumers say they read a product’s nutrition label when buying it for the first time. Reading labels helps us make smart choices: Studies show that people who read labels are more likely to eat healthier foods. But even the most experienced of us may not know exactly what goes into a food label — and what doesn’t. If you’d like to learn the facts behind ingredients lists, how to quickly decode food labels, and know what might be left out, then keep reading. The government requires a nutrition facts label and an ingredients list on almost all packaged foods. We need that information because the food’s coming from a manufacturing plant — not our kitchens. Small businesses aren’t required to have labels, as long as the package doesn’t make any health claims. Labels also aren’t required for fresh foods and some other one-ingredient foods like coffee, tea, and spices. The top of the nutrition facts label lists the number of servings per container. Calories and other nutrition information are for a single serving, NOT the entire container. For each single serving, the percentages of daily values (DVs) are based on a 2,000-calorie-a-day diet. The government established 2,000 calories a day as an average intake, but you may eat more or less depending on your individual needs. So you may need to adjust what these percentages mean for you. The DVs of cholesterol, sodium, vitamins, and minerals are the same no matter how many calories you need. DVs of fiber, fats, and carbohydrates change as the number of calories you need change. For example, if you need 2,500 calories a day, your DVs would include at least 30 grams of fiber instead of 25 and up to 80 grams of fat instead of 65, but cholesterol would stay the same at 300 mg or less, as would sodium at 2,400 mg or less. The daily value information is meant to help you limit how much you eat of some nutrients and make sure you get enough of others. DVs with “upper daily limits” include saturated fats and sodium (you don’t want to eat more than 100% of your DVs of saturated fats and sodium). DVs with “at least” daily amounts include fiber, iron, and certain vitamins (you want to eat at least 100% of your DVs of these). Some nutrients on the label — protein for example — don’t have official DVs, so the label will only note their amount in grams. When the label says that the food is “not a significant source” of a particular ingredient, it doesn’t necessarily mean the ingredient isn’t there at all. If a food isn’t a significant source of fiber or sugar, it means there’s less than 1 gram of either per serving. The same goes for saturated fat if there are less than 0.5 grams and cholesterol if there are less than 2 mg. At the bottom of the label, the ingredients are listed in order by weight, with the ingredient weighing the most listed first and the ingredient weighing the least listed last. 
So all of us non-chemists can understand, manufacturers are supposed to use the common name for ingredients, such as "sugar" instead of a more technical name like "sucrose." If a food contains any of the eight major food allergens, it must be on the label. These allergens — milk, eggs, fish, crustacean shellfish, tree nuts, wheat, peanuts, and soybeans — trigger 90% of all food allergies. Warnings about gluten are still voluntary.

What's not listed

Labels are helpful, but they don't give us the whole nutritional picture of foods. For instance, it would be nice to know if foods contain vitamin D, omega-3 fatty acids, or phytonutrients (natural chemicals in fruits and vegetables, like carotenoids and flavonoids), all of which are known to have protective health effects. But the government doesn't require that they be listed on the label. These nutrients can be voluntarily listed though. It also can be hard to know exactly what certain terms and claims mean.
- Neither the U.S. Food and Drug Administration (FDA) nor the U.S. Department of Agriculture (USDA) defines the term "all-natural." In general, foods labeled "all-natural" don't contain added colors, artificial flavors, or man-made substances, but they could contain preservatives or other additives.
- When buying organic produce, look for the USDA organic seal, which goes on food that's been grown and processed according to federal organic guidelines at an operation that's been approved by a government inspector and gone through a certification process. It's the best way to know for sure if food is produced without antibiotics, hormones, pesticides, and irradiation or hasn't been genetically modified. Some foods that are organic don't carry this seal, but it can be unclear without it what's really organic unless you know the producer's methods.
- The FDA regulates health claims used by food companies, but they can still be misleading. When a bottle of juice claims to "strengthen your immune system," that doesn't mean it will boost your immune cells and ward off disease, for instance.
- Genetically modified foods aren't labeled as such, but many people think they should be. Genetically modified foods have had their genes changed by scientists in a lab. The scientists add genes from a different plant or animal to change the food – make it more resistant to certain diseases or bugs, for example. There's lots of debate about the benefits and risks of eating genetically modified foods, but more research is needed to find out if eating these foods is harmful or puts any new toxins into our bodies. If you want to learn more about this issue, you can watch the informative – and hilarious – video from the Just Label It campaign.
- Sugar-free, lightly sweetened, or fat-free foods aren't necessarily low calorie foods and aren't always healthy foods.
- "Trans fat-free" products could actually contain up to 0.5 grams of the unhealthy fat per serving. Although this sounds like an insignificant amount, it becomes larger if you consume more servings. If the ingredients list has "hydrogenated" in it, then the food probably still has trans fat in it.
- "Stone-ground," "100% wheat," or "multi-grain" doesn't necessarily mean whole grain. Some breads and crackers that look and sound like they're full of fiber-rich whole grains really aren't. To be sure food is truly whole grain, look for the word "whole" at the beginning of the ingredients list (whole oats, whole-wheat flour, whole-grain corn, whole-grain brown rice, whole rye, whole wheat).
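One practical takeaway from the numbers earlier in this piece is that the percent daily values on the panel assume a 2,000-calorie diet and a single serving, so both usually need adjusting. The snippet below is just a rough sketch of that arithmetic, not an official FDA formula; the reference amounts (25 g of fiber and 65 g of fat at 2,000 calories, 2,400 mg of sodium and 300 mg of cholesterol regardless of calories) are the ones quoted above, and the simple proportional scaling lands close to, though not exactly on, the rounded figures given there.

```python
# Daily values quoted earlier in the article, for a 2,000-calorie diet.
CALORIE_SCALED_DV = {"fiber_g": 25.0, "total_fat_g": 65.0}    # scale with calorie needs
FIXED_DV = {"sodium_mg": 2400.0, "cholesterol_mg": 300.0}     # stay the same regardless

def scaled_daily_values(calories: float) -> dict:
    """Adjust calorie-dependent daily values; leave the fixed ones unchanged."""
    factor = calories / 2000.0
    values = {name: round(amount * factor, 1) for name, amount in CALORIE_SCALED_DV.items()}
    values.update(FIXED_DV)
    return values

def actual_intake(per_serving: dict, servings_eaten: float) -> dict:
    """Label numbers are per serving, so multiply by the servings you actually eat."""
    return {name: amount * servings_eaten for name, amount in per_serving.items()}

# A 2,500-calorie diet: roughly 31 g fiber and 81 g fat; sodium stays at 2,400 mg.
print(scaled_daily_values(2500))
# Two servings of a "trans fat-free" snack with 0.4 g per serving is really 0.8 g.
print(actual_intake({"trans_fat_g": 0.4, "sodium_mg": 300.0}, 2.0))
```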
Keep in mind

Reading food labels can be surprising. Food packaging can be misleading (a picture of a fit woman jogging on a package of sugary breakfast bars) or even purposely confusing to entice you to buy. Take the time to read the labels, even if it means spending an extra half hour at the grocery store. And as always, use common sense: avoid foods with long lists of ingredients that sound like chemicals. Cookies aren't nutritious just because they're organic. A small container doesn't mean the food is a single serving. (That small container of ice cream might just hold six servings!)

I find food labels particularly handy when considering two similar foods, such as two brands of pasta sauce. It's easy to do a quick and accurate side-by-side comparison. If one jar of sauce has significantly more sodium, sugar, or saturated fat than the other, that helps me make my decision.

Do you read food labels? Do you have any advice for using food labels to make smart food choices? Let me know!
Gargano Comes to Rome: A Revision of Castel Sant'Angelo's Historical Origins

By Louis Shwartz
University of Toronto Working Papers (2011)

Abstract: This article explores the early medieval transformation of a pagan Roman monument, Hadrian's tomb, into a Christian fortress consecrated to St Michael. Ado of Vienne's claim that Boniface IV (r. 608-15) dedicated an elevated chapel to the archangel atop the moles Hadriani is challenged and reexamined. The many similarities between Michael's shrine on Monte Gargano and this Roman chapel instead indicate that the angelic devotion spread from Gargano to Rome, sometime in the early eighth century, and that the Lombards were the likely transmitters.

Introduction: Rome, the Eternal City, Caput mundi, is also home to certain un-worldly creatures – ones even more exotic than the Swiss Guards. The archangel Michael, perched on the dome of what was once the Emperor Hadrian's mausoleum, presides over a pageant of his confrères below on Ponte Sant'Angelo; Michael is thus preeminent in Roman architecture as he is among the angelic hosts. This paper explores the problematic historical origins of Rome's celestial guardian and his presence atop Castel Sant'Angelo. Ado of Vienne (d. 875) was the first to mention a Roman chapel dedicated to the archangel, attributing the initial renovation of the pagan moles Hadriani to Pope Boniface (likely intending Boniface IV, pope from 608-15, but more on this later). The entry in Ado's Martyrologium for 29 September, the Feast of St Michael, after a lengthy account of the archangel's apparitions atop Monte Gargano, concludes thus:

But not much later, in Rome, the venerable Pope Boniface dedicated to Holy Michael a church built atop a circular monument, a crypt of marvelous craft and great height. The church is housed within the very summit of this building, thus it is said to reside among the clouds.
First Regiment of Minnesota Volunteer Infantry

The First Regiment of Minnesota Volunteer Infantry was a volunteer infantry unit of the Civil War, serving with the Union Army. The First Minnesota was one of the first regiments formed when President Lincoln called for 75,000 troops in April 1861. They are noted in history for their bravery and sacrifice in many battles, especially at the Battle of Gettysburg.

Gettysburg

Confederates were about to break through the Union lines. General Winfield S. Hancock ordered the First Minnesota to fill the gap in the lines, putting them up against a much larger force. The charge was made to buy time for reinforcements. Even though they were alone and outnumbered, they managed to hold off the Confederates. Of the 262 soldiers who charged, 215 were killed or wounded. This is the largest percentage of loss for a regiment ever recorded in American history. General Hancock later stated:

"I had no alternative but to order the regiment in. We had no force on hand to meet the sudden emergency. Troops had been ordered up and were coming on the run, but I saw that in some way five minutes must be gained or we were lost. It was fortunate that I found there so grand a body of men as the First Minnesota. I knew they must lose heavily and it caused me pain to give the order for them to advance, but I would have done it (even) if I had known every man would be killed. It was a sacrifice that must be made. The superb gallantry of those men saved our line from being broken. No soldiers on any field, in this or any other country, ever displayed grander heroism."
By Matthew Gardner

After a presidential election campaign during which tax fairness debates figured prominently, the battle has now emphatically shifted to the states. Louisiana Gov. Bobby Jindal, for instance, recently announced his support for a "flatter, fairer" tax code, and lawmakers in more than a dozen other states are poised to grapple with major tax legislation in 2013. But how unfair are state tax systems right now — and how can these flaws be remedied?

What the numbers tell us is astonishing. In almost every state, low- and middle-income families currently pay more of their income in state and local taxes than the best-off taxpayers pay. Nationwide, the poorest 20 percent of taxpayers pay 11.1 percent of their income in sales, property and income taxes — while the best-off 1 percent pay just 5.6 percent. In other words, the best-off Americans are paying about half as much of their income in state and local taxes as those living on the margins of poverty. Middle-income families pay more too — 9.4 percent on average nationwide.

The reasons for this unconscionable inequity are straightforward. State and local governments typically rely on three types of taxes: income, sales and property. Sales taxes are inherently regressive, falling most heavily on low-income families, because poor families spend most or all of their income just getting by, while the better-off a family is, the less of its total income it needs to spend on anything. Property taxes also tend to fall most heavily on the poor. This leaves only the personal income tax as a potential source of tax fairness. And many states' income taxes are either flat or only mildly progressive.

In this context, a "fairer and flatter" tax system sounds like exactly the right step to take because it would mean requiring the best-off taxpayers to pay their fair share, while sheltering low-income families from the impact of regressive sales taxes. However, when elected officials, such as Gov. Jindal, call for a "flatter" tax system, their policy prescriptions often would make the inequities worse rather than better. Calls to replace the state's modestly progressive income tax with a higher sales tax, from Louisiana to Kansas, Nebraska, North Carolina and elsewhere, would inevitably shift the cost of public services even more heavily onto the poorest taxpayers, making an already-unfair tax system even more so. The sales tax might be a flat rate, but its effects are anything but even when it comes to who pays it.

Of course, in some states, proposals to shift from income to sales taxes aren't being described in tax fairness terms at all. Kansas Gov. Sam Brownback argues that repealing the state's income tax, and ramping up reliance on the sales tax, will be a recipe for faster economic growth. But such claims are typically based on half-baked "studies" from supply-side cheerleaders that don't withstand even the most basic scrutiny. In fact, the most likely impact of a "tax shift" from income to sales taxes is exactly what we saw when Kansas enacted a smaller one last year: less available funding for the public services that are essential for economic success, and higher taxes on the poor.

As states emerge from the fiscal pressures of the Great Recession, the politically lazy move would be cutting taxes. Instead, now is the time for lawmakers to do the responsible thing and make tax systems more sustainable so that future recessions present less of a crisis.
Loophole-closing reforms are the best place to start, and could allow state income, sales and corporate taxes to become more reliable over the long haul. Closing loopholes usually has the added bonus of making tax systems fairer as well. But the tax shifts being contemplated in a number of states this year would achieve neither fairness nor sustainability. Simply digging the hole deeper for families at or near the poverty line is a misguided substitute for the real tax reform states so desperately need. Matthew Gardner is executive director of the Institute on Taxation and Economic Policy and a co-author of the 2013 edition of “Who Pays? A Distributional Analysis of the Tax Systems in All Fifty States,” available at www.whopays.org. Distributed by McClatchy-Tribune Information Services. Copyright 2013 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Urbain-Jean-Joseph Le Verrier had to know something was amiss. Not in his data or his analysis, but with the planet Mercury itself. It was 1859, and Le Verrier, a giant of French astronomy, was attempting to fine-tune his mathematical model of Mercury—the "theory" of the planet's motions expressed within mathematical models based on Newtonian gravitation. He had already produced the most accurate theory 16 years earlier, and this time, he expected to do better. And he did. But there was still a discrepancy that couldn't be explained. Mercury's track wandered just a little bit more than the theory predicted. It was a small number—tiny, really—but the gap between theory and the data was greater than estimates of observational errors could explain, which meant the problem was real. That settled one matter: it strongly suggested that Mercury's difficulties almost certainly lay not with flaws in Le Verrier's analysis, but rather with something unknown out there, waiting to be discovered.

Le Verrier was hardly infallible, to be sure, but there were some errors he simply did not commit. Mercury's orbit does precess around the sun. It does so at a rate that cannot be fully accounted for by any combination of gravitational influences within the solar system. Le Verrier's number for the discrepancy between the empirical picture and the theoretical one of Mercury's motion—38 arcseconds per century—is a little off the modern value of 43 arcseconds, but he got it as nearly right as anyone could in 1859, given the limitations of the data at his disposal. Le Verrier never doubted the work. Nor did his fellow astronomers. For them, it was in fact fantastic news: the unexplained invites discoveries. Of all men, Le Verrier knew what came next: in his book-length report on Mercury, he said as much: "a planet, or if one prefers a group of smaller planets circling in the vicinity of Mercury's orbit, would be capable of producing the anomalous perturbation felt by the latter planet…According to this hypothesis, the mass sought should exist inside the orbit of Mercury."

Le Verrier then took the next step, figuring out how big an intra-Mercurian planet would have to be to drive the advance of the perihelion, the point at which Mercury is closest to the sun. Assuming it lay roughly halfway between Mercury and the sun, he wrote, its mass would have to be about the same as its neighbor's. That posed a problem, as he well knew. If it were that big, why hadn't anyone seen it yet? Even if a Mercury-sized planet in the predicted orbit would usually be hidden within the glare of the sun, "It must be unlikely," he wrote, that it could avoid detection "during a total eclipse of the sun." Thus, Le Verrier proposed an alternative: "a group of asteroids [corpuscles] orbiting between the sun and Mercury." That conclusion must have seemed a bit deflating to Le Verrier's readers. Adding to the lengthening list of minor planets, even in such an exotic location, hardly stacked up to the astronomer's earlier discovery of Neptune. But the stakes of the search were just as high in both cases. Until Mercury's precession could be accounted for, the anomaly represented a violation of the cosmic order, unthinkable (of course) to all of Newton's heirs. Hence Le Verrier's urgency: "It's likely that some of these [asteroids] will be sufficiently large to be seen on their transits across the disk of the sun.
Astronomers, already engaged with all the phenomena that appear on the surface of that star, will without doubt find here another reason to track any spot they may see, no matter how small." In other words: all those sunspots you folks have been tracking? Some of them might be little planets. Go get 'em!

For those unwilling to wade through the long job of sorting sunspots, there was one other way to speed discovery. Le Verrier had published a short form of his Mercury findings in the September 12, 1859 edition of the Académie's proceedings, Comptes Rendus. In the same issue, the secretary to the Académie, Hervé Faye, wrote that the best chance of seeing Le Verrier's hypothetical asteroids was during a solar eclipse. By good fortune, the next readily accessible eclipse was almost upon them, to come on July 16, 1860, visible over northern Africa and Spain. During totality, the region closest to the limbs of the sun would suddenly be freed from the brutal glare of the sun, until "at the decisive moment," Faye wrote, the few minutes of totality "would suffice to explore much of the area designated by M. Le Verrier."

Faye's report sparked a wave of preparation. Locations were chosen—near Bilbao, perhaps, or a few miles west of Zaragoza, or maybe across the Mediterranean to a point on the coast around Algiers—wherever each observing team believed they would find the best chance of clear skies on the sixteenth of July. Given Le Verrier's history, it seemed plausible that one or more little planets might appear, even on a first attempt. And maybe they'd already been seen! Recalling the tally of misidentified sightings of Uranus, Le Verrier's announcement sent some back to old records, looking for anything that might qualify as an intra-Mercurian body since Galileo had first turned his telescope skyward back in 1609. No persuasive candidates materialized in this first pass—but then again, knowing what you're looking for is a powerful aid to discovery. On to Spain!

The Country Doctor

Edmond Modeste Lescarbault was a humble, almost diffident man. He lived a small life, confined mostly to a modest compass between the Seine and the Loire rivers, about 70 miles west and a touch south of Paris. He had studied medicine, and in 1848 opened a practice in a little country town, Orgères-en-Beauce. He stayed put there for the next quarter of a century. He died in 1894, 90 years old, locally honored—the street where he kept his surgery is now named rue du Dr. Lescarbault—and generally forgotten. The country doctor had one great passion. As a boy, he had fallen in love with the night sky. Children grow up, of course, and most put away childish things. Not Lescarbault. Like many before and since, he discovered in astronomy the same consolation that would later comfort Albert Einstein: the contemplation of "this huge world, which exists independently of us," which, he wrote, serves as "a liberation." For Lescarbault, liberating himself from the daily medical round led him to build a genuinely impressive amateur's observatory: a low stone barn with a modest dome at one end. There he mounted a perfectly competent telescope, a four-foot-long refractor with an objective lens almost four inches in diameter. He would steal time there between patients, just minutes sometimes, sneaking from his office to the dome to look, perhaps to dream, just a little. The discovery of the asteroids in the belt between Mars and Jupiter led him to wonder: where else might such treasures lurk?
An answer came to him on the 8th of May 1845—the day Le Verrier missed the timing of Mercury’s encounter with the sun. Lescarbault watched Mercury’s moving dot across the solar face, untroubled by any mathematical subtleties. Instead, he thought not about the planet in transit, but whether there might be other unobserved transits to seek. If a Ceres- or a Pallas-sized asteroid lurked close to our star, its transits would likely be the only opportunity to see it—and the search for such events would be a perfect target for an enthusiastic amateur astronomer, eager for the thrill of finding something in the cosmos that not one other human in all of time had perceived. He was slow to act on that epiphany. Ordinary life intervened. His medical practice needed nurturing, for one thing, but more important, he was a true amateur. He lacked both the knowledge and tools to achieve the precision needed to capture a phenomenon as delicate as an asteroid breaching the limb, or edge, of the sun. It took him more than a decade to prepare, but by 1858, he had fitted his telescope with homemade instruments good enough to fix the position of objects within its field of view. He was, at last, ready to go hunting. Saturday, March 26, 1859. Orgères, on the edge of spring, enjoys a sun-warmed afternoon. The flux of patients eases. As is his habit, Dr. Lescarbault takes the opportunity to retreat to his observatory. He turns his telescope toward the sun. An object leaps into view: a small, regular dot, just inside the limb of our star. He makes an estimate of its size: about one quarter the apparent diameter of Mercury. He has just missed its first appearance at the edge of the sun. Working backward from its apparent rate of motion, he estimates the time it crossed the solar limb at almost exactly four o’clock or, to be precise, at 3:59:46 pm., plus or minus five seconds. He writes that down, using a piece of charcoal to scratch on a board. Another patient arrives and, likely with unrecorded frustration, he pulls his eye from his telescope. A few minutes later, he returns. The spot is still there, moving across the face of the sun. He tracks it continuously now, noting its nearest approach to the center of the solar circle, and then the instant and place it disappears over the solar limb. He records the time again: 5:16:55. Total transit duration: one hour, 17 minutes, and nine seconds. If an asteroid were ever to be discovered within the innermost wards of the solar system, this is how it would reveal itself. Lescarbault meticulously transcribes his notes, and then… For nine months… Until, at last, he permits himself to write a letter to be delivered—by hand—to Paris. He “broke his silence,” Le Verrier later wrote, “solely because he had seen an article in the journal Cosmos on [my] work on Mercury.” Lescarbault described the data he had collected that Saturday in March—and added one bold claim: “I am persuaded also that [the planet’s] distance from the Sun is less than that of Mercury, and that this body is the planet, or one of the planets, whose existence in the vicinity of the Sun M. Le Verrier had made known a few months ago, by that wonderful power of calculation which enabled him to recognize the conditions of the existence of Neptune…” Lescarbault entrusted it to a M. Vallée, “Honorary Inspector General of Roads and Bridges,” for delivery to the obvious recipient, Le Verrier himself. Dated December 22, 1859, it reached Paris a few days later. Le Verrier’s first reaction—as he told it—was one of doubt. 
But he was prepared to hope. There was only one way to be sure if Lescarbault could possibly have made the observations he claimed to have achieved: meet the man; inspect his instruments; test him. No matter how unlikely it might be that some rural hobbyist could have plucked such a prize, even the possibility that he might made any delay intolerable. Le Verrier was promised to his father-in-law’s for a New Year’s Day celebration— but the train schedules showed that it was just possible that he could get to Orgères and back to Paris before midnight on the 31st. He commandeered Vallée to return with him as a witness, and the two men set out to see if Lescarbault’s “planet” might actually exist. Le Verrier and Vallée arrived at Orgères-en-Beauce unannounced, covering the last 12 miles from the nearest railway station on foot. A few days later, he painted for the Académie a calm, almost placid picture of the encounter: “We found M. Lescarbault to be a man long devoted to the study of science…He permitted us to examine his instruments closely, and he gave us the most detailed explanations of his work, and in particular of all the circumstances of the passage of a planet across the sun.” The two men from Paris made Lescarbault walk them through each phase of his observation until they were convinced that their amateur had in fact seen what he said he had—and, crucially, that his interpretation of the event was correct. “M. Lescarbault’s explanations, the simplicity with which he offered them to us gave us total conviction that the detailed observation he had completed must be admitted to science.” Le Verrier told the story very differently in private. Released from the conventions of scientific discourse, he seems to have composed a hero’s epic. Abbé Moingo, editor of the same journal, Cosmos, in which Lescarbault had first read of the problem of the precession of Mercury, was present at one of these performances. Le Verrier told of setting out for Orgères, Moingo wrote, assuming that no mere rural medico could have both discovered a new planet and kept quiet about it for nine months. Yet he had “a secret conviction that the story might be true.” At the doctor’s house, the astronomer confronted “the lamb” who trembled before the lion from Paris: “One should have seen M. Lescarbault … so small, so simple, so modest and so timid.” Le Verrier roars; Lescarbault stammers—and yet, according to the Abbé, still manages to defend himself at every turn. “You will then have determined…the time of first and last contact?” Le Verrier demanded, noting that measuring first contact is “of such extreme delicacy that professional astronomers often fail in observing it.” Lescarbault admitted that he had missed first contact, but had estimated the timing by checking how long it took for his spot to travel the same distance again it had already passed from the limb. Not good enough, said Le Verrier, and on learning that the doctor’s chronometer lacked a second hand, stormed “What! With that old watch, showing only minutes, dare you talk of estimating seconds? 
My suspicions are already too well founded.” Lescarbault rallied from even that devastating assault, though, showing his visitors the pendulum he used to count seconds, and reminding the astronomer that as a doctor “my profession is to feel pulses and count their pulsations…I have no difficulty in counting several successive seconds.” By this point in the remembered (and, to modern ears at least, suspiciously dramatic) account, it’s becoming clear what Moingo (and/or Le Verrier) is doing. The ebb and flow of leonine attack, each swipe seemingly fatal, and yet disarmed by a counter from the charmingly naive lamb, enlarges Lescarbault. The famous astronomer plays the part of the skeptic (never mind how much he may have hungered for one outcome over another), while the country doctor becomes more and more a competent, even an excellent man of science. The interrogation lasted an hour, enough to exhaust Le Verrier’s reservoir of doubt. At the last, he surrendered: “with a grace and dignity full of kindness, he congratulated Lescarbault on the important discovery he had made.” He would lead Lescarbault to a more tangible reward as well, securing within the month the Légion d’Honneur for “the village astronomer” who had, it seemed, discovered the first intra-Mercurian planet. The next step was all Le Verrier. Lescarbault had none of the mathematical skill needed to transform his observation into a planetary orbit. Le Verrier did so in less than a week. By making the assumption that its orbit was nearly circular, he calculated that the new planet would complete one revolution around the sun in just under 20 days, on a path that never exceeded eight degrees distance from the sun. Such an object would be difficult if not impossible to see directly. But if Le Verrier’s analysis were even close to correct, the proposed planet would repeat its transits two to four times each year. With that, planet fever hit the popular press—The Times of London, Popular Astronomy in the United States, The Spectator (which had some very kind words for Dr. Lescarbault). Alternative orbits were proposed: one reexamined the data on the assumption that the new planet traced a highly eccentric ellipse around the sun. Others returned to old records to see if Lescarbault’s planet had been seen and ignored previously—and just as with Uranus and Neptune, candidate objects soon turned up, reaching double figures in a series of sightings stretching back to the mid-18th century. It was clear more work needed to be done, beginning with a repeat observation of the mystery object. Nonetheless, the celebrations continued heedless of any lingering uncertainty—and for good reason. The faith in the new planet stood in equal measure on Le Verrier’s own reputation and the rock-solid logic behind the discovery. Mercury’s perihelion precession was and is real. Newtonian gravitation provides an obvious solution to such a problem. The appearance of an object exactly where necessity suggested it ought to be made perfect sense. It fit. It had a moral right to be true. Celestial facts need labels. In this case, the common practice held: planets major and minor took their identities from the gods of antiquity. It’s an oddity of history that there is no record of who first fixed on the ultimate choice, but the decision was easy. A body that never escaped the intense fires of the sun had only one real counterpart on Olympus: Venus’s husband, the lord of the forge. 
By no later than February 1860, the solar system's newest planet knew its name: Vulcan.

Preparing for Discovery

Vulcan's career began happily. Weeks after Le Verrier's announcement, no less an old rival than the Royal Astronomical Society bowed before the new planet: "The singular merit of M. Lescarbault's observations will be recognized by all who examine the attendant circumstances; and astronomers of all countries will unite in applauding this second triumphant conclusion to the theoretical inquiries of M. Le Verrier." More practically, the news evoked the sincerest form of flattery—claims of prior, never recorded encounters with the newcomer. Benjamin Scott, Chamberlain of the City of London and an avid amateur astronomer, wrote to The Times to assert that he had long before found an intra-Mercurian planet: a candidate object the apparent size of Venus glimpsed at sunset "at or about Midsummer 1847." Scott's "discovery," reported only in a conversation with a fellow of the Royal Astronomical Society, could hardly be taken seriously, but working astronomers wondered if they too had missed the prize. Rudolf Wolf, a Zurich-based astronomer long fascinated by sunspots, reviewed his own and other solar observations to find potential mistakes—Vulcan transits he may have mistaken for mere spots—and came up with 21 possibilities that he published, and sent directly to Le Verrier as well, highlighting four that seemed the closest match to Lescarbault's object.

Wolf's list caught the attention of another astronomer, J.C.R. Radau, who used the data from two of Wolf's candidates to refine what could be extracted from just a single Vulcan sighting. Radau joined other professionals who sniped at "the procrastinated publication of Dr. Lescarbault's remarkable observation." But once past his pique, Radau performed his analysis meticulously, generating exactly what astronomers needed to attempt the next phase of Vulcan research: a prediction for an observable transit. With the assumption that Wolf's two suspects were in fact the same object as the one Lescarbault had seen, Radau published the results in early March: transits of Vulcan could next be expected between March 29 and April 7. Radau's transit would be visible in the southern hemisphere, and astronomers there readied themselves for the moment of discovery. The director of the Victoria Observatory, a Mr. Ellery, monitored the sun at half-hour intervals. Major Tennant, head of the Madras station, went one better, reporting that "the sun's disk was watched every few minutes from March 27 to April 10." At the Sydney Observatory, Mr. Scott set up a parallel search. Ellery summed up the outcome for all three: the planet hunt performed by multiple observers reached the end of the predicted period for Vulcan transits "without success."

That was a blow, but hardly a fatal one. It had been obvious from the start that Vulcan would be hard to observe. If it weren't, any large body—Mercury-sized or thereabouts—would have been seen long since. That was why Le Verrier had thought that an intra-Mercurian asteroid belt was the more likely option until Lescarbault's report had raised hopes for a singular Vulcan. Still, while Lescarbault's object appeared to be bigger than most if not all asteroids, his notes suggested it would be as small as one twentieth the diameter of Mercury. At that scale, it could not account for all of the perihelion advance Le Verrier had discovered.
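Le Verrier's quoted orbit for Vulcan (a nearly circular path never straying more than about eight degrees from the sun, with a period of just under 20 days) can be checked with nothing more than Kepler's third law. The short sketch below is an illustrative back-of-the-envelope calculation, not a reconstruction of Le Verrier's own method; it assumes the eight degrees is the planet's greatest elongation as seen from Earth at 1 AU and that the orbit is circular.

```python
import math

def orbit_from_max_elongation(elongation_deg: float) -> tuple:
    """Circular-orbit radius (AU) and period (days) implied by a maximum
    elongation from the sun, via Kepler's third law (P^2 = a^3 in year/AU units)."""
    a_au = math.sin(math.radians(elongation_deg))   # radius of the circular orbit
    period_days = 365.25 * a_au ** 1.5
    return a_au, period_days

a, period = orbit_from_max_elongation(8.0)
print(f"a = {a:.3f} AU, period = {period:.1f} days")   # roughly 0.14 AU and about 19 days
```

The result, about 19 days, matches the "just under 20 days" that the text attributes to Le Verrier, which is all the check is meant to show.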
The Légion d’Honneur he received in 1860 did not change his habits; he remained a country doctor and amateur astronomer until his death. After Le Verrier’s visit, he made no further claims about any intra-Mercurian objects. Working astronomers, though, still had to deal with the problem. Any calculation of Vulcan’s orbit based on one or a few sightings would be an approximation at best, and stood a good chance of being just wrong. For Le Verrier, as for many of his peers, the missing transit expected on the basis of Radau’s calculation only demonstrated, once again, that doing astronomy at the limit of the math and empirical capacity is really, really hard. The necessity of the search hadn’t changed one bit: Mercury still precessed, and whatever was compelling it to do so remained to be found. As it was, swiftly. In the middle of the 19th century, Manchester, England, prided itself on being smart as well as rich. In 1861, the city showcased both its wealth and brains as it hosted Britain’s largest celebration of knowledge, the annual meeting of the British Association for the Advancement of Science. Charles Darwin had published The Origin of Species less than two years before, and that explosion continued to reverberate through every gathering of the learned. At the Manchester meeting Darwin’s defenders prepared to battle religious doubters. One speaker, the “blind economist” Henry Fawcett, made the ultimate claim: Darwin was a true scientific hero, one who solved his problem by the same methods, the same approach to experiment, observation, and generalization that the great Isaac Newton himself had used in his physics. Much else was discussed, of course—advances in dredging engineering, a report on birds of New Zealand, news from the balloon committee. The astronomy section was relatively quiet, but all in all, the meeting reflected a basic truth about Victorian curiosity: it was ubiquitous, constant, the common passion of both professionals and amateurs. No wonder, then, that Manchester’s citizen-scientists would chase new planets. So it happened, on the morning of March 20, 1862, a “Mr Lummis, of Manchester” stole a few minutes to peer at the sun through a small telescope. As the formal report in The Astronomical Register told it, Lummis was watching “between the hours of 8 and 9 a.m., when he was struck by the appearance of a spot possessed of a rapid proper motion.” The object was startling enough that Lummis called for a witness, and they “both remarked on its sharp circular form.” Lummis tracked the spot for 20 minutes before being called in to his day’s work. By the time he returned to his telescope, the object was gone, “but he has not the slightest doubt of the matter.” Radau and a colleague repeated the by-now familiar exercise, constructing the elements of an orbit from incomplete observations, and they found that Lummis’s potential Vulcan was at least compatible with Lescarbault’s, even if there wasn’t enough data to settle the matter once and for all. There were doubters. Two professional astronomers, the American Christian H. F. Peters and the German Gustav Spörer, dismissed Lummis’s “discovery” as a mere sunspot. But for many others, Le Verrier among them, the ongoing identification of plausible Vulcans, in sightings that allowed for at least rough estimates of consistent trajectories, made an ultimate validation seem inevitable. 
By the mid-1860s, The Astronomical Register itself seemed to view the matter as settled, listing Vulcan (without stating whether it was Lescarbault’s object or some other) as the innermost body in its “Descriptive Account of the Planets.” Matters soon grew more complicated, though. Reports of sightings continued to arrive, some from reputable observers, others from unknowns. In 1865, an otherwise completely obscure M. Coumbary wrote to Le Verrier with a detailed account of an observation he made in the city that he—an unreconstructed Byzantine, apparently—referred to as Constantinople. With his telescope in Istanbul he watched as a black spot separated itself from a group of sunspots and appeared to move independently. He continued to track the object for 48 minutes until it vanished over the limb of the sun. Le Verrier endorsed Coumbary’s report, noting that though he didn’t know his correspondent, his information seemed to him to be marked by a combination of “exactitude and sincerity.” In 1869, a group of four eclipse mavens at St. Paul’s Junction, Iowa (one a lady, as contemporary records took pains to mention), saw “with the naked eye what they termed a little brilliant at a distance about equal to the Moon’s diameter from the Sun’s limb”—an object that at least two others (one equipped with a small telescope) seem to have noted as well. To those for whom the logical necessity of Vulcan was overwhelming, this spray of messages was comforting, not proof in and of itself, but an ongoing accumulation of information building on an already established pattern. The lack of a pure moment of discovery must have been frustrating, but given the inherent difficulty of the problem, such momentary glimpses gained significance each time another letter from some sincere and precise stranger reached Paris. As The New York Times put it, “a little scrap of positive evidence overbears an immense amount of negative.” But despite a growing heap of such hopeful wisps, Vulcan remained almost maliciously elusive when confronted by a systematic search. Benjamin Apthorp Gould had a perfect Boston pedigree: son of the headmaster of the Boston Latin School, grandson of a Revolutionary War veteran, he graduated from Harvard College— where else?—in 1844, all of 19 years old. Then, having paid his debt to ancestry, he kicked over the traces. Heading to Europe, he took work at the Greenwich, Paris, and Berlin observatories just as Neptune made its (perceived) solar system debut. He studied math at the University of Göttingen, and in 1848 became the first American to receive a Ph.D. in astronomy—still only 23! On returning to Boston in 1849, he was appalled by the primitive state of research in his home country, and took it on himself to transform American astronomy. Most important for the future of the discipline as a whole, in the 1860s he became one of the first investigators skilled in the new technique of astrophotography, the marriage of a camera to a telescope. Gould brought his cameras with him when he traveled to observe the same 1869 eclipse at which the amateurs had spied a possible Vulcan. He set up in the town of Burlington, Iowa, working on the right bank of the Mississippi River. His goal: to study the solar corona—the sun’s atmosphere, visible only during totality—and to survey the region close to the sun as precisely as possible, looking for whatever might reveal itself within the orbit of Mercury. He and his assistants made 42 photographs during the eclipse. 
Gould also examined many of what he estimated were 400 images made by others along the path of totality. In all those pictures, he saw—nothing. Gould sent his findings to Yvon Villarceau at the Paris Académie. He began with a baseline estimate: in the shadow of the eclipse, a planet or planets substantial enough to account for Mercury’s motion should shine about as brightly as Polaris, the North Star, a second magnitude object—easily seen by the naked eye. His photographic equipment, Gould wrote, was sensitive enough to detect any object down to the limit of unaided human perception, well below what he considered the plausible threshold for the discovery of Vulcan. Thus, he concluded, “I am convinced that this investigation dispenses with the hypothesis that the movement of the perihelion of Mercury results from the effects of one or many small interior planets.” I’ve looked, he said, and Vulcan ain’t there. Not so fast, though: Villarceau added a note of his own to the published version of Gould’s letter. It wasn’t necessary to accept the American’s conclusion as absolute, he argued. There were configurations of asteroids, for example, that could both provide the necessary gravitational influence on Mercury and evade detection. In other words: the problem remained. Mercury still wobbled, and in Newton’s cosmos, its motion still demanded something like a Vulcan. Absence of evidence, to invoke what has become a cliché, could not be taken as evidence of absence. Others agreed. William F. Denning was by general agreement Victorian Britain’s greatest amateur astronomer. He had made his reputation with the first comprehensive analysis of the motion of the Perseid meteor shower, still to be seen from late July to its peak in mid-August, and meteors remained his primary obsession. Vulcan, though, was a sufficiently pressing problem to draw his attention. He was an obligate organizer, and he used his influence to launch a systematic search for solar transits during the next likely window: March and April of 1869. He persuaded 15 other sky-watchers to put the sun “continually under observation, when visible…with a view of rediscovering the suspected intra-Mercurial planet Vulcan.” Vulcan obstinately refused to appear. Denning tried again the next year, recruiting a team of 25 to chase the elusive planet during the spring transit season in 1870, and yet once more with a plea to collaborators in 1871. As he gathered his volunteers, he had declared that his aim was to settle the issue once and for all. “There is every reason,” he wrote, “to suppose that the search will end satisfactorily, if not successfully.” End it did. After three conscientious attempts at locating the missing planet, he seems to have concluded that there was nothing more to be done. He did not repeat his call for aid on the search, and those fellow amateurs of the sky who had responded to him were released to their prior ambitions. After what was to that point the largest systematic search for the object since word of Lescarbault’s sighting first spread, Denning’s null result left Vulcan in a predicament. An explanation for Mercury’s errant motion remained necessary. On one side of the ledger, there was the blunt fact of Le Verrier and his genuine abilities. No one doubted his calculation, and no one should have—a restudy of Mercury’s perihelion advance in the 1880s confirmed and slightly enlarged the very real anomaly he identified. 
Glimpse after glimpse of possible candidate planets offered tantalizing hints—yet a decade into the search, the most rigorous observers kept coming up empty. What could be done? A way out was obvious to the more mathematically sophisticated Vulcan hunters. People simply could have gotten their sums wrong. There were enough imprecise assumptions about the elements of a putative Vulcan’s orbit so that calculations for transits could just be wrong. Princeton’s Stephen Alexander told his fellow members of the National Academy of Sciences that he had reworked Vulcan’s elements to arrive at the conclusion that there should be “a planet or group of planets at a distance of about twenty-one million miles from the sun, and with a period of 34 days and 16 hours.” In other words: we may have been looking in the wrong places, or at the wrong times. Vulcan could be elusive, but not absent. That claim seemed to be confirmed when Heinrich Weber—for once, an actual well-trained professional astronomer—sent word from northeast China that he had seen a dark circular shape transit the sun on April 4, 1876. Sunspot expert and Vulcan devotee Rudolf Wolf passed word of his colleague’s sighting on to Paris, taking a bit of a victory lap as he did so. He told Le Verrier that “the interval between Lescarbault’s observation and Weber’s amounts to exactly one hundred and forty eight times the period” that Wolf had calculated so many years before. The news enthralled Le Verrier—and energized yet another corps of planet seekers more eager than expert. As historian Robert Fontenrose put it, “everyone with a telescope was looking for Vulcan; some found it.” For a time, Scientific American eagerly trumpeted each new “discovery”: from “B. B.” in New Jersey to a Samuel Wilde in Maryland, to W. G. Wright in San Bernardino, to witnesses from beyond the grave, in the form of a minister who remembered that Professor Joseph S. Hubbard “had repeatedly assured him he had seen Vulcan with the Yale College Telescope.” New Vulcans kept turning up that autumn in seemingly every mail delivery, until at last Scientific American cried “Uncle!” and, following its December 16, 1876, issue, declined to publish any more such happy memories. It was as if the question of Vulcan had ridden a seesaw since 1859. Occasional sightings and seemingly consistent calculations would propel it up to the top of the ride; hard-nosed attempts to verify its existence sent it crashing back down. Now, for all that the editors of Scientific American had tired of the flood of anecdotes, the teeter-totter was pointing up: between the one seemingly authoritative report from China and the sheer number, if not the quality, of sky-gazer accounts, the matter of Vulcan seemed just about settled. The popular press certainly thought so.
In late 1876, The Manufacturer and Builder said, “Our text books on astronomy will have to be revised again, as there is no longer any doubt about the existence of a planet between Mercury and the sun.” That autumn, The New York Times was even less bashful, interrupting its coverage of the Hayes-Tilden presidential election to assert that any residual doubts about the intra-Mercurian planet could be put down to simple professional jealousy: “ ‘Vulcan may possibly exist,’ said the conservative astronomers, ‘but Professor So and So never saw it…’ ”—pure us-against-them nastiness, according to the Times, adding “they would hint, with sneering astronomic smiles, that too much tea sometimes plays strange pranks with the imagination.” Now, such too-smart fellows were about to receive their due, the newspaper proclaimed. Why? Because, in the wake of Weber’s report, the grand old man himself, Urbain-Jean-Joseph Le Verrier, had roused himself. “The man who untied Neptune with his nose—so to speak—cannot be accused of confounding accidental flies with actual planets. When he firmly asserts that he has not only discovered Vulcan, but has calculated its elements, and arranged a transit especially for its exhibition to routing astronomers…” the Times wrote, “there is an end of all discussion. Vulcan exists…” The Times got at least one thing right. After shifting his attention to other problems for a few years, Le Verrier had indeed returned to the contemplation of Vulcan. Wolf’s news had fired his passion for the planet, and he began a comprehensive reexamination of everything that might bear upon its existence. Starting with yet another catalogue of claimed sightings dating back to 1820, he identified five observations spread from 1802 to 1862 that seemed to him most likely to represent repeat glimpses of a single planet. That allowed him to construct a new theory for the planet, complete with the prediction the Times had rated so high: a transit that could perhaps be observed, Le Verrier suggested, on October 2nd or 3rd. The headline writers would be disappointed. Vulcan did not cross the face of the sun in early October. More confounding, Weber’s revelation from China was debunked: two photographs made at the Greenwich Observatory clearly revealed his “Vulcan” to be just another sunspot. Scientific American called this the “coup de grace” for this latest “discovery,” but, as usual in the annals of Vulcan, its real impact was more deflating than destructive. Le Verrier’s calculation turned on earlier observations, not Weber’s, and there was a way to explain away the missed transit, by positing an orbit for Vulcan that was much more steeply inclined than previously assumed. Thus Le Verrier hedged his bets: there might be a chance to see Vulcan against the face of the sun in the spring of 1877, but given the full range of possible orbits this insufferably errant planet might occupy, it might be five years or more before the next transit would occur.

To the End

No transits occurred that March. Le Verrier said nothing more in public about Vulcan. He had turned 66 on March 11, and he was tired to the bone. As the year advanced, he found he couldn’t drag himself to the weekly meetings of the Académie, nor to his daily post at the Paris Observatory. Time off seemed to help—he returned to his desk in August—but fatigue masked his real trouble: liver cancer. On the evidence, Le Verrier was not a religious man.
He did accept communion in late June on the urging of a much more committed Catholic colleague, but that seems to have been the limit of his willingness to acknowledge conventional pieties. By summer’s end, he could no longer mistake his illness. The end came on September 23rd. Le Verrier left the solar system larger than he found it—one both better and less completely understood. Of Vulcan itself, though—surely, given all the fully satisfactory explanations for the behavior of every other astronomical object derived from the Newtonian synthesis, the fault, it seemed so nearly certain, must lie not in the stars, but in some human failure to crack this one particular mystery. From the book The Hunt for Vulcan:…And How Albert Einstein Destroyed a Planet, Discovered Relativity, and Deciphered the Universe by Thomas Levenson. Copyright (c) 2015 by Thomas Levenson. Reprinted by arrangement with Random House, a division of Penguin Random House LLC. All rights reserved.
Definitions for vice admiral

- An admiral ranking below a full admiral and above a rear admiral.
- A naval rank between rear admiral and admiral.
- A flag officer in the United States Navy, Coast Guard, National Oceanic and Atmospheric Administration Commissioned Corps, or Public Health Service Commissioned Corps having a grade superior to rear admiral (upper half) and junior to admiral. A vice admiral is equal in grade or rank to a lieutenant general, which is indicated by a 3-star insignia.
- Vice admiral is a senior naval flag officer rank, which is equivalent to lieutenant general and air marshal. A vice admiral is typically senior to a rear admiral and junior to an admiral. In many navies, vice admiral is a three-star rank with a NATO Code of OF-8, although in some navies like the French Navy it is an OF-7 rank, the OF-8 code corresponding to the four-star rank of squadron vice-admiral.

The numerical value of vice admiral in Chaldean Numerology is: 4
The numerical value of vice admiral in Pythagorean Numerology is: 7

Sample Sentences & Example Usage
The talent and expertise Vice Admiral Neffenger brings to his new role after more than three decades at the U.S. Coast Guard will be valuable to this Administration's efforts to strengthen transportation security, he has been a recognized leader in the face of our nation's important challenges, and I am grateful for his service. I look forward to working with him in the months ahead.
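The Chaldean and Pythagorean figures quoted above are produced by assigning a number to each letter and then repeatedly summing digits. The page does not state which letter-value tables it uses; the sketch below assumes the most commonly published tables for each system, and with that assumption it reproduces the quoted results of 4 and 7 for "vice admiral".

```python
# Sketch only: the letter-value tables below are assumed (commonly published
# conventions), not taken from the page itself.

# Pythagorean system: a=1 ... i=9, then the cycle repeats (j=1, k=2, ...).
PYTHAGOREAN = {c: (i % 9) + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

# Chaldean system uses an irregular assignment (and never the value 9).
CHALDEAN = {
    'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 8, 'g': 3, 'h': 5, 'i': 1,
    'j': 1, 'k': 2, 'l': 3, 'm': 4, 'n': 5, 'o': 7, 'p': 8, 'q': 1, 'r': 2,
    's': 3, 't': 4, 'u': 6, 'v': 6, 'w': 6, 'x': 5, 'y': 1, 'z': 7,
}

def reduce_to_digit(n: int) -> int:
    """Repeatedly sum the decimal digits until a single digit remains."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def numerology_value(phrase: str, table: dict) -> int:
    """Sum the table values of all letters, then reduce to a single digit."""
    total = sum(table[ch] for ch in phrase.lower() if ch.isalpha())
    return reduce_to_digit(total)

print(numerology_value("vice admiral", CHALDEAN))     # 4
print(numerology_value("vice admiral", PYTHAGOREAN))  # 7
```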
Published: January 1961. Available as a 9-page paper PDF (196 KB) or as part of the complete source PDF (1.6 MB, 71 pages).

It is postulated that a representation that would provide necessary and sufficient information on fatigue damage must be based on the statistical distribution of the peak values. Statistics affecting fatigue life are examined, and a convenient method for estimating their numerical values is demonstrated and illustrated by an example including four different spectra. The proper specification of a random load is discussed and some conclusions regarding the planning of random fatigue tests are presented.

Professor, Bockamöllan, Brösarps Station,
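The abstract's central point, that a useful description of a random load for fatigue purposes must rest on the statistical distribution of the peak values, can be illustrated with a synthetic example. The sketch below is an illustration rather than the paper's method: it assumes a narrow-band Gaussian load history (for which peak values are approximately Rayleigh distributed), generates such a history, extracts its peaks, and compares the observed mean peak with the Rayleigh prediction. The sampling rate, band limits, and record length are arbitrary choices made for the example.

```python
# Illustration only -- not the method of the paper.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 200, 1 / fs)            # 200 s synthetic load record
white = rng.standard_normal(t.size)

# Narrow band-pass around 10 Hz to mimic a narrow-band random load.
sos = signal.butter(4, [9, 11], btype="bandpass", fs=fs, output="sos")
load = signal.sosfiltfilt(sos, white)
sigma = load.std()

# Extract the peak values (local maxima above the mean level).
peaks, _ = signal.find_peaks(load, height=0.0)
peak_values = load[peaks]

# For a narrow-band Gaussian process the peaks are roughly Rayleigh(sigma),
# whose mean is sigma * sqrt(pi / 2).
print(f"mean observed peak : {peak_values.mean():.4f}")
print(f"Rayleigh prediction: {sigma * np.sqrt(np.pi / 2):.4f}")
```

A fuller treatment would compare the whole histogram of peaks rather than just its mean, and would consider broader-band spectra such as the four examined in the paper.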
Truth Commission: Equity and Reconciliation Commission Duration: 2004 – 2005 Charter: Dahir (Royal Decree) No. 1.04.42 Report: Public report Truth Commission: Equity and Reconciliation Commission (Instance Equité et Réconciliation, IER) Dates of Operation: December 2004 - November 30, 2005 (12 months) Background: After Morocco gained full independence in April 1956, Mohammed V facilitated growth and reform, but he feared opposition takeover and used a strong hand to quell dissenting movements. Hassan II became King in 1961 and continued the oppressive practices of the former King. In 1965, after an opposition party won a small block in the legislature, King Hassan II took control over legislative power, repealing an earlier promise of legislative independence. Secret detention, arbitrary arrest, and disappearances of political opponents became common practices during the era commonly known as the “years of lead” (les années de plomb). Hassan II created the National Consultative Council on Human Rights in 1990 and began releasing detainees as a result of public protests. Mohammed VI, Hassan's son, succeeded his reign after the King's death in 1999. King Mohammed VI, sensing mounting pressure from the people, established a mechanism for reparations for past abuses. (The Independent Commission of Arbitration/Indemnity Commission). Many victims and family remained unsatisfied with the level of information available about past crimes; therefore, Mohammed VI established the Equity and Reconciliation Commission on January 7, 2004. Charter: Dahir (Royal Decree) No. 1.04.42 (PDF-162KB), April 10, 2004 Mandate: The Equity and Reconciliation Commission’s mandate was to investigate forced disappearances and arbitrary detention between Morocco’s independence in 1956 and 1999, to rule on reparation requests pending before the former Independent Commission of Arbitration (created in 1999), and to determine “the responsibility of the state organisms or any other party”. Commissioners and Structure: The Equity and Reconciliation Commission was comprised of sixteen commissioners appointed by the King, but included only one woman. Driss Benzikri, a former political prisoner and human rights activist, headed the commission. Five of the commissioners were former political prisoners, including two who had been exiled. Report: The Equity and Reconciliation Commission’s final report was delivered to King Mohammed VI on December 1, 2005, and it was released to the public on December 16, 2005. The full Arabic version is on the IER’s homepage, which also contains summaries in French and Spanish. The Moroccan National Human Rights Institution published an English summary of the report. - The IER determined the fate of 742 individuals and established the role of the state in the political violence during the period covered by its mandate. - The report did not mention individuals responsible for abuses and hearing participants had to sign an agreement not to identify individuals attributed with responsibility. - The report recommended a diminution of executive powers, the strengthening of the legislature, and independence of the judiciary. - The IER further recommended reforms in the security sector and changes in criminal law and policies, including the development of laws against sexual violence. - It recommended that Morocco should ratify the International Criminal Court (ICC) statute and abolish the death penalty. - Authorities were encouraged to pursue the investigations further. 
- More than 16,000 requests for reparations were reviewed, and 9,779 victims were recommended to receive financial, medical, and psychological assistance. It was suggested that some communities should receive communal reparations. - The King publicly endorsed the recommendations of the Commission and asked the pre-existing Consultative Council on Human Rights to pursue follow-up action. - In Spring 2011, King Mohammed VI announced in Dahir (Royal Decree) No. 1.11.19 of March 1, 2011 that the Consultative Council on Human Rights shall be replaced by an independent Council on Human Rights, vested with additional competencies and in line with the United Nations' Principles relating to the Status of National Institutions. - The King also announced that the recommendations of the IER should be incorporated in a revised constitution. - The ICC Statute had not been ratified nor have the death penalty laws been repealed as of early 2011. - No trials have taken place, and some alleged perpetrators continue to hold high government posts. - Independent from the Moroccan commission, a French judge prepared a case against five Moroccan officials in connection with the disappearance of Socialist Opposition leader Mehdi Ben Barka in Paris on October 29, 1965. - In August 2007, the Consultative Council on Human Rights announced that 23,676 people received compensation for human rights violations committed during the reign of Hassan II. - At the end of 2007, the distribution of individual compensation to victims was almost completed, and $85 million USD was distributed to approximately 16,000 individuals. An institutional mechanism was established to manage the implementation of communal reparation programs. Special Notes: The Equity and Reconciliation Commission was the first truth commission in the Arab world. The final report was silent on the Western Sahara, the area that was hardest hit by repression. Amnesty International. Morocco/Western Sahara: Amnesty International Welcomes Public Hearings into Past Violations. London: Amnesty International, 2004. Available at http://www.amnesty.org/en/library/info/MDE29/010/2004 (accessed June 12, 2008). Amnesty International. Morocco/Western Sahara: Increasing Openness on Human Rights. London: Amnesty International, 2005. Available at http://www.amnesty.org/en/library/info/MDE29/001/2005/en (accessed June 12, 2008). Center for the Study of Violence and Reconciliation. "Justice in Perspective - Truth and Justice Commission, Africa -Morocco." Available at http://www.justiceinperspective.org.za/index.php?option=com_content&task=view&id=20&Itemid=19 (accessed June 12, 2008). Confronting the Truth: Truth Commissions and Societies in Transition. Directed by Steve York, York Zimmerman Inc., United States Institute of Peace and International Center on Nonviolent Conflict. [United States]: York Zimmerman Inc., 2006. Grotti, Laetitia, Eric Goldstein, and Human Rights Watch. Morocco's Truth Commission: Honoring Past Victims during an Uncertain Present. New York: Human Rights Watch, 2005. Available at http://hrw.org/reports/2005/morocco1105/ (accessed June 12, 2008). Hazan, Pierre and United States Institute of Peace. "Morocco Betting on a Truth and Reconciliation Commission." U.S. Institute of Peace, 2006. Available at http://www.usip.org/files/resources/sr165.pdf (accessed July 1, 2008). "Institution Nationale Pour La Promotion Et La Protection Des Droits De l'Homme." Available at http://www.ccdh.org.ma/spip.php?rubrique150 (accessed June 12, 2008). 
International Center for Transitional Justice. "Morocco: ICTJ Activity." Available at http://ictj.org/our-work/regions-and-countries/morocco (accessed May 12, 2011). Lamlili, Nadia. "In-Depth: Justice for a Lawless World? Rights and Reconciliation in a New Era of International Law: Morocco: History Will Keep its Secrets." IRIN News, 2006. Available at http://www.irinnews.org/InDepthMain.aspx?InDepthId=7&ReportId=59487&Country=Yes (accessed July 1, 2008). National Report Submitted in Accordance with Paragraph 15(A) of the Annex to Human Rights Council Resolution 5/1: Morocco. Geneva: United Nations Human Rights Council Working Group on the Universal Periodic Review 7-18 April 2008. Available at http://www.ohchr.org/EN/HRBodies/UPR/Pages/masession1.aspx (accessed June 12, 2008). Slyomovics, Susan. "A Truth Commission for Morocco." Middle East Report 31, no. 1 (2001): 18-21. Available at http://www.merip.org/mer/mer218/218_slymovics.html (accessed June 12, 2008). Son Majesté Le Roi Mohammed VI. "Discours à l'occasion de la cérémonie d'installation de la Commission Consultative de révision de la Constitution," Rabat, March 9, 2011. Available at http://www.maroc.ma/PortailInst/Fr/Actualites/SM+le+Roi+Mohammed+VI+mercredi+soir+un+diffusion+a+la+Nation.htm (accessed March 14, 2011).
[Image captions: Thanks to its optical and electronic control system, the FHS can simulate the flight behaviour of other helicopters (image: DLR, CC-BY 3.0). The ACT/FHS 'Flying Helicopter Simulator' of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) in flight in 2009. Ian Phillis from the Empire Test Pilot School (left) and Waldemar Krebs from DLR, ready for flight. The FHS (Flying Helicopter Simulator) is equipped with a modular experimental system; in addition to the on-board computer, the system also includes extensive sensor equipment. The cockpit has been modified for the crew stations of a safety pilot (left) and a test pilot (right); the mechanical steering system has been replaced by an electrical and optical (fly-by-wire/fly-by-light) primary flight control system that meets the highest safety requirements, while a mechanical emergency control system is still available.]

The ACT/FHS 'Flying Helicopter Simulator' of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) is based on a standard Eurocopter EC 135 type helicopter, which has been extensively modified for use as a research and test aircraft. The mechanical controls, for example, have been replaced by a fly-by-wire/fly-by-light (FBW/FBL) flight control system. Now the control commands are transferred by electric cables and fibre optic cables instead of control rods. The application portfolio of the FHS covers pilot training and trials of new open- and closed-loop control systems, up to simulation of the flight characteristics of other helicopters under real environmental conditions. The FHS is equipped with two engines, a bearingless main rotor and a Fenestron tail rotor as standard; its key features are notably quiet operation, high manoeuvrability and safety.

The fly-by-light control system is a groundbreaking new system in which, in contrast with fly-by-wire, the control signals between the controls, the flight management computer and the actuators for rotor blade control are transferred optically via fibre optic cables instead of electrically. The advantages compared with electrical data transfer are the high transmission bandwidth, high reliability and low weight. The fly-by-light flight control system consists of a quadruple-redundant computer and is designed such that the stringent safety criteria of the European aviation authorities are met in full. The FHS is the first helicopter in the world with this flight control system.

The cockpit layout provides seats for a safety pilot, the test pilot and the flight test engineer. A comprehensive equipment line-up with sensors and systems for onboard data recording and processing is used to record the data from the flight tests. This data is available to users and engineers for analysis both on board and - via telemetry - on the ground.

The following modifications differentiate the FHS from the standard Eurocopter EC 135 helicopter:

Conversion of the ACT/FHS was planned and implemented with the close cooperation of Eurocopter Deutschland (ECD), Liebherr Aerospace Lindenberg (LLI), the German Federal Office of Defense Technology and Procurement (Bundesamt für Wehrtechnik und Beschaffung; BWB) and DLR.
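The quadruple-redundant flight control computer mentioned above raises a general question: how are several redundant command channels consolidated into one output? The snippet below is a generic illustration of one common approach, mid-value selection combined with a miscompare monitor. It is not DLR's actual FHS logic; the channel values, tolerance, and interface are invented for the example.

```python
# Generic illustration of redundancy management -- NOT the actual FHS/DLR design.
from statistics import median

def select_command(channels: list[float], tolerance: float = 0.05):
    """Return (consensus command, indices of channels that disagree with it)."""
    consensus = median(channels)          # median of four values = mean of the middle two
    suspects = [i for i, v in enumerate(channels)
                if abs(v - consensus) > tolerance]
    return consensus, suspects

# Example: channel 3 has drifted; the consensus ignores it and the monitor flags it.
cmd, bad = select_command([0.42, 0.41, 0.43, 0.90])
print(cmd, bad)   # 0.425 [3]
```

In a real flight control computer the monitor would typically also latch persistent failures and reconfigure to the remaining healthy channels; the point of the sketch is only that a voted consensus lets a single faulty channel be outvoted rather than passed on to the rotor actuators.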
Missions - research focus

The powerful ACT/FHS helicopter is used for the following research trials and applications: Development of flight control software that significantly reduces the pilot's workload in difficult flying situations and, at the same time, maintains intuitive control of the helicopter. In particular, the focus is on flight, take-off and landing under adverse conditions, such as launch and landing sites with obstructions and restricted visibility. The test pilot controls the helicopter via the DLR-developed experimental system, while the safety pilot oversees the manoeuvres. The experimental system is a modular, multi-purpose system, whose safety concept is structured so that new, even not fully tested, technologies can be checked and evaluated, before their development is completed.

Other fields of application are:
The STEM Central Context on Electric Transport allows practitioners and learners to explore Electric Cars. The Electric Cars learning journey with a technologies focus is aimed at fourth level. Lessons give learners experiences and opportunities to develop their understanding of the differences between electric motors and petrol engines and the functional, societal and physical issues relating to electric forms of transport. Learners can research the current state of electric transport in the world to learn from elsewhere and deepen their understanding of the issues. Surveying people in the community allows them to better understand attitudes and views relating to this issue. Learners then develop an understanding of different case studies of schemes to encourage the adoption of electric transport and then research and present a reasoned argument on the environmental impact of a wider spread use of electric transport in Scotland.
Shinto architecture embodies the Japanese national spirit, the ancestor worship of Japan. The oldest primitive style of Shinto shrine is the Taisha-zukuri (the Great Shrine style). The main shrine of the Izumo Taisha in the province of Izumo still preserves this style of Shinto architecture (Fig. 60), developing out of dwelling-houses in the age of primitive Shinto. This shrine is dedicated to Okuninushino-Mikoto, one of the earthly deities in the Japanese pantheon. On the other hand there developed a more advanced style of Shinto architecture which originated from the primitive palace building. It is called Shimmei-zukuri. The best example of this style is that of the Ise Daijingu. Here is enshrined the Great Sun Goddess Amaterasu O-mikami, the ancestor of Japanese Emperors. Both styles are simple and archaic ; but their extensive environments give visitors a sacred and inspired feeling. Especially is the Ise Daijingu the symbol of the Japanese spirit and faith. Both styles will be found in many districts over the whole Empire. In Nara, the old capital of Japan, stands the Kasuga shrine (Fig. 61). Its two-storied tall red gate in front of the main shrine which is also colored red and green, differ greatly from those simple and plain styles of the Taisha-zukuri and of the Shimmei-zukuri. The shrine was founded in the 8th century by the Fujiwara families as their tutelary god. Here in this Shinto shrine we see that the Buddhist style of architecture crept into the colorful style of the Kasuga shrine. The Kitano shrine in Kyoto, and the Osaki Hachiman shrine in Sendai, represent another remarkable style of Shinto architecture called Gongen-zukuri. This style developed in the Momoyama Period and was subjected to much influence from Buddhist architecture. It has a main hall and an oratory, connected by an intermediate room called ai-no-ma. This is a characteristic feature of the Gongen-zukuri style. The construction and decoration of the outside and inside are elaborate. This is also a special feature of the Gongen zukuri architecture. Nikko shrine, so famous throughout the world, belongs also to this style of Shinto architecture. But it is too elaborate in construction and too ornate in rich colors, and thereby greatly opposed to the simple style of the early Shinto shrine, which is most aptly expressed in the Shimmei-zukuri style of Shinto architecture.
How To Protect Eyes When Working With Computer Nearly every area of our life is computerized now. Computers are everywhere and some of us have to spend up to 12-14 hours in front of a computer screen. This is very harmful for eyes and causes eye fatigue, redness, itching and gradual worsening of vision. The best thing to do to prevent these symptoms is to stay away from a computer which is impossible for some people due to a variety of reasons. However, you can use a few simple tips to protect your eyes when working with a computer: 1. Organize you work place properly. - Check illumination. Make sure you don’t work in a dark room. Place your computer so that glaring, reflection and flickering of light don’t bother you. - Determine the optimal distance from the screen to your eyes and stick to it. - Remember that your monitor should be at a level slightly below the horizontal eye level. Tilt the screen a little away from yourself. - Ensure that your monitor’s contrast is properly adjusted. - Keep your screen clean from dust and fingertips. - If you can’t see the icons on the screen or symbols in your text well, make them larger. 2. Take breaks during work. No matter how urgent you work is your eyesight is more important. So, take short breaks every half an hour or once in an hour (but the more often you take breaks, the better). Take 5 minutes every 30 minutes or 10 minutes every hour. Look at a distant object or, if this is possible, get up and move around for a few minutes. 3. Do special eye gymnastics. It sounds weird and complicated but actually is quite easy and doesn’t take long. Besides, you can do exercises for eyes wherever you want, including your work place. (See 5 Most Effective Exercises For Eyes). 4. Do palming. Palming is a yogic technique which helps to relax nervous system and preserve eyesight. Rub your palms together until you feel warmth between them. Then cover your eyes with palms. Make sure your fingers overlap on your forehead. Relax. Stay in this position as long as you can but the minimum time is 30 seconds. Optimum time is 4 minutes but if you feel you need to do it longer, proceed with it. You can do no harm to your eyes with a palming procedure. It is advisable to think about something that makes you happy during palming. 5. Blink more often. When people gaze at something (including a monitor) for a long time they tend to blink rarely which makes eyes dry and irritated. So, blink more often when you work at a computer. 6. Use eye drops. Eye drops are great for making your eyes wetter if you forget to blink and your eyes get too dry. You can use eye drops if your eyes are already irritated (Visit a doctor to determine which drops are the best for you). These tips are really helpful in protecting eyes when you work at a computer. But even if you use them regularly you should visit a doctor at least once in six months to check your eye health. Good luck! Leave a Reply
Stormwater runoff in southern California has become one of the largest environmental management issues in the region. While current runoff management has been immensely successful in developing systems for flood control, it has not historically been designed to enhance water quality. Current estimates of pollutant loads from stormwater runoff rival those of traditional point sources for many constituents, and impacts from storm drains and channels have been observed in receiving waters. Examples include bacteria that have resulted in posting of beaches for swimming, nutrients that have caused blooms of macro algae, and toxic constituents that have degraded aquatic habitats. This combination of emissions and impacts has led to an increasing regulatory focus on stormwater runoff, but much of the science needed to make effective and efficient management decisions is still lacking. This fact was recognized by both stormwater regulators and municipal stormwater management agencies throughout southern California and has resulted in a collaborative working relationship called the Southern California Stormwater Monitoring Coalition (SMC). The goal of the SMC is to develop the technical information necessary to better understand stormwater mechanisms and impacts, and then develop the tools that will effectively and efficiently improve stormwater decision-making. The SMC develops and funds cooperative projects to improve our knowledge of stormwater quality management. SMC projects are described on these web pages.
A self-assembling robot that is being compared to a smaller version of the “Transformers” has been created by a group of scientists at MIT. These robots were built using typical hobby technology, motors, batteries and other pieces found in a local hobby store, according to Yahoo News on Aug. 7. The scientists have created their newest version of the self-assembling robots for a cost of about $100. This is down from about $1000, which is what it cost to build the first few robots of this type. One of the more interesting aspects of these new self-assembling robots is where some of the ideas were borrowed from when creating this pioneering technology. The robots come together using heat-activated hinges, and the way that heating makes the hinges fold and connect was borrowed from the science behind the toy Shrinky Dinks. The scientists took the technology already available in hobby shops, along with the science behind the children’s Shrinky Dink art project-toy, and put it all together to make these new transformer-like robots. The self-assembling robots are made of paper and weigh next to nothing. The robots start to assemble and, when complete, each rises on “four stumpy legs and starts scooting in a herky-jerky manner. It transforms from flat paper to jitterbugging four-legged robot in just four minutes,” describes Yahoo. Besides this being an amazing transformation to watch, the uses for this type of robot are never ending. They can be built bigger and better as the scientists advance. They can be used in space exploration, search and rescue missions and environments where it is just too dangerous for humans to tread. The Raw Story suggests that these new self-assembling robots “point the way to self-assembling furniture and satellites.” How wonderful it would be if the technology that allows these robots to self-assemble were applied to all those items you purchase that come with the directions of “Some Assembly Required.” This usually means be prepared to spend hours assembling your child's toy, your new grill or a baby's crib. How nice it would be to lay all the pieces on the floor and watch them slowly come together, just like a Transformer! While this won't be happening anytime soon, the basic science behind something like this is found in the new self-assembling robots. The self-assembling robots can fit into cramped spaces and, because of the cost, if one is lost or destroyed, building another one won’t break the bank. Scientists report this is just the start of the self-assembling robot journey. Once this basic design is modified and improved on through the years, there is no telling what this technology will look like as it progresses in the future. Scientists expect that others will jump on the bandwagon and build these robots themselves, improving on the technology with every new version. The materials to make this robot are readily available to anyone and the price is within reach of the average novice inventor. The robots that the researchers made at MIT are about six inches long, six inches wide and two inches tall. They weigh in at less than three ounces. The self-connecting robots take four minutes to assemble and stand up, and once upright, they move at about two inches per second. Sam Felton and study co-author Daniela Rus of MIT described what the future of these robots might look like.
Coupled with the new technology of a 3-D printer, the robots can be made into all types of shapes and sizes in the future. You could walk into a store like Kinkos and order a dog robot, or have a robot made that plays chess with you. The method used to build these robots costs next to nothing in comparison to the extraordinary amount of money scientists have spent creating modern-day robots. "This is a simple, flexible and rapid design process and a step toward the dream of realizing the vision of 24-hour robot manufacturing," Rus said. While the robots are being compared to “Transformers” of movie fame, they don’t exactly live up to the major transformations that you see on screen. Once they’ve completed their heat-activated self-assembly stage and unfolded to stand upright, that is as far as they go when it comes to transforming.
Wasp Larva Secretes Antimicrobial Liquid to Prevent Food from Spoiling

A new study has found that emerald cockroach wasp larvae secrete an antimicrobial liquid to prevent their food from spoiling. Emerald cockroach wasps are known to sting a cockroach twice - once in its midsection and the second time in the roach's brain - in order to prevent the insect from escaping and to tame it. The wasps then lay their eggs on the cockroach's legs, and the eggs hatch into larvae that feed on the host. Cockroaches are dirty insects that are covered in bacteria. The microbes spoil the roach's flesh and, in turn, threaten the larval wasps that live inside the cockroach during their long incubation period. Now, a team of German researchers has found that the larval wasps have a special ability to survive on their own. For the study, the research team cut the side of a parasitized cockroach and installed a small window. This allowed them to see what the wasp does inside the cockroach's body, according to a report in phys.org. They noticed that the wasp larva spits a liquid solution from its mouth and uses the fluid to cover the inside parts of the body before it begins consuming the cockroach. When experts analyzed the liquid, they found that it contained micromolide and mellein - chemicals that work as antibacterial agents. These chemicals prevent the growth of microbes such as bacteria and viruses. To test the effectiveness of the liquid solution, the research team used the chemicals in a bacteria culture. They found that the fluid killed different types of bacteria. "On the one hand, the finding is surprising, because such a simple, little insect larva uses such a sophisticated strategy to ward off detrimental bacteria," co-author of the study Gudrun Herzner, a researcher at Germany's University of Regensburg, told LiveScience. "The larvae are like little chemical plants that produce large amounts of different antimicrobial substances." The findings of the study appear in the Proceedings of the National Academy of Sciences.
Sociocultural Issues Series - Multicultural Awareness - Sociocultural Issues in Psychology Understanding how diverse social and cultural contexts influence the work of the psychologist is a critical component of becoming clinically competent. The Wright Institute emphasizes educating clinical psychologists to be multiculturally sensitive, and proficient to practice with diverse populations. The Wright Institute approach to learning in this area, like the entire doctoral program, is developmental and integrative. Students use awareness of themselves, and critical thinking to gain an increasingly sophisticated understanding of how culture and personal biases affect the clinical endeavor. There are two required courses, as well as advanced electives in this series. Multicultural education is organized according to attitudes, knowledge, and skills. The first two of these dimensions are addressed in the two required courses; skills are honed in Case Conferences, and the advanced electives. In addition, the Wright Institute aims to integrate sociocultural issues into course offerings across the entire curriculum. In the multicultural awareness course, groundwork is laid for students to explore aspects of their cultural identities, understand the impact of worldview on clinical practice, and learn about systems of oppression. Students meet in small groups to provide a safe place for reflective learning. In the sociocultural issues in psychology course, students explore the social and cultural bases of behavior in our diverse society. Sociocultural issues are examined from a wide range of perspectives, including interpersonal perception and attitude formation, stereotypes, sex roles, social influence and group processes, cultural and ethnic influences, and organizational and systems theory.
Name: Maylandia emmiltos
Origin: Lake Malawi (Africa)

The Red Top Zebra, more commonly known as the "Maylandia" zebra, is not in fact just one species. The Red Top is one of the variants of the Zebra complex. There are several differences in which species is classified as "Red Top". The only real way to distinguish them is to locate the area where the fish was naturally found. Some are more blue than others and some have more "red" than the usual yellow. The presently valid scientific name is Maylandia emmiltos. These names may also depend on the location and area of Lake Malawi where their natural habitat is. However their name is to be classified, they are quite beautiful specimens. They are called Red Top because of the noticeably orangish, yellowish pigmented area at the upper edge of the dorsal and caudal fins. Most specimens range from a bluish white to an iridescent white. Another feature is the vertical black lines that characterize the "zebra" complex. The white is quite strong, as is the orange. If the specimens are well taken care of, they become brilliantly colored and make quite the display. But care is another matter. These wonderful fish require a good-sized tank of at least 180 L, with or without other inhabitants. Like all other Mbunas, they have the typical behavior of being aggressive, territorial (especially these ones), and active. Rock work also plays a good role in maintaining them so they can thrive. Excellent rockwork structures are even known to lower their aggression levels, so that they can co-exist with other companions. If lots of caves, nooks, and crannies are provided in the tank, the species will protect its shelter and let others swim about. Gravel should be medium to fine, because of their digging habits! These fish are great excavators, and rock work structures should be carefully placed so that when they dig, there won't be a danger of a collapse and death of fish. Their maximum length is around 13 to 15 cm. They should only be housed with other aggressive Mbuna species from Lake Malawi. They are pretty aggressive but can be housed with similarly sized specimens. Some companions include Labidochromis caeruleus, Melanochromis auratus, Pseudotropheus lombardoi, Labeotropheus spp., and many other Mbuna species. Do not keep them with "Haps" or Peacocks. These are much less aggressive and are likely to be killed if housed with species from the Zebra complex. The Red Top has been around for some time now in the hobby and is seen from time to time in most local fish stores. They are not recommended for beginners and require some extra care. They are hardy, but good filtration and high water quality are strongly recommended. High oxygen levels are also essential for protection against hole-in-the-head disease, sometimes called "lateral line erosion," in cichlids. Like any Mbuna, pH and GH should also be considered. Hard water of 10-20°dH is necessary, along with a pH of 7.5-8.5 and good biological filtration to cope with their high protein waste. Diet is another key issue in keeping one or a community of Mbunas. Most Mbunas in the wild feed on algae and some floating vegetation. "Maylandia" are known to eat even large amounts of duckweed, but it is not recommended to feed them duckweed from different parts of the world. Green foods are a must, and can be mixed in with some sea meats such as shrimp.
Most people believe these cichlids are all vegetarians, but a balance must be met and commercial foods in the form of pellets should also be supplied. Even home-made recipes are a good idea, but some guidelines are to be followed as well. Live foods should not be fed to this type of fish. Mbuna rely on eating soft vegetation that their long digestive tracts can break down easily. These fine fish are very strong and have enlarged lips for scraping algae off smooth rocks (which are to be supplied also). These species are very easy to breed. They are the typical mouthbrooders. The male (with eggspots on the anal fin) is placed with two females (with no eggspots, or only very faint ones) with his territory chosen and dug out. The female places eggs and the male will fertilize them. But the male will not take care of the eggs, so it is advisable to remove the female into a rearing tank. I think they are just beautiful and are a great addition to any Mbuna community.
Dredging the Toledo harbor and its 25-mile shipping channel is a big and dirty job, but the U.S. Army Corps of Engineers is bound by federal law to do it. A major chunk of the regional economy depends on $1.37 billion worth of goods and materials shuttled in and out of Toledo via ship each year. At least 7,000 jobs are at stake. Ships handle more than 3 million tons of iron ore for the steel industry, 4.7 million tons of coal to generate electric power, and more than 1 million tons of agricultural products from area farms. Given these facts, it is imperative that the Corps - Uncle Sam's public-projects arm - develop a new plan to complete the annual dredging task while depositing as little silt as possible in the open lake. Such a plan won't be easy to come up with and doubtless will be expensive. But a solution is necessary to finally end the official stalemate over the two-decade-old practice of open-lake disposal, which state officials claim roils up the shallow western end of the lake and endangers the $200 million sport fishing industry. Some 835,000 cubic yards of dredged material is removed each year from the harbor and the ship channel, which extends down the Maumee River and out into Lake Erie. Silt from the bottom of the inner harbor, which is likely to be contaminated with various toxic substances, is dumped into a federally approved "confined disposal facility," which juts out from the shore just east of the mouth of the Maumee. The other material, about two-thirds of the total, is deposited in an area of the open lake 2 miles long and a mile wide that lies 3 1/2 miles northwest of the Toledo Harbor Light in Ohio waters. While Corps officials contend that open-lake dumping does not harm the lake because the silt meets U.S. EPA criteria for cleanliness, the practice has some powerful opposition: the governors of both Michigan and Ohio, the Ohio Environmental Protection Agency, the Michigan Department of Environmental Quality, and various environmental and conservation groups. Earlier this year, the Ohio EPA gave the Corps a five-year permit to continue dredging but decreed that open-lake dumping must end by 2013. Whether the silt is contaminated is not the issue, according to Governor Taft, who says that dumping the sediment in the shallow end of the lake "where it can be spread by wind and current action is counterproductive to our efforts to restore this Great Lake." Governor Granholm, meanwhile, has vowed to fight similar practices being considered for Lake Michigan. While the Corps fights what it contends are wrongheaded "misperceptions" about open-lake dumping, the confined disposal area is 60 percent full. Agency officials claim there is no federal money for building another facility (estimated cost: $14 million) and there's no feasible near-term market for using the sediment for any other purpose. Those are reasonable concerns, but governing a vast country such as ours is all about setting priorities for the public money we spend. Logically, maintaining the health of Lake Erie for commerce, recreation, and as a clean drinking water supply for millions of regional residents ought to be a given. Unfortunately, those who hold the purse strings in Washington these days seem more concerned with rebuilding foreign nations than preserving our own.
<urn:uuid:eaa5e658-a784-47ad-8c82-da0eb4fe7e0f>
CC-MAIN-2016-26
http://www.toledoblade.com/Editorials/2004/08/08/The-drudgery-of-dredging.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00175-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948964
695
3.09375
3
Study: Allowing More Salmon to Spawn Creates a Win-Win for Humans and Ecosystems Salmon spend most of their lives in the ocean, but return to their birthplaces in freshwater streams to spawn the next generation. These annual migrations up and down the inland rivers are well known and play a significant role in the ecosystem, particularly in the Pacific Northwest. However, there is a concern that humans are harvesting too many salmon, not allowing enough to return upstream to reproduce. This leaves little for the species that depend on the salmon runs, such as grizzly bears. A new research study suggests that more Pacific salmon should be allowed to spawn in coastal streams, which would create a win-win for humans and the natural environment. The study was conducted by researchers from the University of California (UC) Santa Cruz and Canada. Lead author Taal Levi notes that salmon fisheries are generally well managed. Those in charge determine how many salmon to allocate to spawning and how many to harvest. The concern is that the proportion of spawning to harvest is skewed and needs to be rebalanced for sustainability. To assess their theory, the researchers examined the relationship between salmon and 18 grizzly bear populations in British Columbia, and what percentage of the bears' diet was made up of salmon. "We asked, is it enough for the ecosystem? What would happen if you increase escapement—the number of fish being released? We found that in most cases, bears, fishers, and ecosystems would mutually benefit," Levi said. An increase in spawning salmon will provide more food for the bears. Plus, more uneaten fish remains will be left behind by the bears if the streams are more packed with salmon. The bears would choose to eat only the most nutrient-rich parts, such as the brain and eggs. The leftover fish carcasses could then feed a variety of other scavenging animals. This would create richer biodiversity and a healthier ecosystem. In most instances, allowing more salmon to spawn will also let more young salmon reach the ocean, translating into larger harvests for fishermen. The initial cut in the amount of fish harvested would be hard at first, but could eventually lead to overall greater numbers. However, in two of the systems studied in the Fraser River, B.C., helping the ecosystem with more spawning salmon would hurt the fisheries. The predicted economic cost would be $500,000-700,000 per year. The study has been published in the journal PLoS Biology. For more information on sustainable seafood, check out the Marine Stewardship Council. Grizzly Bear and Salmon image via Shutterstock
<urn:uuid:d777dc99-906c-41b3-a4f8-40fc9b22903b>
CC-MAIN-2016-26
http://www.enn.com/top_stories/article/44252
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00039-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959064
534
3.8125
4
University of Florida nanoresearchers report unprecedented power conversion efficiency with a new graphene-based solar cell. Prior attempts at using graphene’s single-atom-thick honeycomb lattice in solar cells have shown power conversion efficiencies up to 2.9 percent. The UF team more than tripled those levels, with a record-breaking 8.6 percent efficiency. They achieved these results by chemically treating, or doping, graphene with trifluoromethanesulfonyl-amide, or TFSA. The results are published in the current online edition of Nano Letters.
<urn:uuid:7894590f-f6f4-407e-8b62-641c1cfbb416>
CC-MAIN-2016-26
http://www.nanoscienceworks.org/rss_manager.2006-04-18.1344018745/2012-05-29.3031657504/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00084-ip-10-164-35-72.ec2.internal.warc.gz
en
0.895712
122
2.734375
3
Section Editor: Secondary Barbara A. Schaffner Each school year brings both new experiences and old reminiscences as first year teachers, student teachers, and interns are added to the faculty. Everyone has a student-teaching story. We all faced our first class with the guidance of a helpful or not-so-helpful cooperating teacher. Some of us also have filled the role of college supervisor. I would like to focus on the role of the high school cooperating teacher and his/her role in helping new teachers to be aware of gender issues. Obviously, the best place to learn to teach is the classroom. A knowledge of the subject matter and basic methods comes from the college courses. How to function successfully in the faculty room as well as the classroom is the area of the cooperating teacher. Ideally s/he will be able to guide and counsel the newcomers to our profession. Gender issues in the classroom are of major concern. How a potential colleague treats other male and female members of the department and responds to them as persons, how s/he deals and copes with the relationship to male and female administrators can be made much easier with the guidance of a cooperating teacher. Secondary schools are microcosms of the world. Gender issue problems of intimidation, sexual harassment, and inappropriate comments, unfortunately can confront the beginning teacher. A cooperating teacher can help the student teacher by pointing out potential problems and providing a forum for the discussion of issues. In addition s/he can help the new teacher to recognize gender problems among students. By helping a student teacher learn how to deal with both adult and student gender issues, a cooperating teacher can influence not only the present but the future.
<urn:uuid:d113ecd0-2e56-4c85-97d9-f90136b390a6>
CC-MAIN-2016-26
http://scholar.lib.vt.edu/ejournals/old-WILLA/fall95/Schaffner.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00076-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967227
342
2.90625
3
The fundamental human rights; Hitler's "demonocracy"; his ideology of destruction. The indicted organizations as instruments of Nazi tyranny; their interdependence and co-operation. Declaration of criminality: Its legal problem, and the question of expediency. Its elements (see Part 8): Group character; Voluntariness of membership; Criminal nature of aims and methods; Members' knowledge of criminal character of their organizations - Hitler's secrecy order; Connection of indicted organizations with individual defendants. Exclusions from declaration of criminality. Prosecution's observations on Defence witnesses. The Leadership Corps of the Nazi Party (see Part 3; Part 20; Part 21) Numerical data; Check on political reliability. Concentration camps; Euthanasia; Persecution of the Church; of the Jews. Germanization; Foreign slave labour; Lynching of Allied airmen; Ill-treatment and murder of prisoners of war. The Gestapo (see Part 3; Part 20; Part 21) Engineering of frontier incidents; Einsatz Groups; Concentration camps; Commando order and lynching of Allied airmen; "Night and Fog" decree; Third degree interrogations; Persecution and extermination of Jews; Shooting of hostages. The SS (see Part 3; Part 20; Part 21) Argument on organizational unity of the SS; Ideological training for and extent of SS crimes - Himmler speeches; Condoning of crimes by corrupt judiciary. Concentration and extermination camps; Murder of Soviet prisoners of war; Massacre of civilians. The SD (see Part 3; Part 20; Part 21) Argument on term "SD"; Engineering of frontier incidents; Einsatz Groups; Concentration camps; Persecution of the Jews; of the Church; Fifth column activities. The General Staff and High Command (see Part 3; Part 4; Part 21; Part 22) Mentality of German Officers Corps; Planning of aggression; Commissar order; Commando order; Co-operation with Einsatz Groups. The soldier's duty to obey. The Reich Cabinet (see Part 3; Part 20; Part 21) Gradual decline of Cabinet powers and Hitler's rise to dictatorship; Conference of 5th November, 1937; Argument on Cabinet legislation and against its connection with planning of aggressive war. The SA (see Part 3; Part 21; Part 22) Background and consequences of blood purge of 30.6.34; Military character of SA and its part in aggression. Persecution of the Jews; of the Church; Guarding of prisoners of war; of ghettos and concentration camps. Submission of documents including affidavits by Defence Counsel for SA (concluded); by Prosecution in rebuttal of Defence evidence for the indicted organizations. The case for the GENERAL STAFF and HIGH COMMAND (supplemented) Oral evidence of Major General Walter Schreiber, Army Medical Services, on Preparations for bacteriological warfare (see Part 21) Submission of Defence document Experiments on concentration camp inmates and Russian prisoners of war. Final Statement by Defendants Goering, Hess, Ribbentrop, Keitel, Kaltenbrunner, Rosenberg, Frank, Frick, Streicher, Funk, Schacht, Donitz, Raeder, Von Schirach, Sauckel, Jodl, von Papen, Seyss-Inquart, Speer, von Neurath, Fritzsche
<urn:uuid:e86b8a64-c36c-4f24-a075-50ae28881eb7>
CC-MAIN-2016-26
http://www.nizkor.org/hweb/imt/tgmwc/tgmwc-22/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00059-ip-10-164-35-72.ec2.internal.warc.gz
en
0.845106
745
2.65625
3
The Most Iconic Photographs From National Geographic's 125-Year History
Courtesy of National Geographic
The journal of the National Geographic Society officially launched in October 1888. Since then, National Geographic has expanded its global reach to over 60 million readers, a TV channel, and a website. The publication is known for its award-winning nature photography and knack for visual storytelling. In honor of its 125th birthday, Nat Geo is unveiling its most iconic images in the October 2013 "Power of Photography" anniversary issue, featuring famous images that both shaped the magazine's history and had a profound impact on our global consciousness. "Photography is a powerful tool and form of self-expression," Chris Johns, editor in chief of National Geographic magazine, said in the press release. "Sharing what you see and experience through the camera allows you to connect, move, and inspire people around the world." National Geographic is also encouraging all photographers — from amateurs to seasoned experts — to submit their own pictures on October 1 as part of the new photosharing platform, "Your Shot." You can find out more information about the Your Shot community platform here.
<urn:uuid:c1b35051-feb4-472b-8f47-c7208fc5d7d7>
CC-MAIN-2016-26
http://www.businessinsider.com/national-geographic-125th-anniversary-photos-2013-9
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00023-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955993
232
2.671875
3
Nawab Sir Malik Khizar Hayat Tiwana KCSI, OBE (Urdu: نواب ملک خضرحیات تیوانہ) came from a family which had, since the 15th century, been prominent among the landed aristocracy of the Punjab. Malik Khizar Hayat Tiwana was born in 1900 and died in 1975. Malik Khizar Hayat Tiwana's father was Major General Sir Malik Umar Hayat Khan (1875-1944), who acted as honorary aide-de-camp to George V and George VI and served as a member of the Council of the Secretary of State for India, 1924-1934. Tiwana was educated, like his father, at Aitchison College, Lahore. At the age of 16 he volunteered for war service and was commissioned to the 17th Cavalry in 1918. As well as his brief World War I service, Tiwana served in the Afghan campaign which followed, earning a mention in dispatches. Tiwana then assisted his father in the management of family estates in the Punjab.
<urn:uuid:084ae058-b60c-48d6-9737-af542892f253>
CC-MAIN-2016-26
http://www.in.com/malik-khizar-hayat-tiwana/profile-181741.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00136-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981172
262
2.75
3
Arts & Humanities Ten-year-old Alec Greven is the author of several best-selling advice books for kids. Before reading, ask students to offer advice or tips for ways to talk with parents. Write down on a board or a sheet of chart paper some of their best advice. Next, introduce these words that appear in the News Word Box on the students' printable page: independent, noticed, publish, organize, donated, and inspired. Discuss the meanings of any of those words that might be unfamiliar. Then ask students to use one of those words to complete each of these sentences: - Have you ever ____ how many snacks are full of sugar? (noticed) - Students at Branch Street School ____ more than 1,000 cans of food to the Food Bank. (donated) - All the compliments about her singing voice ____ Jordan to try out to be a contestant on American Idol. (inspired) - It took Brandon a few hours to ____ his baseball cards by team. (organize) - Sherry's mother was so proud that Sherry was growing more ____. She even cleaned up her room without being asked. (independent) - At the end of the year, each student chose a favorite story to ____ in a class book. (publish) Read the News Click for a printable version of this week's news story Student Author Sells Thousands of Books. Reading the News You might use a variety of approaches to reading the news: Read aloud the news story to students as they follow along. Students might first read the news story to themselves; then you might call on individual students to read sections of the news aloud for the class. Photocopy the news story onto a transparency and project it onto a screen. (Or use your classroom computer's projector to project the story.) Read the story aloud as a class, or ask students to take turns reading it. Arrange students into small groups. Each student in the group will read a paragraph of the story. As that student reads, others might underline important information or write notes in the margin of the story. After each student finishes reading, others in the group might say something -- a comment, a question, a clarification -- about the text. More Facts to Share You might share these additional facts with students after they have read this week's news story. - Alec Greven is a fourth grader at Soaring Hawk Elementary School in Castle Rock, Colorado. When he was a third grader, his teacher, Anna Dupree, challenged students to write books independently. Alec wrote seven pages of advice on how to talk to girls. He wrote about not showing off or being the class clown. His favorite advice: Sometimes you get a girl to like you, then she ditches you. Life is hard, move on. - The school librarian, Janice Perry, decided to sell spiral-bound editions of the book at the school book fair. As it turned out, Alec's book was the fair's best seller. Perry even gave a copy of the book to her adult son. You can learn from this, she told him. It's level-headed. - How to Talk to Girls was published by HarperCollins. Alec's publisher suggested he keep a journal of his thoughts. The contents of that journal led to two how-to books on dealing with parents, How to Talk to Moms and How to Talk to Dads. The main difference between moms and dads is that dads don't let you get away with the big stuff like playing with matches, says Greven, while mothers don't let you get away with anything. - Alec's advice for getting a girl includes, You can't be goofy. You have to control your hyperness. Cut down on the sugar if you have to. You won't get a good start if you're hyper.
See more of Alec's advice on this YouTube video from Borders Books. - Greven's success has led to the creation of a Keep the Writing Alive Award at his school. - Greven has donated a portion of his book advance to the charity Stand Up To Cancer. He has even started his own Stand Up to Cancer fundraising team. Everyone knows someone who's had cancer, someone like my grandmothers, said Alec. - Alec has appeared on Ellen three times in the past 14 months. - Twentieth Century Fox has bought the movie rights to How to Talk to Girls. At this time, there is no screenwriter attached to the project. - Alec would rather read than play video games. His room has more books than toys. He says he has read all seven volumes of Harry Potter at least five times. - What is the title of Alec Greven's first book? (How to Talk to Girls) - In how many languages has How to Talk to Girls been printed? (17 languages) - At the school book fair, how much did a printed copy of How to Talk to Girls cost? - Who put Alec in touch with a publisher? (Ellen DeGeneres did) - What is the title of the new book that Alec's publisher will start selling this fall? (How to Talk to Santa) - To what charity is Alec donating some of his book-sales proceeds? (The charity is called Stand Up to Cancer.) Think About the News Discuss the Think About the News question that appears on the students' news page. You might use the think-pair-share strategy with students to discuss this question. If you use this strategy: - First, arrange students into pairs to discuss and list responses to the question. - Then merge two pairs of students together to create groups of four students. Have them discuss and add to the ideas they generated in their pairs. - Next, merge two groups of four students to form groups of eight students. Have students create a new combined list of ideas. - Finally, bring all students together for a class discussion about talking to teachers. Critical thinking. Review students' advice or tips for talking with parents. (See the Anticipation Guide at the top of this lesson.) Ask students to expand the list. As a follow-up activity, you might share with students one of these video interviews with Alec Greven: Citizenship. Invite students to respond in writing to one of these predicaments: - On your way home from school, you see two kids spray painting words on a fence. They run from the scene as soon as they see you. What would you do? - While walking home from school, you realize that a stranger is following you. What would you do? - You and your friend are playing on the playground when you find a small blue pill in the grass. What would you do? - On your way to school you see a stranger walking around a neighbor's house. You know the neighbor is on vacation. What would you do? Math. Graph the results. Invite students to share the titles of their favorite books of all time. Have students vote to narrow down the list to ten favorite titles. Then conduct an election. Have students use the Create a Graph tool to create a graph to illustrate the voting results. Use the Comprehension Check (above) as an assessment. Or have students work on their own (in their journals) or in their small groups to respond to the Think About the News question on the news story page or in the Comprehension Check section.
Lesson Plan Source LANGUAGE ARTS: English GRADES K - 12 NL-ENG.K-12.2 Reading for Understanding NL-ENG.K-12.12 Applying Language Skills GRADES Pre-K - 12 NM-REP.PK-12.1 Create and Use Representations to Organize, Record, and Communicate Mathematical Ideas NM-REP.PK-12.3 Use Representations to Model and Interpret Physical, Social, and Mathematical Phenomena SOCIAL SCIENCES: Civics GRADES K - 4 NSS-C.K-4.5 Roles of the Citizen GRADES 5 - 8 NSS-C.5-8.5 Roles of the Citizen GRADES 9 - 12 NSS-C.9-12.5 Roles of the Citizen GRADES K - 12 NT.K-12.1 Basic Operations and Concepts NT.K-12.3 Technology Productivity Tools See recent news stories in Education World's News Story of the Week Archive. Article by Ellen Delisio and Gary Hopkins Copyright © 2009 Education World
<urn:uuid:d3e267e5-809e-4dbe-93db-7af038994e1a>
CC-MAIN-2016-26
http://www.educationworld.com/a_lesson/newsforyou/newsforyou122.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00023-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946632
1,792
3.90625
4
Igor Stravinsky’s (1882-1971) Rite of Spring (Le sacre du printemps) remains one of the most written-about pieces of music of the 20th century. Nonetheless, perhaps our knowledge of it still suffers from an under-appreciation of its original version for two pianos. Although primarily known as a version Stravinsky used during dance rehearsals, the two-piano version is also occasionally recorded and performed as a composition in its own right. In fact, something about hearing the textures and rapid scales this way best demonstrates Stravinsky’s then-new musical language with a syntax of contrasting ideas that stretched the imagination of the listeners of its day almost to the breaking point. Much has been said about the Rite of Spring’s use of dissonance and compound meters, not to mention its unconventional orchestration, but there is something fundamental about Stravinsky’s innovative juxtaposition of ideas that is more apparent without the distraction of a cacophonous orchestra. In fact, the piano played such a central role in all of Stravinsky’s music, no matter how elaborately he orchestrated it, that hearing these ideas on keyboard brings an alternative point of view to them. Adapting the two-piano version of Rite of Spring for pipe organ (four hands and four feet) brings more color into the original two-piano version, but certainly is not as percussive. Without going into too much detail about the familiar segments of music that make up this ballet, this adaptation for pipe organ allows the players an arsenal of registrations to choose from while maintaining the simpler overview of the music that the two-piano version offers. Gregg Wager is a composer and critic. He is author of Symbolism as a Compositional Method in the Works of Karlheinz Stockhausen. He has a PhD in musicology from the Free University Berlin.
<urn:uuid:b25ffce2-47e9-4233-827e-15ed91dff9ed>
CC-MAIN-2016-26
http://www.laphil.com/philpedia/music/rite-of-spring-for-two-organists-from-composers-reduction-for-duo-piano-igor
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00031-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950699
399
2.5625
3
Before I get to my specifically legal argument about symbolic expression and the original meaning of the First Amendment, I wanted to say a bit about the kinds of symbolic expression that were commonplace in England and especially America of that time. Of course, the common nature of such symbolic expression doesn't by itself prove that such expression is constitutionally protected; but it helps show why the evidence that I have come up with makes sense in light of the Framing era's actual practice of using symbolic expression interchangeably with words. Plus some of the items are quite a bit of fun. To begin with, one of the leading English holidays, Guy Fawkes Day (called Pope Day in the colonies), revolved around processions and burning effigies. John Jay, the coauthor of The Federalist, Supreme Court Chief Justice, and negotiator of a much-opposed treaty with England, "wryly observed that he could have found his way across the country by the light of his burning effigies in which he was represented selling his country for British gold" -- a continuation of the pre-Revolutionary pattern of burning the effigies of disliked colonial governors. And sometimes the effigies became parts of more elaborate, and at times self-consciously humorous, displays. In the first major protest against the Stamp Act, colonists placed on a "Liberty Tree" (in that case, a large elm) various effigies, including a "devil . . . peep[ing] out of a boot -- a pun on the name of former British Prime Minister Lord Bute (pronounced Boot), who was widely if erroneously believed to be responsible for the Stamp Act"; "[t]he effigies were then paraded around town, beheaded, and burned." Puns were commonplace in other contexts as well. For instance, English supporters of restoring the Stuarts would pass a wine glass over a water jug while drinking a toast to the health of the king, as a clandestine symbol that one is toasting the "King over the Water," which is to say the Pretender, who lived in exile in France. Numbers often played a role in symbolic displays. Englishmen and Americans who sympathized with English radical and colonial hero John Wilkes not only toasted him, but toasted and celebrated him using a number associated with him: Forty-five toasts -- representing issue 45 of Wilkes' North Briton, which got him prosecuted for seditious libel and made him a star -- were drunk at political dinners where forty-five diners ate forty-five pounds of beef; at other dinners, the meal was "eaten from plates marked 'No. 45'"; the Liberty Tree in Boston had its branches "thinned out so as to number forty-five." Note also that here, as well as in some of the other examples, literal speech (the words of the toasts) was freely mixed with symbolic expression. I haven't seen the Framers wearing symbolic armbands, but their equivalent were cockades worn in hats. Thus, for instance, many 1790s Americans wore colored cockades to represent their Republican (red, white, and blue, referring to Republican sympathy for the French Revolution) or Federalist (black) allegiances. Some wore cockades made of cow dung as a mockery of the other side's cockades. Some conducted mock funerals for the other side's cockades (see the picture above). Mock funerals occurred in other contexts as well: For instance, colonists conducted funeral processions for liberty as protests against the Stamp Act. Flags and liberty poles (see the picture above) also played a role. 
(Liberty poles were often described as "standards," in the sense of the equivalents of flags.) From the pre-Revolutionary era to the 1790s, Americans raised liberty poles as symbols of opposition to what they saw as oppressive conduct by the government. They burned "Liberty or Death" flags stripped from their adversaries' liberty poles. They planned elaborate pantomimes criticizing their Congressmen, with displays of the French and American flags crowned with liberty caps, an upside-down British flag, and a gallows, followed by the burning of the British flag. And burning played a major role as well, as I've already suggested. After the Revolution, Americans burned copies of the Sedition Act and other federal laws. They burned copies of opponents' publications that they saw as libelous, echoing the English legal practice of having libels be burned by the hangman. So it is understandable that a nation that so often used symbolic expression as part of politics would see the freedom of speech and press as covering symbolic expression to the same extent as verbal or printed expression. Likewise, it makes sense that the protection for symbolic expression on the Supreme Court dates back to the very first Supreme Court decision striking down any government action on free speech or free press grounds. The Court in that 1931 case simply casually assumed that symbolic expression was as protected as verbal expression, and treated the display of a red flag as legally tantamount to antigovernment speech. But its assumption was consistent with the First Amendment's original meaning: The equivalence of symbolic expression and verbal expression has been part of American practice -- and, as I'll try to show below, American law -- since the Framing era. Related Posts (on one page): - "Freedom of Speech, or of the Press" as the "Right To Speak, To Write, or To Publish," Including Symbolic Expression: - Symbolic Expression in Late 1700s and Early 1800s Discussions of Constitutional Law: - Symbolic Expression in Late 1700s and Early 1800s Speech Restriction Law: - A Brief Note on Symbolic Expression During the Framing Era: - Symbolic Expression and the Original Meaning of the First Amendment:
<urn:uuid:fb000bb1-f56a-4516-89ca-2a129ee56ed0>
CC-MAIN-2016-26
http://www.volokh.com/posts/1221511556.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00044-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972244
1,195
2.984375
3
As Indiana’s K-12 education officials ponder new ways to engage students online, a national teaching and education policy group came out with a stark reminder: Nearly one-third of all U.S. students do not use the Internet at home. Take a look at this infographic from the group, called ASCD, which released a report on how educators can deal with the divide between the digital haves and have-nots. Willona Sloan, the author of ASCD’s report, says current policymakers don’t understand the central problem: The digital divide once indicated a division between those who had access to technology—especially high-speed Internet—and those who did not, but now, experts say, the digital divide is actually more complicated than previously thought. Data released by the U.S. Department of Commerce found that Americans in lower-income and rural areas have access to Internet connections; however, those connections are slower than is required to download web pages, photos, or videos, while wealthier neighborhoods have faster Internet connections. Or, as education blogger Alexander Russo sums up: Many of those talking about online learning these days seem to have forgotten what it’s like to have to go somewhere to get online, or to consider home Internet access an afterthought instead of a precondition. Or they think everyone has an iPhone.
<urn:uuid:4724001e-1f33-4e21-a54a-ffe0abf44e80>
CC-MAIN-2016-26
http://indianapublicmedia.org/stateimpact/2011/10/04/policy-group-bridge-the-digital-divide-before-pushing-digital-learning/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955404
280
3.140625
3
What Exactly is Malt? Malt is the product that is left over after a cereal grain has been dried, allowed to sprout, air dried again, then heated in an oven. Any of a variety of cereal grains, including rice, wheat, oats and rye can be used to make malt. The most common by far, however, is barley, which is the primary grain used in the production of most beers and the majority of malted whiskeys. Why Is It Made? Malting a grain has two advantages. Prepares Starch for Conversion to Sugar Sometimes, the maltster wants to bring malt “to its highest point of possible soluble starch content.” This is done by heating the grain just to the point where the growth process stops, but still “allows a natural enzyme, diastase,” to be ready to convert that starch to sugar. Used in fermentation processes (think beer, whiskey and vinegar), the soluble starch is added to water where diastase turns it into sugar. This sugar-water is fed to yeast, which metabolize the sugar into CO2 and alcohol. Malt prepared in this way imparts a sweetness to the final product. Malting isn’t necessary for fermentation, and many alcohols are made without malt grains. Nonetheless, it is a helpful tool as it: “prepares the starches for conversion, and then stop this action [turning starch into sugar] until the brewer [or distiller] is ready to utilize the grain.” Adds Flavor and Color Sometimes, grains are malted past the point where the enzymes remain active in order to impart a roasted flavor, and color, to the final product. This flavoring malt is the kind typically used in malted milk powders. Where Is It Used? Malt is used in a variety of products, although it is most commonly found in drink form, with the notable exception of malted milk balls. As shown above, malt is present from the beginning of the beer making process. It is an essential ingredient in the making of the wort, the sugar-saturated liquid produced after the malt has steeped in hot water, allowing the diastase to do its work. Yeast is added to the wort and transforms it into alcohol in stages: (1) during the first “lag” phase, the proteins in the sugar are broken down into amino acids; (2) next, in the “respiration” phase, the yeast breathes in O2 which makes the wort more acidic; (3) then sugars are transformed into CO2 and pyruvic acid; which (4) ultimately turns into the alcohol we lovingly call “beer.” Some sugar is not metabolized by the yeast, which explains why beer may have a sweet flavor. There are three by-products the yeast produce that may affect the flavor of the beer: Diacetyls produce a “woody” taste, esters leave a fruity taste, and phenols have a spicy or even medicinal flavor. Not all whiskeys use malt. Those that do go through a process that is similar to beer, until the distillation starts. Recipes may differ, but the process is generally like this one used for making Scotch whiskey: When the [malted] barley is dry it is then milled to produce a floury substance known as “grist” . . . . which is rich in sugar [and] mixed with hot water to create a “mash” . . . . The . . . mash is stirred regularly to encourage the release of the sugars. When this process is complete the resulting liquid . . . “wort” is . . . transferred to large wooden “washbacks” [where] the yeast is added. At the end of fermentation, the “wash” is not much higher in alcohol than beer, about 8-9%. It is then transferred to a first copper pot, called the wash still, where the liquid is heated and distilled. 
(In distillation, the heat evaporates the liquid, which floats up and through a pipe where it reaches a second vessel. There the vapor cools and condenses back into a liquid). A second distillation, in the spirit still, is required before the spirit reaches the requisite strength. Interestingly, the second distillation produces three distinct products: The first part, the "foreshot" is too strong and contains undesirable components. The next part, the "middle cut" is what we are looking for . . . [and] is diverted to a receiving tank. The final part . . . the "feints" is too weak to be used but it is saved [and] added to the next batch. The middle cut is placed in oak barrels or casks that were previously used to mature Bourbon, Rum, Sherry or Port. Whiskey is typically aged for years, and by some accounts, "2% is lost through evaporation each year." This loss, which can really add up for longer-aged whiskeys, is called the angel's share. Malted Milk Balls The distinctive flavor of malted milk balls (think Whoppers) comes from the addition of malted milk – "a . . . mixture of malted barley, wheat flour, and whole milk, which is evaporated until it forms a powder." Useful in a variety of products, malted milk powder can be purchased in grocery stores and added to a variety of drinks (like milkshakes) as well as baked goods and other recipes. The source of the crummy commercial that made Ralphie swear, powdered and fortified Ovaltine gets its distinctive flavor from barley malt extract. Although for over 100 years moms have served this drink to their children because of its nutritional benefits, those same supplements have recently caused the company trouble. According to news reports, Canada has banned Ovaltine because it is "'enriched with vitamins and minerals' and therefore illegal," due to certain ingredients being unapproved. Who knew? Made in much the same process as beer (at least at first), malt (from barley or another grain) is mixed with water, the starches are broken into sugar, the sugars are fed to yeast, and the yeast produces alcohol. The alcohol will be converted to vinegar in months [although] the industrial process can be completed within hours since air is bubbled and mixed through the solution. If you liked this article, you might also enjoy subscribing to our new Daily Knowledge YouTube channel, as well as: - Alcohol Does Not Help Prevent Hypothermia, It Actually Makes It More Likely - Does Alcohol Really Cook Out of Food? - What Grog was Originally Made From - What is Eggnog Made Of? - Grape-Nuts Contain Neither Grapes Nor Nuts
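The angel's share arithmetic mentioned above is easy to check for yourself. Here is a minimal sketch in Python showing how a flat 2% annual evaporation loss compounds over the aging period; the 2% rate is the figure quoted in the article, while the cask size and the aging periods are made-up examples. Roughly 21% of the spirit is gone after 12 years and close to 40% after 25.

```python
# Rough illustration of the "angel's share": the loss compounds each year.
# The 2% annual rate is the figure cited above; the cask size and year
# counts are arbitrary examples, not values from the article.

def remaining_volume(start_liters: float, years: int, annual_loss: float = 0.02) -> float:
    """Volume left in the cask after `years` of aging at a flat annual loss rate."""
    return start_liters * (1 - annual_loss) ** years

if __name__ == "__main__":
    cask = 500.0  # liters -- hypothetical cask size
    for years in (3, 12, 25):
        left = remaining_volume(cask, years)
        lost_pct = 100 * (cask - left) / cask
        print(f"After {years:2d} years: {left:6.1f} L remain ({lost_pct:.1f}% lost to the angels)")
```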
<urn:uuid:3245d13e-3d8d-4a11-895d-6c197cf2fe4e>
CC-MAIN-2016-26
http://www.todayifoundout.com/index.php/2014/02/exactly-malt/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00006-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951668
1,455
3.109375
3
Shall we play a game?: Merging citizen science and video games In the mysterious online world of "Forgotten Island," you’ll investigate the destruction of a biology lab, encounter domineering robots and solve puzzles to find your way out of the conundrum. You'll also be helping real-life scientists better understand the creatures of the natural world. "Forgotten Island" and its counterpart, "Happy Match," are ingenious video games with a twist. The games were created by researchers, designers and developers as part of a School of Information Studies research project, Citizen Sort, funded by the National Science Foundation. The games combine engaging adventure and problem-solving with citizen science—scientific projects that involve the public as active participants. Inside the games, real-life photos of moths, sharks and plants appear to players who must classify them based on color, size and other features to earn points and unlock puzzles. Players compete in a challenging game—and maybe learn something in the process. Scientists gain information on thousands of specimens. And iSchool researchers gather invaluable data to better understand the effectiveness of gaming in citizen science projects. “We want to know how do we engage people to do purposeful things with a game and then also how do we design a game like this to maximize data quality for scientists,” says lead designer Nathan Prestopnik ’01, G’06, a Ph.D. candidate studying human-computer interaction. “But we also want people playing the game to get something out of it, to be entertained.” The Citizen Sort team unveiled the games Monday during an open house at Hinds Hall for visitors interested in playing the games. "Happy Match" is available to play. "Forgotten Island" goes live Oct. 12. Citizen Sort was the result of a research proposal on the connections between social behavior and computer systems by Distinguished Professor of Information Science Kevin Crowston, the principal investigator, and then-Ph.D. candidate Andrea Wiggins G’11. With a background in design, Prestopnik joined the research team and iSchool Professor Jun Wang became an investigator on the project. “My students and I had been doing research on how citizen science projects work. As a follow-up project, we wanted to develop new systems to support such projects,” Crowston says. “We did a preliminary survey of systems and noticed, first, that there was a need to deal with large collections of photographs that people were contributing and second, that few projects were using games as a way to motivate participation.” They developed the game systems with two goals in mind: “First, to explore how well game features work as a motivator and second, whether game-playing participants can provide high-quality data about the species,” Crowston says. Students began brainstorming content in the summer of 2011 and into the fall. Prestopnik then headed up the work of 16 students to design the video games and the Citizen Sort web site. The student developers, designers and artists were from the iSchool, L.C. Smith College of Engineering and Computer Science, College of Visual and Performing Arts, School of Architecture and the S.I. Newhouse School of Public Communications. 
To feed the web site with images, Dania Souid ’13, a broadcast journalism and French major who coordinates marketing and communications, works with the scientists who have photograph collections, including a University of Georgia naturalist with tens of thousands photos of moths, and such organizations as Encyclopedia of Life that have collected photos from the public. The team is looking to the long-term sustainability of the games after the grant ends. It will need a server to host the games and people to maintain what they hope will be a useful project for a long time. “We’re hoping it expands to people across the globe,” Souid says. Souid has already been bitten by the bug. “Because I’ve classified so many moths, I’ve gotten good at it,” she says. “When I see a moth—not even in the game—I think ‘Oh, I can classify that.’”
<urn:uuid:12253728-3568-402f-87a2-a438d9ac38e2>
CC-MAIN-2016-26
http://news.syr.edu/citizen-sort/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00103-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955056
877
2.78125
3
Where (pron. & conj.) Whether. Where (adv.) At or in what place; hence, in what situation, position, or circumstances; -- used interrogatively. Where (adv.) At or in which place; at the place in which; hence, in the case or instance in which; -- used relatively. Where (adv.) To what or which place; hence, to what goal, result, or issue; whither; -- used interrogatively and relatively; as, where are you going? Where (conj.) Whereas. Where (n.) Place; situation.
<urn:uuid:31694969-7e77-4f6a-bb4a-1643b023393f>
CC-MAIN-2016-26
http://mpdl-service.mpiwg-berlin.mpg.de/mpiwg-mpdl-lt-web/lt/GetDictionaryEntries?query=where&queryDisplay=where&language=en&outputFormat=html&outputType=morphCompact&outputType=dictFull
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00156-ip-10-164-35-72.ec2.internal.warc.gz
en
0.811892
140
2.71875
3
Grades K - 2 Grade level Equivalent: Not Available Lexile® Measure: 530L Guided Reading: L Type of Book: Easy Concept Book - Circus, Carnivals, Fairs, Parades About This Book A trip to the circus can be loads of fun. And there's a lot to learn under this big top, like the rules of subtraction. Subtraction is taking away something and finding the difference. For instance, a clown is carrying three pies out of the kitchen, fresh and hot. Uh-oh—watch out for that banana peel! Two of those pies go flying. Now there is only one left. That's subtraction! Throughout this book, jesters ride the roller coaster, hand out balloons, and wear funny hats. These entertainers are having great fun, but they are also introducing readers to important math notations and concepts such as the minus sign, the equals sign, and the number zero. The colorful illustrations keep children interested, while the clowns' goofy antics help basic subtraction facts sink in. There's plenty of clowning around in this entertaining math book, but when it's time for some serious subtracting these jokesters sure do know their numbers!
<urn:uuid:cddd0d20-da78-4892-b58b-5d2330c313ea>
CC-MAIN-2016-26
http://www.scholastic.com/teachers/book/subtraction-book
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00083-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909915
254
3.53125
4
In search of a better life By HOLLY ATKINS Wonders of Florida Cultural Diversity: Asian-Americans * * * If you've been following this series, you already understand the problem with labeling people. You know that referring to someone as Hispanic or Middle Eastern, for example, may be simple and convenient but can also present all sorts of problems. Can you say that your friend Hamir, from the Middle East, is an Arab? No. His family is from Iran, not an Arab nation, and is therefore Persian. You know that Carlos' family members all speak Spanish at home and only recently moved to the United States, but he tells you they have always been U.S. citizens. You find this confusing, but Carlos explains that they are from Puerto Rico, and Puerto Ricans are citizens of the United States. This week we'll be focusing on another extremely diverse group: Asian-Americans. Often this characterization is extended to include other groups of people, and the term Asian-Pacific Islander-Americans is used. Both of these broad labels attempt to place into one category people from more than 29 distinct subgroups who differ in language, religion and customs. The four main groups of Asian-Americans are East Asian, such as Chinese, Japanese and Korean; Pacific Islander, such as Filipino and Hawaiian; Southeast Asian, such as Thai and Vietnamese; and South Asian, such as Indian and Pakistani. Meet Wingham Chiu Wingham Chiu, "Sammi" to her friends, has just celebrated her third anniversary of living in the United States. On Nov. 26, 1998, Wingham Chiu and her father, Yatsing, her mother, Siuken, and her brothers, Saikit (Harry) and Saicheong (Steven), boarded an airplane and left Hong Kong to join other family members living and working in Tampa. Sammi, an eighth-grader at Ferrell Magnet Middle School, says that her parents came to America in search of better job opportunities. After the British government returned control of Hong Kong to China, many people lost their jobs, including Sammi's father. This was not the first time Sammi's parents had left the security of home in search of a better life. Yatsing and Siuken were both born in China but escaped to Hong Kong. It took Siuken three attempts before she was successful in leaving China in a small boat, swimming part of the way. Like the other kids we've met, Sammi says that life as an Asian-American has its ups and downs. School is a breeze for Sammi, who says that her teachers in Hong Kong helped prepare her for the work she does now. According to Sammi, schools are easier in the United States. Sammi also likes all the extra attention she sometimes gets because people want to know how to speak Chinese. What she doesn't like are the incidents of racism she has encountered. Sammi says that she's been in local department stores and salespeople have ignored her to help "other white people with nice clothes, and people who are pretty. I did not speak English at all and people have a hard time understanding me. I still struggle with the correct words to use," Sammi says. We asked Sammi about some of the special customs and traditions she and her family have brought with them from Hong Kong. She says that honoring old people is very important in her homeland. "In Hong Kong, we did not celebrate kids' birthdays, but we celebrated older peoples' birthdays. For birthdays, the whole family goes out to dinner to honor the older person. The sons and daughters must pay for the entire bill. The older you are in China or Hong Kong, the more respected you are," says Sammi. 
"When there is a funeral, the son pays for everything, and he sets the direction for the entire service. For weddings, the bride wears red instead of white. The man who is getting married goes and picks the bride up at her house, and he has to pay lots of money. The girls helping the bride will let him in after he pays. They do not get married in churches; they get married in restaurants usually and invite people." We also wondered how her parents' rules and expectations were different from her classmates', and if this makes it difficult for her at times. Sammi says that her parents are very strict. "Every night the children cook and wash all the dishes. Our room must remain clean, and we have to keep our house clean all the time. If we leave stuff out, it gets taken away. "I am not allowed to go out with friends, but I can go out with anyone in the family. I am not allowed to spend the night at anyone's house. If I'm out with the family, we all have to be home by 11 p.m. -- even my dad does!" she says. "My brothers and I have to keep the yard clean, also. Saturday and Sunday are chores days! Whenever guests are over, we have to pick up after them and clean up after them. "All of this bothers me because my friends do not have to do all of this. But I do not talk back to my parents. My parents taught me not to fight or argue with anyone. Keep our grades up. Always help others." Books that promote understanding According to Wayne Blanton, executive director of the Florida School Boards Association, of the 66,000 new students in Florida this year, about half were foreign-born. This nationwide trend is expected to continue, so that by the year 2020, one of every two students in the United States will be a person of color. Understanding and respecting people from different nations and cultures is a growing necessity for all Americans. One of the best ways to do this is through books you can find by searching one of these library Web sites: www2.nypl.org/home/branch/teen/MoreBooks.cfm -- From the New York Public Library. Click on "Around the World in Books" or "East Meets West." www.infopeople.org/bpl/teen/asian.html -- Berkley Public Library Teen Services, fiction featuring Asian-American characters www.mpl.org/files/readabou/yaasian.htm -- Milwaukee Public Library, books about Asian-Americans and the Asian-American experience Information from radio station WUSF-FM 89.7, the Smithsonian Institution's Web site, Education Week and ERIC Clearinghouse was used in this report. * * * -- Holly Atkins, a National Board Certified Teacher, is the language arts department head at Bay Point Middle School in St. Petersburg. Atkins, who has been a resident of St. Pete Beach nearly all her life, has been an instructor at the Poynter Institute's Writers' Camp and is the proud teacher of local and national award-winning student writers. About Newspaper in Education The St. Petersburg Times devotes news space to NIE features throughout the year, including this classroom series. The Times' NIE department works with local businesses and individuals to enrich the classroom experience by providing newspapers, supplemental guides and educational services to schools in the Tampa Bay area. To let us know what you think about this series or to find out how you can become involved in NIE, please call (727) 893-8969 or toll-free 1-800-333-7505, ext. 8969. For past stories, check out www.sptimes.com/nie and click on the Kids Only area. © 2006 • All Rights Reserved • Tampa Bay Times 490 First Avenue South St. 
Petersburg, FL 33701 727-893-8111
<urn:uuid:08291f8a-79d6-4643-9e91-21adb351cc90>
CC-MAIN-2016-26
http://www.sptimes.com/2002/01/14/NIE/In_search_of_a_better.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972862
1,620
2.546875
3
"My friend today, I told her I had bipolar, and she kind of backed off away and said, 'Oh my god, is it contagious?' and I was like, 'No!'" - Athena "I've had friends who've been diagnosed and have refused to believe it. And then there's parents who say, 'Oh yeah, it doesn't really exist. It's just them acting out. You know, they need to learn to control their temper.'" - Erin "Kids with bipolar gotta know that they're not that different. They have bigger mood swings than other people. There's probably kids that don't have bipolar that are a lot weirder than they are." - Eric "I regard it as a public health crisis. It's an epidemic, if you will," says Martha Hellander, research director for the Child and Adolescent Bipolar Foundation. "If this many kids had some other strange illness that was causing them to not be able to go to school, and want to kill themselves, and so on, you know there would be attention focused on it." Hellander's group advocates for better research, treatment, and awareness about the disorder. "What we hear is a long story of going from one doctor to another begging for help, describing horrendous symptoms that the child has at home, and being told that 'It's your fault. You're not disciplining enough. You're too strict. You're too lenient.' And the last place the doctors have wanted to go is to say, 'Is there something going on in this child's brain from within that's causing this behavior that we see?'," says Hellander. That's a critical distinction, says Dr. David Miklowitz. For one, the causes are believed to be genetic, and researchers think the bipolar brain works differently. "I would say there are certain chemical imbalances in the brain," says Miklowitz, "and certain structures in the brain that may be either overactive or underactive. And the point is that because of the biology of this condition, not all the behavior in this condition is controllable by the person." In adults, bipolar disorder has distinct periods of highs and lows. Each extreme can last days, weeks, or months. In children, moods can flip-flop several times a day or even hour, and in some cases, Miklowitz says, they're simultaneous. "These kids have what we call 'mixed disorders,'" says Miklowitz, "which means you're manic and depressive at the same time. If you can imagine having your thoughts race, feeling a sped up feeling like you can't sleep and don't want to sleep, but at the same time feeling suicidal, feeling hopeless about the future." But if the symptoms in children are distinct from adults, is it truly bipolar disorder? "Bipolar disorder is the flavor of the month in the diagnosis of children," says Dr. Rachel Klein, a professor of psychiatry at New York University. "The children who are described as bipolar are reported to have chronic mania. They're always irritable, impulsive, difficult, etc.. And as a result, people say, 'It's different in children. It's chronic.' Then by definition, we're not talking about the same disorder." "The problem is, across the country, different criteria are being used," says Dr. David Miklowitz. He says the symptoms of bipolar illness often look like attention deficit hyperactivity disorder. "What's one person's ADHD kid is another person's bipolar kid. And as a result, there's a lot of confusion and disagreement." Back to A Mind of Their Own
<urn:uuid:28d1adaa-6381-4a96-bc89-073d24932c9d>
CC-MAIN-2016-26
http://americanradioworks.publicradio.org/features/bipolarkids/a3.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977231
756
2.59375
3
How the Federal Government Defends Against Cybercriminals - Security breaches on the job: The majority of job-related breaches occur when cyber criminals exploit employees through social engineering and scams. - Spear phishing: Cyber criminals target employees through emails that appear to be from colleagues within their own organizations, allowing cyber criminals to steal personal information. - Social media fraud: With more than one billion users on Facebook alone, cyber criminals are increasingly using social media to engage in identity theft and entice individuals to download malicious code or reveal passwords. In an effort to address the evolving threats and increased risks of cybercrimes, the Department of Homeland Security (DHS) works directly with public and private partners to enhance cyber security. Through the Multi-State Information Sharing and Analysis Center as well as the National Association of State Chief Information Officers, DHS works to promote cyber security awareness and digital literacy amongst all Internet users. DHS also collaborates with financial and other critical infrastructure sectors to improve network security. The U.S. Secret Service and U.S. Immigration and Customs Enforcement (ICE) have special divisions dedicated to fighting cybercrime. The Secret Service maintains Electronic Crimes Task Forces, which focus on identifying and locating international criminals connected to cyber intrusions, bank fraud, data breaches and other computer-related crimes. The agency’s Cyber Intelligence Section has directly contributed to the arrest of criminals responsible for the theft of hundreds of millions of credit card numbers and the loss of approximately $600 million. The Secret Service also runs the National Computer Forensic Institute, which provides law enforcement officers, prosecutors and judges with cyber training and information to combat cybercrime. ICE’s Cyber Crimes Center (C3) works to prevent cybercrime and solve cyber incidents by identifying sources for fraudulent identity and immigration documents on the Internet. C3's Child Exploitation Section investigates large-scale producers and distributors of child pornography, as well as individuals who travel abroad for the purpose of engaging in sex with minors. Cyber security is a shared responsibility, and each of us has a role to play in making it safer, more secure and resilient. For more information about what DHS is doing to fight cybercrime, visit dhs.gov/stopthinkconnect and dhs.gov/cybersecurity.
<urn:uuid:ce74fb20-51fa-446f-9970-f0ac09ba044f>
CC-MAIN-2016-26
http://broward.org/ECountyLine/Pages/Vol_36_no_12/Security.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00033-ip-10-164-35-72.ec2.internal.warc.gz
en
0.916525
464
3.046875
3
After learning about Studio-Based Learning (SBL) from Pat Donohue, I interviewed her to learn about the benefits of this approach and why and how SBL personalizes learning. Pat is inspired by a passion to create engaging environments for learning. “There is a fundamental problem with public school as it has come to be defined. Confining young adults away from the world has created environments loaded with discipline problems and excruciating boredom. The challenge of the classroom teacher to engage young minds in the real subject matters of life while students are stuck in their chairs was clearly a losing task (see Gatto, 1992). If you watch children or young adults in the natural world, it will not take long to notice two things: they will soon be engaged in some kind of learning borne out of their own curiosity and/or they will be engaged in making fun – usually both. To this day I cannot fathom why education cannot be a replication of this natural tendency of human beings to learn about the life they lead.” What is Studio-Based Learning? Studio-based learning in America can be traced back to John Dewey’s Laboratory School in Chicago in the late 1800’s (Lackney, 1999). It was later adopted by North American architectural education and showed up in the University of Oregon Architectural School in 1914. Lackney describes the design studio as, “A type of professional education, traditional in schools of architecture, in which students undertake a design project under the supervision of a master designer. Its setting is the loft-like studio space in which anywhere from twelve to as many as twenty students arrange their own drawing tables, papers, books, pictures, drawings and models. In this space, students spend much of their working lives, at times talking together, but mostly engaged in private, parallel pursuits of the common design task (quoting Schon, 1983).” Primary concepts that drive studio-based learning include: - Students work like apprentices in a common space under the tutelage of a “master.” - Students interact when needed with each other on their designs. - Students undergo periodic critiques, known as “crits,” of their designs, projects, or products. Crits are for gaining knowledge about your work. They occur student-to-master first and then evolve self-learning crits between peers. - It is driven by the pragmatic. The idea is to get your hands in your work, get it done, revise it to perfect it, and self-evaluate the results. - Final work or products are presented publicly. Studio-based learning methods were picked up in various iterations in K-12 programs and in universities throughout the 20th Century. The use of SBL educational laboratories died down in 1970s and 1980s but never died-out. Today, SBL is experiencing a revival. The originators of the SBL model we pursue run the Intelligent and Interactive Systems Lab at Auburn University and partners at Washington State University have launched the Online Studio-Based Learning Environment (OSBLE) where instructors from around the country can share their experiences and growing knowledge about the model’s effectiveness. In 2006, John Seely Brown published a short but hard-hitting article, “Exploring the edge: New Learning Environments for the 21st Century” on the architectural studio model as a foundation for current trends in learning. He explains: In the architecture studio, for example, all work in progress is made public. 
As a consequence, every student can see what every other student is doing; every student witnesses the strategies that others use to develop their designs. And there is public critique, typically by the master and perhaps several outside practitioners. The students not only hear each other’s critiques, but because they were in some sense peripheral participants in the evolution of each other’s work, they also have a moderately nuanced understanding of the design choices and constraints that led to the final result … If you look at the learning outcomes for the architecture studio and Professor Belcher’s physics classes, it is evident that in both environments, students move from ‘learning about’ something to ‘learning to be’ something—a crucial distinction. I believe studio learning is a preferred environment for our educational system, ideas about: situated learning, collaborative learning, personal learning networks and personal learning environments, mobile computing and its ability to deliver an SBL environment into a learner’s hands, and authentic instruction. How did you build this passion for experiential learning approaches? Fifteen years ago, I set out with a Master’s degree in Instructional Technologies to a new professional life, inspired by a passion to create engaging environments for learning. I had been a high school science and English teacher in a central urban school district (Oakland, CA) and a highly rural school district (Lake County, CA). I set out in 1997 on a path that led me to one year of science and mathematics software production for an educational technology publisher, followed by eight years in STEM education grants – six years as Principle Investigator for a U.S. Department of Education grant serving schools in rural North Dakota and two years as Project Director on a similar National Science Foundation (NSF) grant for rural schools in the six Hawaiian Islands. The North Dakota grant was housed in a Science Center and that experience cemented my love of informal education approaches to learning. In Hawaii, I left the grant position on the advice of my university colleagues to enter into their Ph.D. program in Communication and Information Sciences. That program introduced me to new research colleagues whom I work with today. Our research focused on instructional models that integrated technology to raise the learning bar in science, mathematics, and computer science. I eventually came to see the most important part of STEM learning is the “E.” Engineering is, more often than not, where the other three fields come together in hands-on applications. We began to look at instructional models that would situate student learning in practice. My colleagues joined with two other universities in a grant to develop and test a model of Studio-Based Learning (SBL). They are now in their second implementation grant of the SBL program through NSF. Multiple universities and instructors around the country have been involved in one or both of the SBL grant work and the results are showing that, in college undergraduate computer science courses, SBL shows improvement gains for students compared to those in non-SBL courses. I extended the SBL protocol to a pilot program for high school and am now investigating a revision of the model into a “Design Studio” approach that integrates SBL methods into a more robust laboratory of learning experiences. What are the findings from neuroscience? 
Findings from neuroscience have expanded the picture of what is happening in the studio when learning is occurring. Something I now tell my students that makes them sit up with new attention is, “every moment we talk here; every day you leave this classroom, you have a new brain.” The point is, from neuroscience research (cf. John Medina’s Brain Rules at www.brainrules.net), we know that the neurons in our brain form networks of connections that, in some mysterious way we still don’t fully understand, store our learning. That learning is individual and based on the numerous factors that shape our individual connections. We learn constantly. In fact, tell yourself to “stop learning.” It can’t be done. This means that every moment of our lives we are re-forming our connections with every new or evolving thought. New thought; new connections; new brain. I find that brain boggling! And, of course, I want to know more. Currently, we are designing an evolution of our Instructional Technology department to embrace a studio environment using SBL principles. I am working with colleagues in the Education departments to reformulate our SBL model into a more rigorous approach for all grade levels and all disciplines to personalize learning in educational contexts. That will involve development of mobile learning approaches to the studio experience and it will involve creating physical laboratory spaces on campus where we implement and research this evolving method of instruction.
Patricia (Pat) Donohue, Ph.D.
Assistant Professor, Department of Instructional Technologies, Graduate College of Education, San Francisco State University
President and CEO, Community Learning Research LLC
Pat Donohue teaches instructional design and technologies in the Department of Instructional Technologies at San Francisco State University’s Graduate College of Education. She is also President and CEO of Community Learning Research, LLC, a private educational research company located in the Napa Valley, California. She holds a doctorate degree in Communication and Information Sciences from the University of Hawai`i at Manoa and her Master’s in Education: Instructional Technologies degree from San Francisco State University, where she currently teaches courses in Foundations of Instructional Design Theory, Learning with Emerging Technologies, and Usability Testing and Formative Evaluation. Pat worked as a professional development specialist in new technologies and learning for 20 years prior to her current position, eight of which were on federal teacher development grants in STEM (science, technology, engineering, and mathematics) education. Pat was Principal Investigator for NatureShift, a U.S. Department of Education Technology Innovation Challenge Grant (6.5 yrs.) and interim Project Director for Hawai`i Networked Learning Communities, a National Science Foundation Rural Systemic Initiative grant for the Hawai`i Department of Education (1.5 yrs.). Both grants involved integrating technology and cultural contexts into curriculum and instruction, along with teacher professional development in STEM, history, and language literacy for rurally isolated schools in the Northern plains states and the six Hawaiian Islands. Pat taught high school science and English for six years and has taught several university education courses prior to her current position. She held administrative positions at the University of Hawai`i and at San Francisco State and Sonoma State Universities.
For a brief period, she published the Middletown Times Star, a small newspaper in Northern California. With a lifelong interest in the learning sciences, Pat’s research has covered technology innovations for learning, cultural implications and impacts on learning, and advanced technology environments for collaborative learning. She is currently researching a new pedagogical model based on traditions of Studio-Based Learning and investigating the implementation of that model into mobile learning environments. Community Learning Research LLC Patricia Donohue, PhD, CEO - Gatto, J. T. (1992) Dumbing Us Down. - Lackney, J. A. (1999) A History of the Studio-Based Learning Model. - Report of a Workshop on The Scope and Nature of Computational Thinking, Committee for the Workshops on Computational Thinking; National Research Council (2010). - Mitchell Resnick (2002) Rethinking Learning in the Digital Age. Chapter 3: pp32-37. - Mitchell Resnick (2007) Sowing seeds for a more creative society. ISTE - Stephen Cooper, Lance C. Pérez, and Daphne Rainey (2010) K–12 Computational Learning: Enhancing student learning and understanding by combining theories of learning with the computer’s unique attributes. Education, v.53(11) pp 27-29. - Hundhausen, C., Narayanan, N., and Crosby, M. (2008) Exploring Studio-Based Instructional Models for Computing Education
<urn:uuid:880b560e-7648-40c2-a136-25adc4e549ea>
CC-MAIN-2016-26
http://barbarabray.net/tag/project-based-learning/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943047
2,368
3
3
Leukemia is a type of cancer that is classified as being either chronic (meaning that it gets worse slowly) or acute (meaning that it gets worse quickly). In acute leukemia, the blood cells are very abnormal, the blood cells cannot carry out their normal work, and the number of abnormal cells increases rapidly. Common symptoms of this condition can include fever, fatigue, and frequent infections. Leukemia is cancer that starts in blood-forming tissue, such as the bone marrow, and causes large numbers of abnormal blood cells to be produced and enter the bloodstream. Each year, leukemia is diagnosed in about 29,000 adults and 2,000 children in the United States. Leukemia is either chronic (gets worse slowly) or acute (gets worse quickly). In acute leukemia: - The blood cells are very abnormal - The blood cells cannot carry out their normal work - The number of abnormal cells increases rapidly - The disease progresses quickly. Blood cells form in the bone marrow. Bone marrow is the soft material in the center of most bones. Immature blood cells are called stem cells and blasts. Most blood cells mature in the bone marrow and then move into the blood vessels. Blood flowing through the blood vessels and heart is called the peripheral blood. In people with acute leukemia, the bone marrow produces abnormal white blood cells. These abnormal cells are leukemia cells. At first, leukemia cells function almost normally. However, in time, they may crowd out normal white blood cells, red blood cells, and platelets, which makes it hard for blood to do its work.
<urn:uuid:3c487d59-6af5-42f4-b58b-fb289b10b61a>
CC-MAIN-2016-26
http://leukemia.emedtv.com/acute-leukemia/acute-leukemia.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00153-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93407
319
4.03125
4
Five years ago, more than three million girls each year were at risk of undergoing genital cutting - part of a complex mix of norms and societal expectations. Today, due to growing populations and the fact that ever more countries are admitting to girls being cut, many more girls are at risk. But as head of a pioneering organization working to end FGC, I can still say that tremendous global progress has been made. That’s because millions more people are now aware that the problem exists, communities themselves are becoming more engaged and FGC is on the global development agenda. At the Orchid Project, we have always believed that systemic action needs to take place at every different level: from the girl at the heart of the issue, through to her family and community, then the local, regional, national and global levels. For the first time, we are beginning to see the different pieces of the jigsaw puzzle falling into place. Years of sustained work by activists from around the world ensured that female genital cutting (FGC) landed firmly on the world stage last September, when the United Nations agreed on 17 Global Goals for Sustainable Development. Goal Five, achieving gender equality by 2030, has as a target the “elimination of all harmful practices, such as child, early and forced marriage, and female genital mutilation.” This recognition and formalization into an international framework is long overdue, given the sheer scale and impact of FGC on millions of girls and women globally. It presents us with an unprecedented opportunity to work at the community, local, national, regional and global level to bring about an end to the practice within the next generation. And given the extent, scale and impacts of the practice, this will not be a moment too soon. The impacts of FGC are astounding; it can cause immediate and lifelong physical, psychological and sexual trauma, as well as difficulties during intercourse, menstruation and childbirth. It reflects deep cultural discrimination against women and girls. UNICEF counts at least 200 million girls and women alive today who have gone through FGC. However, these figures are only for the 30 countries where there is data. There are at least another 15 countries, mainly in the Middle East and Asia, where the practice continues, but the girls affected are not counted. This is one example of where the evidence base needs to be strengthened – with the absence of such basic data, it can be almost impossible to make a credible case. In spite of legislation, education campaigns and increased awareness, in many communities, the rate of ending the practice is slow. People give many reasons for continuing the practice. Some defenders may argue that undergoing FGC means becoming a “whole woman,” being “clean,” being “sexually modest” or a “documented virgin” in societies where these are prerequisites for a girl’s social acceptance. In most communities, no one will associate with an uncut girl, and she cannot get married. All of these beliefs serve to hold the practice in place. Extensive research in the mid 1990’s in Burkina Faso, Ethiopia, Egypt, Ghana, Kenya, Mali, and Senegal provided insight into sociocultural and religious underpinnings of FGC and helped identify approaches that, over time, have contributed to individual and community decisions to abandon the practice. Recently, donors and other partners have made stronger commitments to help underscore their belief that FGC needs to end. 
One of these donors is the UK’s Department for International Development, which awarded the Population Council, along with a consortium of other members, a five-year research contract to help address some of the gaps in evidence. This new research project has four tasks. First is to map FGC in nine countries, by examining existing data on when, where and why it prevails. Having more detailed knowledge at a country specific level helps with programming and successful interventions. As its second task, the Population Council’s research programmes will evaluate existing approaches to ending FGC. Most are pilot programmes or being studied as part of larger efforts on girls’ education or health care. Few have been assessed for the information decision-makers need - like validity, successful theories or cost-effectiveness. Research like this has already supported the insight that cutting a girl is not an individual decision; it is a society’s chosen path for girls to an acceptable life. Parents and community leaders, like elders everywhere, want their girls to thrive and be happy and safe, and they believe that undergoing FGC will contribute to this. But further research can find ways of working that can help with the complexity of the issue and crucially, with finding ways to help spread success more widely. As an example, the work by Tostan, Orchid’s strategic partner, in six West African countries encourages communities to dialogue about human rights to health, to safety, to well-being. Once community members see gender issues more broadly, without blame or criticism, communities recognise FGC as a harmful social norm, one that was adopted and can be abandoned. And more than 7,350 communities have decided to do just that. Previous evaluations by the Population Council of Tostan’s groundbreaking community empowerment programmes in Senegal and Burkina Faso have informed decisions by UN organizations to recommend this model as a best practice for eliminating FGC. Thirdly, researchers will examine the way existing social norms relate to FGC, and how abandoning the practice affects those norms. And finally, the project will develop and test new research methods. The Population Council has played a remarkable role over the last two decades and its commitment to strengthening research methods on FGC is to be applauded. In particular, its work on expanding the capacity of national organizations and individuals to implement evidence-based interventions and undertake high-quality research, monitoring, and evaluation on FGC is invaluable. The provision of a reliable body of evidence on what does and doesn’t work is one missing piece of the jigsaw. With increased work at community level, better ownership by governments of national action plans, strengthened civil society voices and now, a global framework to help with commitments and accountability, we can begin to see how we can all play our part to end FGC within the next generation. Orchid Project uses non-directive human rights led education to work with partners and to communicate and advocate to accelerate an end to female genital cutting.
<urn:uuid:e05b09d3-2c45-42c1-9b1c-655f44d7c5f7>
CC-MAIN-2016-26
http://www.popcouncil.org/news/ending-female-genital-cutting-in-a-generation
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00004-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950936
1,320
2.8125
3
Research output: Contribution to journal › Journal article
Journal publication date: 05/2012
Journal: Insect Conservation and Diversity
Number of pages: 11
1. Although plantation forests are being established at an increasing rate, their effects on biodiversity are still debated. 2. Native candeias [Eremanthus erythropappus (DC.) MacLeish] and exotic eucalyptus (Eucalyptus spp.) have recently been planted on Cerrado grasslands. The Cerrado is the second largest biome of Brazil and one of the most threatened savanna ecosystems. 3. Here, we use dung beetles (Scarabaeinae) to investigate the effects of the land-use changes associated with afforestation on Cerrado insect biodiversity. We sampled dung beetles in candeia (4- and 6-year-old) and eucalyptus plantations (1- and 4-year-old), natural candeia formations (candeiais), native grasslands and natural forests. 4. Dung beetle diversity in plantations was lower than in grasslands and forests, but was not different from diversity in natural candeiais. Candeia and 1-year-old eucalyptus plantations, grasslands and natural candeiais all had similar community composition, distinct from natural forests. Four-year-old eucalyptus plantations were intermediate between those two groups. Overall, afforestation was detrimental to dung beetles. 5. Differences between exotic and native plantations were only apparent in older plantations, and seemed to be due to differences associated with canopy openness rather than to the origin of the planted species. Candeia plantations were of better conservation value for open-area species (62% species shared between grasslands and candeia plantation) whereas eucalyptus plantations were so for forest species (26% species shared between forests and eucalyptus plantations). We recommend considering this result before deciding where to plant which species.
<urn:uuid:938637ba-8c59-4cf1-b5aa-47188a44d8c6>
CC-MAIN-2016-26
http://www.research.lancs.ac.uk/portal/en/publications/evaluating-the-impacts-and-conservation-value-of-exotic-and-native-tree-afforestation-in-cerrado-grasslands-using-dung-beetles(d645e5a3-8cdc-4a9c-baa5-7c10678eb799).html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00164-ip-10-164-35-72.ec2.internal.warc.gz
en
0.927046
435
2.984375
3
Rifle Throw: Projectile Motion with Rotation uses the Tracker video analysis tool to analyze the motion of a rifle tossed in the air by members of the US Army Drill Team. The tracked video includes the motion of the center of mass of the rifle as well as the end of the rifle. Students explore rotational motion about the center of mass and can build a model of the motion.
Rifle Throw: Projectile Motion with Rotation Activity
This pdf file contains the instructions for using the Rifle Throw: Projectile Motion with Rotation Tracker files (PDF download, 404 KB).
Published: June 21, 2012
Citation: Cox, Anne. Rifle Throw: Projectile Motion with Rotation [computer program], June 7, 2012. http://www.compadre.org/Repository/document/ServeFile.cfm?ID=12085&DocID=2974
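For readers who want to compare their Tracker measurements against an analytical model, the sketch below implements the standard kinematics the activity is built around: the center of mass follows a parabolic trajectory while the rifle rotates about it at a roughly constant rate, so the tip of the rifle traces the sum of the two motions. The launch values, spin rate, and rifle length here are made-up placeholders, not measurements from the ComPADRE video.

```python
import math

# Hypothetical launch parameters -- replace with values measured in Tracker.
x0, y0 = 0.0, 1.5            # initial position of the center of mass (m)
vx, vy = 1.2, 6.0            # initial velocity components (m/s)
omega = 2 * math.pi * 1.5    # spin rate: assume 1.5 rev/s (rad/s)
theta0 = 0.0                 # initial orientation of the rifle (rad)
half_length = 0.55           # distance from center of mass to rifle end (m)
g = 9.81                     # gravitational acceleration (m/s^2)

def center_of_mass(t):
    """Projectile motion of the center of mass (air resistance ignored)."""
    return x0 + vx * t, y0 + vy * t - 0.5 * g * t**2

def rifle_end(t):
    """Position of the rifle end: center of mass plus rotation about it."""
    cx, cy = center_of_mass(t)
    theta = theta0 + omega * t
    return cx + half_length * math.cos(theta), cy + half_length * math.sin(theta)

for i in range(11):
    t = 0.1 * i
    cx, cy = center_of_mass(t)
    ex, ey = rifle_end(t)
    print(f"t={t:.1f}s  com=({cx:.2f}, {cy:.2f})  end=({ex:.2f}, {ey:.2f})")
```

Plotting both trajectories side by side makes the key point of the activity visible: the center of mass is a clean parabola, while the rifle end wobbles around it.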
<urn:uuid:5542c07f-4e3a-4e8f-a1d1-74e6d567ab8f>
CC-MAIN-2016-26
http://www.compadre.org/OSP/items/detail.cfm?ID=12085
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00196-ip-10-164-35-72.ec2.internal.warc.gz
en
0.890275
236
3.015625
3
The complete Auto family comes with three different italic styles to choose from. Why? Besides using one specific italic variant in combination with the roman fonts, the italics can also solve more complex typographic tasks when used together. Designating a quotation within a quotation is one example where two different italics can help you to avoid typographic trouble. Or try identifying a quote within spoken text in a novel, or identifying different speakers in plays and manuscripts. Synergize their forces - get these three italics rolling! Auto 1 Italic: Trust at the speed of 240 km/h Auto 1 italic is formal and straight forward - an italic which can be trusted at the speed of 240 km/h. Use it to converse with pin-striped passengers about market fluctuations in solemn annual reports. Discuss the austere simplicity of the Scandinavian highway scenery with turtle-necked intellectuals and serious academicians who want to feel safe - but go fast too. Hear the engine humming, waiting for you to open up the throttle. Auto 2 Italic: For long passages Auto 2 italic is smooth... you hardly hear the engine if you drive with this italic. With serifs here and there, this italic contrasts more strongly with the roman weights than do the Auto 1 Italics. This italic will survive long passages which would ordinarily be difficult to read when set in italics. Assembled by hand with love, aged (almost) ten years next to oak barrels in a local garage. Auto 3 Italic: Demand the attention! When you are in the mood for cruising, take this italic - and feel the heads turning. Auto 3 bears an upright italic and the contrast with the roman and other italics is created from edgy counterforms, an explicitly cursive construction, unexpected loop strokes and open endings. The capitals have swashes while the lowercase still remains straightforward. Use it for headlines or for emphasizing phrases within roman text. What are the essential construction differences between Auto's three italic styles? Take a look at the lowercase letter 'a'. The strongest but subtle variation is in its counterform: in Auto 1 it is rather horizontal, while the downstroke linking the bowl and the stem is much higher in Auto 2 and 3, making the counter form steeper. Further, Auto 3 features an 'a' with an open counter which lets it stand out from Auto 1 and 2. This small counter-detail has a great influence on the whole text image: the lowercase 'a' is the most frequent letter in several languages, sometimes covering 20% of the whole text mass (according to some studies). Considering that this counterform also appears (sometimes flipped 180 degrees) in the lowercase b, d, g, p and q, it is a key repeated form which gives consistency to the font, and distinction from the others. Although very powerful, the counterform in 'a' alone is unable to create strong typographic contrast. That's why Auto 1 italics feature many straight strokes and a 'g' with a two-storey construction, while Auto 2 italics introduce more curves, some serifs and capitals with economic swashes. Compared to the sloped italics of Auto 1 and 2, we decided to give an upright appearance to the Auto 3 italic. Currently, it stands out convincingly beside the roman fonts and its style is easy to apply to headlines, as it keeps its compact feel.
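The claim that the 'a' counterform dominates the text image rests on how often that letter (and its bowl-and-stem relatives b, d, g, p and q) occurs in running text. The small sketch below simply counts letter frequencies in a sample string so you can check the idea against any text of your own; the sample sentence is just an illustration, and the exact percentages will vary by language and corpus.

```python
from collections import Counter

def letter_frequencies(text):
    """Return each letter's share of all letters in the text."""
    letters = [ch for ch in text.lower() if ch.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {ch: counts[ch] / total for ch in sorted(counts)}

sample = "Auto 1 italic is formal and straight forward, an italic which can be trusted."
freqs = letter_frequencies(sample)

# Top five letters in this sample.
for ch, share in sorted(freqs.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{ch}: {share:.1%}")

# Combined share of the bowl-and-stem letters mentioned in the text.
bowl_letters = set("abdgpq")
bowl_share = sum(share for ch, share in freqs.items() if ch in bowl_letters)
print(f"a/b/d/g/p/q together: {bowl_share:.1%}")
```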
<urn:uuid:44cc431f-8dfe-4c14-b12e-944998f4c9d4>
CC-MAIN-2016-26
http://www.underware.nl/fonts/auto/features/three_italics/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.916193
718
2.703125
3
Adolescent Literacy Glossary
Adequate Yearly Progress, Small Learning Communities, Explicit Instruction: do you know what these phrases mean? Find these and other commonly used terms related to reading, literacy, and reading instruction in our glossary.
Decoding
The ability to translate a word from print to speech, usually by employing knowledge of sound-symbol correspondences. It is also the act of deciphering a new word by sounding it out.
Developmental Spelling
A recognition that the invented spellings of children follow a developmental pattern. As students learn about written words, their attempts at spelling reflect an increased awareness of orthographic patterns.
Differentiated Instruction
An approach to teaching that includes planning out and executing various approaches to content, process, and product. Differentiated instruction is used to meet the needs of student differences in readiness, interests, and learning needs.
Digital Literacy
The ability to learn and use the computer skills required to function in the workplace and in educational settings. Many researchers believe it will become increasingly necessary to be digitally literate to succeed in an Internet-connected economy.
Direct Instruction
A teaching method that features highly scripted lessons and repetitive, interactive activities that teachers present to groups of students. The method is designed to increase student skills through a carefully sequenced curriculum.
Direct Vocabulary Learning
Explicit instruction in both the meanings of individual words and word-learning strategies. Direct vocabulary instruction aids reading comprehension.
Disciplinary Literacy (Also called Content-Area Literacy)
The advanced literacy skills required to master academic content areas, particularly the areas of math, science, English, and history. Content-area literacy is necessary for success at the secondary level and requires knowledge and understanding of the language, terminology, structure, and patterns of specific academic subject areas.
Double-Entry Journals
Also called two-column notes. With this strategy, a student writes two kinds of notes in two columns or on facing pages. On the left are the key ideas in the assigned reading selection, with the page on which they occur, either directly quoted or paraphrased; on the right, the student writes his or her thoughts about those ideas. Double-entry journals can be completed on paper or using word processing or other software.
Dysgraphia
Difficulty writing legibly and with age-appropriate speed.
Dyslexia
A language-based learning disability that affects both oral and written language. It may also be referred to as reading disability, reading difference, or reading disorder. Dyslexia can also cause difficulty with writing, spelling, listening, speaking, and math.
Dysnomia
Difficulty remembering names or recalling specific words; sometimes called a "word-retrieval" problem.
<urn:uuid:6d03a73b-3685-46ff-804a-eff0b2b9ce6f>
CC-MAIN-2016-26
http://www.adlit.org/adlit_glossary/?letter=D
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928179
516
4.125
4
The MTA and New York's Low Carbon Footprint Many people know that taking public transportation is the environmentally friendly way to travel. Now the MTA has verified climate numbers to prove it. The Climate Registry, a non-profit organization that operates the only North American voluntary greenhouse gas registry, has for the first time accepted the MTA's carbon emissions and publicly reported them on its website. "We now know how much carbon MTA operations emit each day. But more importantly, we also know how much carbon is prevented from entering the atmosphere when 8.5 million people per day choose to ride the train or the bus instead of drive their cars," said Ernest Tollerson, MTA Director of Policy and Media Relations. Using a methodology developed by the American Public Transit Association, in cooperation with the MTA and other U.S. transit agencies, consulting firms, nonprofits and academics, the MTA enables New Yorkers to avoid emitting 8.24 units of carbon for every unit the MTA emits through its operations. That means that New Yorkers avoid emitting 19.8 million metric tons of carbon a year. At the same time, its network of subways, buses, commuter trains, paratransit vehicles, bridges and tunnels releases 2.4 million metric tons of carbon annually. The net result is a savings of 17.4 million metric tons of carbon throughout the course of the year. "This number is a window on the MTA's role in reducing carbon and evidence that investing in public transportation is one of the best strategies for reducing greenhouse gas emissions," said Projjal Dutta, the MTA's Director of Sustainability Initiatives. Governed by states, provinces, territories and tribes, The Climate Registry helps hundreds of public and private organizations measure, report and reduce their carbon emissions with integrity. "The Climate Registry congratulates the MTA on successfully reporting its carbon emissions in a public, transparent and credible way," said Denise Sheehan, Vice President for Government and Regional Affairs at The Climate Registry. "By taking this important step, the MTA continues to demonstrate its leadership in addressing climate change and fostering new ways to manage and reduce carbon emissions." "This is proof positive that New Yorkers are avoiding the release of a very large volume of greenhouse gas emissions by riding the region's trains and buses," said William Millar, the President of the American Public Transportation Association. "It demonstrates how important public transportation is in combating climate change and reducing carbon emissions. Clearly, it is one more important reason for everyone to support the expansion of public transportation services throughout the country." For this effort, the Climate Registry retained Ryerson, Master and Associates, Inc. (RMA), a member of the Lloyds Register group of entities, to independently verify our carbon footprint. RMA is a leader in climate change consulting and greenhouse gas verification, and has verified over 550 million tons of CO2. The carbon avoidance factor is calculated using a three-part methodology that takes into account 1) car trips avoided each time someone leaves his or her car at home and chooses to ride a train or bus; 2) congestion relief and therefore increased fuel efficiency of those cars that remain on the road; and 3) public transportation's role in fostering compact land-use patterns that encourage walking and bicycling for some trips and shorter trips overall. 
If carbon emissions are to become the subject of a carbon tax or a cap-and-trade system in the years ahead, public transportation agencies across the country could use this or similar data to make a strong case that they should be entitled to a share of the carbon revenue for their role in preventing the release of greenhouse gases.
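To make the relationship between the published figures concrete, here is a small arithmetic sketch of the avoidance calculation described above. It only reproduces the reported totals from the stated ratio; it is not the APTA methodology itself, which requires detailed ridership, congestion, and land-use data.

```python
# Figures reported by the MTA (metric tons of carbon per year).
mta_emissions_mt = 2.4e6     # emitted by MTA operations
avoidance_factor = 8.24      # tons avoided per ton the MTA emits

avoided_mt = mta_emissions_mt * avoidance_factor
net_savings_mt = avoided_mt - mta_emissions_mt

print(f"Avoided emissions:    {avoided_mt / 1e6:.1f} million metric tons")    # ~19.8
print(f"Net regional savings: {net_savings_mt / 1e6:.1f} million metric tons")  # ~17.4
```

Running the snippet recovers the 19.8 million tons avoided and the 17.4 million ton net savings quoted in the article.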
<urn:uuid:4aacc1ba-906e-44b3-b282-33ed2126f817>
CC-MAIN-2016-26
http://www.mta.info/news/2010/04/20/mta-and-new-yorks-low-carbon-footprint
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00056-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947872
734
2.96875
3
1870 Agricultural Census (pdf) This web site features a database of over 1400 names listed on the 1870 Agricultural Census for Dakota Territory. The 1870 census was the ninth United States census. Most of the enumeration area covers present day southeastern South Dakota. Information in the database includes: Last Name, First Name, Post Office, County, Farm Value, Livestock Value, and the owners Agricultural Subdivision.
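Because the site describes a simple flat database, a few lines of code can show how such records are typically structured and queried. The field names below follow the description above; the sample rows are invented for illustration and are not taken from the actual 1870 returns.

```python
from dataclasses import dataclass

@dataclass
class FarmRecord:
    last_name: str
    first_name: str
    post_office: str
    county: str
    farm_value: int        # dollars, as enumerated
    livestock_value: int   # dollars, as enumerated
    subdivision: str       # agricultural subdivision of the owner

# Hypothetical rows standing in for transcribed schedule entries.
records = [
    FarmRecord("Olson", "Lars", "Vermillion", "Clay", 800, 250, "Township 92"),
    FarmRecord("Dahl", "Peter", "Yankton", "Yankton", 1200, 400, "Township 93"),
]

# A typical genealogy-style query: everyone enumerated in a given county.
clay_farmers = [r for r in records if r.county == "Clay"]
for r in clay_farmers:
    print(f"{r.first_name} {r.last_name}: farm ${r.farm_value}, livestock ${r.livestock_value}")
```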
<urn:uuid:3f340539-3f0a-4a81-8653-90879db1e968>
CC-MAIN-2016-26
http://www.history.sd.gov/archives/Data/1870census/default.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00132-ip-10-164-35-72.ec2.internal.warc.gz
en
0.847912
84
2.828125
3
Boeing has completed one of the more spectacular tests of the 787 Dreamliner program. The airplane maker completed the “ultimate-load wing-up bending test” on Sunday using airframe ZY997, the test aircraft that is basically built to be tortured on the ground and never fly. During the test, the wings on the 787 were flexed upward “approximately 25 feet” which equates to 150 percent of the most extreme forces the airplane is ever expected to encounter during normal operation. The test is used to demonstrate a safety margin for the design and is part of the certification process to show the airplane can withstand extreme forces. Ultimate wing load testing is standard procedure for any new airplane design and has been done on aircraft large and small almost since the beginning of aviation. Often simple sandbags are placed on the wing of an aircraft to represent the maximum forces needed to be tested. Boeing and other large plane makers use elaborate testing structures (pictured above) that flex the wings to apply the necessary forces. In the past, Boeing has flexed the wing beyond the required 150 percent, and more than once the company has flexed the wings of a new design to the failure point. The most recent example was in 1995 with the 777. Back in 2008, Boeing posted video of an isolated 50 foot section of the 787 wing being flexed to the point of failure during earlier testing. It had long been wondered if Boeing would flex the composite wings of the 787 to failure, but the company said they only went to 150 percent, and no further. An issue with the wing where it joins with the fuselage led to a major setback for the 787 program last year. The problem was fixed and a Boeing statement regarding Sunday’s test said the initial results from the test are positive and more analysis is being done. Photo: Boeing
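As a rough illustration of what the 150 percent figure means, the snippet below applies the usual relationship between limit load (the most extreme force expected in service) and ultimate load (the certification demonstration target). The example load factor is a generic illustrative number, not Boeing's flight-test data.

```python
# Certification convention referenced in the article: ultimate load = 1.5 x limit load.
SAFETY_FACTOR = 1.5

def ultimate_load(limit_load_g: float) -> float:
    """Load factor the structure must carry to demonstrate the certification margin."""
    return SAFETY_FACTOR * limit_load_g

# Example: if the most extreme expected maneuver/gust load were 2.5 g,
# the wing would be demonstrated to the equivalent of 3.75 g in bending.
print(ultimate_load(2.5))
```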
<urn:uuid:663ae0de-2637-4bf0-b549-022df4490a84>
CC-MAIN-2016-26
http://www.wired.com/2010/03/boeing-787-passes-incredible-wing-flex-test/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00084-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950414
406
2.625
3
11. Delphinium andesicola Ewan, J. Washington Acad. Sci. 29: 476. 1939. Delphinium andesicola Ewan subsp. amplum (Ewan) Ewan Stems 60-200 +cm; base reddish or not, glabrous, glaucous. Leaves cauline, 10-30, absent from proximal 1/5 of stem at anthesis; petiole 1-15 cm. Leaf blade cordate to semicircular, 5-8 × 5-12 cm, nearly glabrous; ultimate lobes 5-16, width 3-20 mm, tips gradually tapered to mucronate apex; midcauline leaf lobes more than 3 times longer than wide. Inflorescences 20-80-flowered; pedicel 1-2(-3) cm, puberulent; bracteoles 1-3 mm from flowers, green to brown, linear-lanceolate, 3-6 mm, puberulent. Flowers: sepals purple, puberulent, lateral sepals spreading, 9-12 × 5-7 mm, spurs ascending ca. 45° curved downward apically, purple, 10-13 mm, blunt tipped; lower petal blades ± covering stamens, 4-6 mm, clefts 1.5-2.5 mm; hairs centered, densest on inner lobes near base of cleft, white. Fruits 12-15 mm, 3.5-4 times longer than wide, sparsely puberulent. Seeds unwinged; seed coat cells elongate, surfaces pustulate. 2 n = 16. Flowering summer-early fall. Meadows and coniferous woods; 2200-3200 m; Ariz. Delphinium andesicola , the westernmost representative of the southern Cordilleran complex, is found in the Chiricahua, Huachuca, Graham, and White mountains. Hybrids with Delphinium scopulorum are known.
<urn:uuid:f58bee53-8665-4aad-a867-9591b8641700>
CC-MAIN-2016-26
http://www.efloras.org/florataxon.aspx?flora_id=1&taxon_id=233500472
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00149-ip-10-164-35-72.ec2.internal.warc.gz
en
0.726834
432
2.53125
3
Through years of behavior disorder research we have found that it affects many more people than thought in the past. In fact, chances are everyone has their own form of a behavior disorder, however only a few struggle with the issues involved. This information has started a longtime battle between two different opinions on the legitimacy of behavior disorder diagnoses. Argument 1: It is a chemical reaction. Most doctors insist that you can indeed have a condition which alters your ability to control certain behaviors. The doctors in this category represent their case by putting forth information collected when performing behavior disorder research. Study after study has suggested it could be a chemical reaction in the brain that triggers these uncontrollable urges to act out. Using these findings most physicians claim to have ideas on helping manage these disorders. Although they agree that we are only in the beginning stages of developing rock-solid data, they stand firm in their belief that some need help conquering the demons they are facing. They even go as far to claim that without help these struggling victims could turn to irrational actions such as drug use, alcoholism, cutting, and most devastatingly suicide. Argument 2: It’s simply irresponsibility. On the flip side you have the opinion of another credible group that has conducted private behavior disorder research with the conclusion that these “chemical reactions” can not be submitted as solid evidence. While technologies are advanced and becoming more reliable as time goes on we still know very little about the complex functions of the human brain. In their argument they strongly suggest that society has made these “behavior disorders” up to explain away unruly conduct. Not only do they disagree with behavior disorder research finding you can behave badly due to chemical reactions, these doctors argue that by permitting one to pass-up blame you are only discouraging personal accountability for wrongful actions. This argument will more than likely last until the end of time with both sides displaying good points, but neither having the technologies to rule out the others opinion. What it comes down to is your own desire to conquer your own disabilities. We all face things in life that seem to hold us down, but what will determine your true disorder is whether you move forward or give into life’s trying tests.
<urn:uuid:f5035c20-4d99-4fea-ace5-a93f54d76f81>
CC-MAIN-2016-26
http://www.parentingteens.com/behavior-disorder-research-chemical-reaction-vs-irresponsibility/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966989
449
2.703125
3
In a country where war was fought, it lingers, even if that war is already a century behind us. For each of the more than 600,000 dead who fell here, for each of the more than 425,000 graves and names on memorials and for the hundreds of traces and relics in the front region, for each of the millions affected (physically or psychologically wounded, refugees and displaced persons) there is a story of suffering, pain and ordeal somewhere in the world. The City of Peace Ypres and the In Flanders Fields Museum conserve the link with this wartime past, because it is important for those who want to speak about peace and war today. The In Flanders Fields Museum presents the story of the First World War in the West Flanders front region. It is located in the renovated Cloth Halls of Ypres, an important symbol of wartime hardship and later recovery. The completely new permanent exhibition (opening 11 June 2012) tells the story of the invasion of Belgium and the first months of the mobilisation, the four-year trench war in the Westhoek - from the beach of Nieuwpoort to the Leie in Armentières - the end of the war and the permanent remembrance ever since. The scenography focuses on the human experience and calls particular attention to the contemporary landscape as one of the last true witnesses of the war's history. In that context, a visit can also be arranged to the belfry, from where you have a view over the city and the surrounding battlefields. Hundreds of authentic objects and images are presented in an innovative, experience-orientated layout. Lifelike characters and interactive installations confront the contemporary visitor with his peers in the war, a century ago. The museum works from many possible perspectives. The general and military-historical perspective is important, but so are the relation with the present and our approach - as individuals and as a society - to our own past and to that of all the other countries involved. People from five continents and more than fifty different countries and cultures took part in the war in Flanders, and our public is diversified and extremely international. The In Flanders Fields Museum is much more than a permanent exhibition. There is an ongoing educational programme for students from Belgium and abroad, alongside a cultural and artistic programme. In the research centre of the museum every visitor can delve deeper into that dramatic period of world history. Individually you can research the big, global background story here as well as the very personal and local history. Because the nature of war does not change over time, the museum considers presenting this war story as a universal and contemporary message of peace, and therefore as an important social mission. The museum works closely with partners who share its mission and operates within the framework of Ypres City of Peace.
The demolition of IFFM 1 and the construction of IFFM 2 (November 2011 - June 2012)
This is a promotional film for Ypres and the In Flanders Fields Museum.
<urn:uuid:682663e8-8183-43c5-93f8-2a8c28c2daaa>
CC-MAIN-2016-26
http://www.inflandersfields.be/en/discover
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00132-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909319
624
2.546875
3
Page 2 of 2 It didn’t take long before the model reacted. After only a few electrical jolts, the artificial neural circuit began to act just like a real neural circuit. Clusters of connected neurons began to fire in close synchrony: the cells were wiring themselves together. Different cell types obeyed their genetic instructions. The scientists could see the cellular looms flash and then fade as the cells wove themselves into meaningful patterns. Dendrites reached out to each other, like branches looking for light. “This all happened on its own,” Markram says. “It was entirely spontaneous.” For the Blue Brain team, it was a thrilling breakthrough. After years of hard work, they were finally able to watch their make-believe brain develop, synapse by synapse. The microchips were turning themselves into a mind. But then came the hard work. The model was just a first draft. And so the team began a painstaking editing process. By comparing the behavior of the virtual circuit with experimental studies of the rat brain, the scientists could test out the verisimilitude of their simulation. They constantly fact-checked the supercomputer, tweaking the software to make it more realistic. “People complain that Blue Brain must have so many free parameters,” Schürmann says. “They assume that we can just input whatever we want until the output looks good. But what they don’t understand is that we are very constrained by these experiments.” This is what makes the model so impressive: It manages to simulate a real neocortical column—a functional slice of mind—by simulating the particular details of our ion channels. Like a real brain, the behavior of Blue Brain naturally emerges from its molecular parts. In fact, the model is so successful that its biggest restrictions are now technological. “We have already shown that the model can scale up,” Markram says. “What is holding us back now are the computers.” The numbers speak for themselves. Markram estimates that in order to accurately simulate the trillion synapses in the human brain, you’d need to be able to process about 500 petabytes of data (peta being a million billion, or 10 to the fifteenth power). That’s about 200 times more information than is stored on all of Google’s servers. (Given current technology, a machine capable of such power would be the size of several football fields.) Energy consumption is another huge problem. The human brain requires about 25 watts of electricity to operate. Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion . But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he’ll be able to model a complete human brain on a single machine in ten years or less. For now, however, the mind is still the ideal machine. Those intimidating black boxes from IBM in the basement are barely sufficient to model a thin slice of rat brain. The nervous system of an invertebrate exceeds the capabilities of the fastest supercomputer in the world. “If you’re interested in computing,” Schürmann says, “then I don’t see how you can’t be interested in the brain. We have so much to learn from natural selection. It’s really the ultimate engineer.” An entire neocortical column lights up with electrical activity. Modeled on a two-week-old rodent brain, this 0.5 mm by 2 mm slice is the basic computational unit of the brain and contains about 10,000 neurons. 
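The passage above describes a model whose behavior emerges from thousands of biophysically detailed, multi-compartment neurons constrained by ion-channel data; nothing that compact can be reproduced here. Purely as an illustration of the general idea - that network activity "emerges" once simulated cells are wired together and driven with current - here is a toy network of leaky integrate-and-fire neurons. It is emphatically not the Blue Brain model or its equations, and every parameter below is an arbitrary placeholder.

```python
import random

random.seed(1)

N = 50                  # toy network size (the real column has ~10,000 cells)
DT = 0.5                # time step (ms)
TAU = 20.0              # membrane time constant (ms)
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # membrane potentials (mV)
W = 1.5                 # synaptic weight (mV added per presynaptic spike)
P_CONNECT = 0.1         # probability that any cell projects to any other

# Random wiring: conn[i] lists the postsynaptic targets of neuron i.
conn = [[j for j in range(N) if j != i and random.random() < P_CONNECT] for i in range(N)]
v = [V_REST + random.uniform(0, 10) for _ in range(N)]   # initial membrane potentials

for step in range(400):                                   # simulate 200 ms
    drive = [random.uniform(0.0, 1.5) for _ in range(N)]  # noisy external input (mV per step)
    spikes = []
    for i in range(N):
        # Leaky integration toward the resting potential, plus external drive.
        v[i] += DT * (-(v[i] - V_REST) / TAU) + drive[i]
        if v[i] >= V_THRESH:
            spikes.append(i)
            v[i] = V_RESET
    # Deliver each spike to its postsynaptic targets (synaptic delay ignored for brevity).
    for i in spikes:
        for j in conn[i]:
            v[j] += W
    if spikes and step % 50 == 0:
        print(f"t={step * DT:5.1f} ms  firing: {len(spikes)} cells")
```

Even this crude sketch shows clusters of connected units beginning to fire together once the coupling is strong enough, which is the qualitative phenomenon the article describes at vastly greater biological fidelity.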
This microcircuit is repeated millions of times across the rat cortex—and many times more in the brain of a human. Courtesy of BBP/EPFL; rendering by Visualbiotech Neuroscience describes the brain from the outside. It sees us through the prism of the third person, so that we are nothing but three pounds of electrical flesh. The paradox, of course, is that we don’t experience our matter. Self-consciousness, at least when felt from the inside, feels like more than the sum of its cells. “We’ve got all these tools for studying the cortex,” Markram says. “But none of these methods allows us to see what makes the cortex so interesting, which is that it generates worlds. No matter how much I know about your brain, I still won’t be able to see what you see.” Some philosophers, like Thomas Nagel, have argued that this divide between the physical facts of neuroscience and the reality of subjective experience represents an epistemological dead end. No matter how much we know about our neurons, we still won’t be able to explain how a twitch of ions in the frontal cortex becomes the Technicolor cinema of consciousness. Markram takes these criticisms seriously. Nevertheless, he believes that Blue Brain is uniquely capable of transcending the limits of “conventional neuroscience,” breaking through the mind-body problem. According to Markram, the power of Blue Brain is that it can transform a metaphysical paradox into a technological problem. “There’s no reason why you can’t get inside Blue Brain,” Markram says. “Once we can model a brain, we should be able to model what every brain makes. We should be able to experience the experiences of another mind.” When listening to Markram speculate, it’s easy to forget that the Blue Brain simulation is still just a single circuit, confined within a silent supercomputer. The machine is not yet alive. And yet Markram can be persuasive when he talks about his future plans. His ambitions are grounded in concrete steps. Once the team is able to model a complete rat brain—that should happen in the next two years—Markram will download the simulation into a robotic rat, so that the brain has a body. He’s already talking to a Japanese company about constructing the mechanical animal. “The only way to really know what the model is capable of is to give it legs,” he says. “If the robotic rat just bumps into walls, then we’ve got a problem.” Installing Blue Brain in a robot will also allow it to develop like a real rat. The simulated cells will be shaped by their own sensations, constantly revising their connections based upon the rat’s experiences. “What you ultimately want,” Markram says, “is a robot that’s a little bit unpredictable, that doesn’t just do what we tell it to do.” His goal is to build a virtual animal—a rodent robot—with a mind of its own. But the question remains: How do you know what the rat knows? How do you get inside its simulated cortex? This is where visualization becomes key. Markram wants to simulate what that brain experiences. It’s a typically audacious goal, a grand attempt to get around an ancient paradox. But if he can really find a way to see the brain from the inside, to traverse our inner space, then he will have given neuroscience an unprecedented window into the invisible. He will have taken the self and turned it into something we can see. A close-up view of the rat neocortical column, rendered in three dimensions by a computer simulation. 
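The data and power figures in this passage are back-of-envelope estimates, and they can be unpacked with a few lines of arithmetic. The sketch below restates the quoted numbers and asks what they imply - for instance, what average power draw a $3 billion annual electricity bill would correspond to at an assumed price per kilowatt-hour. The electricity price is my assumption, not a number from the article.

```python
# Quoted estimates from the article.
data_needed_pb = 500      # petabytes of data to process for a whole-brain model
google_ratio = 200        # "about 200 times more ... than all of Google's servers"
brain_power_w = 25        # the biological brain runs on roughly 25 watts
annual_bill_usd = 3e9     # Markram's ~$3 billion/year electricity estimate

# Implied Google-server storage under the quoted comparison.
print(f"Implied Google-server storage: {data_needed_pb / google_ratio:.1f} PB")

# What continuous power draw would that bill imply at an assumed price?
assumed_price_per_kwh = 0.10                       # assumption, not from the article
annual_kwh = annual_bill_usd / assumed_price_per_kwh
implied_power_w = annual_kwh * 1000 / (24 * 365)   # kWh/yr -> average watts
print(f"Implied average draw: {implied_power_w / 1e9:.1f} GW")
print(f"Efficiency gap vs. the brain: {implied_power_w / brain_power_w:,.0f}x")
```

At a ten-cent kilowatt-hour, the quoted bill implies a continuous draw on the order of a few gigawatts, which is why the article treats the 25-watt human brain as the ultimate benchmark for efficiency.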
The large cell bodies (somas) can be seen branching into thick axons and forests of thinner dendrites. Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech Schürmann leads me across the campus to a large room tucked away in the engineering school. The windows are hermetically sealed; the air is warm and heavy with dust. A lone Silicon Graphics supercomputer, about the size of a large armoire, hums loudly in the center of the room. Schürmann opens the back of the computer to reveal a tangle of wires and cables, the knotted guts of the machine. This computer doesn’t simulate the brain, rather it translates the simulation into visual form. The vast data sets generated by the IBM supercomputer are rendered as short films, hallucinatory voyages into the deep spaces of the mind. Schürmann hands me a pair of 3-D glasses, dims the lights, and starts the digital projector. The music starts first, “The Blue Danube” by Strauss. The classical waltz is soon accompanied by the vivid image of an interneuron, its spindly limbs reaching through the air. The imaginary camera pans around the brain cell, revealing the subtle complexities of its form. “This is a random neuron plucked from the model,” Schürmann says. He then hits a few keys and the screen begins to fill with thousands of colorful cells. After a few seconds, the colors start to pulse across the network, as the virtual ions pass from neuron to neuron. I’m watching the supercomputer think. Rendering cells is easy, at least for the supercomputer. It’s the transformation of those cells into experience that’s so hard. Still, Markram insists that it’s not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that’s just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first person view of reality rooted in the details of the brain. As the philosopher David Chalmers likes to say, “Experience is information from the inside; physics is information from the outside.” By shuttling between these poles of being, the Blue Brain scientists hope to show that these different perspectives aren’t so different at all. With the right supercomputer, our lucid reality can be faked. “There is nothing inherently mysterious about the mind or anything it makes,” Markram says. “Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don’t know why you wouldn’t be able to generate a conscious mind.” At moments like this, Markram takes on the deflating air of a magician exposing his own magic tricks. He seems to relish the idea of “debunking consciousness,” showing that it’s no more metaphysical than any other property of the mind. Consciousness is a binary code; the self is a loop of electricity. A ghost will emerge from the machine once the machine is built right. And yet, Markram is candid about the possibility of failure. He knows that he has no idea what will happen once the Blue Brain is scaled up. “I think it will be just as interesting, perhaps even more interesting, if we can’t create a conscious computer,” Markram says. “Then the question will be: ‘What are we missing? Why is this not enough?’” Niels Bohr once declared that the opposite of a profound truth is also a profound truth. 
This is the charmed predicament of the Blue Brain project. If the simulation is successful, if it can turn a stack of silicon microchips into a sentient being, then the epic problem of consciousness will have been solved. The soul will be stripped of its secrets; the mind will lose its mystery. However, if the project fails—if the software never generates a sense of self, or manages to solve the paradox of experience—then neuroscience may be forced to confront its stark limitations. Knowing everything about the brain will not be enough. The supercomputer will still be a mere machine. Nothing will have emerged from all of the information. We will remain what can’t be known. Originally published March 3, 2008
Judaism: Redeeming Ourselves Rabbi Reuven Taragin. The writer is Dean of Overseas Students at Yeshivat Hakotel and the Rosh... After growing up in Pharaoh's home, Moses ventures out to his enslaved Jewish brethren. The Torah tells us that Moses 'saw their suffering and saw an Egyptian beating one of his Jewish brothers (Shemot 2:11).' One wonders what the Torah intends us to learn from the preface of Moses's having 'seen their suffering.' Rashi explains that the preface teaches us that Moses 'focused his eyes and heart to feel the pain of his brothers.' Rashi's comment here connects to his explanation that Parshat Vayechi is a 'closed parsha' because 'the eyes and hearts of the Jewish people became closed through the pain of the servitude.' The need to survive persecution made the Jews self-focused and stifled their sensitivity to each other's suffering. This was the situation until the emergence of Moses. Moses owed his own survival to the risks others (Shifrah, Puah, Yocheved, Miriyam, Bat Pharaoh) took on his behalf. Although the name was bestowed upon him by Bat Pharaoh, Pharaoh's daughter, and not by his actual mother, Moses is known by the name 'Moses' because it connotes not only his having been saved, but also the role he plays in saving others. He intervenes on behalf of his own brethren and even on behalf of Midianite women (through whom he finds his wife and children). Moses's compassion for others is the only attribute described before his appointment as leader. This highlights compassion as the reason he is the one chosen to save the Jewish People, Bnei Yisrael. Chazal, our Sages, underscore this by explaining that the Torah refers to Moses's shepherding as the backdrop to Hashem's revelation to him at the burning bush because his extreme devotion to his sheep is what brought him to the bush and why Hashem chose him as leader. We have seen that Rashi begins a flow of consciousness with his comment about the loss of compassion caused by the enslavement (Parshat Vayechi). He continues this theme with his comment about Moses's compassion as the basis of his having been chosen to lead. We shall see that he completes the theme in his explanation of the return of Hashem's compassion for the enslaved Jewish people. Chapter Two concludes by describing Hashem as having 'seen the Jewish people and knowing.' What does this mean? Rashi explains that Hashem 'set his heart on them and did not hide his eyes (from them).' Rashi's use of terminology similar to what he used in reference to Moses's compassion implies that the sudden return of Hashem's compassion was connected to that shown by Moses. The implication is that Hashem cares for the Jewish people once they care for each other. The lesson here is a powerful one. Realizing how much our success, both individually and nationally, hinges on Hashem's assistance, we continuously beseech Him to show us mercy and assist us. The Shemot (Exodus) redemption teaches us that Hashem's showing us mercy and assisting us depends upon our doing the same for one another. May we truly and completely emerge from the galut (exile) slave mentality of 'each man for his own' and act like a redeemed nation capable of sincerely caring for one another and, in this way, merit the true and complete return of Hashem's compassion. Ibid. Rashi is paraphrasing Shemot Rabba 1:27. See the midrash itself for a detailed description of Moses's compassion for and sharing the load with the slaves. Bereishit 47:28.
This means that the Sefer Torah begins the first pasuk of Vayechi right after the last of Vayigash, without a space in between. In fact, the slavery is the result of Yosef's having been sold into slavery, which the brothers described as a consequence of their insensitivity to their brother (Bereshit 42:21 and the Rambam and Seforno there). See Seforno and Chizkuni, Shemot 2:10. Shemot Rabba 2:2. Shemot 3:1.
Sustainability 2012, 4(8), 1711-1732; doi:10.3390/su4081711
Abstract: It is often claimed that the cheapest energy is the one you do not need to produce. Nevertheless, this claim is not always substantiated. In this article, the authors try to shed some light on this issue by using the concept of energy return on investment (EROI) as a yardstick. This choice brings semantic issues, because in this paper the EROI is used in a different context than that of energy production. Indeed, while watts and negawatts share the same physical unit, they are not the same object, which brings some ambiguities in the interpretation of EROI. These are cleared by a refined definition of EROI and an adapted nomenclature. This review studies the research on the energy efficiency of building operation, which is one of the most investigated topics in energy efficiency. This study focuses on the impact of insulation and high-efficiency windows as a means to exemplify the concepts that are introduced. These results were normalized for climate, lifetime of the building, and construction material. In many cases, energy efficiency measures imply a very high EROI. Nevertheless, in some circumstances this is not the case, and it might be more profitable to produce the required energy than to try to save it.
Energy efficiency is one of the key tools to tackle two of the biggest challenges facing humanity: climate change and energy scarcity. Indeed, to avoid catastrophic climate change, it is generally acknowledged that the world needs to reduce CO2 emissions by 50% from the current level by 2050 . For developed countries, this translates into a reduction of 80%, a division by five with respect to 1990 emissions. While increases in renewable energy production can help to reach this goal, some authors, including those of the current paper, have proposed instead to drastically reduce energy consumption, as the development of renewables will never be fast enough to overcome the scarcity of fossil fuels in the current century. Along this idea, Kesselring and Winter proposed the concept of a 2000 W society, which aims at consuming no more than what corresponds to an average continuous power of 2000 W per capita, this value being considered a fair share of the world energy consumption maintained at a sustainable level. This concept was later further developed and expanded [3,4]. Since actual rates of energy consumption are about 6000 W in Europe and even 10,000 W in North America, this would certainly imply dramatic changes in day-to-day life for most OECD countries. Transportation is often singled out as the main target for energy efficiency. However, the building industry has an even larger energy and environmental footprint, as it is one of the human activities with the largest environmental impact. As noted by Dixit et al. , the construction industry depletes two-fifths of global raw stone, gravel, and sand; one-fourth of virgin wood; and it consumes 40 percent of total energy and 16 percent of fresh water annually [6,7,8,9,10,11,12]. These figures are more or less similar in any developed country. Indeed, for OECD countries, energy consumption by buildings varies between 25% and 50% of total energy consumption , whereas it is closer to 50% in the European Union . In these conditions, the building industry is an obvious target for energy efficiency. This is the rationale behind the European Union Directive on Energy Performance of Buildings .
This directive requires member states to implement energy efficiency legislations for buildings, including existing ones with floor areas over 1000 m2 that undergo significant renovations. The French legislation specifies that by January 2013, any new building will have to consume less than 50 kWh/m2/yr of primary energy (this value is modulate with the building type, apartment size and local climate). By 2020, all new buildings will have to be at least net zero—that is involving a consumption of 0 kWh/m2/yr—or better, that is globally producing energy . In a similar way, the Swedish government promulgated a Bill on Energy Efficiency and Smart Construction, to reduce total energy use per heated building area by 20% by 2020 and 50% by 2050, using year 1995 as the reference . In addition, these energy efficiency measures offer a significant opportunity to reduce CO2 emissions [1,18]. Such ambitious goals in energy efficiency improvements raise the key issue of the efficient allocation of resources. Actions that need a large upfront investment for a minimal reduction of the energy consumption are undesirable. In some cases, the return might be so small that one might wonder if it would be better to produce the energy than trying to save it. This is true both for economic and ecological efficiencies. This first paper of a series of two addresses this key issue from the point of view of energy savings as applied to two popular energy savings measures implemented in buildings: insulation and window optimization. 2. Energy Return on Investment: A Revised Concept One potentially useful alternative to conventional economic analysis when it comes to evaluate the sustainability of a particular solution aimed at saving energy (and consequently greenhouse gas emissions) or producing energy is the net energy, Enet, analysis. The concept relies on the estimation of two parameters depending whether energy production or energy savings are considered. 2.1. Energy Production In the first case, all energy required to implement a particular equipment or process (from cradle to grave) is accounted for: it is called the energy invested, Einvested. Then, all energy that this device or process will generate or produce, Eproduced, during its lifetime is evaluated. Accordingly, the net energy is simply: In the desirable situation, Enet, is positive, that is the device or process will generate or produce more energy than it took to implement it. On contrary, it could happen that the required amount of energy required from cradle to grave could be more important than the production. This could be due to a short lifetime and most of the time driven by the economics. The concept can be applied to non-existing solutions, project starting from scratch, for which there is no production at all at the beginning. In this context it is possible to compare several solutions to one another based on this criterion. Or, it could be applied to solutions improvements in which case the solutions will be compared to the existing one. In this latter case, Enet, has to be defined in terms of the improvement only. 2.2. Energy Saved In a second–opposite–case, energy savings are considered. For this case, Eproduced is replaced by Esaved. The savings, Esaved, are estimated for the difference between the amount of energy that the device, building or process would require provided nothing is done and the amount of energy it should consume with the implementation of the proposed device, building or process. 
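The production and savings definitions introduced so far can be summarized in a short sketch. The function and variable names are ours, and the figures are purely illustrative:

```python
def net_energy_production(e_produced, e_invested):
    """Net energy of an energy production project:
    lifetime energy delivered minus cradle-to-grave energy invested."""
    return e_produced - e_invested

def net_energy_savings(e_before, e_after, e_invested):
    """Net energy of an efficiency measure: the energy saved is the
    difference between the lifetime consumption without and with the
    measure, compared with the energy invested to implement it."""
    e_saved = e_before - e_after
    return e_saved - e_invested

# Illustrative (made-up) figures, in GJ over the lifetime of the measure:
print(net_energy_production(e_produced=500, e_invested=50))          # 450
print(net_energy_savings(e_before=900, e_after=500, e_invested=25))  # 375
```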
On the other hand, Einvested does not account for the energy used by the device after the measures of economy are implemented. In this case, Einvested only accounts for the energy required to implement the solution or install the equipment (from resources extraction to commissioning not from cradle to grave). The energy used by the solution is already accounted for in the definition of Esaved. In this case, a positive value of Enet accounts for savings (which are negawatts or a negative production) that are greater than the energy invested, which of course is desirable. To obtain a desirable effect associated with a positive value of Enet, the savings must be positive. On the other hand, the worst case solution is the case when the savings are negative, that is when, after an investment, the amount of energy used Econsumed,after is larger than that used previously in the original situation Econsumed,before. And this happens more often than we might think when the analysis failed to adequately predict the energy embodied into a solution and focused solely on the eventual savings over a somewhat short lifetime. As for the case of production, Enet can be defined with respect to novel or additional measures of energy efficiency. In both savings and production cases a positive Enet means a desirable effect and conversely. 2.3. The Energy Ratio: Energy Return on Investment This analysis deals with the calculation of the ratio of the energy savings by a particular solution (over a given period of time) or the energy produced by some equipment or process to the energy required to implement the solution or install the equipment (from resources extraction to commissioning). This net energy analysis is sometimes called the assessment of energy surplus, the energy balance method, or the energy return on investment (EROI) [19,20,21,22,23]. In the case of energy production, the EROI is calculated from Equation (3): The key challenge to obtain a meaningful value for this ratio is to correctly define the boundaries of the problem which is investigated and to include—or try to include—all the inputs and outputs in the process [22,23]. For instance, the production of gasoline should account for all the steps required to produce it and deliver it to the stations as in a life cycle analysis. Of course, the higher this ratio, the lower the environmental impact per unit of energy is expected since less input is used for the same output and consequently less impact is felt by the environment .While several authors may argue that the energy consumption and its environment burden adequacy is not straightforward, it has still some merit as an indirect environmental impact indicator [25,26] it is likely to be linked with a better economic investment since the energy content is closely—but not necessarily linearily—related to the price of a product, a process or a service. As a result, energy sources involving a better EROI should be selected in preference to others [27,28]. This is especially true for renewable energy sources for which the environmental burden comes mostly from the extraction and transportation of the resources and manufacturing of the energy systems prior to their use. This concept is related to that of net energy, Enet, that is when Enet = 0, EROI = 1 and a negative Enet means EROI < 1. Calculating the average EROI for an energy basket is complex. Nevertheless, there are some indications that the average EROI of the U.S. 
energy basket is close to 10 and that a lower EROI should induce negative economic impacts . Hence, for the purpose of this discussion acceptable energy solutions should respect EROI > 10 or Enet > 9 × Einvested hence, energy solutions with a lower EROI should be discounted. When a modification to a power plant or a device is carried out, the EROI can be evaluated. That is to determine whether or not the modifications lead to an increase level of energy production, ΔEproduced, is positive. This is always the case when a production project starts from scratch. There could nevertheless be projects for which the energy production after a modification is lower than what was previously obtained. Defined this way, the EROI is then: with the obvious requirement that EROI > 1 to obtain a valid or sustainable modification. 2.4. Differences and Similarities in EROI A practical problem arises when using the EROI metric in an energy savings application such as a building. There is a key difference between EROI calculated for energy sources and EROI calculated from energy saved. Hence, negajoules ( J) and negawatt ( W) are compared to joules (J) and watt (W) (to our knowledge no symbols exist for negawatt and negajoules. Hence, we propose to use these one). While at first glance this change of definition might look only semantic, it involves much deeper consequences. The reason is that Esaved is an energy difference by itself whereas Eproduced is not. Moreover, savings are positive that is the desirable situation is that after implementation of the measures one looks for less consumption while the desirable situation for production after implementation calls for more production. EROI was originally solely conceived for energy production or energy production technologies and equipment. Hence, in this scenario energy produced and energy invested are both expected to be positive. In consequence, EROI will be always positive. Even when, the net energy production, Enet, is negative, the EROI is still positive but smaller than 1. As mentioned in the above paragraph EROI could either be positive or negative. For energy efficiency (or savings), the concept holds with minor differences. It was said that the energy saved is a positive quantity. In rare circumstances, a poorly designed intervention might increase the lifetime energy consumption, which corresponds to a negative EROI since energy saved is then negative. This situation might also be caused by a strong rebound effect, the Jevons paradox , where the users adapt their energy consumption behavior in a way that they increases the consumption of a good or a service made more affordable due to the improved efficiency to a point that the new energy consumption could exceed the original one. In this case, there cannot be a definition of EROI of gain as the energy saved is by definition a difference. However, Equation 4 corresponds to Equation 5 when the savings are negative. There are also situation for which the definition does not hold. For instance, energy efficiency measures may cost nothing. Energy efficiency measures like changing thermostat settings, closing an interrupter or cooling by natural convection have zero or near zero costs which means that EROI→∞. But, there is an even more favorable situation where an improvement of one aspect of a building has for consequence the optimization of the performance of others systems leading to an overall net negative energy cost. 
For instance, the improvements to the insulation of the building envelope produce a given EROI for the insulation addition alone, but it may also allow for the reduction of the size of the heating system and hence produce savings on its embodied energy. This could lead to an overall lower total energy cost for the whole building, compared with the version with less insulation. Nevertheless, since in practice energy must be always used to implement a project, this situation is restricted to two cases: when comparing two hypothetical situations and when embodied energy of the replaced components can be recovered to the point that the net energy invested is negative. In the following discussion, only the first situation occurs. From the strict mathematical point of view, this would produce a negative EROI (positive energy saved over a total negative energy cost). Hence, there are two types of negative EROI, one which is negative in term of energy savings and undesirable, and one which is positive in term of energy saved and highly desirable. Actually, this situation is better than EROI = ∞ since the embodied energy is lower than that involved in the original situation! To distinguish these two cases, the symbol † is to be used instead of—for the case where the embodied energy is lower. The reader must be warned however that the value of this type EROI must be interpreted in a different way than the usual one. Indeed, a small EROI means that little operational energy is saved compared to the embodied energy, while a large EROI means that little embodied energy is saved compare to the operational. In consequence, EROI value does not provide information about the relative desirability of the technical solutions. 2.5. A Schematic Representation of the EROI Complete Concept To complete the picture, it should be noted that there are also the situations where the investment cost is negative and where the energy return is also negative. This situation is symmetric with the classical EROI and is treated the same way. These situations are described in the following diagram (Figure 1). In Figure 1, there are four areas: (1) in the upper right corner, the EROI is positive that is the return and the investment are positive (which is the classical case for energy production leading to positive return with a positive investment); (2) in the lower left corner, the return and the (net) investment are negative leading to a positive EROI (symmetry of case 1); (3) in the lower right corner, the EROI is negative as the return is negative (you spend more energy) after a net positive investment; and (4) in the upper left corner is the case of a positive return with a lower embodied energy after the implementation of the measures. 2.6. Additional Considerations At last, there is another key difference between calculating EROI for energy efficiency application. The energy produced or saved is always calculated with reference to a given original condition. In both cases (production or savings), the EROI is always better when you start from scratch. For instance, adding extra insulation to a wall which is already well insulated will have a much lower EROI than adding the same insulation to a poorly insulated wall. Conversely, improving a combustion system by fine tuning the air-fuel ratio of a combustor with an added sophisticated control system will involve a lower EROI than changing and old coal-fired furnace with a modern gas combustor. 
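The four areas of Figure 1 can be restated compactly in code. This is our own summary of the sign conventions above, including the dagger marker proposed for savings obtained together with a net reduction in embodied energy; the function name and labels are not from the original text.

```python
def classify_eroi(e_return, e_invested):
    """Classify a measure according to the four areas of Figure 1.

    e_return   : lifetime energy produced or saved (negative if consumption rises)
    e_invested : net embodied energy of the measure (negative if the measure
                 removes more embodied energy than it adds)
    """
    if e_invested == 0:
        return "EROI -> infinity (zero-cost measure)"
    ratio = abs(e_return / e_invested)
    if e_return >= 0 and e_invested > 0:
        return f"EROI = {ratio:.1f} (classical positive return on a positive investment)"
    if e_return < 0 and e_invested < 0:
        return f"EROI = {ratio:.1f} (negative return on a negative investment)"
    if e_return < 0 and e_invested > 0:
        return f"EROI = -{ratio:.1f} (undesirable: more consumption after the investment)"
    return f"EROI = \u2020{ratio:.1f} (savings with a net reduction in embodied energy)"

# Illustrative values only (GJ over the lifetime):
print(classify_eroi(350, 25))    # classical case
print(classify_eroi(200, -10))   # dagger case
print(classify_eroi(-40, 15))    # rebound or poorly designed measure
```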
It is also important to note that EROI may stay negative for a very long time and nevertheless reaches values above 10 since the lifetime of buildings is quite long (>50 yr). Therefore, when analyzing the energy efficiency, the appropriate time scale must be used. This is why the energy payback time (EPT), which will be explained in more details in the next section, is also an important parameter to consider. Or, more generally, the context is always important and the analyst must be careful when interpreting an EROI value. In practice, few studies have been done on an energy basis most of them have been carried out on monetary return. Since, monetary value of energy unit is sensible to the nature of the energy input, EROI calculations based on monetary inputs shall be used with care. In average, the energy content of a dollar of product and services is higher than its equivalent in energy, the EROI calculated in dollar, without correcting for this factor, is always smaller than the EROI calculated from energy units. For the data collected for these studies, this factor is typically between 6 and 10. This is a crucial point of the discussion since most studies discuss the economical aspect of energy with respect to dollars not energy units. Almost all studies were not designed to calculate EROI, all needed information is not directly available: while the energy consumption is most of the time given in energy units, the energy invested is not. However, it is sometimes possible to gather the information on the embodied energy content from alternate sources such as the data contained in . Nevertheless, this database is oriented to building analyses done in the UK context, which may create severe distortions for other countries. In few cases, numerical values of the initial investment were not explicitly given in the text and we had to rely on measurements made on published graphs to get the appropriate information. 2.7. The Energy Payback Time (EPT) In several articles, the energy payback time (EPT) is given. The energy payback time is the period needed to recover the energy invested through energy saving or energy produced. By definition, it is the time after which the EROI reaches a value of one and the net energy is equal to zero. Hence, EROI over the life time is: This brings the issue of the lifetime of components [32,33,34] and of the building itself, which are in general poorly defined. To handle this problem, it is often recommended to refer to the norm ISO 15686 Buildings and constructed assets service-life planning or using a 50 years timeframe as a reference for major renovations, since it is acknowledged and used in many studies . In the upcoming analysis, a 50 years period for the building life time and a 35 years period for the components lifetime are used. 2.8. Other Factors There are other issues peculiar to building application. One of the key problems in building life cycle analysis arises from the long life of the buildings (30–100 yr). Over such a long period of time, the energy basket and even the climate are expected to change. This raises some concerns about the applicability of the standard life cycle analysis method for buildings [37,38,39,40,41]. Another peculiar aspect of the life cycle analysis of building is that it is possible to exhaust resources locally even if the global resource base is immense globally. This problem exists for building material since they are bulky and therefore often expensive to transport over long distances. 
Hence, while the depletion of bulk resources is negligible at the global level [42,43] and hard to put in evidence at the scale of a country like France, depletion becomes clear in a relatively small region like Île-de-France, where the depletion time scale is of the same order of magnitude as quarry or building lifetimes [44,45]. This analysis does not cover this aspect of the problem. Notwithstanding these weaknesses, the reader should be aware that the same formalism also applies to greenhouse gases or any other pollutants that are produced both in construction and operation of the building.
3. Discussion Related to Specific Applications: Insulation and Windows
Since a large fraction of the energy consumption in a building is used for space heating or cooling, optimization of the insulation is a critical issue. This is why optimization of insulation has been largely covered in the scientific literature and why the authors chose to apply the above-mentioned concepts to this application. The oldest paper known to us is the work of Muncey in 1955 , who worked on the optimization of insulation for Australian houses. This study has been followed by many others [47,48,49,50,51]. For all of them, the optimization was based on economic considerations. The first and only article performing the optimization in terms of energy uncovered by the current review was written by Anani and Jibril in 1988 . Insulation constitutes a classic case of diminishing returns, since the impact of each new layer is inversely proportional to the insulation already provided by the existing layers. In consequence, it makes no sense to optimize the EROI, since the very first layer of insulation has a close to infinite EROI. This is why the objective is to minimize the total lifetime energy consumption and then calculate the EROI for this configuration. The lifetime energy consumption for heating (or cooling) is

E = \frac{86400 \cdot DD \cdot U \cdot A \cdot N}{\eta}    (7)

where U is the thermal conductance of the wall per unit area [W·m−2·K−1], DD is the number of degree-days [K·d], A is the wall area [m2], η is the efficiency of the heating or cooling system, and N the lifetime of the building [yr]. The equation is valid both for heating and cooling, but these contributions must be calculated separately. It is also valid only in a constant climate; an evolving climate can skew the results compared to this static model [55,56,57,58,59,60]. The thermal conductance takes the form:

U = \frac{1}{R_w + R_i}    (8)

where Rw stands for the thermal resistance of the original wall (or roof) and Ri is the thermal resistance of the added insulation. Since thermal resistance increases linearly with the thickness of the insulation (t), the previous equation can be rewritten as:

U = \frac{1}{R_w + t/k}    (9)

where k is the "effective" thermal conductivity of the insulation material. Energy consumption for temperature control is then equal to:

E = \frac{86400 \cdot DD \cdot A \cdot N}{\eta \left( R_w + t/k \right)}    (10)

The energy cost of the insulation layer is defined simply as:

E_i = \varepsilon \cdot A \cdot t    (11)

where ε is the energetic cost per unit of volume of the insulation. The total energy consumed by the building over its lifetime is then equal to:

E_t = E_h + E_c + E_i    (12)

where Eh and Ec are the energy use for heating and cooling, respectively. To minimize the total energetic cost, the derivative of Et with respect to t is set equal to zero. The optimal insulation thickness is then equal to:

t_{opt} = \sqrt{\frac{86400 \cdot k \cdot DD \cdot N}{\eta \cdot \varepsilon}} - k \cdot R_w    (13)

From this expression, the energy saved for heating and cooling is simply

E_{saved} = \frac{86400 \cdot DD \cdot A \cdot N}{\eta} \left( \frac{1}{R_w} - \frac{1}{R_w + t_{opt}/k} \right)    (14)

Substituting topt yields

E_{saved} = \frac{86400 \cdot DD \cdot A \cdot N}{\eta \cdot R_w} - A \sqrt{\frac{86400 \cdot DD \cdot N \cdot k \cdot \varepsilon}{\eta}}    (15)

As the invested energy is equal to:

E_{invested} = \varepsilon \cdot A \cdot t_{opt}    (16)

combining these two elements leads to the following expression for the EROI:

EROI = \frac{1}{R_w} \sqrt{\frac{86400 \cdot DD \cdot N}{\eta \cdot k \cdot \varepsilon}}    (17)

In the previous equation, only insulation properties can be controlled by design.
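As a numerical check on the expressions above, the sketch below evaluates the optimum thickness, the resulting EROI, and the corresponding energy payback time. The function is ours and the inputs are illustrative values chosen for the example (roughly representative of an expanded polystyrene retrofit on a poorly insulated wall); they are not taken from the studies reviewed below.

```python
import math

SECONDS_PER_DAY = 86400.0

def optimum_insulation(k, eps, dd, n_years, eta, r_wall):
    """Insulation thickness minimizing lifetime (operating + embodied) energy.

    k       : thermal conductivity of the insulation [W/(m.K)]
    eps     : embodied energy per unit volume of insulation [J/m3]
    dd      : annual degree-days [K.d]
    n_years : building lifetime [yr]
    eta     : efficiency of the heating (or cooling) system [-]
    r_wall  : thermal resistance of the uninsulated wall [m2.K/W]
    """
    s = SECONDS_PER_DAY * dd * n_years / eta               # lifetime climatic load, per m2 of wall
    t_opt = max(math.sqrt(s * k / eps) - k * r_wall, 0.0)  # Equation (13); zero if already past optimum
    e_saved = s * (1.0 / r_wall - 1.0 / (r_wall + t_opt / k))   # Equation (14), per m2
    e_invested = eps * t_opt                                     # Equation (16), per m2
    eroi = e_saved / e_invested if e_invested > 0 else float("inf")
    ept_years = n_years / eroi                                   # energy payback time (Section 2.7)
    return t_opt, eroi, ept_years

# Illustrative inputs: k = 0.035 W/(m.K), eps = 3 GJ/m3, 2800 K.d/yr,
# 50-year lifetime, eta = 0.8, R_wall = 0.5 m2.K/W.
t, eroi, ept = optimum_insulation(0.035, 3.0e9, 2800.0, 50, 0.8, 0.5)
print(f"t_opt = {100 * t:.1f} cm, EROI = {eroi:.0f}, payback = {ept:.1f} yr")
```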
Hence, we define the insulation quality factor such that:

Q_i = \frac{1}{\sqrt{k \cdot \varepsilon}}    (18)

so that the EROI can be written as EROI = (Q_i / R_w) \sqrt{86400 \cdot DD \cdot N / \eta}. From this simple first-order analysis, we can see that the EROI is inversely proportional to the existing insulation of the wall (Rw), proportional to the square root of the lifetime cumulative degree-days scaled by the efficiency, and inversely proportional to the square root of the product of the thermal conductivity and the volumetric embodied energy of the insulation. This simple relationship will be used to normalize the various reported return on investment analyses to a common DD, N, Rw and η. In addition, one shall note that the square-root relationship between DD, EROI and optimum insulation thickness implies a smaller thickness and a larger energy consumption than that obtained by a simple linear relationship. To the best knowledge of the authors, this behavior was not considered when translating the passivhaus standard to other climate zones. Indeed, the standard in terms of kWh/m2 is kept constant, while this standard should be relaxed if the minimum lifetime energy consumption is the overall goal, as in the original passivhaus philosophy. This approach has the unfortunate consequence of increasing the overall energy consumption over the lifetime of the building compared to the optimal configuration by overinsulating it in the northern countries. A similar concern has been raised by Szalay in the context of the European Building directive. In recent times, optimization studies on insulation have mostly been done in countries in economic transition. In particular, there are numerous studies done in Turkey ([63,64,65,66], among many). This is fortunate since this dataset is essentially internally consistent, which helps to point out the underlying factors affecting the EROI. The data from these studies [63,64,65] have been normalized to a 50-year lifetime (Figure 2). In addition, to reduce the dispersion of the data points, the analysis has been limited to the cases where polystyrene was used as an insulator. Therefore, in Figure 2, five curves from can be found, along with an extra vertical one that corresponds to an analysis with different fuels for one specific climate ; three other points come from . The residual dispersion comes from the economic assumptions at the basis of the optimization and from the original uninsulated wall thermal resistance. From each set of points reported, the square-root relation between heating degree-days and the EROI can easily be observed. Starting with the overall results reported in Figure 2, the EROI for optimum total energy consumption has also been calculated based on the physical data provided in references [63,64,65] (Figure 3). Since economic assumptions are now absent, the dispersion is much lower. It is dominated by the variation of the wall thermal conductivity in the absence of insulation and by the efficiency of the heating system used. Overall, the optimum EROI tends to be lower than the published values from the literature since the economic optimization is done for a shorter lifetime (10 years) than the one we used for the energy optimization (50 years). It should be pointed out that air in a cavity is itself a good insulator [66,67,68,69]. The EROI of an air insulating layer is infinite (as Qi = ∞ since it costs nothing (ε = 0)). Moreover, adding air may reduce the amount of embodied energy in a wall insulation solution, thus lowering the denominator of Equations 4 and 17. However, in practice, the situation is more complex. For example, in the previous papers [66,67,68,69], an air cavity is left within a wall.
In these cases, there is no additional cost associated to the air confinement. But, since the air cavity allows the utilization of a thinner insulation, its cost is negative, while improving the overall insulation and the net energy saved compared to a wall without the air gap. Hence, from the data of Mahlia and Iqbal , EROI of the air gap is between †0.7 and †8.7, since the air gap allows a reduction of the embodied energy in the insulation. We note that a similar work has been carried independently by Harvey , who used the energy payback time and the marginal energy payback time to optimise the insulation layer of a building. He also used a quality factor for the insulation material, that is equivalent to 1/Qi2 (Qi is defined in the Equation 18). However, he did not notice the existence of an optimum insulation thickness excepted for greenhouse gas emission (Figure 6 of reference ). It is worth noting that while studying the global energy optimization of a building, Bribián et al. found that added insulation increased the cooling loads due to the increase difficulty in evacuating heat. This anti-insulation effect has been discussed extensively by Masoso and Grobler . It is not captured in our simplified model, but is significant only in situation were the internal heat loads are comparable to external heat loads. When optimizing the total energy consumption of a desert dwelling by optimizing its wall, Huberman obtained an EROI of †0.97; the actual base case (reinforced concrete with 6 cm of expanded polystyrene and stone facing) having a larger embodied energy than the most efficient configuration (stabilized soil block with 6 cm expanded polystyrene and stone facing). A comparative analysis was carried by Pulselli et al. between 3 wall designs (reference, plus insulated and ventilated) in three cities (Berlin, Barcelona, Palermo). From the payback time given by the author the EROI varies from 8.2 to 13 for the plus-insulated design and from 9.7 to 15.8 for the ventilated wall. Utama and Gheewala studied the impact of using a double wall instead of a single one in a typical residential high rise building in Jakarta, Indonesia. They found that while the double wall had a larger embodied energy (79.5 GJ per apartment vs. 76.3 GJ), life cycle energy consumptions over 40 years were 283 GJ versus 480 GJ. Over this time period, this would translate in a EROI of 61.6. However, if we extend the lifetime at 50 years, maintenance for the double wall should be added (9 GJ). This was not include for the previous calculation due to its lower maintained compared to the single wall. Accordingly, this would bring the EROI over 50 years at 20.5. This analysis is a good example of the importance of the definition of the boundaries of the analysis. Windows are the most critical components of the building envelope for energy efficiency. They are literally holes in the walls allowing heat or cold to enter or exit. In consequence, any improvement in their global insulating properties has a very large impact on energy consumption. This is why the energy return on investment is usually very high. On the other hand, windows provide natural daylight that largely reduces the energy consumption for lighting. Nevertheless, optimizing their area is a rather complicated tradeoff for the optimization of the overall building efficiency. For the sake of simplicity, in this article, we only concentrate our analysis on the insulating effect of windows. 
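The window comparisons reviewed below all rest on the same simple bookkeeping: the annual saving is the reduction in whole-window heat transmission multiplied by the heating degree-days, and it is weighed against the extra embodied energy of the better window. A sketch follows (function and variable names are ours); with the Tokyo figures quoted in the next paragraph it reproduces the reported payback of roughly 108 days and an EROI of about 118 over 35 years.

```python
SECONDS_PER_DAY = 86400.0

def window_upgrade(delta_ua, hdd, delta_embodied_mj, lifetime_yr=35):
    """Energy payback time and lifetime EROI of a window upgrade.

    delta_ua          : reduction in whole-window heat transmission [W/K]
    hdd               : annual heating degree-days [K.d]
    delta_embodied_mj : extra embodied energy of the better window [MJ]
    lifetime_yr       : service life assumed for the window [yr]

    A negative embodied-energy difference would fall in the 'dagger' case
    of Section 2 (savings obtained while lowering embodied energy).
    """
    saved_mj_per_yr = delta_ua * hdd * SECONDS_PER_DAY / 1.0e6
    ept_yr = delta_embodied_mj / saved_mj_per_yr
    eroi = lifetime_yr / ept_yr
    return ept_yr, eroi

# Single to double glazing with an aluminium frame in Tokyo (1800 HDD):
# transmission drops from 8.0 to 5.2 W/K for an extra 2319 - 2190 = 129 MJ.
ept, eroi = window_upgrade(8.0 - 5.2, 1800.0, 2319.0 - 2190.0)
print(f"payback = {365 * ept:.0f} days, EROI over 35 yr = {eroi:.0f}")
```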
The oldest article, found in the current review on the energy content of windows has been written by Saito and Shukuya in 1996 . These authors studied three types of windows: single and double glazed with aluminum frame and double glazed with a wood frame. Calculations were done for a glazing of 1.02 m2. The mass of the aluminum frame was estimated to be 4.1 kg, while a single glazing panel 3 mm thick was estimated at 7.6 kg. Energy density of glass and aluminum was estimated to be 16.9 MJ/kg and 503 MJ/kg respectively. In absence of accurate data, they assumed that the wood frame embodied energy was one tenth of the aluminum one. Hence, for the three windows type the embodied energy was 2190 MJ, 2319 MJ and 463 MJ, respectively. Then, they calculated the heat transmission trough the frame and glazing to be equal to 8.0 W/K, 5.2 W/K, 3.7 W/K, respectively. In consequence, the energy savings for the Tokyo climate (1800 HDD) using the single window pane with aluminum frame as a reference is 436 MJ/yr for the double glazing, with an aluminum frame and 669 MJ/yr for the wood frame with double glazing. This translates in an energy payback of 108 days and −2.6 yr respectively using the single glaze window with an aluminum frame as reference. Based on a 35 years lifetime, the respective EROI are 118 and †13.6 since the wood frame has a negative energetic cost compare to an aluminum one. While not described by the authors, single glazed wood framed window would have an embodied energy of 335 MJ and a conductivity of 6 W/K. Energy savings compared to this reference are, for double glazed window with aluminum and wood frames, 124 MJ/yr and 358 MJ/yr. In consequence, respective EROI are 2.2 and 98. Hence, this shows that it is the aluminum frame that kills the performance, illustrating the importance of analyzing the window as a system and not only focusing on the glazing. An extensive study of life cycle analyses of windows has been carried by Weir in 1998 in the British context [77,78]. She studied many aspect of the windows design. Especially, she studied the utilization of noble gas to fill the window gap. For a 1.2 m × 1.2 m window, the additional energy cost of filling the gap with argon, krypton, and xenon was estimated at 11.83 kJ, 502.2 MJ and 4.5 GJ respectively. These calculations were based on the optimum thickness of the window gap (distance between panes) that is 20 mm for air (used as a reference), 16 mm for argon, 12 mm for krypton, and 8 mm for xenon, respectively. The window being based on a timber core with an aluminum cladding, the embodied energy of the windows excluding the gas was estimated at 1030.5 MJ. Addition of argon reduced the U-factor of the window from 1.63 W/m2 to 1.3 W/m2. Assuming 2810 heating degree-day (UK average ), energy saving over 35 years for the argon would be 2.8 GJ, which provides an EROI of 237,000! This extreme value raises the question about the estimated value of the embodied energy of argon. Values of the U-factors are not given for krypton- and xenon-based windows. Nevertheless, from the figures given in the article, the energy savings can be estimated to be respectively twice and triple that of the argon-based window. Accordingly, respective EROI are 11.2 and 1.9. Later, the same authors produced a seminal study on energy efficiency of windows. They examined the embodied energy and their impact on energy consumption of five configurations of windows to be used for the replacement of existing ones in four building sited south of Edinburg, UK. 
For this comparison, it is possible to calculate the impact of adding a layer of low-e coating, a glazing or including a buffer gas (argon or krypton). Accordingly, respective energy payback time and EROI over 35 years can be calculated: • Addition of low-e coating on a double glazed window: EPT = 17–22 days, EROI = 592–758 • Addition of argon to a low-e coated and doubled glazed window: EPT < 1 day, EROI = 125,000–134,000 • Addition of krypton to a low-e coated and doubled glazed window: EPT = 4.25–11 yr, EROI = 3.2–8.2 • Addition of a third glazing and an additional low-e coating to a low-e coated and doubled glazed window with argon filling: EPT = 1.4–1.9 yr, EROI = 18–25 • Addition of a third glazing and an additional low-e coating and krypton to a low-e coated and doubled glazed window with argon filling: EPT = 9.6–12.8 yr, EROI = 2.7–3.6 From these numbers, it is clear that it makes little sense to use krypton as an insulating gas, while argon and low-e coating are very effective energy investments. Addition of a third glazing and additional low-e coating is also, though with a lesser impact, a good solution. Indeed, the authors concluded that double and triple argon filled windows are the best options in their climate [77,78,79,80]. Nevertheless, these values are calculated as an additional feature to a new window. Replacement of an existing window by a new one is much more costly in energy. Hence, replacement of an existing double glazed air filled window with the same window with low-e coating and argon insulation has a payback time between 4.2 and 4.9 years (EROI = 7.2–8.3). It should be noted that the frame of these replacement windows was made of aluminum cladded timber, which is among the less energy intensive type . Others types would have an even lower payback. Loss of embodied energy of the original windows and upfront cost of the new ones raises, both in economic and ecologic sense, the question of the relevance of replacing the whole window instead of simply restoring it . Alternatively, replacement should be done when the old windows reach their end of life to avoid wasting its embodied energy. An interesting aspect of the Menzies and Wherrett paper is that costs are calculated both in energy units and in monetary unit. Calculated monetary payback times are much larger than energy payback times. For low-e coating the ratio is about 90 and a staggering 30,000 for the addition of argon, while it is between 6 and 12 for the addition of an additional glass pane. These last ratios are expected since they are close to the average societal EROI . However, large ratios for the argon and low-e coating can be caused by an erroneous life cycle analysis or simply by the fact that the vendors make a very large profit margin on these features. While this is difficult to prove for the low-e coating, it is much easier to test in the case of argon. From a local producer of argon (Air Liquide Canada), the authors of this study received estimates of the argon embodied energy broadly compatible with that of . In the case of these figures too, the market price of argon is much larger than what would be expected from the embodied energy content. The source of this discrepancy is not known. It might be caused by a large profit of margin or the cost of the production and transport infrastructure, which might be significant compared to the production cost. Further investigation would be needed to clarify the situation. Recio et al. studied three different windows made of PVC, aluminum and wood. 
Windows with one and two glazing were studied for the wood frame. For the aluminum based-window, design with and without thermal break were studied. Energy consumption was studied for Prat Lobregat, Barcelona, Spain over a period of 50 years. In those mild climatic conditions, EROI of additional glazing is much lower (EROI = 11.9) than seen previously [76,77,78,79,80]. For the aluminium based-window, the addition of a thermal break reduced the energy consumption for operation by 618.5 kWh. Energy cost of the thermal break is not known, but since the energy involved in the fabrication of the window is 4.8 kWh, it is reasonable to assume that the cost is lower than 0.1 kWh. In these conditions, the EROI exceeds 6000. Dahlstrøm studied the energy budget of advance windows in the Norwegian context. They noted that the energy payback time of improving the insulation of a window from U = 1.2 to U = 0.8 by an additional glazing and low-e coating to a double window, with argon filling and one low-e coating was roughly a year. Over a 35 years lifetime, this would translate in an EROI ≈ 35, which is broadly consistent with previous values [77,78,79,80]. Also, he found that usage of kypton and xenon increased the environmental impacts by 5% and 20% respectively. 3.3. Whole Building A brief review is carried out here to demonstrate that the proposed concepts can also be applied perform to whole buildings. This review is nevertheless concise due to space limitation. Ramesh et al. wrote a review about whole building studies. The EROI calculations are only possible for a fraction of them due to the format in which the results are presented. Nevertheless, this review is generally more useful than the original papers quoted since it provides numerical data instead of only figures. Derived, EROI value are generally good EROI > 10. However, since most of the references are comparative case studies, the original state are not always clearly defined, which makes the EROI value ambiguous. For a U.S. residential home build in Michigan, Keolian et al. obtained an EROI of 60 from a specific so-called “Energy-Efficient Home” over a period of 50 years. This high EROI can be credited to the numerous strategies for lowering life-cycle energy consumption used. These strategies mainly focused on methods to reduce utility-supplied energy, but the reduction of the embodied energy and increased product durability were also addressed. Uzsilaityte and Martinaitis studied the impact of various rebuilding strategies on a school building in Vilnius, Estonia. The derived EROI values were between 11.9 to 55.5 as a function of the measures that were implemented. Yohanis and Norton discussed the total energy optimization for a building. They demonstrated that the glazing ratio can be optimized in a similar way as the insulation, since windows reduce the energy consumption on lighting but increase it for cooling and heating. Unfortunately, no exploitable numerical data are given. Therefore, it is not possible to derive precise EROI value. However, from figure inspection, we estimated it to be around †20 for the most favorable cases, because windows have a lower embodied energy density than the walls. Verbeeck and Hens developed a methodology to optimize low energy buildings simultaneously for energy, environmental impact and costs without neglecting the boundary conditions for thermal comfort and indoor air quality. 
Their study focuses on types of housing in the Belgian context (terraced house, semi-detached house, detached house and non-compact house). Numerous simulations were performed but only broad numerical ranges of values are given in the paper, therefore giving hard times to use the data meaningfully. Nevertheless, predicted EROI are all above 10 many of them exceeding 25. Gustavsson and Joelsson studied a wood building with a relatively high operational energy demand. One of the most effective measures to reduce the energy consumption was to insulate the attic and installing energy-efficient windows, with an EROI of 10. Ardente et al. presented the results of an energy and environmental assessment of a set of retrofit actions implemented in the framework of the European Union project “BRITA in PuBs”. Six public buildings energy efficiency actions were investigated. Lifetime evaluated EROI for the proposed measures varied between 6 and 52 (lightning, insulation, ventilation, etc.). Despite these former results, high EROI are not automatic for every energy efficient measure. For example, Fay et al. obtained an EROI of 3.1 for added insulation to the “Green Home”, a two levels detached brick veneer house built in Melbourne, Australia. Acknowledging this poor gain, the authors suggested that alternative strategies would be more appropriate (high performance windows, reduced infiltration, wider thermostat settings and correctly sized windows oriented appropriately). In a similar way, data from Karlsson and Moshfegh demonstrate that the EROI of the supplement energy investment required to obtain a low energy house with respect to a standard house in Sweden is equal to 6.3 over 50 years. Hernandez et al. studied various energy efficiency strategies for a recent building build in Ireland. They evaluated the EROI for various technical options. Additional insulation EROI was highly dependent on the insulation material (polystyrene EROI = 16.4, cellulose EROI = 115). Triple glazed windows and photovoltaic panel had a low EROI (3.3 and 4, respectively). New boiler and solar water heating provided intermediate EROI (18.8 and 15). This paper shed some light on the issue of energy sobriety by using appropriate definitions of the energy return on investment (EROI). The paper first discusses the intrinsic differences between a watt and a negawatt and how savings and production of energy lead to different interpretation of the EROI. The papers stresses that while production and savings are different from a point of view of positive and negative energy, EROI for savings is always with respect to an existing situation while for production it may concern a situation for which there is nothing to compare with. The paper also introduces the concept of net negative energy investment in the context of an implementation for which the reduction of intrinsic energy in the peripheral systems is higher that the investment required by the actual solution. The paper than defines 4 types of EROI according to the signs of the numerator and denominator. Then the paper addresses these key concepts from the point of view energy savings as applied to three popular energy savings measures implemented in buildings: insulation, window optimization, and the integration of several measures into a whole building. Estimated EROI in energy savings strategies are high compared to most energy production strategies . This illustrates the strongly positive impact of energy conservation (savings) on the environment. 
In consequence, the motto “The cheapest energy is the energy not used” is true in most case we have observed. Nevertheless, in few cases, such as adding an extra foot of insulation on an already well insulated building, this affirmation might be questioned. Nevertheless, the diminishing return of the adjunction of more insulation in building walls raises the question of the existence of a threshold above which one is better to produce the energy than try to save it. This question is especially important in the light of policies that, for instance, simply copy the passivhaus standard without further optimization with respect to the local climate. The same situation arises when extreme energy consumption reduction is sought at the expense of the embodied energy. In an upcoming study, the authors will employ the concepts developed here but from the point of view of energy production (rather than savings). Several fashionable local energy production measures advocated for residential, commercial, or institutional buildings will be considered: solar walls, photovoltaic, wind, geoexchange, etc. Conflict of Interest The authors declare no conflict of interest. - IPCC, Climate Change 2007: Mitigation; Contribution of Working Group III to the Fourth Assessment Report; Cambridge University Press: Cambridge, UK, 2007. - Kesselring, P.; Winter, C.J. World Energy Scenarios: A Two-kilowatt Society, Plausible Future or Illusion? In Proceedings of the Energietage 94, Villigen, Switzerland, 10-12 November 1994. - Pfeiffer, A.; Koschenz, M.; Wokaum, A. Energy and building technology for the 2000 W society—Potential of residential buildings in Switzerland. Energy Build. 2005, 37, 1158–1174. [Google Scholar] - Schulz, T.F.; Kypreos, S.; Barreto, L.; Wokaum, A. Intermediate steps towards the 2000 W society in Switzerland: An energy-economic scenario analysis. Energy Policy 2008, 36, 1303–1317. [Google Scholar] [CrossRef] - Dixit, M.K.; Fernández-Solís, J.L.; Lavy, S.; Culp, C.H. Identification of parameters for embodied energy measurement: A literature review. Energy Build. 2010, 42, 1238–1247. [Google Scholar] [CrossRef] - Arena, A.P.; de Rosa, C. Life cycle assessment of energy and environmental implications of the implementation of conservation technologies in school buildings in Mendoza-Argentina. Build. Environ. 2003, 38, 359–368. [Google Scholar] [CrossRef] - Horvath, A. Construction materials and the environment. Annu. Rev. Energy Environ. 2004, 29, 181–204. [Google Scholar] - Urge-Vorsatz, D.; Novikova, A. Opportunities and Costs of Carbon Dioxide Mitigation in the Worlds Domestic Sector. In Proceedings of the International Energy Efficiency in Domestic Appliances and Lighting Conference ’06, London, UK, 21-23 June 2006. - Langston, Y.L.; Langston, C.A. Reliability of building embodied energy modeling: An analysis of 30 Melbourne case studies. Constr. Manag. Econ. 2008, 26, 147–160. [Google Scholar] [CrossRef] - Lippiatt, B.C. Selecting cost effective green building products: BEES approach. J. Constr. Eng. Manag. 1999, 125, 448–455. [Google Scholar] [CrossRef] - Tommerup, H.; Rose, J.; Svendsen, S. Energy-efficient houses built according to the energy performance requirements introduced in Denmark in 2006. Energy Build. 2007, 39, 1123–1130. [Google Scholar] [CrossRef] - Ding, G. The Development of a Multi-criteria Approach for the Measurement of Sustainable Performance for Built Projects and Facilities. Ph.D. Thesis, University of Technology, Sydney, Australia, 2004. 
© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
<urn:uuid:7731b0d9-03f9-46c8-81be-9e89be9b5806>
CC-MAIN-2016-26
http://www.mdpi.com/2071-1050/4/8/1711/htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00102-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902243
14,838
3.046875
3
The H1N1 and Seasonal Flu Vaccine: What You Should Know
As part of New York City College of Technology’s ongoing effort to keep you informed of the latest information regarding H1N1, we have provided below new information from the Centers for Disease Control and Prevention (CDC) and the New York City Department of Health (DOH) regarding the H1N1 and seasonal flu vaccine.
According to the CDC, the U.S. government has purchased over 250 million doses of H1N1 vaccine, so everyone who wishes to receive a dose will likely have an opportunity to do so. Initially, however, until enough vaccine has been manufactured and distributed, vaccination efforts will focus first on people in five target groups who are at higher risk for 2009 H1N1 influenza or related complications, are likely to come in contact with influenza viruses as part of their occupation and could transmit influenza viruses to others in medical care settings, or are close contacts of infants younger than 6 months (who are too young to be vaccinated). The five target groups are:
• pregnant women,
• people who live with or provide care for infants younger than 6 months (e.g., parents, siblings, and day care providers),
• health care and emergency medical services personnel,
• people 6 months through 24 years of age, and
• people 25 years through 64 years of age who have certain medical conditions that put them at higher risk for influenza-related complications.
After demand in these high-risk groups has been met, vaccinations will proceed with everyone ages 25-64, and then those who are 65 years or older. You may find more information about the H1N1 vaccine at the following website:
Where to get vaccinated for H1N1 and Seasonal Flu:
For additional information or to find out where you can get either the seasonal flu shot or the H1N1 flu shot, call the toll-free NYC Department of Health’s Flu Vaccination Information Line at 311. You may also search for a flu clinic convenient to you and find more information on H1N1 in New York City on the following website: www.nyc.gov/flu.
Prevention is Your Best Defense
To reduce your risk of infection and prevent the transmission of H1N1, seasonal influenza and other airborne respiratory illnesses, follow these three simple steps:
• Always cover your mouth with a tissue when you cough or sneeze, or cough into your shoulder or sleeve. Do not cough or sneeze into bare hands. Promptly throw the tissue in the trash.
• Wash your hands often with soap and water or alcohol-based cleaners, especially after you cough or sneeze.
• Avoid close contact with sick people. If you get sick with a fever accompanied by a sore throat, stay home from work or school for 24 hours after your fever subsides and limit contact with others to avoid infecting them.
Also, CUNY has compiled an extensive web page that adds to this information:
The Student Wellness Center is available to answer questions about H1N1 and any other health-related matter. The Center can be reached at (718) 260-5910 and is located in Pearl 104. We will post more information as it becomes available. Please continue to check the City Tech website for updated information.
<urn:uuid:58e6b64a-6965-4f84-b7a7-e2c4c1b51327>
CC-MAIN-2016-26
http://www.citytech.cuny.edu/students/health/swine_flu_info.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00154-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93714
691
2.921875
3
Treatment-Resistant Epilepsy Linked to Autism Study Shows Epilepsy in People With Autism Is Often Hard to Treat April 19, 2011 -- Epilepsy that is difficult to treat may be more common in those with autism than previously believed, new research suggests. "In general, we knew prior to this study that people with autism have significantly elevated rates of epilepsy," says researcher Orrin Devinsky, MD, professor of neurology, neurosurgery, and psychiatry at the New York University School of Medicine. Devinsky is also director of the NYU Comprehensive Epilepsy Center. In his new research, he found that epilepsy in autism is often treatment-resistant. ''Among those with autism who have epilepsy, in many cases it is difficult to control with medication,'' he says. In the small study, about 55% of those with sufficient data available had treatment-resistant epilepsy, he tells WebMD. The research is published online in the journal Epilepsia. It follows research published last week in the Journal of Child Neurology finding those with both autism and epilepsy have a higher death rate than those with autism alone. Autism spectrum disorders, a group of developmental disabilities, affect about one in 110 U.S. children, according to the CDC. Epilepsy, a brain disorder involving spontaneous seizures, affects about 3 million Americans, according to the Epilepsy Foundation. Autism Patients With Epilepsy Devinsky evaluated the records of 127 patients with autism and at least one epileptic seizure over a 20-year period. He looked at laboratory and clinical data from the patients who had been coming to the NYU Epilepsy Center. He defined treatment-resistant as failing two trials of tolerated drugs to treat epilepsy. Overall, Devinsky found that 33.9% of the patients had treatment-resistant epilepsy and 27.5% were seizure-free (no seizures during a 12-month period). The other 38.6% had insufficient information or infrequent seizures and were not placed into a category. "We only have good follow-up data on two-thirds of the 127," he says. "Of those two-thirds, more than 50% have intractable epilepsy." Those who were treatment-resistant reported seizure onset at an earlier age than those who were seizure-free. They also had more regression in developmental tasks. And they had more delays in motor and language skills.
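The percentages quoted above are mutually consistent: 33.9% of all 127 patients corresponds to roughly 55% of those with usable follow-up once the 38.6% with insufficient information are set aside. The short back-of-envelope check below uses only the figures reported in the article; it is an illustration, not part of the study.

```python
# Reconcile the two treatment-resistance figures quoted in the article.
total = 127
resistant = round(0.339 * total)      # 33.9% treatment-resistant  -> 43 patients
seizure_free = round(0.275 * total)   # 27.5% seizure-free         -> 35 patients
insufficient = round(0.386 * total)   # 38.6% insufficient data    -> 49 patients

with_follow_up = total - insufficient  # roughly the "two-thirds" with usable data
share = resistant / with_follow_up
print(f"{resistant} of {with_follow_up} patients with follow-up data "
      f"({share:.0%}) were treatment-resistant")
# Prints roughly 55%, matching the "about 55%" figure cited by the researchers.
```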
<urn:uuid:9e187e1d-425d-467b-bce4-2827e5bd996e>
CC-MAIN-2016-26
http://www.webmd.com/epilepsy/news/20110419/treatment-resistant-epilepsy-linked-autism?src=rsf_full-news_pub_none_rltd
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00185-ip-10-164-35-72.ec2.internal.warc.gz
en
0.969336
498
2.875
3
National Osteoporosis Awareness Month may be next month, but reminders about enhancing bone health are always appropriate. For example, you can remind your clients that the best defense against osteoporosis is to develop strong bones, especially before the age of 30, and that regular exercise has been shown to encourage bone growth throughout life. Of course, according to a study presented at the 70th annual meeting of the American Academy of Orthopaedic Surgeons in New Orleans, bone may be more responsive to certain types of exercise than to others. In the study, researchers compared the bone mineral densities (BMDs) of adolescent females involved in Olympic-style weight lifting, competition-level swimming or competition-level tennis. The weight lifters and tennis players had similar BMDs, which were noticeably greater than that of the swimmers. The researchers concluded that activities such as weight lifting and tennis are much more weight-bearing than minimal-impact activities, such as swimming. Activities done while one stands on both feet work the bones and muscles against gravity, and, to adapt to the impact of weight and pull of muscle, bone builds more cells to become stronger. Laura Gehrig, MD—study leader and orthopedic surgeon at Louisiana State University Health Sciences Center in Shreveport, Louisiana—elaborated, “During childhood, when bone accrual is greatest, exercise potentiates bone development...activities such as stair climbing, step aerobics, soccer, jogging and skating are weight-bearing activities that may increase bone mass sooner than expected in adolescents and to a greater degree than we thought.” Walking is another weight-bearing activity, but Diane Daniels, author of Exercises for Osteoporosis, cautions against it as a primary means of combating osteoporosis. “Walking...can be part of a program to prevent osteoporosis, but it is not the whole story. To cause bone to grow, it must be challenged with a new, added weight, not the same load over and over again as with walking,” she said. She added that strength training is ideal for minimizing the risk of developing the disease. To learn more about osteoporosis and how to avoid it, visit the National Osteoporosis Foundation Web site at www.nof.org.
<urn:uuid:4668aab3-88be-4786-b803-f8734714a75e>
CC-MAIN-2016-26
http://www.ideafit.com/fitness-library/reminders-appropriate-as-osteoporosis-awareness-month-nears
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960123
482
3
3
Art and Design
Watercolor Painting 2 (3-0) 3 Cr. Hrs.
This course is a continuation of ART 211, providing opportunities for analysis of watercolor and water-based media (WC/WBM) techniques. A variety of WC/WBM techniques used by historically important WC/WBM artists are explored. Students learn appropriate application of WC/WBM paints for effects of opacity and transparency. (A requirement that must be completed before taking this course.)
Upon successful completion of the course, the student should be able to:
- Compare and contrast works of art which encourage positive behavior.
- Demonstrate the ways in which the visual arts can function within society.
- Differentiate the relationship of WC/WBM to other art forms.
- Compare and contrast the characteristics and applications of opaque and transparent WC/WBM.
- Utilize effectively the different properties of WC/WBM paints (i.e., opaque, transparent, staining, non-staining) that are unique to specific WC/WBM paints.
- Explain the characteristics and application of transparent water-based media.
- Apply washes effectively to emulate a studied, juried-reviewed artist's piece.
- Apply masking techniques effectively to emulate a studied, juried-reviewed artist's piece.
- Apply glazing techniques effectively to emulate a studied, juried-reviewed artist's piece.
- Create a series of paintings in WC/WBM utilizing three or more WC/WBM techniques or applications within each piece.
- Research different methods of painting with WC/WBM from established art history sources.
- Develop proficiency in the technical requirements of painting using WC/WBM.
212 | 147104 | Watercolor Painting 2 | 3 | Sarris C | $37.00 | 15/20/0 | Open
212 | 127129 | Watercolor Painting 2 | 3 | Sarris C | $35.00 | 8/20/0 | Open
Note: This schedule updates once every 24 hours. Please reference WebAdvisor for current status.
<urn:uuid:c7b8acb0-a73b-4ede-a946-5252df00869a>
CC-MAIN-2016-26
http://schoolcraft.cc.mi.us/academics/course-description/ART/212
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.834829
435
2.5625
3
#202 Lock Your Computer When Not In Use
In support of National Cyber Security Awareness Month, this tip will focus on the physical security of desktops and laptops. Physical security of a computer is an often overlooked component of computer security. Whether you are walking away from your office computer or your laptop in a public place, you should always lock your computer and require a password to access it.
To lock a Windows PC:
- Press Windows Key + L
For other useful Windows Key shortcuts, please see this Microsoft Support Article and select “Windows Logo Key Keyboard Shortcuts”.
To lock a Mac, first you must adjust this preference:
- Open System Preferences
- Click “Security & Privacy”
- Click the lock in the bottom left and enter the administrator password to make changes
- Make sure the box for “Require password immediately after sleep or screen saver begins” is checked. Note: You can select the amount of time before requiring a password, but “immediately” is recommended if you will be leaving your computer quickly.
Now you can lock your Mac by clicking on your user name in the top right corner of the menu bar and selecting “Login Window”. If you do not see your user name in the top right corner, you need to enable fast user switching in System Preferences. For more information on securing your Mac, please see this Apple Support Article.
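For readers who want to script the lock step — for example, to bind it to a custom hotkey or an idle timer — the same effect can be triggered programmatically. The sketch below is an illustration and not part of the official tip: it uses the standard Windows LockWorkStation API via ctypes, and on a Mac it calls the bundled CGSession helper, whose path is an assumption and can vary or disappear across macOS versions.

```python
import ctypes
import platform
import subprocess

def lock_screen():
    """Lock the current session, roughly equivalent to Windows Key + L."""
    system = platform.system()
    if system == "Windows":
        # Standard Win32 call; immediately shows the lock screen.
        ctypes.windll.user32.LockWorkStation()
    elif system == "Darwin":
        # CGSession helper present on many macOS releases; the path below is
        # an assumption and may not exist on newer versions of the OS.
        subprocess.call([
            "/System/Library/CoreServices/Menu Extras/"
            "User.menu/Contents/Resources/CGSession",
            "-suspend",
        ])
    else:
        raise NotImplementedError(f"No lock helper defined for {system}")

if __name__ == "__main__":
    lock_screen()
```

The Mac branch performs the same switch to the Login Window described in the tip above, so the session still requires a password to resume.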
<urn:uuid:9f296a6b-6fe3-4da2-8456-e88409b33045>
CC-MAIN-2016-26
https://www.law.upenn.edu/live/news/2877-202-lock-your-computer-when-not-in-use
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.856504
289
2.6875
3
|BirdLife Species Champion||Become a BirdLife Preventing Extinctions Programme Supporter| |For information about BirdLife Species Champions and Species Guardians visit the BirdLife Preventing Extinctions Programme.| This species is listed as Endangered because it has a very small population which is declining as a result of the development of coastal wetlands throughout its range, principally for industry, infrastructure projects and aquaculture. Preliminary analyses of survey data collected at its breeding sites in Russia have provided evidence that the species's population is indeed undergoing a very rapid decline and imply that the population size may have been overestimated; clarification of these results may lead to a review of its threat status in the near future. Christidis, L.; Boles, W. E. 2008. Systematics and taxonomy of Australian birds. CSIRO Publishing, Collingwood, Australia. del Hoyo, J.; Collar, N. J.; Christie, D. A.; Elliott, A.; Fishpool, L. D. C. 2014. HBW and BirdLife International Illustrated Checklist of the Birds of the World. Barcelona, Spain and Cambridge UK: Lynx Edicions and BirdLife International. 29-32 cm. Medium-sized sandpiper with slightly upturned, bicoloured bill and shortish yellow legs. Breeding adults are boldly marked, with whitish spots and spangling on blackish upperside, heavily streaked head and upper neck, broad blackish crescentic spots on lower neck and breast and darker lores. In flight, shows all-white uppertail-coverts and rather uniform greyish tail. Toes do not extend beyond tail tip. Juvenile is browner above than non-breeding adult, has whitish notching on scapular and tertial fringes, pale buff wing-covert fringes and faintly brown-washed breast with faint dark streaks at sides. Similar spp. Common Greenshank T. nebularia has longer, greener legs, longer neck, less obviously bicoloured bill, and more obviously streaked crown, nape and breast-sides. Voice Call is distinctive kwork or gwaak. Bird, J. P.; Lees, A. C.; Chowdhury, S. U.; Martin, R.; Halder, R.; Ul Haque, E. 2010. Observations of globally threatened shorebirds in Bangladesh. BirdingASIA: 53-58. BirdLife International. 2001. Threatened birds of Asia: the BirdLife International Red Data Book. BirdLife International, Cambridge, U.K. Brazil, M. 2009. Birds of East Asia: eastern China, Taiwan, Korea, Japan, eastern Russia. Christopher Helm, London. Delany, S.; Scott, D. 2006. Waterbird population estimates. Wetlands International, Wageningen, The Netherlands. Li Zuo Wei, D.; Yeap Chin Aik; Lim Kim Chye; Kumar, K.; Lim Aun Tiah; Yang Chong; Choy Wai Mun. 2005. A report on survey of the status of Nordmann's Greenshank Tringa guttifer and Chinese Egret Egretta eulophotes in Malaysia. Li, Z.W.D., Yeap, C. A.; Kumar, K. 2007. Surveys of coastal waterbirds and wetlands in Malaysia, 2004-2006. In: Li, Z. W. D.; Ounsted, R. (ed.), The status of coastal waterbirds and wetlands in Southeast Asia: results of waterbird surveys in Malaysia (2004-2006) and Thailand and Myanmar (2006), pp. 1-40. Wetlands Internationa, Kuala Lumpur. Tan Gim Cheong. 2009. Nordmann's Greenshank Tringa guttifer reappears in Singapore after a 27-year break. BirdingASIA 11: 75-79. Tirtaningtyas, F. N.; Philippa, J. 2009. Nordmann's Greenshank Tringa guttifer on Cemara Beach, Jambi, Indonesia. BirdingASIA 12: 97-99. Further web sources of information Detailed species accounts from the Threatened birds of Asia: the BirdLife International Red Data Book (BirdLife International 2001). 
Text account compilers Benstead, P., Gilroy, J., Pilgrim, J., Taylor, J. Boyle, A., Bunting, G., Iqbal, M., Lappo, E., Li, Z., Moores, N. IUCN Red List evaluators Butchart, S., Symes, A. BirdLife International (2016) Species factsheet: Tringa guttifer. Downloaded from http://www.birdlife.org on 02/07/2016. Recommended citation for factsheets for more than one species: BirdLife International (2016) IUCN Red List for birds. Downloaded from http://www.birdlife.org on 02/07/2016. This information is based upon, and updates, the information published in BirdLife International (2000) Threatened birds of the world. Barcelona and Cambridge, UK: Lynx Edicions and BirdLife International, BirdLife International (2004) Threatened birds of the world 2004 CD-ROM and BirdLife International (2008) Threatened birds of the world 2008 CD-ROM. These sources provide the information for species accounts for the birds on the IUCN Red List. To provide new information to update this factsheet or to correct any errors, please email BirdLife To contribute to discussions on the evaluation of the IUCN Red List status of Globally Threatened Birds, please visit BirdLife's Globally Threatened Bird Forums. Additional resources for this species |Current IUCN Red List category||Endangered| |Family||Scolopacidae (Sandpipers, Snipes, Phalaropes)| |Species name author||(Nordmann, 1835)| |Population size||330-670 mature individuals| |Distribution size (breeding/resident)||169,000 km2| |Links to further information| |- Additional Information on this species|
<urn:uuid:1c991511-c53e-4888-b9a3-be78cf03e504>
CC-MAIN-2016-26
http://www.birdlife.org/datazone/species/factsheet/22693225
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00031-ip-10-164-35-72.ec2.internal.warc.gz
en
0.711669
1,303
2.9375
3
Yugoslavia (former) Military Exchanges Sources: The Library of Congress Country Studies; CIA World Factbook Yugoslavia conducted active military exchanges with a number of countries, including the United States, the Soviet Union, other NATO and Warsaw Pact countries, and several countries in North Africa and the Middle East. Reciprocal military visits with the United States were frequent in the late 1970s, a period of intense United States concern about the direction of post-Tito Yugoslavia. The United States secretary of defense visited Yugoslavia in 1977 to discuss possible arms sales and training for YPA officers in the United States. He and the Yugoslav federal secretary for national defense exchanged visits the following year. They discussed possible transfer of antitank, antiship, and air-to-surface missiles, aircraft engines, communications equipment, and an integrated naval air defense system. Of these, only the sale of Maverick air-to-surface missiles was completed. The chairman of the United States Joint Chiefs of Staff visited Yugoslavia in 1979 and 1985, and the secretary of defense returned there in 1982. These visits included discussions of the general strategic situation in Europe and the longstanding Yugoslav interest in buying advanced arms from the United States. Despite the Soviet Union's role as Yugoslavia's leading arms supplier, relatively few high-level military exchanges occurred between the Soviet Union and Yugoslavia. A 1988 visit by the Soviet minister of defense to Belgrade was the first since 1976; the Yugoslav federal secretary returned this visit in 1989. Both visits featured discussion of increased military cooperation. Yugoslavia had many contacts with countries in North Africa and the Middle East, with special attention to Libya, Egypt, and Ethiopia. Several high-level exchanges occurred with the Libyan armed forces in the late 1970s and 1980s. As a result, Libya purchased Yugoslav armored personnel carriers, small arms, patrol boats, and ammunition as well as training for Libyan officers in Yugoslavia. Egypt and Yugoslavia established a military cooperation program in 1984. Reciprocal general staff visits in 1988 and 1989 elaborated the Yugoslav role in training Egyptian soldiers and upgrading older Soviet arms and equipment in the Egyptian inventory. In 1988 the federal secretary for national defense visited Ethiopia to promote military-industrial cooperation between the two countries. Other significant military visits included reciprocal exchanges of high-ranking officers with India in 1979 and 1984 and with Angola in 1979 and 1986. Like Yugoslavia, each of those countries had large numbers of Soviet weapons systems that were the stimulus for military cooperation. Angola also expressed interest in Yugoslav aircraft and pilot training. Data as of December 1990 NOTE: The information regarding Yugoslavia (former) on this page is re-published from The Library of Congress Country Studies and the CIA World Factbook. No claims are made regarding the accuracy of Yugoslavia (former) Military Exchanges information contained here. All suggestions for corrections of any errors about Yugoslavia (former) Military Exchanges should be addressed to the Library of Congress and the CIA.
<urn:uuid:514136f0-6499-4bd1-bb1b-48e9344a2b74>
CC-MAIN-2016-26
http://www.theodora.com/wfbcurrent/yugoslavia_former/national_security/yugoslavia_former_national_security_military_exchanges.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00196-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954436
594
2.59375
3
THE expertise of a York conservator has been called upon by a team hired to examine new discoveries from a famous shipwreck. Ian Panter, principal conservator at York Archaeological Trust, is heading to Ireland to work on items recovered from the underwater remains of the passenger ship RMS Lusitania, which sank off the Irish coast in 1915. The latest finds – a telemotor, which was part of the ship’s steering mechanism, its telegraph and four portholes – were retrieved from the hull of the vessel last week in almost 330ft of water. Mr Panter is also currently working on the Swash Channel wreck, the UK’s largest maritime archaeology project, from which a 400-year-old merman is currently on display at the DIG exhibition at York’s Hungate site, and has worked on two cast-iron cannons at the Tower of London recovered from the Elizabethan shipwreck off Alderney. He said: “The Lusitania’s telegraph will, I hope, provide evidence of the very last command given to the engine room by its captain immediately after being hit by a torpedo. It could therefore shed more light on the events surrounding the so-called ‘second explosion’ which some people claim to have seen.” Mr Panter will work with Irish maritime archaeologists Laurence Dunne and Julianna O’Donoghue to carry out the conservation at a facility in Co. Kerry.
<urn:uuid:0768742b-fc68-4d67-b02f-cf1d19d570ee>
CC-MAIN-2016-26
http://www.yorkpress.co.uk/news/9229708.York_expert_to_examine_Lusitania_finds/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967314
310
2.53125
3
Antibacterial Soaps and Cleaners continued... Other scientists are sounding an alarm over the environmental effects of millions of pounds of antibacterial chemicals in soap that get flushed and rinsed into waterways each year. Research by Rolf Halden, PhD, associate professor at Arizona State University’s Biodesign Institute, demonstrates harm to algae and other aquatic life from the antibacterial chemicals deposited in the water. In his view, the risks to the environment are only likely to increase, as massive use of these products continues. At last check by the CDC, 75% of adults and children’s urine tested positive for triclosan, the most common antibacterial ingredient. People in higher income brackets were more likely to have triclosan in their bodies. Although the levels were generally low, Greene asks, “If there’s a potential harm to people, and proven environmental damage, without any benefits, why are we using these products?” What you can do: Don’t buy products containing triclosan or triclocarban, the most common antibacterial chemicals. Not all products will list ingredients, but you can safely avoid any product that advertises itself as “antibacterial,” say experts. Wash hands -- and clean surfaces in your home -- with regular soap and water. Pronounced “THAL-ates,” these chemicals are common ingredients in fragrances in consumer products. (They are also “plasticizers” used in plumbing, shower curtains, varnishes, vinyl floors, and many other products.) “Some of the phthalates are known to function as hormones in the human body,” says Greene. In animal studies, high doses of phthalates disrupt hormone production. It was believed that the smaller exposures people get from each product they use were safe. But the fact that phthalates are everywhere -- even in the indoor dust we breathe -- has created concern and led to closer monitoring. The CDC finds low levels of phthalates in most of our bodies. Some recent evidence suggests that exposure to phthalates in humans may be related to low sperm count and quality in men. Exposures in pregnant women have been associated with subtle changes in genital formation in baby boys. What you can do: Until more evidence is in about phthalates, “it makes sense to avoid them in your personal care products when you can,” says Greene. “That’s especially true for expecting moms and children.” Unfortunately, it’s often impossible to know which of your personal care products contain phthalates, because they’re only listed as “fragrance.” Opt for fragrance-free products or choose those that use essential oils, like lavender and citrus. Check products’ ingredients in the Cosmetics Database.
<urn:uuid:605d8705-6b5e-4e93-a8ba-8e0ee6802f3c>
CC-MAIN-2016-26
http://www.webmd.com/health-ehome-9/healthier-hygiene?page=2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951705
601
3.015625
3
Research may help explain why some people infected with HIV rapidly develop Aids while others remain free of symptoms for more than a decade. HIV can modify itself very easily The key seems to be genetic variation in molecules that trigger the destruction of cells infected by HIV. Oxford University researchers have found HIV is more likely to scupper this process for common forms of the molecule than for rarer versions. The study is published in Proceedings of the National Academy of Sciences. The Oxford team looked at the role of molecules called human leukocyte antigens (HLAs), which are found on the surface of many of the body's cells. Under normal circumstances when a virus such as HIV infects a cell, HLAs trigger the activation of immune system T cells that move in and destroy the infected cell. The Oxford researchers examined data from a long-term study of Swiss HIV patients. They found that patients who showed signs of rapid disease progression often carried an HLA variation that failed to mobilise T cells in the normal way. The scientists also found that rare HLA types were more likely to elicit an immune response than commonly occurring HLA types. They believe that HIV, which has an astonishing ability to mutate to suit its circumstances, may be able to change to evade detection by the more commonly occurring HLA types. However, because the virus has seldom encountered the more rare HLA types, it has not had the opportunity as yet to mutate, and escape detection in the same way. Researcher Dr John Frater said the findings suggested that potential HIV vaccines based on commonly found HLA versions might be less likely to be effective. He told BBC News Online: "HIV's profound ability to adapt to its environment, whether it be drug pressure, or the body's own immune system, continues to astound." Hope for vaccine Keith Alcorn, senior editor of the HIV information service NAM, told BBC News Online: "The findings help to explain why some people can live with HIV infection for years without illness. "They also have implications for vaccine design because vaccine developers are trying to identify which viral peptides need to be included in a vaccine." However, Mr Alcorn said to get a fuller picture it would be necessary to repeat the research on other groups of HIV patients who have been infected with different sub-types of the virus. Deborah Jack, chief executive of the National Aids Trust said: "This kind of research makes an important contribution to our understanding of HIV and Aids and can contribute to the vital objective of discovering an effective Aids vaccine."
<urn:uuid:a2d7b26c-510f-497d-a157-4871faf6f466>
CC-MAIN-2016-26
http://news.bbc.co.uk/2/hi/health/3548236.stm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96181
535
3.28125
3
|By: Prof.dr. Ibrahim Khalil| In this series of articles, I will try to explore this Dogma on scientific basis to show up that it is forged. If we study any topic in Quran versus Bible, we will observe the big differences between both. If we link such dissimilarities to the sciences, we will realize that the Quran preceded the modern sciences by more than 1300 years. In the topic of Stars, one may ask this: Is the Quran quoted from the Bible? And which book preceded the sciences? The book which talks about the wandering or the singing Stars or the one which talks about the great issue of the positions of the Stars! Before dealing with the topic of the Stars in Quran versus Bible versus sciences, it is mandatory to give some statistical hints. The total words in the Bible are 788,280 while total words in the Quran are 77,473. It follows that, the Bible is more than 10 times the Quran word-wise. In other words, the Bible has the potential of more than 10 times than the Quran to exhibit its topics. Back to the Stars, they are mentioned 35 times in the Bibles and 9 times in the Quran. The Stars in the Bible: In 13 verses, the Lord kept promising that the numbers of the children of Israel will be too much the same as the number of the Stars. Examples of such verses are: That in blessing I will bless thee, and in multiplying I will multiply thy seed as the Stars of the heaven, and as the sand which is upon the sea shore; and thy seed shall possess the gate of his enemies; And I will make thy seed to multiply as the Stars of heaven, and will give unto thy seed all these countries; and in thy seed shall all the nations of the earth be blessed; And lest thou lift up thine eyes unto heaven, and when thou seest the sun, and the moon, and the Stars, even all the host of heaven, shouldest be driven to worship them, and serve them, which the LORD thy God hath divided unto all nations under the whole heaven. And ye shall be left few in number, whereas ye were as the Stars of heaven for multitude; because thou wouldest not obey the voice of the LORD thy God. 1 Chronicles 27:23 But David took not the number of them from twenty years old and under: because the LORD had said he would increase Israel like to the Stars of the heavens. As we know, There are about 100,000,000,000 (100 Billion) Stars in the Milky Way and the question is: does the number of the children of Israel came close to that number of Stars? In Job 38:7: When the morning Stars sang together, and all the sons of God shouted for joy? As far as I know, the scientists never say that the Stars sing! Also, I wonder who the sons of God (Pleural form) are. In Psalm 147:4: He telleth the number of the Stars; he calleth them all by their names. The scientists could roughly figure out the number of the Stars but I believe they have no idea about the names of all the 100,000,000,000 (100 Billion) Stars present in the Milky Way. Not only that, but in Jude 1:13: Raging waves of the sea, foaming out their own shame; wandering Stars, to whom is reserved the blackness of darkness for ever. I can not recall any scientist talking about wandering Stars! The Stars in the Noble Quran: In Surah 6: 97: It is He Who maketh the Stars (as beacons) for you, that ye may guide yourselves, with their help, through the dark spaces of land and sea: We detail Our Signs for people who know. The Quran is talking about the guidance value of the Stars in darkness. Also, the Scientists said that Pole Stars are often used in celestial navigation. 
While other Stars' positions change throughout the night, the pole Stars' position in the sky does not. Therefore, it is a dependable indicator of the direction north. In Surah 16: 12 He has made subject to you the Night and the Day; the Sun and the Moon; and the Stars are in subjection by His Command: verily in this are Signs for men who are wise. The Stars are signs for the wise men. Hence the study of Stars needs wise men. Why is that? The mighty swear: Allah Who created the Stars swear with their positions and HE told us that this swear is great and mighty IF WE KNOW! In Surah 56: 75-76 Furthermore I call to witness the setting of the Stars and I swear with their positions, And that is indeed a mighty adjuration (great swear) if ye but knew, When we look at the night sky we see Stars in different positions depending on the time of night. We also see different groups of Stars depending on the time of year. The reason we see Stars move during the night is because the Earth spins. The Earth orbits the Sun in one year and in one year you will see many different groups of Stars in the sky. These groups are the constellations. Because the Earth orbits the Sun you will see different constellations, for example, at night in the winter months than in the summer. The Stars themselves exhibit motion relative to each other. Using repeated overlapped CCD scans (up-to-date sophisticated technique); positions for 661,591 Stars have been established, however with some errors. As far as I know, There are 100,000,000,000 (100 Billion) Stars in the Milky Way, of which some 50 Million are cataloged. Up my knowledge, which is admittedly limited in Astronomy, To plot "all the Stars in our galaxy" is undoubtedly impossible. Hence, to determine the positions of the Stars is a very big issue that is out of human competence. Now, do you realize and accept the value of Allah ' swear by the positions of the Stars. The great issue of the positions of the Stars in the Quran is NEVER mentioned in the Bible. We are in need of sciences and wisdom and intellectual people to determine precisely the positions of the Stars. That is why, ALLAH said: "Furthermore I call to witness the setting of the Stars and I swear with their positions, And that is indeed a mighty adjuration (great swear) if ye but knew" This means that if we know we would realize the value of Allah' swear. The mighty swear is for what? If you admit that Allah' swear is great and mighty, then I recommend you read Surah 56: 75-81 to know what is the reason for Allah' swear. Also, I may ask you this question: Is the Quran quoted from the Bible? And which book look good with sciences? The book which talks about the wandering or the singing Stars or the one which talks about the great issue of the positions of the Stars!
<urn:uuid:8f04d0d4-921f-4b23-b4d2-318227e85fea>
CC-MAIN-2016-26
http://www.streetdirectory.com/travel_guide/105269/religion/the_stars_in_bible_and_quran.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00053-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938478
1,474
2.515625
3
| For Immediate Release: July 6, 2006 Habitat Protected for World’s Most Imperiled Whale San Francisco, Calif. — In response to a strongly-worded court opinion that criticized the National Marine Fisheries Service for failing to protect the world’s most imperiled whale, the agency announced today that it is protecting almost 37,000 square miles of critical habitat for the North Pacific right whale in the Bering Sea and Gulf of Alaska. In issuing its final determination, the National Marine Fisheries Service (a division of the U.S. Department of Commerce) rebuked claims from opponents of the Endangered Species Act, finding that protecting critical habitat is essential to the conservation of right whales and that “the economic impacts do not outweigh the benefits of designating critical habitat.” “Today’s announcement is an important step toward right whale recovery,” said Brent Plater, author of the original petition to protect right whale habitats in the Bering Sea and an attorney in the case to protect it. “But there is little time to waste. The Minerals Management Service, Department of Commerce and all other federal agencies should immediately begin cooperating and consulting with right whale experts to ensure that right whales and their habitats are protected.” The Endangered Species Act is a federal law providing a safety net for fish, wildlife and plants that are on the brink of extinction. The law recognizes that one of the most effective ways to protect imperiled species is to protect the places they live, and recent scientific reports confirm that species with their critical habitats protected are twice as likely to be recovering as those species without their critical habitats protected. The North Pacific right whale is so rare that in the 1980s a sighting of a single individual was deemed worthy of publication in scientific journals. However, beginning in 1996 scientists began to see a congregation of right whales annually in the Bering Sea, and in 2004 scientists found more right whales in this area than were found in the previous five years. In light of these remarkable sightings, in 2000 the Center for Biological Diversity formally requested that NMFS protect the right whale’s “critical habitat” as required by the federal Endangered Species Act. However, NMFS refused to protect any habitat for the whale, even though the species’ critical summertime habitats had been discovered. The Center then requested that NMFS reconsider its determination, but the agency never responded to any of the Center’s requests. The Center was thus left with no choice but to initiate litigation in late 2004 to ensure that the Right Whale’s recovery was not impeded. “The right whale was nearly hunted to extinction, and it is our shared responsibility to ensure that this species survives,” said Plater. “We owe it to future generations to protect this special creature, and one of the most effective ways to do that is to protect the places the whales call home.” Photos and additional information are available online at:
<urn:uuid:e5114e29-6b81-4489-afc3-6040bf53a384>
CC-MAIN-2016-26
http://biologicaldiversity.org/news/press_releases/right-whale-07-06-2006.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00076-ip-10-164-35-72.ec2.internal.warc.gz
en
0.936581
614
2.71875
3
Social reinforcement is reinforcement whose reward value depends on the social context in which it is delivered and the significance it has for the recipient. Verbal feedback such as praise or words of encouragement may be seen as providing social reinforcement. However, it is important to understand the role of context. Praise from a teacher may be a reward to an interested student, but the same words given to the class clown who wants to impress his friends with his anti-education stance may have the opposite effect. Positive feedback can be provided through gesture and other means of nonverbal communication such as thumbs-up signs, smiles, pats on the back, etc.
- Eye contact
- Social approval
- Social control theory
- Social influences
- Social learning
- Social learning theory
- Social punishment
<urn:uuid:b38903b4-6d54-465f-bb35-fa6a7d01a19b>
CC-MAIN-2016-26
http://psychology.wikia.com/wiki/Social_reinforcement
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.881834
231
3.40625
3
Besides being a term of endearment, Holden's calling his sister old Phoebe indicates that he has known her for a long time, that he knows and understands her very well, and that he likes almost everything he knows about her character. He especially likes the fact that she is completely honest and natural, that she is not a phony and couldn't be a phony because she wouldn't know how. She serves as a contrast to the many phony adults he has encountered in his brief stay in New York. Both Holden and Phoebe are obviously exceptionally intelligent--like the members of the Glass family Salinger will write about later, notably in "Franny" and "Zooey"--although Holden doesn't think of himself as being intelligent. Because the brother and sister are both so intelligent, they understand each other easily and without having to engage in a lot of explanations. Because they have had good communication for years in spite of their age difference, Holden thinks of her as an old friend. Throughout the novel Holden is feeling terribly lonely and seems to be on the verge of a nervous breakdown. It is only at the end, when he is briefly united with one person he loves and who truly loves him, that he experiences an epiphany which will lead to his being healed.
Phoebe is Holden's younger sister. He refers to her as "old" Phoebe as a term of endearment. Meaning, good old Phoebe, one of the only people that Holden really trusts. For Holden, all adults are phonies and cannot be trusted to tell the truth. Holden has difficulty with his peer relationships also. He does not really have friends. Phoebe is the only constant in Holden's life. His sister is the only person who can tell Holden that he has messed up his life. She can be counted on; he loves her and depends on her love.
Phoebe is Holden Caulfield's little sister. He describes her as a smart and funny kid, and he makes it clear that he truly loves and admires her. She represents his love and admiration for children; Holden loves the innocence of children and the fact that they have not yet been hit by life and the phoniness that adults portray. He refers to her as "old" Phoebe more than anything else as a term of endearment. It is typical of the time period to refer to someone you know well as "old" so-and-so, and that is how he also refers to her. It does not suggest her age or his view of her in any way.
Phoebe is Holden's younger sister in The Catcher in the Rye. She is the only character in the book we see him showing affection to. He sees her as someone who is smarter than him, and someone he confides in. In his mind she is an innocent child, and unlike all of the other adults around him she is not a phony. The reason he calls her old Phoebe is because he has known her for a long time. If you read the book you will notice he says the word old a lot in front of names. It might be his way of showing familiarity with a person. She is his younger sister. She brings him comfort.
Phoebe Josephine Caulfield, Holden's wiry, red-haired, and bright ten-year-old sister. Regarding Phoebe as a living copy of all that he loved in Allie, Holden creeps home Sunday night to seek out her loyal companionship and her understanding. He is comforted by Phoebe's jauntiness and vitality; he yearns to protect her from the ugliness he perceives in the world around them.
A last coherent memory he has before his breakdown is of a rush of happiness as he watches Phoebe serenely riding the Central Park carousel, a tangible link with much that was joyous in his own childhood.
Phoebe is Holden's younger sister. Old feebs is an endearment describing his younger sister Phoebe. He thinks she is really smart, is pretty, and is a great dancer. He speaks highly of her and loves her very much.
Phoebe is Holden's younger sister. She brings him comfort.
<urn:uuid:02b25fc4-058a-4106-a6f7-1f2959e792b5>
CC-MAIN-2016-26
http://www.enotes.com/homework-help/who-phoebe-why-does-holden-call-her-quot-old-quot-38255
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00002-ip-10-164-35-72.ec2.internal.warc.gz
en
0.990944
897
2.578125
3
The fine hair on a newborn infant is known as lanugo. It helps to anchor vernix caseosa ("cheese-like varnish"), a waxy substance that protects the fetus from maceration by the amniotic fluid. At birth, placental blood flow ceases and lung respiration begins. The sudden drop in right atrial pressure pushes the septum primum against the septum secundum, closing the foramen ovale. The ductus arteriosus begins to close almost immediately, and may be kept open by the administration of prostaglandins. Other embryonic circulatory vessels are slowly obliterated and remain in the adult only as fibrous remnants.

Table 17 - Adult Remnants of Fetal Circulatory Structures

Patent Foramen Ovale
Failure of the foramen ovale to close at birth, e.g., due to faulty development of the septum primum and/or septum secundum. This condition is usually physiologically insignificant.

Patent Ductus Arteriosus
Failure of the ductus arteriosus to close after birth. Patients with some heart anomalies can survive only if they have a patent ductus arteriosus. Administration of prostaglandins can delay the closure of the ductus arteriosus. Conversely, drugs that inhibit prostaglandin synthesis (e.g. with indomethacin) can sometimes be used to close the ductus arteriosus without surgery.
<urn:uuid:ee20d081-3ad6-4ea3-9165-0f1e743be941>
CC-MAIN-2016-26
http://www.med.umich.edu/lrc/coursepages/m1/embryology/embryo/18changesatbirth.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.811193
319
3
3
DNA sample links 2 men, 9,000 years apart July 31, 1997 Web posted at: 3:40 p.m. EDT (1940 GMT) CHEDDAR, England (CNN) -- Adrian Targett is a regular guy in Cheddar, a schoolteacher. His extended family is another matter -- real cavemen, some of them. Or at least, thousands of years ago they were. Targett learned recently that he is the direct descendant of a man who lived 9,000 years ago, and whose bones were found at the turn of the century in Cheddar's famous caves. Scientists compared the DNA from one of Cheddar Man's molars to that of scrapings taken from the mouths of 20 local Cheddarites, and Targett was a match. The discovery has made for a strange family reunion, as CNN's Siobhan Darrow explains.
<urn:uuid:cfeb78e9-b358-4e4c-9fac-21f532d6c1f5>
CC-MAIN-2016-26
http://www.cnn.com/TECH/9707/31/cheddar.man/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00135-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952853
223
2.6875
3
View A Video on Plastics - A Quick Introduction Plastics refer to a broad group of materials and products that are derived from the processing of polymer resins. These long chains of molecules consist of several smaller monomers held together by covalent bonds. This generic formula underlies a tremendous number of specific chemical formulations that create a diverse field of precise plastic There is an extensive variety of plastic materials on the market today. Polyethylene is the most commonly used member of the plastics family. It is being used to produce a plethora of products, from artificial knees to shampoo bottles and milk cartons. Polystyrene is also ever present in modern life, though more frequently referred to as the trade marked extruded foam, StyroFoam. Delrin is another common plastic, often used as a metal substitute and therefore very popular in the automotive and construction industries. PVC, acrylic, polycarbonate and polypropylene are just a few of the other predominate plastics producing products currently in circulation in the medical field, hardware stores, electronics and food and beverage industries as well as many more industries. So named for their plastic or mold-able qualities, thermoplastics such as these can be molded and remolded any number of times, allowing for continuous transformations of materials and easy distribution. Highly diversified, plastic materials can be purchased in supply stock forms such as plastic rods, plastic sheets and plastic films. While these items may be used as the finished product, they easily comply with secondary processing such as plastic fabrication and precision plastic machining. Plastics made of synthetic, natural and organic monomers are divided into two categories: thermoplastics and thermosetting plastics. While the names are very similar, it is important to distinguish between the two when selecting the proper material for a given application. The former, thermoplastic, is more commonly used because it can be melted and remolded numerous times. The composition is formulated to become pliable when heated and rigid when cooled. The pitfall of thermoplastics is that they may become glass-like and fracture at extremely cold temperatures. Thermosetting plastics are more adept to cold applications but chemically deteriorate when subjected to high heat. While thermoplastics can be purchased as pellets or any number of stock shapes for secondary processing, thermosets are available only in two-part liquid resins or non-flowing mass premixed blends. Far more limited in their possibilities, thermosets cannot be remolded after curing and must therefore be supplied in raw form or finished products. Cure technology is as diverse as plastic itself and includes air setting, film drying, anaerobic, hot melt, cross-linking, room temperature curing, and vulcanizing. The cure technology and type of polymer used depends largely on the manufacturing processes and final purpose of the product. The processes required to form specific compounds vary significantly. Suspension, emulsion/dispersion, solution and mass methods are commonly used to produce the resins, liquids, gels and powders that manufacturers distribute. While this raw form is used for thermosetting plastics that undergo only one manufacturing process, most plastic suppliers provide thermoplastics in stock shapes that have undergone some initial processing. The shapes and performs provided are more easily handled and in some instances may be used as the final product. 
Plastic sheets, pipes, profiles and rods, which are the most common stock shapes, are usually produced through some form of injection molding or extrusion. Films are made using blown film extrusion, in which an extruded tube of plastic is inflated to stretch the material to the desired length and thickness. Some films are available as thin as .0004inch. Additional processes include foam extrusion, precision plastic machining, vacuum forming, pressure forming, thermoforming, casting, pultrusion, welding and grinding. Each method uses heat and pressure to combine the resins before cooling to create the final form. While these processes may use pure resins, manufacturers can combine the raw plastic materials with a number of additives such as heat stabilizers, lubricants, fillers and plasticizers as needed for specific applications. These can have a significant impact on the color, strength, density, working temperature range, structural integrity and corrosion and heat resistance of a polymer. The unique properties of each specific plastic must be carefully understood with regards to the final product. There are limitless industrial, commercial and residential applications for plastic. Virtually every industry utilizes some form of it. Food and chemical processing, water treatment, gas and oil, medical, pharmaceutical, aerospace, automotive, building and construction industries are among those that utilize the common applications for plastic parts and products. From toothbrushes to windows to aircraft panels, it is nearly impossible to go a day without encountering plastic materials. The pervasiveness of plastics, however, has led to great concerns over their potentially harmful impact on the environment. Because plastics are manufactured to be durable and long-lasting, they are rarely biodegradable. More recently, however, concern for the environment has led to research into bioplastics made from synthesizing polyethylene from ethanol obtained from sugarcane. There could soon be a present where all plastic products are biodegradable. However, until bioplastics are developed enough to be mainstream, it is the responsibility of each consumer to diligently recycle the thermoplastics used on a daily basis so they might be reused. The essence of plastics allows them to be melted down and reformed, which greatly reduces the negative impact they have on our planet. Plastic Materials - Precision Punch & Plastics Plastic Materials - Precision Punch & Plastics Plastic Materials - Precision Punch & Plastics is a popular family of resins that is strong and resistant to most chemicals and stains. ABS is created by the polymerizing of acrylonitrile and styrene (liquids) and butadiene (a gas). - is a plastic that is able to replace metal in many mechanical and structural applications. It has good tensile strength and excellent machinability. - Acrylic is made of clear, thermoplastic resins that are found in acrylic acid and natural sources like petroleum. - is a highly versatile, extruded twin-wall plastic sheet that is durable and able to be cut to produce collapsible signage. Corrugated plastic is an outstanding replacement for poster board in interior applications, since its surface is resistant to most solvents, oils and water and is easily cleaned. 
- Delrin is the trade mark name of a specific polyoxymethylene, an excellent general purpose mechanical engineering plastic that is highly such as Teflon, are heat, moisture and wear resistant, and thus are used for many different valve, gasket and bearing applications. Fluoroplastics are flexible thermoplastics. is in the categories of polyester and vinyl ester. It is comprised of phenol and formaldehyde, has the same strength as iso-polyester and is the best material for fire safety. - Plastic film is simply plastic material that has been produced in a thin, flat continuous sheet to a precise thickness. - fabricate goods and components out of plastic. - Plastic materials are made from polymer resin and derive their name from its plastic or moldable quality. Plastic materials can be formed into any desired shape when heat and pressure are applied, and continue to retain this shape when cooled. is material designed to transport various liquids, such as water, chemicals, oil and fuels. Plastic piping is also used in drainage applications. - Plastic rods are extruded in much the same way plastic tubing and plastic profiles are, except that plastic rods are solid instead of hollow. is created from continual-phase plastic in a form in which the thickness is very low in proportion to the length and width. - Plastic sheets are large, flat pieces of plastic used in manufacturing. - Plastics are made from polymer resin and are used to make components for almost every industry. - Polycarbonate has good light transmission and stability and the highest rating of all transparent thermoplastics. Some applications for polycarbonate include electronic housings, machine guards and aircraft panels. - Polyethylene is a the most common and most versatile polymer available in a range of molecular weights and structures specially configured for its use in a number of industrial, commercial and residential settings. is a kind of plastic that is very flexible at low temperatures and resistant to chemicals. PP is frequently used for banner materials. - Polystyrene is used for such things as packaging, automotive and lighting because of its very good electrical properties and machinability. is used for bumpers, gears, gaskets and roll covers. Polyurethane is a tough and durable plastic that has good abrasion resistance and a - PVC (is a very strong thermoplastic that is resistant to acids, water and abrasion. - Thermoplastics encompass all of those plastic materials which soften and may even return to a liquid state when heated but solidify and become rigid when cooled. A substance that is added to a resin to enrich particular characteristics. - The chemical and physical changes a material undergoes over time, due to environmental forces that will deteriorate or improve the material. - Combinations of polymers or copolymers with other elastomers or polymers. - A resin or other substance that unites particles. Binders supply mechanical strength and guarantee solidification, consistent uniformity or adhesion to a surface coating. - The lack of cloudiness in a plastic material. - A plastic structural substance that is comprised of a blending of materials. - The capacity of a plastic material to withstand crushing forces. - Different monomers that chemically react with one another, resulting in a compound. - The process of altering properties of polymers into a state of greater stability and usability. Curing is achieved through radiation, heat or reaction with chemical additives. 
- The period of time at set conditions in which a reacting thermosetting material is cured. - A change in the original color of a plastic material due to environmental conditions, such as light exposure and chemical attack. - The procedure in which an existing plastic shape is changed into another one. - The tendency of certain plastic materials to absorb water. - A concentration of material in a base polymer, such as pigments, additives and fillers. - Plastic materials that will not transmit light. - A high-boil organic or liquid low-melt solid, the addition of which gives flexibility to hard plastics. Plasticizers differ in their solvating capabilities and softening actions, due to the reduction of intermolecular pressures in the polymer. - A blend of resins and plasticizers that can be transformed into continuous films through the application of heat. - A synthetic or natural compound of high molecular weight, which is comprised of long chains of repeating units, each relatively light and simple, including polyethylene materials with increased mechanical properties, resulting from the embedding of high strength fillers in the composition. - A solid or pseudosolid organic material that typically has a high molecular weight with a propensity to flow when stress is applied and generally has a melting or softening - Chemicals that permit the formation of a close mixture or emulsion of usually mismatched substances by the alteration of the surface characteristics and the manipulation of the flowing and wetting characteristics of liquids. - Plastic compounds or resins that in their last stage are insolvable and infusible. After curing is complete, thermosets cannot be softened through heat. - Plastic material, such as granules, pellets, floc or liquid, that has not had any processing applied to it other than what is needed for initial manufacturing.
<urn:uuid:e6219eec-6b6c-4b92-95c8-2cbaedca2bdd>
CC-MAIN-2016-26
http://www.iqsdirectory.com/plastics/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.932709
2,505
3.515625
4
Q. Could you explain the role and significance of the Japanese Experiment Module Kibo? Japanese Experiment Module Kibo Kibo Pressurized Module First of all, Kibo is Japan's first human space facility, and it represents the fruits of our country's advanced technologies. Kibo is going to be the base for Japanese space activity, and it will eventually benefit all humankind. At the moment, only the United States and Russia have such permanent space facilities, but Europe and Japan are going to have their own in 2007 and 2008, respectively. It was unfeasible for one nation to build the International Space Station on its own, so 15 nations decided to collaborate. Within the framework of this international project, Japan should first use Kibo to accomplish its own goals. There are also American and European experiment modules on ISS, but Kibo is a laboratory where Japan can freely conduct experiments just for itself. Kibo is also the only ISS experiment module equipped with both a pressurized module and an exposed facility that allows us to perform experiments and observations in the vacuum of space. The completion of Kibo promises a new era for Japanese space development. Space experiments on the space shuttles last only a week, or two weeks at most, but Kibo will allow us to perform longer-term studies. We will finally be able to conduct experiments in such fields as new material development, which require a lot of preparation time, and to study the impact of long stays in space on the human body. We anticipate great outcomes. But Kibo is not only about experiments. Its construction and operation allow us to maintain, combine and further develop Japan's advanced technologies. Developing the capacity of long stays in space will accelerate Japan's progress in science and technology. The experiments we do in space will allow us to gain new knowledge and apply it to such fields as industry and medicine. And at the same time, it is also very important to acquire new technologies through research and development, and to build international relationships through cooperation. Cultural activities are another major objective. There is something about space that touches even people who are not interested in science. As part of its cultural initiatives, JAXA has invited public applications for the Space Poem Chain. This is a project for pondering the universe, the Earth and life, together with others, across borders, cultures, generations and specialties. Its first phase, from October 2006 to March 2007, gathered submissions from about 800 people aged 8 to 98, from Japan and overseas. Once the poem chain is complete, it will be recorded on a DVD, and will be taken to Kibo (which means "hope" in Japanese) by Astronaut Takao Doi in 2008, to be archived. The second phase, with the theme "There Are Stars", started in July 2007, and the poem chain is currently being compiled. It's also very meaningful for the Kibo mission that so many people think about human space activity from their own point of view, and depict this in the form of a poem chain. Q. What kind of experiments will be conducted on Kibo? And how will the outcomes benefit our lives? Payload racks in Kibo Pressurized Module (ground facility) Astronaut Chiaki Mukai performing an experiment on the Space Shuttle STS-95 (courtesy of NASA) The initial experiments will be about fluids and cells. These experiments aim to discover the nature of phenomena that cannot be observed under Earth's gravity. 
In the fluid experiments, we'll be observing the properties of convection under microgravity, in order to understand flow dynamics that are too difficult to observe on Earth. Even though we know this particular type of flow has an impact on the production process of semiconductor materials (i.e. crystals), its dynamics are still poorly understood. I believe this work will be key to developing new, better-quality materials. And I'm also anticipating that the results of this work - observing the complicated process of crystal growth from liquid - will help us reduce impurities and broken crystals in the process of material production. Once we have a better understanding of crystal growth, this knowledge can be applied in industry, for example in the development of new materials. And furthermore, since proteins are so essential to our bodies, we can apply discoveries about protein crystals to the development of new medicines. In the cellular experiments, various cells will be grown in space, analyzed at the molecular and cellular level, and compared to cells grown on Earth, in order to understand the impact of an environment of microgravity and cosmic radiation on living bodies. For example, we know that plants somehow perceive gravity - their roots grow downwards and stalks grow upwards. But how this works - which plant cells act as gravity sensors - is so far unknown. There are some hypotheses based on experiments carried out on the ground, but we are hopeful that space experiments will help us find a definitive answer. Understanding the mechanism of plant growth can benefit agricultural production. Also, we will be studying the cellular basis for variation in bone formation and conservation of muscle mass, in order to understand why long stays in space cause bone loss and muscle atrophy. This research will help decrease the impact of long stays in space on human beings. On the ground, it will help prevent muscle atrophy caused by immobility, and osteoporosis. Experiments with aquatic habitats are planned for 2010 or later. Japan has the best techniques to grow aquatic habitats in confined spaces, such as a space module. On Kibo, the impact of gravity on living organisms will be examined using the ancient Japanese Medaka fish (Japanese killifish). Gene decoding of the Medaka fish is quite advanced in Japan - we've decoded 90 per cent of its DNA. And we've learned that 80 per cent of its genes are identical to ours. So studying this fish's DNA further is expected to bring significant results in the field of medicine. For example, if we can isolate the gene that causes a particular disorder in the fish, we may be able to understand how a similar disorder is induced in humans. The advantage of working with Medaka fish is their rapid growth rate: three generations are born in just 90 days. We plan to grow Medaka on Kibo for more than 90 days, in order to observe how fish that have never experienced Earth's gravity grow and behave in space, and how their gene activity will vary. These will be very interesting experiments. On the exposed facility, the initial experiments will be X-ray astronomical observations. An all-sky survey of cosmic X-ray sources in outer space will be conducted, and the world will be notified of flare events, such as supernovas, when they are observed. We will also study the minor chemical components that deplete the Earth's stratospheric ozone layer. This will give us a better understanding of the level of destruction of the ozone layer. 
In addition to these already scheduled experiments, we are discussing other proposals for projects in collaboration with foreign scientists. There is definitely rising interest in scientific work in the vacuum of outer space.
<urn:uuid:70fcdc5b-fba5-4d0e-8d73-dc8d108487c9>
CC-MAIN-2016-26
http://global.jaxa.jp/article/special/kibo/tanaka01_e.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00012-ip-10-164-35-72.ec2.internal.warc.gz
en
0.944235
1,440
3.203125
3
Periodontal (Gum) Disease Causes, Symptoms, and Treatments If you have been told you have periodontal (gum) disease, you're not alone. An estimated 80 percent of American adults currently have some form of the disease. Periodontal diseases range from simple gum inflammation to serious disease that results in major damage to the soft tissue and bone that support the teeth. In the worst cases, teeth are lost. Gum disease is a threat to your oral health. Research is also pointing to possible health effects of periodontal diseases that go well beyond your mouth (more about this later). Whether it is stopped, slowed, or gets worse depends a great deal on how well you care for your teeth and gums every day, from this point forward. What causes periodontal disease? Our mouths are full of bacteria. These bacteria, along with mucus and other particles, constantly form a sticky, colorless "plaque" on teeth. Brushing and flossing help get rid of plaque. Plaque that is not removed can harden and form bacteria-harboring "tartar" that brushing doesn't clean. Only a professional cleaning by a dentist or dental hygienist can remove tartar. The longer plaque and tartar are on teeth, the more harmful they become. The bacteria cause inflammation of the gums that is called "gingivitis." In gingivitis, the gums become red, swollen and can bleed easily. Gingivitis is a mild form of gum disease that can usually be reversed with daily brushing and flossing, and regular cleaning by a dentist or dental hygienist. This form of gum disease does not include any loss of bone and tissue that hold teeth in place. When gingivitis is not treated, it can advance to "periodontitis" (which means "inflammation around the tooth.") In periodontitis, gums pull away from the teeth and form "pockets" that are infected. The body's immune system fights the bacteria as the plaque spreads and grows below the gum line. Bacterial toxins and the body's enzymes fighting the infection actually start to break down the bone and connective tissue that hold teeth in place. If not treated, the bones, gums, and connective tissue that support the teeth are destroyed. The teeth may eventually become loose and have to be removed. • Smoking. Need another reason to quit smoking? Smoking is one of the most significant risk factors associated with the development of periodontitis. Additionally, smoking can lower the chances of success of some treatments. • Hormonal changes in girls/women. These changes can make gums more sensitive and make it easier for gingivitis to develop. • Diabetes. People with diabetes are at higher risk for developing infections, including periodontal disease. • Stress. Research shows that stress can make it more difficult for our bodies to fight infection, including periodontal disease. • Medications. Some drugs, such as antidepressants and some heart medicines, can affect oral health because they lessen the flow of saliva. (Saliva has a protective effect on teeth and gums.) • Illnesses. Diseases like cancer or AIDS and their treatments can also affect the health of gums. • Genetic susceptibility. Some people are more prone to severe periodontal disease than others. Adapted from NIH Publication No. 02-1142 Reviewed by Francine Kaufman, MD. 4/08
<urn:uuid:7cad822e-eb74-4ab4-84b3-a1c83cd1e373>
CC-MAIN-2016-26
http://www.dlife.com/diabetes/complications/dental/periodontal-disease
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.936039
907
3.125
3
When Françoise Cochet saw the cord around her son's neck, she knew that he was dead. Fully clothed and still wearing his sneakers, 14-year-old Nicolas had strangled himself sometime after dinner in their apartment in Nice, France. His mother found him the next morning. "I shut the door so my other two children couldn't see, and I didn't touch the body," she says. "I thought that I couldn't live anymore. I thought I needed to die too." Because Cochet had left her son's body as she found it, police were able to rule out suicide. Instead, they determined that Nicolas had accidentally killed himself playing le jeu de foulard (the "scarf game," as it's known in France), a dangerous activity in which children starve their brain of oxygen to achieve a natural high. Known by various names around the world including funky chicken, space monkey, sleeper hold and the blackout, choking or fainting game the activity involves applying pressure to the neck to stop the blood flow to the brain and then releasing the pressure to create a temporary sense of euphoria. It isn't new: French medical books mention the scarf game as early as the 18th century, and deaths in Britain, Canada and the U.S. have occasionally made the headlines over the years. What is new and frightening is that teenagers are now uploading instructional videos to the Internet that glamorize the potentially deadly practice. "This is disturbing, highly dangerous, very risky, and the practice should be avoided at all costs," says Dr. Steve Field, chairman of the Royal College of General Practitioners in London. "You can have an epileptic fit, you can go into a coma and you can die." Many teenagers already are dying. Figures on choking-game deaths remain sketchy a lack of awareness among police means that cases often end up being classified as suicides. The Centers for Disease Control and Prevention in Atlanta estimates that at least 82 people died from the activity between 1995 and 2007. But according to the Wisconsin-based campaign group Games Adolescents Shouldn't Play (GASP), as many as 1,000 young people die in the U.S. each year playing some variation of the game. In France, officials identified 17 deaths in 2009, but they suspect that many more go unreported. The medical community remains divided over whether to publicize asphyxiation games. "There's a fear that if you raise awareness then other people will start to copy it," Field says. Last year, the medical journal Pediatrics reported that one-third of American doctors had never heard of the choking game and only 2% had ever discussed it with teenage patients or their parents. But it appears that many young people are finding out about the activity on their own potentially without being made aware of the dangers. In a study in the journal Injury Prevention published last February, nearly half of all students at eight schools in Texas said they knew someone who had participated in the choking game. The lack of preventive education alarms Cochet, founder and president of the Association of Parents of Young Victims of Strangulation in France. She believes that raising awareness about the game can save children from accidental death. It was only after police explained how her son Nicolas had died that Cochet began piecing together the warning signals she had missed. About six months before his death, he had told her about a "fun game. Then one day he had headaches. Another day I saw that he had marks on the edge of his neck," she says. 
"I saw all these things but didn't understand what they meant, because I didn't know that this game existed." Awareness will also help victims' parents overcome the stigma attached to having a child die this way. People frequently confuse the game with erotic asphyxiation, the sexual practice thought to heighten an orgasm. And they frequently assume that victims suffer from psychiatric conditions like depression. In fact, victims tend to be high-achieving students at school, active in sports and well-behaved, according to doctors and some victims' parents. "They aren't playing this game for sexual gratification," Field says. "It's to get a high without taking drugs." Following her son's death, Cochet and her family moved from Nice to Paris in an effort to move on with their lives. She remains committed to sparing other families from the grief she still lives with. In December, she helped France's Ministry of Health organize a symposium on the choking game, bringing together 200 doctors, physicians, teachers, policemen and bereaved parents from nine different countries. Her English isn't perfect, but when it comes to explaining the risks of choking, she speaks rather eloquently. "Our children are alone in their bedrooms," she says. "They're getting dizzy, and the great risk is that at any moment their hearts can stop." And when that happens, as Cochet knows all too well, a parent's heart stops too.
<urn:uuid:ea541139-bb29-49f1-9456-f68c6eac4ac9>
CC-MAIN-2016-26
http://content.time.com/time/health/article/0,8599,1953653,00.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.984875
1,021
2.625
3
QUOTE(firdausj @ May 19 2007, 05:00 AM) [snapback]2953885[/snapback] What is the definition of "Malay" in Malaysia ? In response to Firdausi's question: What is the definition of "Malay" in Malaysia? There is a joke in Malaysia: When is a Malay a 'Malay'? When he is not a 'non-Malay' ethnic Chinese or Indian!! And an Iban or Kadazan isn't a Malay or a non-Malay either but a 'Bumiputra' just like the Malays which means that ethnic Chinese and Indians are neither Malay nor Bumiputra!! Pretty confusing huh?? Seriously speaking ...a "Malay" in Malaysia is CONSTITUTIONALLY defined as: (1) a person who is a Muslim of the Sunni branch of Islam of the Shafie mazhah (school) of thought, (2) having both or one of the parents as Malays, (3) habitually living according to Malay culture and customs including having Malay as a mother tongue, and (4) parents or ancestors from any part of Malay world but having lived in Malaysia on or before Malaysia day. So under condition (1), one has to be a Muslim of the Sunni Shafie variety to be a Malay in Malaysia, meaning that Arab Muslims, Indian Muslims or Persian Muslims, for example, can't be Malays because they are not of correct Muslim specification under our Constitution Under condition (2), either one or both parents must be Malays. So, a a child of mixed Malay-Chinese, mixed Malay-Indian, mixed Malay-Arab, mixed Malay-Siamese marriages, etc, is still a Malay, of course other conditions being satisfied too. Under condition (3), one is a Malay only if he lives habitually according to what are prescribed as Malay customs, Malay adat istiadat, Malay culture, habitually speak Malay, has Malay as mother tongue, live as a Muslim,etc ... of course, other conditions being satisfied too. Under conditon (4), people of Malay race from regions that now form Indonesia, Singapore, Thailand, Philippines, etc, are Malays if on or before Malaysia Day (August 31 1965) they or their ancestors lived in Malaysia. So, technically, all Javanese,Bugis,Banjars,Boyan (Bawean), Kerincis,Mandailings, Minangkabaus, southern Thai Malays, Brunei Malays,etc are all Malays if they had been in Malaysia on or before Malaysia Day.... as long as as they are Muslims of Shafie Sunni sect. But Malaysia is unique .. it is only here that all ethnics of the Malay race willingly and proudly accept the label "Malay" for them regardless of whether they are ethnically Malay, Javanese, Minangkabau, Bugis, etc. In Indonesia, for example, I am not sure if an ethnic Javanese wants to be called a "Malay". Officially, there are no "Javanese" or "Bugis" or "Minang" or "Banjar" or "Boyan" in Malaysia .... for these people, only the label "Malay" is found on all their official documnets. Of course, people who come from Indonesia or Philippines to Malaysia TODAY are no longer accepted as "Malays" .. they are simply officially labeled as "Indonesians" or "Filipinos" in offical documents ... it is already past Malaysia Day ... so, they are constitutionally not "Malays" in Malaysia. Interesting facts about some Malaysians: The first Prime Minister of Malaysia (Tunku Abdul Rahman) was mixed Malay-Siamese (mother was Manjalara, a Siamese princess). The 2nd PM (Tun Abdul Razak) was ethnically Bugis. Ther 3rd PM (Hussein Onn) was mixed Malay-Arab-Turkish. His famous cousin, Ungku Aziz was even more complicated .. mixed Malay-Arab-Turkish-English!! The 4th PM (Mahathir Mohammed) is mixed Malay-Indian (his grandpa came from Kerala, India and married a Malay woman to give birth to his father, Mohammed). 
The present and 5th PM (Abdullah Badawai) is mixed Malay-Chinese (his grandpa was a Chinese Muslim from Hainan, China who married a Malay woman to give birth to his mother Kailan). Abdullah Badawi's late wife, Endon, was mixed Malay-Japanese (her mother Japanese, father Malay). Musa Hitam, Malaysia's former deputy PM was mixed Malay-Chinese (mother a Chinese). He married a Peruvian woman, so his children mixed Malay-Chinese-Peruviano. Selangor State current Chief Minister (Khir Toyo bin Toyo) and Johor State current Chief Minister (Abdulah Ghani) are Malays of Javanese ethnicity. The majority of Malays in Negri Sembilan State are ethnic Minangkabaus who still practice Adat Perpatih and their Sultan/Yang Dipertuan Besar descended from Pagarruyung in Sumatra. The Sultans of Selangor and Johor States are Malay Sultans of Bugis ethnicity. P.Ramlee, Malaysia's famous anak seni (artist) was partly Acehnese (father, Teuku Nyak Puteh, an Acehnese from Aceh) and partly Malay (mother, Che Mah Hussein, a Malay).
<urn:uuid:64720a27-4ae8-4265-854f-60636c99c193>
CC-MAIN-2016-26
http://www.asiafinest.com/forum/lofiversion/index.php/t120602.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943063
1,203
2.5625
3
XFiles Friday: The Ciporhtna PrincipleJanuary 11, 2008 — Deacon Duncan (Book: I Don’t Have Enough FAITH to Be an ATHEIST, by Geisler and Turek, chapter 4) Yes, if you hadn’t figured it out, the Ciporhtna Principle is the Anthropic Principle, backwards. I thought it was a suitable intro to this week’s topic. Geisler and Turek make quite the fuss about the Anthropic Principle and how it allegedly proves that God exists. The trouble is, they’ve got it exactly backwards. The term “Anthropic Principle” is one that has only been around since the early 1970’s, and has generated a fair amount of controversy and conflicting definition. But the true significance of the Anthropic Principle is this: the fact that we are here observing the cosmos means that we already know something about the characteristics of physical reality, namely, that the conditions which exist here are consistent with the fact that we exist and are able to observe the universe. That may seem like a “well, duh” kind of observation, but that’s really all there is to the Anthropic Principle. At any given point in the history of science, man may not know the correct value for any number of universal constants, but the fact that we’re here studying those constants is enough to tell us that when we do find them, we’re going to find that they’re consistent with our existence. It all boils down to the core principle of all science: truth is consistent with itself. If we existed and discovered that the physical constants of the universe made our existence impossible, that would be a contradiction. We know that truth is self-consistent, and therefore we know that the constants of physical reality, and other conditions, are going to be the kind of constants and conditions that allow for us to exist. Otherwise, we wouldn’t be here. So we can expect that the constants of the universe will prove to fall within some range that is conducive to human evolution–it’s a known factor, because we’ve already observed that we do exist. The thing is, these physical conditions cause us to exist. We are the result of physical processes operating under the constraints of physical laws. It’s like Douglas Adams’s mud puddle: the shape of the puddle is determined by the shape of the cavity in which it pools. Suppose the puddle could think, and said to itself, “My, isn’t it remarkable that this cavity is exactly the same shape as I am! Some intelligent designer must have known what shape I was going to be, and carved this cavity into exactly my shape, in order to produce such a perfect fit!” We’d laugh at the simple, ego-centric naïveté of such a superstitious conclusion, but that’s pretty much the argument Geisler and Turek are appealing to. Man is perfectly adapted to his environment, and such a perfect fit could only come about if an Intelligent Designer specifically crafted it to suit Man. Anthropic Constant 1: Oxygen Level—On earth, oxygen comprises 21 percent of the atmosphere. That precise figure is an anthropic constant that makes life on earth possible. If oxygen were 25 percent, fires would erupt spontaneously; if it were 15 percent, human beings would suffocate. Geisler and Turek have cause and effect mixed up. The reason humans are adapted to an atmosphere with 21 percent oxygen is because human life evolved under conditions where oxygen makes up about that percentage of the atmosphere. 
Geisler and Turek, however, think that oxygen concentration is determined by some kind of constraint that requires the atmosphere to conform to the needs of modern humans. “Isn’t it marvelous,” they say, “how perfectly our location has been carved out to fit exactly our shape.” Well, no, it’s exactly what you would expect from a mechanism like evolution that favors variations that are better adapted to their environment. We are best suited to an atmosphere with 21% oxygen because that’s the atmosphere that was there when we evolved, and we were adapted to it. Geisler and Turek’s argument here is silly on a number of levels. Current research indicates that in prehistoric times, oxygen was actually quite a bit lower than 15% (in fact, probably 0.1% or less). The emergence of life was likely the dominant factor in changing the balance of oxygen in the atmosphere, so in a practical sense there’s not much chance the end result would turn out much different than it did. Oxygen levels are both the prerequisite for, and the by-product of, modern biological processes; after a couple billion years, the process has more or less stabilized and the figure we have now represents an equilibrium level. It’s not at all surprising that life, evolving under those conditions, would end up matching the available oxygen levels. Secondly, suppose for some reason the oxygen level were 25%. Would everything really burst into spontaneous combustion? Remember, early life arose in the sea, where combustion is not really a problem, even if atmospheric oxygen were twice as high. So what would burn? As life emerged from the seas into the oxygen-rich atmosphere, it would need to adapt to those oxygen levels in order to survive. Spontaneous combustion is not a viable evolutionary strategy in most cases, so odds are that plants would evolve in ways that made them more resistant to catching fire. Anything that spontaneously combusted would quickly go extinct, leaving the less-flammable alternatives freer to compete for the remaining resources. Lastly, by the way, the percentage of oxygen in the atmosphere is not a constant. Did I forget to mention that? It’s a variable that currently has a fairly stable value, but it hasn’t always been what it is now, and there’s no guarantee it won’t change in the future. Anthropic Constant 2: Atmospheric Transparency—…The degree of transparency of the atmosphere is an anthropic constant. If the atmosphere were less transparent, not enough solar radiation would reach the earth’s surface. If it were more transparent, we would be bombarded with far too much solar radiation down here. I’m not entirely sure how Geisler and Turek think the atmosphere could become more transparent. Less cloudy, perhaps? Ever been sunburned on an overcast day? Regardless, this is another case of Geisler and Turek reversing cause and effect. We already knew that we were going to find that the earth’s surface was receiving enough solar radiation to support human life in a viable ecosystem. If it didn’t, we wouldn’t be here. But again, Geisler and Turek’s argument is that our existence somehow caused the atmosphere to acquire the “correct” amount of transparency to support human life. God supposedly knew how much sun we were going to need, and tweaked the atmosphere until it had just the right amount of transparency to support us. 
The only trouble is, if you look back at the earth’s geological and meteorological history, it’s difficult to see where God ever did anything to make things turn out any differently than they would have just via the ordinary operation of natural physical laws and processes. As with all superstitious attributions, Geisler and Turek want to give God the credit for “fine-tuning” the earth’s atmosphere to some predetermined value, but they can’t really show any actual connection between any action of God’s and the atmospheric transparency we see today. They can’t even describe what such a connection would consist of, beyond a magical “poof, air is now transparent.” Geisler and Turek’s “proof” of God is simply and literally the opposite of science. Anthropic Constant 3: Moon-Earth Gravitational Interaction—…If the interaction were greater than it currently is, tidal effects on the oceans, atmosphere, and rotational period would be too severe. If it were less, orbital changes would cause climatic instabilities. In either event life on earth would be impossible. Again, Geisler and Turek aren’t too clear about what, exactly, they’re talking about. Do they mean if the moon were a different distance from the earth? Do they mean if the acceleration of gravity were any stronger/weaker than it is now? They don’t really say, just a wave of the hand and the claim that “life would be impossible.” Then quick! a distracting re-telling of the Apollo 13 drama (a main feature of this chapter, btw—get the reader all emotional over the plight of the Apollo 13 crew, then toss in a claim or two before quickly returning to Apollo 13 before the reader has a chance to think). Well, let’s summarize their remaining “constants”: the carbon dioxide level (another non-constant, biologically-influenced variable, just like oxygen), and the acceleration of gravity. If the gravitational force were altered by 0.000000000000000000000000000000000000001 percent, our sun would not exist, and therefore neither were we. Right, and the reason it’s called a gravitational constant is precisely because you can’t alter it by that much, nor by any smaller non-zero amount. It’s a constant. Things that can be altered are called variables. Gravity, one of those constants that has been around for all of time, was not caused, because there was no time prior to the existence of the physical universe in which anything could have happened to cause it. It’s silly, therefore, for Geisler and Turek to act like the gravitational constant was somehow deliberately tuned, as though there was some point in time when gravity either did not exist or did not have its present value. But even if gravity could some how be “caused” (in the absence of any time in which to cause it), Geisler and Turek are still getting cause and effect backwards. It is the physical universe, and its constants and laws and conditions, which cause us to exist and observe the cosmos; it is not our existence which causes the universe to have the constants and laws and conditions we require. When used as an Intelligent Design argument, the Anthropic Principle (or at least a distorted version of the Anthropic Principle) becomes superstitious nonsense: a nonsensical attempt to define a cause where no causes are possible, and an attempt to superstitiously ascribe modern conditions to those purported causes, in the complete absence of any way to show any real connection between the alleged “cause” and the observed effects. 
Indeed, the hallmark of modern ID theory is not only the inability to describe what such a connection would look like, but the deliberate and stubborn refusal to even try to come up with such a description. Geisler and Turek’s anthropic argument is nothing less, and nothing more, than a bald-faced appeal to naive superstition. They cannot describe how the universe was allegedly “fine-tuned,” and they don’t even try to show how the cosmos would be different in the absence of divine intervention. There’s no point in saying “IF things were different, we wouldn’t be here” unless you can show that things would have been different without God. That would be the meat of Geisler and Turek’s argument, if they could do it. But just like the atheists, they know it would be pointless to try. Hence the need for an Apollo 13 story to keep the reader amused, distracted, and above all spectating rather than thinking. The physical laws of the universe are what they are, and they’ve made us what we are, and there has never been any point in time when any outside factor could have changed their value to something different. PS — It’s interesting that Geisler and Turek list (as point #4 on page 105) the amount of water vapor in the atmosphere as being a value critical to human survival. If water vapor levels in the atmosphere were greater than they are now, a runaway greenhouse effect would cause the temperatures to rise too high for human life. Global warming means higher temperatures; higher temperatures mean more evaporation; more evaporation means higher levels of water vapor in the atmosphere. You’d think conservative Christian apologists like Geisler and Turek would be a bit more concerned about global warming, under the circumstances, wouldn’t you? Alas, for them the water vapor level is an “anthropic constant,” so they’re not likely to believe it even can change. Apologetics is an area that does have the potential to have a (possibly lethal) impact on real life.
<urn:uuid:c8683050-09cc-4242-bd95-c5033874e53d>
CC-MAIN-2016-26
http://blog.evangelicalrealism.com/2008/01/11/xfiles-friday-the-ciporhtna-principle/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962078
2,740
2.609375
3
Opiates are drugs derived from the opium poppy plant found both in the natural form, like the street drug heroin, and in a chemically engineered form, like in opiate pain medication. People often abuse opiates for many reasons including ease of obtaining them and the pleasurable side effects of the drugs. These immediate effects of opiate abuse are the reason that addiction usually forms. When addiction begins, it is strong and swift. Opiates create an initial rush or euphoric state from the drug because the brain allows endorphins—the feel good chemicals in the brain—to flood the body. This is what can lure occasional users to more intense use, and addiction quickly develops when the effects create a sense or normality. This is called physical dependence. Immediate effects of opiate abuse include the following: - Extreme relaxation - Decreased sensation of pain - Decreased sexual drive Effects of Opiate Abuse on the Brain Opiates bind to specific receptors in the brain, called neurotransmitters. They control moods, movements and physiology. The physiological effects of opiate use include changes in digestion, body temperature and breathing. When people take opiates, it causes the neurotransmitters in the brain to fire at a high rate that would normally only occur during times of extreme stress. The body gets used to this process and becomes physically dependent on the drugs. Sudden withdrawal of the drug becomes a major shock to the nervous systems, and for chronic users, it may result in a fatality. Long-term Effects of Opiate Abuse Opiates can have damaging affects on the body, and the complications only compound with prolonged use. Long-term abuse of opiates can have serious and severe consequences, which can include the following: - Infections of the heart lining and valves - Abscesses (with injection usage) - Collapsed veins (with injection usage) - Damage to the liver, lungs and kidneys All of these conditions should be considered severe and require oversight from a physician to treat. Effects of Opiate Abuse on Pregnant Women Pregnant women can also suffer extreme effects if they are abusing opiates. These effects include the following: - Spontaneous abortions - Breech deliveries - Premature births Infants who are born to opiate addicted mothers would suffer similar withdrawal effects to that of an adult. An opiate is a sedative narcotic containing opium or one or more of its natural or synthetic derivatives such as heroin, morphine and codeine. Any of these drugs work to dull the senses and provide relaxation. When taken correctly, they are used as painkillers and can be helpful as a prescribed method of coping of pain. With extended usage, opiates can become quickly addicting once a tolerance is built and physical dependence is established. In the culture of substance abuse, opiates have a rapidly growing population of users. Opiates can be injected, smoked, taken orally in pill form or snorted. Opiates in any form are dangerous because of their addictive qualities and should only be taken under the supervision of a medical professional. Opiate Addiction Help The severe effects of being an opiate abuser should be recognized as soon as possible. If you or someone you know is suffering as an opiate abuser, contact our toll-free helpline at (888) 858-5708. Call 24 hours a day to see what the best steps are to break the addiction. We can help you overcome your addiction. Please call now.
<urn:uuid:af503b51-b72a-47f8-99f3-7c3064ec35d5>
CC-MAIN-2016-26
http://www.opiaterehabtreatment.com/effects-of-opiate-abuse
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00092-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95324
719
3.296875
3
Fact Sheet: President Bush Is Addressing Climate Change President Bush is dedicated to climate change policies that grow economies, aid development, and improve the environment. The President promotes technological innovation to achieve the combined goals of addressing climate change, reducing harmful air pollution and improving energy security in the U.S and throughout the We have an ambitious and realistic goal: In February 2002, President Bush committed to cut our nation's greenhouse gas intensity -- how much we emit per unit of economic activity -- by 18 percent through 2012. We are making real and accelerated progress: The President's goal amounts to an annual 1.95-percent cut in emissions intensity. In 2003 alone, U.S. intensity declined by 2.3 percent. Preliminary figures for 2004 suggest even greater reductions in emissions intensity during a period of robust economic growth. We are pursuing a balanced approach to overcome poverty with policies that protect the environment while promoting development and economic The President knows that overcoming extreme poverty goes hand-in-hand with improving the environment. Stagnant economies are one of the greatest environmental threats in our world, because people who lack food, shelter, and sanitation cannot be expected to preserve the environment at the expense of their own survival - and poor societies cannot afford to invest in cleaner, more efficient technologies. The long-term answer to environmental challenges is the rapid, sustained economic progress of poor nations. And the best way to help nations develop, while limiting pollution and improving public health, is to promote technologies for generating energy that are clean, affordable, and secure. Some have suggested that the best solution to environmental challenges and climate change is to oppose development and put the world on an energy diet. But at this moment, about two billion people have no access to any form of modern energy - and blocking that access would condemn them to permanent poverty, disease, high infant mortality, polluted water, and polluted air. The President said that we are taking a better approach. We know that the surface of the Earth is warmer, and that an increase in greenhouse gases caused by humans is contributing to the problem. Though there have been past disagreements about the best way to address this issue, we are acting to help developing countries adopt new energy We are taking action: The President has launched a broad portfolio of domestic and international initiatives to develop and deploy new technologies through a broad range of programs, including: Short Term - NOW Midterm - 2010-2020 Hybrid or Clean Diesel Vehicles Hybrid/Clean Diesel Vehicles Clean Coal Efficiency Clean Coal Gasification Energy Efficiency Standards Zero Energy Homes and Bulidings Renewable Fuel Standard Nuclear Plant Relicensing Enhanced Oil Recovery Methane to Markets* Federal Facility Management Plan Fuel Economy Standards Wind, Solar Tax Incentives *Denotes International Partnership We are providing record funding for climate change programs: The Bush Administration will have spent over $20 billion by the end of 2005, more than any other nation. $5.5 billion is proposed for climate change activities in 2006. The President has also proposed $3.6 billion in tax incentives over 5 years to spur use of clean, renewable, and energy-efficient technologies. 
These Federal programs are only part of the effort, as they are also leveraging billions of dollars in private investment. We are guided by the following principles at the G8 and beyond: We have shared goals, and our areas of agreement are numerous. Climate change is a serious long-term issue, requiring sustained action over many generations by both developed and developing countries. Developing innovative technologies that are cleaner and more efficient is the key to addressing our climate challenge. The greatest progress will be assured by a cooperative effort that combines our strategies with the best strategies of other nations to improve economic and energy security, reduce harmful air pollution, and reduce greenhouse gases. The President firmly believes that economic growth is essential to success. Only economic growth provides the resources for investment in the next generation of cleaner, more efficient technologies. We oppose any policy that would achieve reductions by putting Americans out of work, or by simply shifting emissions from one state to another, or from the U.S. to another country. Like us, developing countries are unlikely to join in approaches that foreclose their own economic growth and development. The President's approach draws upon the best scientific research, harnesses the power of markets, fosters the creativity of entrepreneurs, and works with the developing world to meet shared aspirations for our people, our economy, and our environment.
<urn:uuid:a950939c-33f4-4655-8c7f-fec75a5652db>
CC-MAIN-2016-26
http://georgewbush-whitehouse.archives.gov/news/releases/2005/06/print/20050630-16.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.910311
989
3.03125
3
Neisseria meningitidis is a leading cause of bacterial meningitis and other invasive bacterial infections worldwide. The role of the meningococcus as a cause of bacterial meningitis has become more pronounced in recent years with the declines in meningitis caused by Haemophilus influenzae type b and Streptococcus pneumoniae because of the introduction of new conjugate vaccines. The data available on the background incidence of meningococcal disease in India suggest a low incidence of the disease. N. meningitidis is a gram-negative diplococcus that is a strict human pathogen. It most commonly causes asymptomatic nasopharyngeal carriage but on occasion causes invasive disease. Clinical syndromes caused by N. meningitidis include meningitis with or without meningococcemia, bacteremia, fulminant meningococcemia, pneumonia, and septic arthritis. The case fatality is around 12% but varies widely by clinical presentation. The biochemical composition of the polysaccharide capsule determines the serogroup of the strain. Although there are 13 different polysaccharide types, only 5 (A, B, C, W-135, and Y) are common causes of invasive disease. However, Group A meningococcus is the cause of all the major Indian epidemics. Meningococcal vaccines. Polysaccharide vaccines: A meningococcal polysaccharide (MPS) vaccine containing antigens of serogroups A, C, Y, and W-135 (MPSV4) has been in use for the past 20 years. This vaccine protects against the serogroups that cause approximately two-thirds of meningococcal disease that occurs in people 18 to 23 years of age. Because group B polysaccharide is poorly immunogenic in humans, group B vaccines cannot be based on capsular polysaccharide. All four components of MPSV4 have been shown to be immunogenic in adults and older children. Serogroup A polysaccharide has some immunogenicity as early as 3 months of age, albeit not as much as in older children and adults, and serogroup C polysaccharide is poorly immunogenic in children under 2 years old. There are limited data on the duration of protection of MPSV4. Serum antibody levels decline significantly in infants and children under 5 years old, but in healthy adults antibodies can still be detected after 10 years. However, clinical protection wanes over time in both children and adults. Despite the utility of MPSV4 in selected populations, there are major limitations that have restricted its widespread use, including lack of immunogenicity in infants, lack of immunologic memory and booster response, and relatively short duration of protection. Meningococcal polysaccharide vaccines have been shown to induce immunologic hyporesponsiveness. In this phenomenon, the antibody response in persons previously immunized with the meningococcal polysaccharide vaccine is less than that in persons receiving a first dose. Adverse reactions, such as injection site pain and erythema, are common but usually mild. Transient fever can also occur in up to 5% of vaccinees. Severe adverse reactions are rare. Meningococcal conjugate vaccines: Meningococcal conjugate vaccines, in which meningococcal polysaccharide is covalently linked to a carrier protein, are typically T-cell dependent, which confers major immunologic improvements over polysaccharide vaccines. One of the major advantages is immunogenicity in infants, which protects the age group with the highest incidence.
Other advantages include induction of immunologic memory, a booster response, and the ability to overcome the immune hyporesponsiveness that is induced by the polysaccharide vaccine. Serogroup C meningococcal conjugate vaccines also reduce carriage of N. meningitidis in the nasopharynx, which leads to a decrease in transmission to unvaccinated persons. Meningococcal C conjugate vaccine (Men C): It has been routinely used in the UK, where infection due to serotype Y is minimal and most invasive disease is due to serotype C. A schedule of doses at 2, 3, and 4 months of age was introduced into the routine infant immunization schedule and, in addition, there was a catch-up campaign among children under 18 years old. In the 2 years after introduction of the infant meningococcal conjugate vaccine, the incidence of serogroup C meningococcal disease declined by 87% in vaccinated people. There has been an approximately two-thirds drop in nasopharyngeal carriage of serogroup C N. meningitidis among 15- to 17-year-olds, as well as a similar reduction in serogroup C meningococcal incidence among the unvaccinated population of all ages, due to herd immunity. However, clinical protection was no longer present in infants immunized at 2, 3, and 4 months of age when assessed more than 1 year after immunization, despite evidence for continued protection in all other age groups that were targeted for immunization. Thus the role of immunologic memory seems doubtful. Quadrivalent meningococcal conjugate vaccines: In January 2005, a quadrivalent meningococcal conjugate vaccine (MCV4) was licensed by the U.S. Food and Drug Administration (FDA) for 11- to 55-year-olds. A single dose of MCV4 contains 4 mcg each of the A, C, Y, and W-135 polysaccharides conjugated to 48 mcg of diphtheria toxoid. During prelicensure clinical trials, antibody levels after vaccination with MCV4 were at least as high among adolescents and adults as responses to MPSV4. In 2011, ACIP recommended a two-dose series of this vaccine for use in children aged 9-23 months. Interference with PCV13 immune responses was noted when MCV4 and PCV13 were administered simultaneously in patients with asplenia. Hence, it is now recommended that an interval of at least one month be kept between PCV13 and MenACWY-D, and that PCV13 be administered first. Monovalent meningococcal A conjugate vaccine (Men A): The first monovalent meningococcal A conjugate vaccine was launched in Burkina Faso in Africa on 6th Dec 2010. The vaccine is recommended for children aged 1 year, adolescents, and adults up to 29 years of age for the prevention of invasive disease caused by Neisseria meningitidis Group A. A single dose of 0.5 ml should be administered by deep intramuscular injection. Six months after the successful introduction of MenAfriVac, the meningococcal A conjugate vaccine, Burkina Faso, Mali, and Niger report the lowest number of confirmed meningitis A cases ever recorded during an epidemic season. Indian Academy of Pediatrics (IAP) recommendations for meningococcal vaccine: The IAP recommends the use of meningococcal vaccines only in certain high-risk situations, depicted below. The conjugate vaccines are preferred. - During disease outbreaks, if caused by serogroups included in the vaccine - Children with terminal complement component deficiencies. - Children with functional/anatomic asplenia/hyposplenia - Laboratory personnel and healthcare workers who are exposed routinely to Neisseria meningitidis in solutions that may be aerosolized should be considered for vaccination.
- Travelers to Saudi Arabia for Haj (mandatory requirement) - Travelers to the African meningitis belt particularly between December to June and especially if there is an ongoing epidemic. - Students going for study abroad (mandatory in most universities in the USA)
<urn:uuid:d19e6d93-2a8b-4f16-bf9a-04f5a79f8892>
CC-MAIN-2016-26
http://www.pediatriconcall.com/forpatients/vaccination/article.aspx?artid=398
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93784
1,640
3.359375
3
Today, Texas auf Deutsch. The Honors College at the University of Houston presents this program about the machines that make our civilization run, and the people whose ingenuity created them. I talked in a previous episode about Frederick Law Olmsted’s travels throughout Texas in the 1850s. Olmsted was the founding father of American landscape architecture, but his travel narratives of the south were an important contribution to the debate on slavery, which raged before the Civil War. To my surprise, I found a German edition of his Texas travelogue published in Leipzig in 1857. That’s the very same year as the first American edition. Olmsted’s accounts were also published in London because the British followed the American slavery debate with intense interest. But why was a German edition so quick to come out? Olmsted camping in Texas with his brother. From A Journey Through Texas (1857) The answer is, because Texas in the 1850s was a very German place, and Germans loved to read about it. A group of German noblemen had organized immigration to Texas in the 1840s, initially with catastrophic results. The death toll among the first wave of settlers was appalling, due to poor timing and provisioning, and unrealistic expectations. But the Germans quickly recovered and built thriving centers at New Braunfels and Fredericksburg—both named for noblemen who were a part of the immigration scheme. According to Olmsted’s figures, Germans comprised nearly a third of the population in towns like San Antonio; they were the dominant group in many counties in the Hill Country. But numbers weren’t the reason for Olmsted’s interest in the Germans. The thriving German communities were his standing argument against the slave-plantation economy that was encroaching upon the Texas frontier. They were the shining example of what free labor and devotion to democratic ideals could accomplish in the new land. Many of the Germans Olmsted encountered were former revolutionaries—but not from the Texas Revolution. They were refugees from the great European Revolutions of 1848; they’d fled Europe after the forces of reaction set in. Some of them were highly educated, yet forced by circumstances to build a life in the rugged conditions of the frontier. Olmsted writes about the “bizarre contrasts” of these settlers, “You are welcomed by a figure in a blue flannel shirt and a pendant beard, quoting Tacitus, having in one hand a long pipe, in the other a butcher’s knife;…coffee in tin cups upon Dresden saucers; barrels for seats, to hear a Beethoven’s symphony on the grand piano…a book-case half filled with classics, half with sweet potatoes.” Title Page of Olmsted's Wanderungen Durch Texas und im Mexicanishen Grenzlande, 1857. Courtesy of Special Collections, University of Houston Libraries Olmsted was greatly impressed by these principled people who labored under harsh conditions. They remained devoted to the high ideals they’d fought for in Europe, even though they’d lost all their wealth and position. For a time, he even worked together with German radicals to promote the idea of creating a free state in West Texas to check the tide of slaveholding in the new territories. This didn’t happen, of course; but it’s thrilling to imagine such startling ideas tossed about in the Wild West. In 1857, Germans in Europe were still only dreaming of a democratic nation. But German Texans were already part of a great and dangerous experiment in radical freedom. 
I’m Richard Armstrong, at the University of Houston, where we’re interested in the way inventive minds work. Frederick Law Olmsted, Wanderungen durch Texas und im mexicanischen Grenzlande. Published in the series Hausbibliothek für Länder- und Völkerkunde, vol. 13, edited by Karl Andree. Leipzig, Carl B. Lorck, 1857. Karl Andree (1808-1875) was a German nationalist with an intense passion for geography, “scientific ethnology,” and especially for North America—hence his eagerness to publish Olmsted’s book. This German edition is historically interesting in that he sustains throughout an argument with Olmsted about the future development of non-white races, in which Andree is engaging in pre-Darwinian racism (he sees different ethnic groups as separate species with fixed natural characters). He dismisses Olmsted as a naïve abolitionist and “philanthropist.” One of the radicals Olmsted befriended was Adolf Douai, publisher of the anti-slavery newspaper The San Antonio Zeitung. He later was forced to move east, where he remained politically active. See the article in the Handbook of Texas Online. On the original German emigration scheme of the Adelsverein, see the article in the Handbook of Texas Online. Also see Episode 2497. The Engines of Our Ingenuity is Copyright © 1988-2009 by John H.
<urn:uuid:99d1c146-6219-4d7a-a5e6-e3673597f484>
CC-MAIN-2016-26
http://www.uh.edu/engines/epi2499.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00069-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954976
1,084
3.25
3
This article will be longer than the above statement because that definition is not agreed upon, and the most prominent advocacy groups, conflate the term basic income with a larger set of policy alternatives (mainly Guaranteed income) which though they may have similar appeals and motivation, are economically incoherent and thus objectionable, and destined to fail any legislative attempt. Groups that correctly describe basic income The Basic Income Earth Network defines basic income perfectly, and the link provides organized responses to many questions. Citizen's Income is another word for basic income. I am unsure who coined it. It is a great word because it reflects the core philosophy of basic income. A benefit entitlement to citizens rewarding them for their participation in society whether or not that participation is limited to consumption. The commitment of society to its citizens can hopefully motivate additional contributions to society from its citizens. Unfortunately, some groups include programs with other philosophies (guaranteed income) under a citizen's income "brand". If I were allowed to name the umbrella of policies that include basic and guaranteed income, I would call it "populist income" I define programs that follow citizen's income natural philosophy to be basic income, and (my definition of) social dividends. Social dividends is the surplus of tax revenue over social expenses that is, as it should be, distributed equally to all citizens. Basic income is the fixed component of citizen's income, while social dividends is a variable entitlement based on the success of the economy, and any government efficiencies or program cuts. Social dividends ensure that government administration is diligent and purposeful, because every citizen pays equally for any program (because the alternative to any program is distributing its cost equally to citizens). Groups that corrupt the definition of basic income USBIG (Basic Income Guarantee)'s definition "a government ensured guarantee that no one's income will fall below the level necessary to meet their most basic needs for any reason." is unfortunately Guaranteed income, and not basic income. They see themselves as an umbrella group for all populist income proposals. Guaranteed income of $15000 vs Basic Income of $10000 Guaranteed or minimum income of $15000 means that every eligible recipient receives a socially funded cheque equal to ONLY the difference between their other income sources and $15000. So, they receive nothing if their income is $15k or more, receive $1k if their other income is $14k, and receive $15k if they have no other income. To understand why basic income and guaranteed income are drastically different, in the context of work: - Basic income (of $10k) is identical to giving every full time (40 hour/week) worker a $5/hour raise, and every half-time worker a $10/hour raise. - Guaranteed income (of $15k) reduces every full time worker wages by at least $7.50/hour, and every half-time worker wages by at least $15/hour. In exchange for a $15k payment. The naive and misguided appeal of Guaranteed Income Would you rather receive $15k for not working or $10k for not working? A citizen's income should not provide a rational incentive for people to refuse work, and especially not even part time work. The only rationale for preferring Guaranteed income over basic income is if you intend to refuse all work. 
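To make the arithmetic of that comparison concrete, here is a minimal Python sketch (my illustration, not part of any advocacy group's material). It assumes a 2,000-hour work year, a $10k basic income, and a $15k guarantee clawed back dollar-for-dollar against earnings; taxes are ignored.

BASIC_INCOME = 10_000      # paid to every citizen unconditionally
GUARANTEE_LEVEL = 15_000   # top-up only: pays the gap between earnings and $15k

def total_with_basic_income(earnings):
    # Every earned dollar adds to total income; the grant never shrinks.
    return earnings + BASIC_INCOME

def total_with_guaranteed_income(earnings):
    # The top-up shrinks dollar-for-dollar with earnings (a 100% clawback).
    return earnings + max(GUARANTEE_LEVEL - earnings, 0)

for earned in (0, 5_000, 10_000, 15_000, 25_000):
    print(earned, total_with_basic_income(earned), total_with_guaranteed_income(earned))

Running it shows that under the guarantee the first $15,000 of earnings leaves total income unchanged at $15,000, which is exactly the implicit wage cut described above, while under basic income total income rises with every dollar earned.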
Guaranteed income, if we assume no one refuses work, can appear to be more affordable because fewer people are eligible, and most people might not receive the full amount (because their other income reduces their stipend). The problem here is more complex (dealt with shortly), but the presupposition that no one refuses work is false, as it directly makes many work propositions unacceptable and irrational. The forecasted costs of guaranteed income are impossible to be accurate without certainty of the rate at which people might refuse work. Guaranteed income is a stupid idea designed to sabotage populist income policies Refusing work is rational if the work will pay $20k or $25k per year (regardless if that income is earned with part time hours). $15k for doing nothing, means no commuting and lunch expenses, and more free time, autonomy, and less stress. If we are not born qualified for a full time $25k+/year job, then it discourages training and working for the experience required to be qualified for a $25k+/year job. If you can make $10k tax free by stealing or traficking, then that may be more appealing than a $25k/year official-economy job. You make as much when adding the $15k government stipend. With a $15k guaranteed income, employers will prefer to pay you a small amount of cash instead of a salary. That could even apply to well paying jobs, as you would gain more total benefits at a lower cost to the employer. This would harm the tax base used to fund the guaranteed income. People with incorporated businesses would instead of paying themselves, pay their children, or pay themselves $45k+ every 3 years, and get the guaranteed income the other 2 years. People with investments can have negative income. Even if Guaranteed income were limited to a maximum of $15k, then there are already extremely risky investment alternatives equivalent to betting on a roulette spin in a casino. If I had earned $15k already that year, I would bet red on a roulette wheel. Red I win another $15000, black I get whatever I lost back from the Government. Other investments within the tax code include losing $15k this year, in hopes of earning more in following years. For seasonal workers such as teachers, fishermen, and farmers, earning $5k/month for 4 months would provide them with no economic benefit whatsoever, unless they are confident in finding work outside of those 4 months. Similarly, people starting a job search in the middle of the year would be very unlikely to make money from employment. There would be an attractiveness to develop schemes where people with high incomes appear to lose and transfer all of it in order to get an extra $15k. All of the above are perverse incentives that would ruin the tax base that Guaranteed income relies on to be feasibly fundable, and especially ruin economic participation and activity. The gross stupidity of guaranteed income is the same obvious abuse that too-big-to-fail banks can make through bets where heads they win big, tails we (society) reimburse any losses they "sufferred". Basic income is the freedom to do anything Basic income provides freedom from slavery by equalizing the bargaining power of parties in the labour market, and prevents generational theft of providing retirement benefits to the current generation of seniors but not funding future generations' retirement. Basic income contains no disincentive for work, and through the spending it adds to the economy, creates significant work opportunities. 
What is likely the greatest benefit of all of social dividends is its power to make sure everyone pays an equal amount for any social service. It is too easy for citizens to resign themselves to the government taking as much money as it can from society in order to pay for whatever it wants. A viable alternative to any program/cost must be an equivalent cash payment to citizens. Basic income as a great benefit to labour While guaranteed income would provide organized labour with the huge benefit of forcing extremely high wages in order to make people want to work, and so fewer people in the labour force, and much better bargaining position for wages, guaranteed income would destroy the economy by reducing work too much, and having an unsustainable funding requirement. It will never be adopted because of this, and is merely a ploy to get political campaign funding. Basic income may be a smaller beneft to labour, but it is still the same benefit. Some people will work less, and thus improve labour's bargaining position. Increased spending means there will be more work available to do, further benefitting sellers of that work. If more people have the freedom to pursue education or business startups, then that further enhances the competitive position of those who want to work now. Basic income is sustainable, with predictable funding requirements, and is not a power grab for any constituency. Balancing life quality away from the 1%, while still benefiting them along with everyone else, is a win for labour too. Basic Income should be a taxable benefit The only controversial point among real basic income proponents is whether the benefit should be taxable. It should be because it suits both of the 2 core philosophies of basic income. The first philosophy of basic income is that it is an anti-poverty program. If it is a taxable benefit, then at whatever taxation policies, a higher amount of pretax basic income can be afforded than if it is non-taxed (say $10k/year taxed vs $8k/year non taxed), and the poor will keep a larger portion of the benefit compared to the very rich. The 2nd philosophy of basic income and social dividends is that it is a rightful benefit of participating in society. Your existence, through consumption, benefits society fueling its work and purpose. Your duty to pay back a portion of your benefits from society should not be affected by the source of those benefits. The mathematical distinction between taxable and non-taxable is not very large. An equivalent balance of benefits to poor vs. rich can be made by increase top marginal tax rates at the same time that basic income is first implemented. To me, the philosophical purity of the 2nd point is more important than the feel good appearance made by calling it non-taxable. Criticism of basic income levels Set too low to meet comfort levels: The core justification of basic income is to permit survival so as to eliminate oppressive slavery-like forces as a reason to work. For someone who doesn't want to work, they may be unable to afford certain lifestyle choices such as living alone in an urban apartment. Solutions include getting a job or room mates, electing municipal politicians that will implement a local basic income supplement, or moving away if all work seems oppressive or unsuitable. Set too high such that it discourages work: The only criteria for determining if basic income level is too high is that it is socially unaffordable. 
If robots do all work, then $50k or $100k in basic income/social dividends is not necessarily too high, if the robot owners pay high taxes. When determining social affordability, basic income can be too high to motivate some people to bother with working, but unlike minimum/guaranteed income, there is still an incentive of always earning more by working than by not working, and so people who value life style and status will always be willing to work. Still the balance between people doing real work to pay real taxes, and those pursuing science, education, art, and entrepreneurship to potentially pay future taxes must exist. Too high a tax rate on those earning work income such that it prevents needed work from being done can occur, but usually high taxes on very high incomes just provides more opportunity for work sharing and work delegation. A sustainable level of basic income is simply the tax revenue raised less the cost of other wanted government services. Unemployment and inflation concerns: Unemployment occurs because work is unwanted or un-needed. Price (of labour) is a perfect market mechanism for determining employment levels, but slavery or coerced work should not be an appropriate tool to reduce unemployment. If wage rates go up, it may cause some inflation, but coerced work is not an appropriate tool to reduce wages and inflation. Variable social dividend levels is another market feedback tool that allows society to adjust to economic or natural conditions in a moral reaction to how work is needed or wanted. Recommended implementation guidelines Starting a basic income program at the lowest possible level that provides survivability, so that its sustainability and affordability is certain is the best approach. From that certain initial implementation, it will become obvious what sustainable increases to the fixed basic income level, and variable social dividend are possible.
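As a rough illustration of the sustainability rule stated above (tax revenue minus the cost of other wanted government services, shared equally), here is a small Python sketch; the dollar figures and population are purely hypothetical.

def sustainable_citizen_income(tax_revenue, other_program_costs, citizens):
    # Per-citizen dividend = whatever surplus remains after other programs.
    surplus = max(tax_revenue - other_program_costs, 0)
    return surplus / citizens

# Hypothetical numbers: $3.0 trillion revenue, $1.5 trillion of other programs,
# 250 million eligible citizens -> $6,000 per citizen per year.
print(sustainable_citizen_income(3.0e12, 1.5e12, 250_000_000))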
<urn:uuid:df6ee64a-2b24-4a6a-8a99-5b192a7102b0>
CC-MAIN-2016-26
http://www.naturalfinance.net/2013/03/basic-income-real-definition-and.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956334
2,465
2.546875
3
Tsunami Prediction Algorithms As with any examination of naturally occurring events, the number of confounding variables associated with any isolated event is almost incomprehensible; however, some sense can be made of tsunamis as algorithmic events if the right variables are examined. Currently, mathematical models are mainly used post-tsunami. These models can be utilized to model the events that occurred during a given time period, generally beginning with an earthquake or other seismic event and ending with the dissipation of the tsunami waves. For example, scientists were able to model both the 1755 and 1969 tsunami events that affected the western coasts of Portugal. Ideally, however, mathematical models could be used to determine magnitudes and directions of tsunamis, as well as predict which area(s) along a given coast are at the highest risk during a given tsunami. One example of a tsunami risk assessment model was developed for the coast of Japan. The arrival time of the tsunami can be determined for each section of the coast, which allows us to assess the risk for each area based on the time of arrival. In the Japanese case, all the times were approximately twenty minutes, meaning that there exists almost no disparity between risk assessments for each area; however, if the model were applied on a larger scale (for example, on the coast of Peru), the disparities calculated for sections of the coast would allow risk factors to be determined and communications to those areas to be prioritized. The "ratio of excess" can be determined from the total number of historical tsunamis and the arrival of tsunami waves over three meters tall at specific locations. From it, we can determine the probability of waves being over five meters tall in each area. As with the arrival time, the disparities calculated based on the ratio of excess can be used to determine which areas are at greater risk. The determination of risk acquired from the arrival time and the ratio of excess can also be applied to the placement of the sensor system. If we know which areas are at a greater risk level, we can place the sensors closer to those areas so that they will be warned about an event more quickly than they otherwise would be. These models can be developed, theoretically, based on data collected by the DART II system or a similar sensor system. In real time, data could be collected by various sensor points and instantaneously used to estimate arrival time and, combined with a predetermined ratio of excess, to determine which areas should be evacuated or warned. As aforementioned, the examination of natural events can be a seemingly impossible task, and by no means is it a simple one. I have taken a step towards understanding existing models, allowing for more informed decisions concerning sensor systems for earthquakes and tsunamis to supplement current algorithms and future mathematical tsunami models. References: V.L., Baptista, M.A., Miranda, J.M., Miranda, P.M.A. (1999). Can Hydrodynamic Modeling of Tsunami Contribute to Seismic Risk Assessment? Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy, 24. Sato, Hiroaki, Murakami, Hitoshi, Kozuki, Yamamoto, Naoaki (2003). Study on Simplified Method of Tsunami Risk Assessment. Natural Hazards, 29.
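To illustrate how the two inputs described above, estimated arrival time and a ratio-of-excess style exceedance probability, might be combined into a warning priority for coastal sections, here is a hypothetical Python sketch. The scoring rule, thresholds, and sample values are my own assumptions and are not taken from the cited models.

from dataclasses import dataclass

@dataclass
class CoastSection:
    name: str
    arrival_time_min: float   # estimated first-wave arrival time in minutes
    p_wave_over_5m: float     # probability of waves exceeding five meters

def priority_score(s):
    # Shorter lead time and higher exceedance probability -> warn sooner.
    return (1.0 / max(s.arrival_time_min, 1.0)) * s.p_wave_over_5m

sections = [
    CoastSection("Section A", 18.0, 0.40),
    CoastSection("Section B", 35.0, 0.15),
    CoastSection("Section C", 22.0, 0.60),
]

for s in sorted(sections, key=priority_score, reverse=True):
    print(s.name, round(priority_score(s), 4))

A real system would replace the sample values with arrival times computed from bathymetry and exceedance probabilities estimated from the historical record, but the prioritization step itself can remain this simple.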
<urn:uuid:6de35617-5932-45cc-b9dd-9d2c0f7d5502>
CC-MAIN-2016-26
http://web.mit.edu/12.000/www/m2009/teams/5/Algorithm.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94475
714
3.5
4
WHAT AMERICA WILL LOOK LIKE IN 2050? November 4, 2013 Part 3: Linguistic chaos and tension As a reminder validating the reason for this series: demographic experts project the United States adding 100 million immigrants to this country by 2050—a scant 37 years from now. All totaled, since we reached 300 million in October of 2007, we will add 138 million people by 2050 to total 438 million people—enough to duplicate 20 of our top cities’ populations to our country. The Pew Research Center, U.S. Population Projections by Fogel/Martin and the U.S. Census Bureau document those demographic facts. From the dawn of time, ethnic tribes created languages to fit their understanding of their surroundings. Eskimos created words that defined ice, cold, caribou, whales and frozen seasons. Tribes in Africa created languages that described their trees, rivers, monkeys, elephants and tigers. Tribes in the desert of the Middle East formed entirely different languages based on heat, camels and sand storms. Each language not only allowed tribes to communicate, language defined their “worldview” or how they perceived existence. That same language also formed their religions. They created their religions based on their fears of the unknown—to give them a sense of hope, community and purpose. Each language defined how a tribal member understood and interpreted the meaning of life. Language also allowed human beings to become self-aware, pursue understandings of the world around them and form family and community bonds. It served them well and humanity advanced in word, thought and concepts. Language also separated tribes because they could not understand one another. Back in those times, civilizations grew, but never mixed because few seldom stepped outside their territorial boundaries. However, when they stepped out of their “turf”, they fought in wars for dominance. History reads as one Great War or conflict after another right up to 2013. In the last 10 years, the USA fought two wars. Another 20 wars wage in different areas of the planet as you read this series. Isolation of tribes changed with mass transportation first with the sailing ship, locomotive, automobile and finally the airplane. Today, we see cultures, civilizations and individual humans crossing over onto all seven continents. The one thing they take with them with a powerful sense of meaning remains their culture and their language. It defines them and offers them meaning. However, when they cross over into countries with totally different languages, cultures and meanings—they become ostracized, confused, marginalized, out of place and ultimately, angry. No multicultural and multilingual country in the world today enjoys a peaceful state of being. Today, Canada struggles with French, Arabic, Chinese and other Asian languages overwhelming their schools via immigration. Belgium, Lebanon and Malaysia suffer conflicts and tension from multiple languages. In those countries, minorities with different languages vie for autonomy. Pakistan separated from India and Cyprus divided because of language, religion and culture. Nigeria suppressed ethnic rebellion. France faces difficulties with Basques, Bretons, Corsicans and a growing Muslim demographic. With hundreds of languages in the world today, we see a clashing of civilizations, which ultimately come down to culture and language. A country without a single language in the 21st century faces ultimate disintegration of its culture, worldview and language. 
With different languages come different ideas on how political "things" should proceed in a country. Some languages suppress all women's rights. Other languages condone "honor killings" of women as a normal way of life. Immanuel Kant said, "Language and religion are the great dividers." You can see his wisdom working all over the planet in violent confrontations: Iraq, Afghanistan, Pakistan, Sweden, France, UK, Tunisia and many more. A country, culture and language constitute more than a place to live. A language creates a state of mind, a worldview and distinct understanding of a person's standing in life. His or her culture defines how he or she operates in the world. If a person in a country loses language and culture—they lose their ability to function in a viable manner. If you notice all the terrorist attacks on the USA in the last 11 years, they came from people who speak other languages, come from other cultures—yet injected themselves into America via our immigration policies. From the 9/11 maniacs, to the Fort Dix Six, to the Times Square Bomber, to the Shoe Bomber, to the Underpants Bomber, to the Denver bomber, to the Fort Hood killer, to the Korean shooter at Virginia Tech, to the Boston Marathon Bomber, to the Muslim who beheaded and be-handed two people last year—all of them arrived from a different language. Unfortunately, at the present rate of 1.0 million legal immigrants annually and the proposed 2.0 million immigrants annually via Senate Bill 744 Amnesty, Americans guarantee themselves more bombings, more mass murderers and more language breakdown that will descend on this country at blinding speed. Especially in education! Once we lose our literacy, we lose our ability to maintain a first world civilization. Already, America faces a complete language change with Spanish when the Mexican tribe becomes the new majority in 2042, a scant 29 years from now. You can bet they will force their language onto America. In 2013, every business in America offers a phone recorder with "Press 1 for Spanish" and "Press 2 for English." Already in Detroit, Michigan, a recorder says, "Press 1 for Arabic." This linguistic chaos speeds into America so quickly that once it lands in greater numbers, we will not be able to turn back. When Caesar crossed the Rubicon, he sealed his fate. If we citizens allow Congress to pass S744, we seal our fate as a multicultural and multi-linguistic nation guaranteed to fracture every community, our culture and our future. You see, as former Colorado Governor Richard D. Lamm said, "Different languages create a deeper and more intractable separating factor. America has been successful because we have become one people. Language is the social glue, shared history and uniting symbols that tie us together." We need one language to bind us, one culture to sustain us. When host countries such as Canada, Australia, Sweden, Norway, Europe and Holland lose their language, they lose their foundation. If we continue on this current path, by 2050, America faces 100 million more immigrants with at least 100 to 150 new languages, and they will press for their right to speak, learn and establish their languages in their areas. They will crush English, crush our schools and create chaos in our culture. By 2050, America cannot help but become a multicultural morass and linguistic battlefield, and suffer 100 million immigrants attempting to make their language THE language of America.
It’s not going to be pretty for anybody because no one will be able to understand anyone else. If you remember the Biblical Tower of Babel, God changed one language into multiple languages. They disagreed, fought, separated and finally abandoned the tower. America faces the same fate with multiple languages. Visit my website to find organizations you can join to stop S744. [Join me, Frosty Wooldridge, with Dave Chaffin, host of the Morning Zone at 650 AM, www.KGAB.com, Cheyenne, Wyoming every Monday 7:00 a.m. to 8:00 a.m., as we discuss my latest commentaries on www.NewsWithViews.com about issues facing America. You may stream the show on your computer. You may call in at: 1-888-503-6500.] © 2013 Frosty Wooldridge - All Rights Reserved Frosty Wooldridge possesses a unique view of the world, cultures and families in that he has bicycled around the globe 100,000 miles, on six continents and six times across the United States in the past 30 years. His published books include: "HANDBOOK FOR TOURING BICYCLISTS"; “STRIKE THREE! TAKE YOUR BASE”; “IMMIGRATION’S UNARMED INVASION: DEADLY CONSEQUENCES”; “MOTORCYCLE ADVENTURE TO ALASKA: INTO THE WIND—A TEEN NOVEL”; “BICYCLING AROUND THE WORLD: TIRE TRACKS FOR YOUR IMAGINATION”; “AN EXTREME ENCOUNTER: ANTARCTICA.” His next book: “TILTING THE STATUE OF LIBERTY INTO A SWAMP.” He lives in Denver, Colorado. His latest book. ‘IMMIGRATION’S UNARMED INVASION—DEADLY CONSEQUENCES.’
<urn:uuid:565a5931-ab60-4c9b-b225-73d92729b9f6>
CC-MAIN-2016-26
http://www.newswithviews.com/Wooldridge/frosty901.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00024-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934592
1,866
2.640625
3
Index of content: Volume 60, Issue 3, September 1976 60(1976); http://dx.doi.org/10.1121/1.381128View Description Hide Description Time‐averaged holography has been used to quantitatively investigate sound radiation from an edge‐clamped circular flat plate mounted in an infinite rigid baffle. For a particular mode of vibration, the plate response is measured using holography and the sound power radiated is measured in a reverberant room with the plate mounted in one of the room walls. From these measurements a radiation efficiency is determined. The theoretical plate responce is calculated using both classical and Mindlin–Timoshneko plate theory and is shown to agree well with experimental measurements.Radiated sound power is calculated for each mode of interest by solving the wave equation in oblate spheroidal coordinates at the plate surface. These calculations are verified by direct evaluation of the Rayleigh integral in the farfield. Good agreement is obtained between experimentally measured radiation efficiencies and theoretical predictions. Small discrepancies between theory and experiment are discussed. Subject Classification: 20.55, 40.24, 35.65. 60(1976); http://dx.doi.org/10.1121/1.381129View Description Hide Description A matrix theory is developed for investigating the scattering of elastic waves in solids by an obstacle of arbitrary shape. The scattering matrix which depends only on the shape and nature of the obstacle relates the scattered field to any type of harmonic incident field. Expressions are obtained for the elements of the scattering matrix in the form of surface integrals around the boundary of the obstacle, which can be evaluated numerically. Using the principle of reciprocity and the conservation of energy, the scattering matrix is shown to be symmetric and unitary. These properties are essential to assure the accuracy of numerical calculations. Both two‐ and three‐dimensional problems are discussed, and the obstacle may be an elastic inclusion, a fluid inclusion, a cavity, or a rigid inclusion of arbitrary shape. Subject Classification: 20.15, 20.30. 60(1976); http://dx.doi.org/10.1121/1.381130View Description Hide Description Upon invoking Huygen’s principle, matrix equations are obtained describing the scattering of waves by an obstacle of arbitrary shape immersed in an elastic medium. New relations are found connecting surface tractions with the divergence and curl of the displacement, and conservation laws are discussed. When mode conversion effects are arbitrarily suppressed by resetting appropriate matrix elements to zero, the equations reduce to a simultaneous description of acoustic and electromagneticscattering by the obstacle at hand. Unification with acoustic/electromagnetics should provide useful guidelines in elasticity. Approximate numerical equality is shown to exist between certain of the scattering coefficients for hard and soft spheres. For penetrable spheres, explicit analytical results are found for the first time. Subject Classification: 20.15, 20.30. 60(1976); http://dx.doi.org/10.1121/1.381131View Description Hide Description It is well known that the beam pattern of a parametric acoustic source whose difference frequency is generated largely within the nearfield of the primary beam includes a multiplicative aperture factor that can be important at large angles when ka≳1 (k is the difference frequency wave number, a is the aperture radius). 
Not so well known, however, is the fact that the same aperture factor arises in the case of a spherically spreading, conical primary beam of finite initial aperture. The importance of the aperture size in determining the off‐axis behavior of parametric sources is discussed. Subject Classification: 25.35; 20.30; 30.75. 60(1976); http://dx.doi.org/10.1121/1.381132View Description Hide Description Three sound recordings of tornadoes have been spectrally analyzed over the frequency interval between 100 and 2000 Hz. The low‐frequency analysis was limited by the response of the microphones used for the recordings; the upper frequency limit was imposed both the microphone response and the low‐signal level. One recording was made by James Cramer of a tornado which passed through Clay Center, Kansas on 25 September 1973. A second recording was made by Richard Allen Lindley of a tornado that passed through Guin, Alabama on 3 April 1974, and a third recording was made by Tom Bittman of a tornado which damaged his home in Tulsa, Oklahoma on 8 June 1974. All of the recordings are of low quality and the audio information required major spectral corrections, but the data analysis does indicate that the audio emissions from the tornadoes decrease in intensity as a function of increasing frequency. An attempt has been made to correlate the data with real (and conjectured) physical characteristics of the tornadoes and attendent atmospheric phenomena. It is clear to us from an objective analysis (and the more difficult subjective evaluation of the recordings) that identification of tornadoes based on acoustic emissions is possible . A study of tornadic sounds can not only provide a new tool for gaining insight into electrical and mechanical disturbances within a tornadic storm, but will also allow acoustic detection of a tornadic storm. A study of the change in intensity of the sounds emitted by the approaching Guin storm at both high and low frequencies suggests that noisesgenerated by the high‐speed winds of the principal tornado vortex as its base scoured the ground might be discerned from noisesgenerated by the winds of one or more smaller vortices moving around the tornado, and/or electrical discharges aloft. Subject Classification: 28.45, 28.65. 60(1976); http://dx.doi.org/10.1121/1.381133View Description Hide Description Periodic vortex streets are formed in the wakes of blunted trailing edges on airfoils and struts. The pressures generated on the shedding struts by the vortices in these wakes are periodic in time with a frequency that is set by the shedding rate for the vortices. A simple analytical formulation is derived to relate wake‐induced pressures to the characteristics of the wake near the edge. The chordwise distribution and magnitude of the pressure is shown as a function of the circulation of shed vortices, as well as the formation distance and the spacing of the vortices in the street. Predictions from the theory are compared to some recent measurements which were obtained in the wakes downstream of different trailing edges. These measurements were made at Reynolds numbers, based on trailing edge thickness, on the order of 104 to 105. Subject Classification: 28.65; 50.55. 60(1976); http://dx.doi.org/10.1121/1.381134View Description Hide Description The theory of sound radiation from cylinders vibrating in resonance with vortex shedding is extended to consider the effects of vibration amplitude and mode shape. 
Farfield intensity and total radiated power are expressed as functions of given structural and flow parameters. Closed form solutions for intensity are obtained when cylinder vibration velocity is either much smaller than or comparable to mean flow velocity. Subject Classification: 28.65, 40.26, 50.55. 60(1976); http://dx.doi.org/10.1121/1.381121View Description Hide Description UltrasonicDoppler flowmeters can simultaneously obtain spatial and velocity distribution patterns of flow in a blood vessel. The factors fixing spatial and velocity resolution, however, must be determined if optimum utilization is to be realized. The parameters influencing spatial resolution have been described in the literature, but velocity resolution has received relatively little attention. This paper demonstrates analytically how optimum velocity resolution can be derived with a simple mathematical model and presents experimental data to verify theory. In addition, a ’’resolution product’’ is offered which characterizes pulsed Doppler flowmeters. This product shows explicitly the necessary compromise between position and velocity resolution for a given instrument. Subject Classification: 28.60, 28.20; 80.70; 35.80. 60(1976); http://dx.doi.org/10.1121/1.381122View Description Hide Description An experiment designed to measurenormal mode amplitude functions and attenuation coefficients was conducted in shallow water on Campeche Bank off the Yucatan Peninsula. Measurements were made at two locations on the bank in water of about 30 m in depth over a bottom consisting of consolidated limestone having a measured and sound velocity of 1900 m/sec. Pulsed cw signals with frequencies of 400, 750, and 1500 Hz were used. Theoretical calculations of the mode amplitude functions using a fluid model of the bottom were found to agree well with the measurements. In order to reconcile the measured mode attenuation coefficients with theory, it was necessary to assume that the shear velocity of the bottom was 1000 m/sec. The latter is lower than the minimum sound velocity in the water column so that the generation of propagating shear waves in the bottom was the dominant attenuation mechanism. Significant differences in the measured mode attenuation coefficients at the two stations were explained by the deepening of the low velocity channel at the bottom of the water column. Subject Classification: 30.20, 30.50. 60(1976); http://dx.doi.org/10.1121/1.381123View Description Hide Description The amplitude of a sonarecho from a fish depends upon the species and size of the fish, acoustic wavelength, aspect, position of the fish in the sonar beam, range and backscattering cross section. We simplify the problem to a single species and size of fish, vertically downward echosounding, single aspect, and nonoverlapping echoes. After removal of attenuation due to range and absorption two random functions remain. The position of the fish in the sonar beam is random and the scattering cross section for each trail is random. We assume that the fish have a uniform density (number/m3) and calculate the probability density function (PDF) for insonification and reception. We assume that the PDF of the envelope of the echo (excluding the variability of insonification and reception) has a Rayleigh PDF. Assuming two PDF’s are independent, we calculate the PDF of the echo envelopes w E (e). w E (e) depends upon the beamwidth of the sonar and the mean backscattering cross section. 
The theoretical PDF has the same shape as the measured PDF of echoes from alewife in Lake Michigan. We use the fit of the PDF’s to estimate the backscattering cross section and fish density. This calibrates the echo‐integration processing system. A profile of the density of alewife in Lake Michigan is shown. Subject Classification: 30.40; 80.40. 60(1976); http://dx.doi.org/10.1121/1.381124View Description Hide Description An improved technique has been developed for studies of the shear viscosity of fluids. It utilizes an acoustic resonator as a four‐terminal electrical device; the resonator’s amplitude response may be determined directly and simply related to the fluid’s viscosity. The use of this technique is discussed briefly and data obtained in several fluids is presented. Subject Classification: 35.10, 35.68; 85.52. Excitation, detection, and scattering of electroelastic surface waves via an integral equation approach60(1976); http://dx.doi.org/10.1121/1.381125View Description Hide Description The problem of the excitation, detection, and scattering of electroelasticsurface (Bleustein) waves is solved exactly by determining the charge distribution on the fingers of an interdigital transducer. The approach is to solve an integral equation, in the Fourier transform domain, that relates the charge density on the fingers to the electric potential of the fingers. The solution of the integral equation is accomplished by expanding the charge distribution in a series of pulses and then transforming the problem to a vector matrix,one which is readily handled by a computer. In this manner the charge distribution is determined for a variety of conditions. Subject Classification: 35,54. 60(1976); http://dx.doi.org/10.1121/1.381126View Description Hide Description The dissipation in an elastic medium is represented by a dissipation mechanism which is similar to one used in an earlier paper [M. Caputo, Geophys. J. R. Astron. Soc. 13, 529–539 (1967)], but is simpler and has a frequency‐independent Q −1. The vibrations of a plate are studied by obtaining the eigenfrequencies, the amplitude of the displacement, the dispersion relation, the Q −1, the hysteresis cycle, and the yield stress. Subject Classification: 40.24. 60(1976); http://dx.doi.org/10.1121/1.381127View Description Hide Description This paper presents an analysis of the free vibrations of a disk–cable system spinning freely about a fixed axis through the disk center. Secondly, it is shown how the cable can be used as a vibration absorber to reduce the effect of torsional disturbances on a rotating system. Subject Classification: 40.22, 40.70. 60(1976); http://dx.doi.org/10.1121/1.381135View Description Hide Description The current method of measuringimpactnoise transmission involves the use of a standard hammer machine to produce a series of impact on the floor‐ceiling structure, and the measurement of the resulting noise produced in the room below. The method has been criticized on the ground that ratings based on the test data correlate poorly with the subjective judgments of people listening to real‐life impacts on the same floors. An alternative test method is proposed that uses a modified hammer machine whose internal impedance, intensity of impact, and striking frequency simulate those of real footfalls. 
The new method involves several changes from the present standard: short‐term rms impact sound levels are measured instead of long‐term rms levels; no normalization for the sound absorption of the receiving room is required; since the short‐term levels are higher than the long‐term levels usually measured, background noise is less of a problem than for the existing method. These proposed changes based on recent studies are expected to improve the correlation between test data and subjective judgments of floors. Subject Classification: 55.80; 50.45. 60(1976); http://dx.doi.org/10.1121/1.381136View Description Hide Description Measurements of the microphonic potential from the lateral line sensory organs of killifish (F u n d u l u s h e t e r o c l i t u s) are presented in this report. The potential consists of a dc shift and a dominant oscillatory component at twice the frequency of the stimulus, and it grows with the square of the stimulus at low amplitudes. An electrical circuit model of the microphonic is developed, assuming a simple quadratic nonlinearity in the electrical response of an individual hair cell as suggested by Flock. The model allows investigation of variable conductive or variable capacitive effects in microphonic generation, and the results obtained from the model are compared to the observed properties of the microphonic potentials. It is concluded that hair cell microphonics appear to be generated by a process involving current flow through a variable conductance, although capacitive effects of the hair cell are important in determining the waveform of the microphonic and its behavior as a function of frequency. A discrepancy between the observed low‐amplitude growth of the microphonic in killifish and the reported low‐amplitude growth of the microphonic in other acoustico‐lateralis preparations is also discussed. Subject Classification: 65.28, 65.20, 65.40. 60(1976); http://dx.doi.org/10.1121/1.381137View Description Hide Description An interferometric optical heterodyne technique has been developed especially for vibrational amplitude and phase measurements on auditory organs of live animals. Laser light diffusely scattered from the vibrating structure is used for the measurement. Continuous calibration and feedback compensation systems were developed to cope with the problems of drift in interferometer alignment and small background movements. Vibrational amplitudes from below 0.1 Å to above 400 Å have been detected on the posterior tympanic membranes of live crickets. Subject Classification: 65.20; 40.60; 35.65. 60(1976); http://dx.doi.org/10.1121/1.381138View Description Hide Description Loudness growth at 1000 and 3000 Hz was measured directly by magnitude estimation and production, and indirectly by loudness matches between tone and wide‐band noise and by interfrequency matching. The outcome of the three series of experiments does not reveal any systematic difference in shape of the loudness curves at 1000 and 3000 Hz. To a first approximation, above about 30 dB SL the loudness functions at 1000 and 3000 Hz are power functions of sound pressure with an exponent close to the accepted ISO standard of 0.60 (0.30 r esound intensity). Below 30 dB SL both loudness curves become progressively steeper than a simple power function and approach the same limiting slope, r esound intensity, of unity. Consistent with Steven’s calculation system [J. Acoust. Soc. Am. 
51, 575–601 (1972)], the data also show that loudness equality is achieved when a 3000‐Hz tone is about 8 dB below the SPL of a tone at 1000 Hz. Subject Classification: 65.50, 65.75. 60(1976); http://dx.doi.org/10.1121/1.381139View Description Hide Description The identification of specific random waveforms, imbedded within random interference, was examined. Backward interference (interference after the specific waveform) was more effective than forward interference (interference before the specific waveforms). The accuracy of identification with combined interference (interference before and after the specific waveforms) is approximated by an independence model of interference. Under the present test conditions, interference with the identification of specific random waveforms is interpreted to be more nearly related to the interruption of auditory processing than to the masking of signal audibility. Subject Classification: 65.58, 65.75, 65.52. 60(1976); http://dx.doi.org/10.1121/1.381140View Description Hide Description Old World monkeys were trained with an operant conditioning technique to discriminate the natural speech sounds /ba/–/da/ and transferred to synthetic speech. Human and monkey difference thresholds for formant transitions were then compared along a seven‐step /ba/–/da/ continuum. Monkeys were not as sensitive as humans to differences in formant transition: the just noticeable difference for monkeys was about 320 Hz, and for humans, about 160 Hz. Although humans were more adept at intraphonemic discriminations than monkeys, their latencies to stimulus changes revealed evidence of ’’categorical perception’’ of the continuum: While latencies for the monkeys increased linearly as stimulus difference was decreased, human latencies were essentially constant for all interphonemic comparisons, but increased sharply for intraphonemic comparisons. We view these data as evidence for (a) similar sensory capacities in monkeys and humans, but (b) unique speech processing capacities in humans. Subject Classification: 65.75; 70.30; 80.50.
<urn:uuid:58a563e8-3a6a-41ea-9e12-07e8de10eb64>
CC-MAIN-2016-26
http://scitation.aip.org/content/asa/journal/jasa/60/3
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00078-ip-10-164-35-72.ec2.internal.warc.gz
en
0.9147
4,112
2.640625
3
Canadian Space Agency
The Astronautics Vocabulary is a glossary of terms that pertain to the science and technology of spaceflight.
Yaw
Definition: An act of yawing; a movement of deviation from the direct course, as from bad steering; angular motion or displacement about the yaw axis.
Other Definition: None
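As an editorial illustration of the "angular motion or displacement about the yaw axis" part of this definition (the axis convention and the function below are assumptions, not taken from the glossary), yaw is commonly modelled as a rotation of a body-frame vector about the vehicle's vertical axis:

```python
import math

def yaw_rotate(x, y, z, yaw_rad):
    """Rotate a body-frame vector through a yaw angle about the vertical (z) axis.

    Illustrative convention only: z is taken as the yaw axis, and a positive
    angle turns the nose to the left when viewed from above.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y, s * x + c * y, z)

# A 90-degree yaw turns a vector pointing "forward" (+x) so that it points
# along +y; the vertical component is unchanged.
print(yaw_rotate(1.0, 0.0, 0.0, math.pi / 2))  # approximately (0.0, 1.0, 0.0)
```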
<urn:uuid:50ce1112-5323-48bd-8aba-228b99712726>
CC-MAIN-2016-26
http://www.asc-csa.gc.ca/eng/resources/vocabulary_view.asp?id=97
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.819811
89
2.6875
3
Scientist Isaac Newton, writer Emily Brontë, fictional detective Sherlock Holmes, singer Morrissey, and comedian Janeane Garofalo all share an unlikely commonality: they were or are thought to be asexual. An asexual, or "ace," is someone who doesn't experience sexual attraction or a desire for sex—an anomaly in today's sexually preoccupied world. The phenomenon has garnered increasing attention in recent years as human sexuality experts and the media attempt to understand it.

For the last decade Anthony Bogaert, a psychology professor at Brock University in Ontario and a leading expert on asexuality, has been working to change the notion that being asexual is some kind of problem or disorder. "It used to be the case that a lack of sexual interest, a lack of sex drive, or a lack of sexual attraction to other people was not necessarily construed as a problem—it was actually considered to be a virtue," Bogaert explains. "That sort of changed in the past 20 years or so, when the medical community became interested in looking for treatments, interventions related to human sexuality, and an absence of sex was starting to be construed as a problem."

Asexuals often have a life-long disinterest or little interest in sex, says Bogaert. He notes, however, that asexuality is not the same as being sexual but choosing to be celibate, or as experiencing a temporary loss of sex drive from an illness or traumatic experience.

Bogaert jump-started international research in the field of asexuality with his 2004 paper "Asexuality: Prevalence and Associated Factors in a National Probability Sample," which suggested that at least one percent of people are asexual. In Canada, that would be nearly 350,000 people. He has been an influential authority on the subject ever since, culminating in his latest book, "Understanding Asexuality," which characterizes asexuality as an emerging sexual orientation.

Bogaert's studies have also challenged popular attitudes and norms in today's sex-obsessed Western culture. "When you start looking at it you start to see sex for its particulars and some of its strange intricacies and manifestations. It also makes you start to think about, really, what is a disorder and what is not a disorder," he says.

Bogaert's work has been extremely well received by the global asexual community, many of whom see the professor as a champion of their cause. It has also likely been instrumental in changing attitudes in the academic and medical communities. For example, last year's edition of the Diagnostic and Statistical Manual of Mental Disorders differentiated asexuality from sexual disorders for the first time.

Amy de Vos, a 21-year-old Kitchener-based photographer, has identified as an asexual since the age of 16. She says that although awareness about asexuality is growing, she still encounters many misconceptions. "'You just haven't found the right person'—that's probably one of the most significant responses I've gotten," she says. "It's kind of saying, 'you don't know who you are.' I am very aware of myself, so I don't like people telling me that."

De Vos coordinates meet-ups with other asexuals in her area, usually groups of 10-12, but says it isn't easy to meet others like her. She hopes to get married one day but doesn't want children, and plans to remain celibate. "Sometimes you kind of wish that you weren't [asexual] so that you could find more people like you," she says.

But there's a positive side to asexuality, she adds: putting the focus on someone's character and compatibility when choosing a partner as opposed to animal attraction. "Personally I think it's just more healthy to focus on those romantic aspects and someone's personality, as opposed to lust," she says, adding that she finds the modern obsession with sex "disconcerting." "Especially if someone isn't that sexual, there's a lot of pressure on people to act."

According to the Asexual Visibility and Education Network (AVEN), the main online portal for the global asexual community, there is a wide range of relationships amongst asexuals: many enjoy romantic partnerships, others are satisfied with close-knit friendships, and some are happiest alone. "Figuring out how to flirt, to be intimate, or to be monogamous in nonsexual relationships can be challenging, but free of sexual expectations we can form relationships in ways that are grounded in our individual needs and desires," the website states.

With increasing attention paid to asexuality in recent years, the community appears to be expanding. Several dating websites for asexuals have cropped up, and a documentary examining asexuality is currently available on Netflix. One of the largest-ever gatherings for asexuals will be held in Toronto on June 28 at the 2014 WorldPride Asexual Conference, featuring international visitors including the founders of AVEN.

This exposure is important, says Bogaert, because the more asexuality comes into mainstream consciousness, the more "closeted" asexuals will be able to identify it in themselves and avoid an identity crisis. "If you don't have a label for yourself and you don't know what this is you can't really 'come out,' so to speak, and be part of an 'out' minority and be counted," he says. "If you don't have a label for it people just assume they're part of some other group."
<urn:uuid:3cb8ed02-7b0c-4a9e-aaca-32243c3acb82>
CC-MAIN-2016-26
http://www.theepochtimes.com/n3/465467-asexuality-its-normal-says-expert/?photo=2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00142-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949298
1,264
2.921875
3
Evolving Roles of Living Plant Collections in Botanic Gardens for Biodiversity Conservation - a Survey of Japanese Botanic Gardens
Oikawa, J. and Kendle, A.D.
Department of Horticulture and Landscape, School of Plant Sciences, University of Reading, PO Box 221, Reading RG6 6AS, UK

Although the contribution made by the living plant collection in gardens is often cited in promotional literature, confusion about the long-term value of the living collections is a fundamental problem today. As education is increasingly integrated in the mission or policy of gardens, a new role and a primary justification for the living collection could be display and public education. By integrating effective display, garden design and careful plant selection, the living collection could become a showcase of science and conservation issues. As part of a survey programme designed to explore how botanic gardens are currently responding to these issues, data are presented here from a range of Japanese botanic gardens. It represents the first analysis of how they are currently structured and their priorities. The lack of education staff at botanic gardens in Japan limits education activities, although public education and recreation are among the priority roles of these organisations and the living collections are often focused on this purpose. The survey has also shown that education programmes tend to focus on increasing knowledge about plants, rather than on developing awareness or a positive attitude towards the environment and conservation. This approach reflects a lack of understanding of the idea of education for the environment or sustainability. To extend the critical analysis of the real opportunity and need for new educational strategies in botanic gardens, further international survey work is planned.

The role of botanic gardens as key locations to promote both in-situ and ex-situ biodiversity conservation has been widely recognised (WWF, IUCN and BGCI, 1989; Maunder, 1997). A major challenge for botanic gardens today is how to evolve their own conservation policies for the next century. Following the recent development of holistic approaches to conservation, there is an increasing emphasis on education in the policy or mission of botanic gardens (Rae, 1995; Willison, 1997). However, botanic gardens across the world face fundamental problems regarding their priority facility, the gardens themselves. The use and value of this facility in terms of conservation and research have become confused. This paper argues for a review of the role of, and policies towards, living plant collections. This could also be an opportunity to ask how the missions of the organisations have been implemented in practice. Data from a survey of Japanese botanic gardens are presented to illustrate current attitudes in that country. Most of these gardens have not played an active part in developing international contacts, and the data presented here represent the first analysis of how they are currently structured and their priorities. Obviously the role of botanic gardens has changed over time and will need to do so again to match the evolution and requirements of society (Maunder, 1997).
Whilst botanic gardens have a long historical tradition of accumulating and studying plant diversity, the conservation role of botanic gardens was not clearly stated until the middle of the twentieth century (WWF, IUCN and BGCI, 1989; Maunder, 1997; Willison, 1997). There has, however, been increasing promotion of the importance of this role in publications such as The Botanic Gardens Conservation Strategy (WWF, IUCN and BGCI, 1989). The awareness that there are valuable collections, functions and expertise at botanic gardens that could contribute significantly to conservation and sustainable development has been growing amongst conservation organisations, governments and botanic gardens themselves (Rae, 1995; Maunder, 1997). Particularly since the Earth Summit in 1992, botanic gardens have committed themselves to sustainability worldwide at local, national and global levels, and they are internationally recognised as centres for plant conservation today.

However, it does not automatically follow that the living collection will play a core role in these programmes. The range of plants cultivated in botanic gardens may be determined by specific corporate strategies, which change and evolve, or may be a result of largely unplanned historical accumulation (Maunder, 1997). Frequently the purpose and the meaning of living plant collections have become uncertain. The potential value of living collections for scientific research, conservation and education has been well argued (WWF, IUCN and BGCI, 1989; Rae, 1995; Dixon, 1997; Minter, 1997). However, the identification and adoption of contemporary roles that are politically, economically and biologically viable is a complicated issue (Rae, 1995; Maunder, 1997). In parallel to this, research in the plant sciences has become focused on the molecular and genetic levels, whilst even taxonomists find that the risk of morphological distortions in cultivation often makes it preferable to study herbarium rather than living specimens. The problems of maintaining ex-situ living material, such as hybridisation, disease transfer and above all genetic impoverishment, can put a constraint on direct use of the plant stock. With regard to ex-situ plant conservation, advances in seed and tissue bank technology offer opportunities to maintain a wider genetic range and higher-quality plant health compared to traditional living collections. Within a scientific context it would therefore be no surprise if calls for redirection of funds from living collections to other activities became more apparent within the institutions (Rae, 1995). In fact, for some botanic gardens the limited and decreasing opportunities to maintain costly collections in the garden, combined with increasing funding demands for laboratory equipment, have already put living collections into a crisis (Rae, 1995).

Even where ex-situ techniques are well advanced, biodiversity conservation in its broadest sense depends upon in-situ conservation. Ex-situ techniques can do nothing to maintain ecosystem services or the aesthetic and recreational values of landscapes. Above all, they offer nothing for the majority of the world's biota that has never even been identified. The diversity of life on the earth is complex and its components are all interdependent and interconnected in several dimensions. So great is this complexity that it is difficult even to characterise the nature of life.
It has been argued that the number of species described today could be as low as 10% or even 1% of the actual numbers alive on the earth (Wilson, 1992). Whilst plants are often amongst the easier organisms to characterise and maintain, the broader objectives of biodiversity conservation cannot be met unless habitats are protected in-situ. To achieve this is not easy and requires strategies on many fronts. One priority is to develop public and political support for biodiversity conservation (Glowka, Duilmon, and Synge, 1994). The search for a positive focus for the living collection in the garden needs to investigate new directions, and it seems clear that education and public interpretation of science and conservation issues should become a priority. Given the location of many botanic gardens in centres of population rather than centres of biodiversity, it makes particular sense that the living collections become showcases for conservation education.

Since the seventeenth century, education has been recognised as one of the important functions and roles in some botanic gardens. From formal to informal, a variety of education programmes and activities have been offered to an extensive range of audiences (WWF, IUCN and BGCI, 1989). It is, however, rather surprising that neither the evolution of education policy nor the actual programmes have been well documented prior to the 1970s (Willison, 1997). Nevertheless, a growing interest in education internationally can be seen clearly, for example through the development of international congresses on education in botanic gardens organised by Botanic Gardens Conservation International (Willison, 1997). In parallel with a general increase in awareness of environmental issues and conservation movements in the 1990s, botanic gardens have also started to realise that they can offer not only education in botany and horticulture but also biodiversity conservation education. It has been argued that no organisation bears greater responsibility than botanic gardens for informing the public concerning the nature and importance of plants and their roles in natural systems (Rae, 1995). Botanic gardens cannot ignore the important role education could play in supporting their goals of biodiversity conservation and sustainability.

As several well-known environmental policies and documents have clearly stated, such as the World Conservation Strategy (IUCN, UNEP and WWF, 1980), Caring for the Earth: A Strategy for Sustainable Living (IUCN, UNEP and WWF, 1991) and Chapter 36 of Agenda 21, education, particularly environmental education, is an important vehicle for the development of a sustainable future. Such goal-focused education is not the same as passively presenting information. It is based on more instructive, constructive, hands-on and inquiry-based approaches (Sterling, 1996; Patchatt, pers. comm.), and aims to make people change their attitudes towards the environment, develop an ethic of sustainability, and act positively for a sustainable future. Some particularly successful examples have been seen in conservation projects or education activities where the public can actively participate as volunteers (Jordan, 1990; Shigematsu, 1993; Stevens, 1995; Grundy and Simpkin, 1996). Through direct experiences and positive participation, volunteers are able to develop greater stewardship (a wish to care for nature) and capacity building (an ability to act) for both sustainable natural and built environments (Jordan, 1990; Oikawa, 1996).
While broad-ranging ideas of this new model of education have already been integrated into the philosophy behind some botanic garden education programmes (Willison, 1997), there are still gardens which do not recognise such education as being a significant part of their activity.

Biodiversity conservation, sustainability and education are all broad concepts that embody many possible paradigms and approaches. As one of these possible approaches we have argued that plant collections in the gardens should be used for public display and education. However, it is possible for buzzwords to be incorporated into policy and mission statements without any real substance in practice. Although the concept of design involves the broader purposes, processes and principles that determine the development of gardens, it is rare to find a garden where education staff have had an influence over the composition and layout of the living collection. To fulfil their potential, botanic gardens should be designed with teaching and learning in mind, and educators have to be involved in garden collection, plant display and garden design (Cox, 1988). As with museums, interpretation such as explanation panels, displays, brochures or guides makes up one of the key elements of education in botanic gardens. Good interpretation increases the visitors' understanding and enjoyment of the gardens and also gives a greater feeling of satisfaction after a visit (Burbidge, 1990). A successful design combining a plant display with interpretation presenting conservation messages could be a powerful educational tool to develop awareness and a more responsible attitude towards the environment. This could also increase the visitors' perception of the value of the garden's organisation, helping to build a constituency with sympathy for a garden's work and its development. From this perspective, public education and displays could be seen as the greatest challenge, but also the greatest opportunity, in maintaining living collections in botanic gardens.

The case study was chosen to explore the current role of living plant collections in botanic gardens in Japan, and to test the degree to which the themes discussed above had begun to influence priorities. Other aims were to identify current education activities, to examine the attitudes and visions of education officers, and to investigate the current role of their living collections in relation to public education and display. In 1966, the Association of Japanese Botanic Gardens was founded as a corporate organisation of the Ministry of Education (Association of Japanese Botanic Gardens, 1997) and it has acted as the centre of the national Japanese network of botanic gardens and related organisations that maintain living plant collections. Currently 136 institutions and 119 individual members are included in this network (Association of Japanese Botanic Gardens, pers. comm.). However, neither the Association nor its individual member gardens have so far participated in discussion in the international arena, and there is only limited information available about them worldwide (Willison and Wyse-Jackson, pers. comm.). Therefore this survey also had the objective of collecting up-to-date background information about botanic gardens in Japan.

The questionnaire survey was designed in two phases.
The primary objective of Phase I was to contact botanic gardens and similar organisations in Japan and to obtain a wide range of basic information about their priorities, while Phase II investigated detailed issues about education and living plant collections in the garden. Referring to A Guidebook of Botanic Gardens in Japan (Anon. 1, 1990), all members of the Association of Japanese Botanic Gardens except those not open to the public were included in the sample population of the Phase I questionnaire. The rest of the sample were randomly chosen from a list of botanic gardens in Japan presented in A Guidebook of Botanic Gardens in Japan, up to a total of one hundred and sixty. Replies were collected in May 1996. Phase II of the survey was designed in two separate parts: one directed to the senior staff, either director or curator (Part 1), and the other to education officers (Part 2). The finalised format of these questionnaires was translated into Japanese and printed on different coloured papers. The sample size in this case was 100 of each. The questionnaires were distributed to the participants at the Annual General Meeting of the Association of Japanese Botanic Gardens in May 1998. The rest were posted to other botanic gardens which were randomly chosen from the references used previously. Replies were collected in June 1998. Ninety replies (56%) were gathered in total for Phase I, while 47 (47%) were received for Part 1 and 42 (42%) for Part 2 of Phase II. The results of each questionnaire are shown in Figures 1-7 and Table 1.

Focus questions of the Phase I questionnaire and their results (Figures 1-4)

Q1.) Do you have any education officers in your botanic garden?
A1.) Yes 14% | No 86% (Figure 1)

Q2.) Do you have any volunteers in your botanic garden?
A2.) Yes 22% | No 78% (Figure 2)

Q3.) What are your educational programmes focused on?
A3.) (Figure 3)
- Botany - 47
- Horticulture - 41
- Environment - 26
- Conservation - 13
- Restoration - 3
- Others - 12

Q4.) What limits your activities in education?
A4.) (Figure 4)
- Staff - 54
- Budget - 44
- Idea - 14
- Policy - 14
- Others - 3

Focus questions of the Phase II questionnaire and their results (Figures 5-7, Table 1)

Q5.) What are the primary roles of your organisation? Does your organisation use a living collection for this purpose? (Figure 5)

Role | Primary role | Use of collection
taxonomy | 20 | 21
other research | 17 | 17
hort. interest | 20 | 20
display | 17 | 18
public education | 11 | 8
conservation | 34 | 27
recreation/tourism | 14 | 12
academic education | 21 | 23
historic heritage | 35 | 22

Q6.) What proportion does professional and public education play within the entire work of the organisation?
A6.) See Figure 6

Q7.) Do you have any programmes on environmental education? (Figure 7)
A7.) Yes 45% | No 54%

What are your goals for environmental education?
Table 1: The five most common answers
- to encourage people to have better knowledge of the environment and ecosystems
- to interpret how to recycle, reduce and reuse
- to encourage a greater love of nature
- to enhance the idea of coexistence between humans and the rest of nature
- to educate people to understand the importance of species diversity on the earth and to have an ability for critical decision making with regard to their lifestyle and society

With the hypothesis that living plant collections in gardens should have a focus on public education and display, this survey was conducted to critically review current policy and practice.
Japanese botanic gardens were selected as a research sample because worldwide there is only limited information available about these gardens and their activities. A more extensive survey targeting a broader range of botanical institutes is now underway. The same questionnaire was distributed to all of the delegates at the 5th International Botanic Gardens Conservation Congress 1998. The results of this survey, including a comparison of the data between Japan and the rest of the global community, will be produced.

Although public education is highly regarded and seen as one of the primary roles of the organisations surveyed, it is still not common for botanic gardens in Japan to actually employ education officers. This greatly limits the education activities carried out. In terms of quality of education, the majority of Japanese botanic gardens that do run education programmes tend to focus on the themes of botany and horticulture, which are traditionally concerned with learning about plants and their cultivation. Environmental education has been carried out at some botanic gardens, and those that do so have well-focused goals. However, overall the most important level of environmental education (education for the environment, or for biodiversity conservation and sustainability) has not yet been seen as a strong theme in botanic gardens in Japan. Volunteer participation at Japanese botanic gardens is also not yet common. Further exploration and development of the role of volunteers represents a new challenge for the institutions in achieving their goals of education and conservation. Living plants in the gardens are seen as an important tool for public education as well as recreation and tourism in Japan. This suggests that attention should be paid to the policy of plant selection as well as the design of plant displays for both education and public attraction purposes.

We are grateful to all of the botanic gardens that completed the questionnaires and the staff that gave their time and insights. We also thank the Royal Horticultural Society and the Kew Guild for their generous financial support for attendance at the Congress.

- Anon. 1. (1990) A Guidebook of Botanic Gardens in Japan. (in Japanese). (eds.) Takido, M., Kawakami, Y., Kurokawa, H., Nakamura, T. and Sashida, Y. Nippon Television Broadcast Corporation, Tokyo.
- Association of Japanese Botanic Gardens. (1997) The Association of Japanese Botanic Gardens. (unpublished leaflet)
- Burbidge, R.B. (1990) Interpretation in botanic gardens. In: Proceedings of the International Symposium on Botanic Gardens. (eds.) He, S.A., Heywood, V.H. and Ashton, P.S. Nanjing, China. 269-278.
- Cox, M. (1988) From the bottom up - designing gardens for education. Botanic Gardens Education. Australian National Botanic Gardens Occasional Publication. 11. 21-26.
- Dixon, K.W. (1997) Gardening to conservation - the emerging role of botanic gardens in recovery of endangered species. In: Conservation into the 21st Century - Proceedings of the 4th International Botanic Gardens Conservation Congress. (eds.) Touchell, D.H. and Dixon, K.W. Kings Park and Botanic Gardens, Perth, Western Australia. 169-174.
- Glowka, L., Duilmon, B. and Synge, H. (1994) A Guide to the Convention on Biological Diversity. IUCN, Gland, Switzerland.
- Grundy, L. and Simpkin, B. (1996) Education for Sustainability. (eds.) Huckle, J. and Sterling, S. Earthscan, UK.
- IUCN, UNEP and WWF. (1980) World Conservation Strategy. IUCN, Gland, Switzerland.
- IUCN, UNEP and WWF. (1991) Caring for the Earth: A Strategy for Sustainable Living. IUCN, Gland, Switzerland.
- Jordan, W.R. (1990) Earthkeeping: A Realisation. Restoration & Management Notes. 8:2 Winter.
- Maunder, M. (1997) Botanic Garden Response to the Biodiversity Crisis: Implications for Threatened Species Management. PhD thesis. University of Reading, UK.
- Minter, S. (1997) Cultural botany: conserving what you value. In: Conservation into the 21st Century - Proceedings of the 4th International Botanic Gardens Conservation Congress. (eds.) Touchell, D.H. and Dixon, K.W. Kings Park and Botanic Gardens, Perth, Western Australia. 329-332.
- Oikawa, J. (1996) A report of the study tour on environmental education and ecological restoration. (unpublished)
- Rae, D.A.H. (1995) Botanic Gardens and Their Live Collections: Present and Future Roles. PhD thesis. University of Edinburgh, UK.
- Shigematsu, T. (1993) Woodland Management by Local Communities. In: Conserving Nature in Woodland. (in Japanese). Ishida, M., Ueda, K. and Shigematsu, T. Tsukiji Shokan, Ltd., Japan.
- Sterling, S. (1996) Developing Strategy. In: Education for Sustainability. (eds.) Huckle, J. and Sterling, S. Earthscan, UK.
- Stevens, W.T. (1995) Miracle Under the Oaks. Pocket Books, NY.
- Willison, J. (1997) Botanic Gardens and Education for Sustainability: Opportunities and Constraints. MSc thesis. South Bank University, London.
- Wilson, E.O. (1992) The Diversity of Life. The Belknap Press of Harvard University Press, Massachusetts.
- WWF, IUCN and BGCS. (1989) The Botanic Gardens Conservation Strategy. WWF/IUCN, Gland, Switzerland.

Key for Figures 5 & 6:
a = taxonomic research | d = display | g = public recreation/tourism
b = other botanical research | e = public education | h = academic education
c = horticultural interest/development | f = conservation | i = historic heritage

Figure 6: What proportion does professional and public education play within the entire work of the organisation? [x = proportion (%); y = number]
<urn:uuid:74b871fa-9550-464e-834a-42ba42b4a05c>
CC-MAIN-2016-26
http://www.bgci.org/congress/congress_1998_cape/html/oikawa.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00065-ip-10-164-35-72.ec2.internal.warc.gz
en
0.923354
4,649
2.859375
3