By The Editors

Toxic hot spots are areas where the concentration of toxic substances, which may be present in water, soil or air, is significantly higher than background levels. In these areas, the risk of adverse health effects is elevated. Toxic hot spots are often located in the vicinity of landfills, car battery recycling sites, sewage treatment plants, refineries, tanneries, mines, and numerous other operations. Living near these sites can cause serious adverse effects, such as cancer and developmental impairment in children.

We usually think of infectious diseases as the major global health problem. However, a new study by Kevin Chatham-Stephens and collaborators, published this month in Environmental Health Perspectives, shows that living near a toxic hot spot may pose a greater health threat than some of the most dangerous infectious diseases worldwide, such as malaria and tuberculosis. The study focuses on three countries: India, Indonesia, and the Philippines. The researchers estimate that more than eight million people in these countries suffered disease, disability, or death resulting from exposure to industrial contaminants in 2010. The toxic substances causing the majority of negative health effects are lead and hexavalent chromium, a carcinogen. The researchers conclude that “toxic waste sites are a major, and heretofore under-recognized, global health problem.”

The study results confirm the findings of the 2012 World’s Worst Pollution Problems report, which clearly shows the large extent of the global health impact of pollution. The image below (from the University of Heidelberg) is a global map showing pollution hotspots around the world. In this case, the hotspots were located in 2004 through detection of nitrogen dioxide, which is released into the atmosphere by the burning of fossil fuels in power plants, heavy industry and vehicles.
This guide to 3D printing materials is all you need to know about polymers to get the best results with 3D printing. This compact guide is intended to provide information for the optimal use of the polymer-based filaments we produce in the 3D printing industry. Through it, we hope to give you everything needed to choose the ideal material for a specific application. The guide lays out the specific characteristics of the most widely used polymers in the industry, offering a quick and comprehensive look at each 3D printing material.

“Here at TreeD Filaments we think that progress is easier to reach if we all share a bit of the knowledge we own and make it available to others.”

The material data refers to filaments produced by the 3D printing filament company TreeD. The data was drawn from industrial production and is commonly used in industrial fields. The guide does not consider materials that are unproven or unavailable on the industrial market. Some materials may contain small amounts of additives to enhance the 3D printing process. The following information therefore comes both from the data sheets of major plastics manufacturers and from our own manufacturing experience and practice, aided by top-notch research laboratories.

GUIDE FOR 3D PRINTING MATERIALS

PART 1 : THE FFF TECHNOLOGY AND ITS MATERIALS
- 1.1 Technologies
- 1.2 FFF Technology
- 1.3 FFF Materials

PART 2 : MATERIAL PROPERTIES
- 2.1 Physical
- 2.1.1 Density
- 2.1.2 Melt Flow Index
- 2.1.3 Mold Shrinkage
- 2.1.4 Tg
- 2.1.5 Hygroscopy
- 2.2 Mechanical
- 2.2.1 Resistance
- 2.2.2 Resilience
- 2.2.3 Hardness
- 2.2.4 Fatigue
- 2.3 Thermal
- 2.3.1 H.D.T.
PART 3 : A FILAMENT FOR EVERY NEED
- 3.1 Materials available
- 3.2 Differences between 3D printing and traditional molding
- 3.3 1.75 mm vs 2.85 mm filaments
- 3.4 How to identify a well-made filament

PART 4 : INSIGHTS
- 4.1 Bio materials
- 4.2 Composites
- 4.3 Polymer alloys and blends
- 4.4 Additives and charges
- 4.5 Multi-material molding process

PART 5 : CASE STUDY – PROSTHETIC MATERIAL

THIS GUIDE IS FOR PERSONAL USE ONLY – NOT FOR COMMERCIAL USE. TreeD Filaments is distributed under a Creative Commons Attribution – NonCommercial 4.0 International license.
There could be several reasons your data usage increases when you're not using the Internet. Here are the most common, along with recommended solutions to help you avoid excessive data consumption: - An Unsecured Wi-Fi Router – To prevent unauthorized use of your home network and consumption of your data, encrypt and password-protect your home network. - Oversharing of Wi-Fi Passwords – Only give your home network password to those you trust. If you think your network is being accessed by unauthorized users, change your password to improve your security. - Streaming Video Players (e.g. Apple TV, Roku, Amazon Fire TV, Chromecast, PlayStation, etc.) – Pause or stop the program you’re streaming in order to stop your video player from consuming data. Simply turning off your TV will NOT stop the player, which will continue to consume data unless it is specifically stopped (or paused). - File Backup Programs (e.g. iCloud, Dropbox and Google Drive) – Always set automatic backups to occur during non-peak usage hours (12 a.m.–6 a.m. CT). - Operating System Updates – Schedule these updates to occur during non-peak usage hours (12 a.m.– 6 a.m. CT). - File-Sharing Programs (e.g. BitTorrent) – Beware: these types of programs share files from your personal computer with other Internet users. As a result, you are unable to control when or how much of your data is consumed.
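The off-peak scheduling advice above can be put into practice on a Linux or macOS machine with cron. A minimal sketch, assuming a hypothetical rsync backup job; the paths and times are placeholders, not taken from the list above:

```shell
# Hypothetical crontab entry (add it with `crontab -e`).
# Field order: minute hour day-of-month month day-of-week command.
# This runs an rsync backup at 2:30 a.m. daily, inside the
# 12 a.m.-6 a.m. non-peak window mentioned above.
30 2 * * * rsync -a /home/user/Documents/ /mnt/backup/Documents/

# Confirm the entry was saved:
#   crontab -l
```

Consumer backup apps (iCloud, Dropbox, Google Drive) expose their own scheduling or pause settings instead; this fragment only covers backups you script yourself.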
West Nile Virus Prevention and Control Plan

This summer, the City is conducting a public education campaign regarding what New Yorkers can do to protect themselves and help control the West Nile virus. This campaign will include posters, brochures, radio spots and a Public Health Youth Corps to promote the elimination of standing water around homes, the reporting of dead birds, and the need for personal protection measures, such as covering one’s skin between dusk and dawn. Mayor Michael R. Bloomberg and Department of Health (DOH) Commissioner Thomas Frieden announced New York City’s West Nile Virus (WNV) prevention and control plan and urged New Yorkers to protect themselves against mosquito bites during the summer. "Each year, DOH’s program has been increasingly prevention oriented, with less reliance on pesticides," Commissioner Frieden said. "This progress has been made possible by several innovations, including state-of-the-art surveillance and laboratory testing. We can now geographically pinpoint viral activity in birds and mosquitoes before humans are infected and target prevention measures accordingly. And now that our laboratory is equipped to test mosquitoes and humans for the virus, we can turn around test results more quickly, an advance that is critical to preventing further disease. These strides, combined with the public’s participation in prevention, have put New York City at the forefront of West Nile virus control efforts." "In just three years since the introduction of West Nile virus into New York City, we have in place one of the most sophisticated monitoring and control programs in the world," said Mayor Bloomberg. "In 1999, when the virus was first discovered, there were four deaths; in 2000 there was one death; and in 2001 there were no deaths, and that’s the way we want to keep it. 
New Yorkers can help by reporting dead birds and significant areas of standing water, by removing objects that can contain standing water around their homes, and by taking personal measures to avoid mosquito bites." "I want to assure New Yorkers that we at the Department of Environmental Protection (DEP) have taken all prudent, environmentally sound steps to control mosquitoes at our facilities," said Commissioner Christopher O. Ward. "In addition to assisting DOH with the larviciding of catch basins, DEP has implemented two innovative programs at our wastewater treatment plants – the use of fish that eat mosquito larvae and the installation of Mosquito Magnets, the latest technology that attracts and captures flying mosquitoes without harming beneficial insects, such as ladybugs, bees, butterflies and moths. And, we survey all of our facilities to ensure that there are no areas or equipment that can hold standing water where mosquitoes might breed." To control the West Nile virus, DOH will again rely mainly on larval control (larviciding) and reducing areas of standing water. Beginning this month, larvicide is being applied to parks, ponds, lakes, unused swimming pools, and wastewater treatment plants, and beginning in early June, will be applied to more than 135,000 catch basins citywide. Larvicide will be reapplied as needed throughout the mosquito-breeding season. There are eleven things that New Yorkers can do around their homes to protect against West Nile Virus: - Make sure that doors and windows have tight-fitting screens. Repair or replace all screens that have tears or holes. - Remove all discarded tires from your property. - Dispose of tin cans, plastic containers, ceramic pots, or similar water-holding containers. - Make sure roof gutters drain properly. Clean clogged gutters in the spring or fall. - Clean and chlorinate swimming pools, outdoor saunas and hot tubs. If not in use, keep empty and covered. - Drain water from pool covers. 
- Change the water in birdbaths every 3 to 4 days. - Turn over plastic wading pools and wheelbarrows when not in use. - Eliminate any standing water that collects on your property. - Remind or help neighbors to eliminate breeding sites on their properties. - Some local hardware stores may carry a product called Mosquito Dunk. If these products are purchased for home use, careful reading of the directions is recommended. This year DOH will aggressively inspect and issue Notices of Violation for properties with significant areas of standing water that are deemed potentially harmful to public health. Report standing water to DOH at 1-877-WNV-4NYC (1-877-968-4692). Animals will be monitored for infection and illness, with a focus primarily on dead birds, especially crows, and domestic animals, particularly horses. Veterinarians are required to report any suspected animal cases with neurological illnesses to DOH. Dead bird reports are essential to analyze and track the location of the virus, and the public is urged to report dead birds to DOH at 1-877-WNV-4NYC (1-877-968-4692) or online at http://www.nyc.gov/html/doh/html/wnv/wnvbird.html.
You use your feet constantly, so it’s little wonder that there are many issues that can affect them. Your toes are also susceptible to problems like hammer, mallet, and claw toes. All of these conditions involve the toes becoming bent into odd positions. Now, a hammer toe is a toe that bends downward from the middle toe joint, causing the joint to rise up. A mallet toe, instead, bends down from the top joint of the toe. Both tend to primarily affect the second toe. Then you have what’s known as a claw toe.1 What Are Claw Toes? A claw toe bends up from the joint where your toes actually join your foot, and down from the middle joint. So your toe, or toes, appear claw-like and curled. To better picture this, imagine someone asked you to pull your toes into what you think “claws” look like. You’d naturally start curling them up and inward, almost scrunching them. Now, imagine that you can no longer straighten out your toes. That is claw toe. Claw toes often affect all four small toes at the same time, but the big toe is rarely affected.2 What Causes Claw Toes To Form? It’s easy to blame the shoes you wear for a condition where your toes have become curled and cramped. And ill-fitting shoes, worn over many years, can certainly be to blame. But claw toe is due to a weakening, or imbalance, in the foot muscles – and it is often caused by other, seemingly unrelated, conditions. Claw toe can also be caused by conditions like: - Blood sugar issues - Certain joint conditions - Excessive alcohol intake - Certain neurological issues - Injuries to the foot or ankle3 There are two types of claw toes – flexible and rigid. If your claw toes are flexible, the joint can still move, and you may be able to straighten it out manually. Splinting the affected toes, for example, could prove helpful. On the other hand, a rigid claw toe cannot be moved, and this can make it very painful. 
It can also affect your entire ability to move your foot properly.4 How To Avoid Claw Toes Not only are claw toes distressing, but they can also cause additional foot problems, like painful corns or calluses, due to the cramped position of your toes. So it’s best to prevent them from happening in the first place. If you have any underlying medical conditions that could put you at a higher risk for developing claw toes, it’s very important that you discuss and manage them with your doctor. When choosing shoes, look for low heels with a roomy toe-box area. Women are far more likely to get claw toes than men because they often wear heels. Small, pointy-toed shoes and high heels can affect the muscles and tendons of your foot over time. Your doctor may also recommend certain pads or insoles that can help to better cushion your feet. But sometimes, the causes of claw toes are unknown, and the condition may even be genetic.5 Are Claw Toes Treatable? Claw toes are treatable but, unfortunately, the treatment is not as simple as a DIY home regimen. You will need to talk to your doctor about the condition (and its causes) and have a foot specialist examine your feet in order to move forward. Now, when you first notice claw toes curling, it’s critical to seek professional help as soon as possible, while the joints still have some flexibility. That way, your doctor will also have some flexibility in their treatment options. Ultimately, surgery may be needed to permanently correct severe claw toes.6 Claw Toes: Final Thoughts If you think you may be experiencing the first signs of curling and cramped toes, it’s essential that you speak to your doctor as soon as possible. The causes of claw toe can be so varied, and they often seem unrelated, so the best thing you can do is go by visual cues. If it looks like something is not quite right, it’s time to discuss things with a physician. 
And, if you are suffering from a related medical condition, it’s important to ask your doctor what you can do to avoid claw toes from occurring.
"Where flowers bloom so does hope." (Lady Bird Johnson)

Yellow & blue wildflowers along a highway

It was just over 50 years ago that the Lady Bird Bill was signed. President Eisenhower had overseen the building of the Interstate Highway System. Now, President Johnson, with his wife leading the effort, would oversee the beautification of those highways. The Highway Beautification Act of 1965 called for the control of outdoor advertising, for the removal of junkyards along the highways and for "scenic enhancement and roadside development" (https://www.fhwa.dot.gov/infrastructure/beauty.cfm).

Daffodils along the Potomac River

Lady Bird concentrated not only on beautifying the nation's highways, but also its cities. Focusing on Washington DC, which in the 1960s was in a dilapidated state, she hoped to set an example for other cities in the United States. She believed that the state of America's cities was reflected in the state of the nation's minds. In January 1965, Lady Bird wrote in her diary: "Getting on the subject of beautification is like picking up a tangled skein of wool. All the threads are interwoven -- recreation and pollution and mental health, and the crime rate and rapid transit and highway beautification, and the war on poverty and parks -- national, state and local. It is hard to stitch the conversation into one straight line, because everything leads to something else." (http://www.pbs.org/ladybird/shattereddreams/shattereddreams_report.html)

Pink & red azaleas and white tulips in front of the Capitol courtesy http://about.usps.com/news/national-releases/2012/pr12_117.htm

The Beautification Act faced fierce opposition: the billboard industry, which had sprung up under Eisenhower, would have no part of it. The President and the First Lady, who made frequent road trips from their Texas ranch to Washington DC, had tired of the endless advertisements along America's highways. Lady Bird Johnson would not give up the fight. 
The First Lady was so involved in the beautification effort that Kansas Representative Robert Dole suggested an amendment to the bill that would have replaced the title "Secretary of Commerce" with "Lady Bird"; the amendment lost by a voice vote.

Cherry trees in blossom by the Jefferson Memorial

Robert Dole may have lost the battle, but Lady Bird won the war. Her husband, who had just gotten out of the hospital after gall bladder surgery, signed the bill on October 22, 1965. Commenting on his drive from Bethesda Naval Hospital to the White House along the George Washington Memorial Parkway, he said: "I saw Nature at its purest. The dogwoods had turned red. The maple leaves were scarlet and gold. And not one foot of it was marred by a single unsightly man-made obstruction -- no advertising signs, no junkyards. Well, doctors could prescribe no better medicine for me." (http://www.pbs.org/ladybird/shattereddreams/shattereddreams_report.html)

Rows of crab apple trees along a suburban road

For more information, read A White House Diary by Lady Bird Johnson at https://www.amazon.ca/White-House-Diary-Lady-Johnson/dp/0292717490.

Lady Bird Johnson circa 1963 courtesy http://tti.tamu.edu/about/hall-of-honor/inductees/yr2012/
This is part of IEEE Spectrum's Special Report: Why Mars? Why Now?

Tiny, potato-shaped Phobos doesn’t look like a place worth visiting. But the Soviet Union tried twice, with limited success, to reach this Martian moon, the larger of two circling Earth’s near neighbor. Now the Russians are working flat out on a third attempt, in the form of a sample-return probe called Phobos-Grunt (grunt is the Russian word for “soil”). At press time, technical problems seemed likely to push back the launch by two years, to 2011. Whenever it flies, it will be Russia’s most ambitious deep-space mission in years. Why Phobos? Measuring just 27 kilometers at its widest, the satellite orbits the Red Planet about three times a day, at an altitude of 9400 km. If you were standing on Mars’s equator, Phobos would appear about half as big as the sun. Planetary scientists have long debated the mysterious moon’s origin. They’ve also proposed Phobos as a landing site for a crewed mission to Mars. Phobos-Grunt should shed light on both matters. The spacecraft will ride on a Zenit rocket, a well-tested Soviet design. Also on board will be a life-sciences experiment from the Planetary Society and an orbiter built by the Chinese. Touching down on Phobos, the lander will use a clawlike manipulator to grab 15 to 20 samples of regolith, the loose surface material, and load them into a return capsule. The capsule will be rocketed back to Earth, leaving the lander to perform further studies. The value of a sample-return mission is obvious, says Francis Rocard, a planetologist participating in the Phobos-Grunt mission on behalf of the French space agency, CNES. ”What we can do in the lab is absolutely different from what we can do in situ,” Rocard says. 
”Sample-return missions always lead to discoveries.” Although other kinds of probes have yielded groundbreaking results—take the NASA Phoenix lander’s recent discovery of water ice on Mars—they can’t do the most complex analyses, such as radiometric dating, electron microscope scans, or precise isotopic measurements. For that, you need to bring the specimens home. Doing so also lets separate groups of scientists study the samples using different methods, a critical step for achieving widely accepted conclusions. Phobos-Grunt could, serendipitously, retrieve samples from Mars itself, Rocard adds. During the violent youth of the solar system, space rocks carpet-bombed the Martian surface for eons, and some of the debris from those impacts may have ended up on low-orbiting Phobos. Despite their value, sample-return missions are rare, largely because of their complexity. During the launch into orbit, the landing and sample retrieval, and the journey back to Earth, many things can go wrong. And when it comes to Mars, something usually does. Before its dissolution, the Soviet Union sent 19 spacecraft to Mars, including the two Phobos probes in 1988. Only four of them reached the Martian system, and none completed more than a fraction of its scientific work. The first and only post-Soviet attempt to explore the Red Planet—Mars 96—never made it out of low Earth orbit. Since then, Russian scientists have mostly looked on as U.S. and European missions produced a wealth of new data on the dramatic geological history of Mars. The Russians contributed instruments and experiments to these projects, but their own planetary exploration program remained grounded and short of funds.
But having said that, many Japanese, and in many instances the Japanese Government / Imperial Diet, still refuse to acknowledge many of the atrocities committed by the Imperial Army during WWII. Case in point: the Mayor of Osaka is in hot water over remarks he made about the Imperial Army's use of "Comfort Women". Some historians estimate that 200,000 women were rounded up from across Asia to work as comfort women for the Japanese Army. Other historians put that number in the tens of thousands, and say the women served of their own will. Japan formally apologized to the comfort women in 1993. Mr. Hashimoto told reporters in Osaka on Monday that they had served a useful purpose. “When soldiers are risking their lives by running through storms of bullets, and you want to give these emotionally charged soldiers a rest somewhere, it’s clear that you need a comfort women system,” he said. When pressed later, he insisted that brothels “were necessary at the time to maintain discipline in the army.” Other countries’ militaries used prostitutes, too, he said, and added that in any case there was no proof that the Japanese authorities had forced women into servitude. Instead, he put the women’s experiences down to “the tragedy of war,” and said surviving comfort women now deserved kindness from Japan. Mr. Hashimoto is a co-leader of the Japan Restoration Association, a populist party with 57 lawmakers in Parliament. His comments followed those of a string of Japanese politicians who have recently challenged what they say is a distorted view of Japan’s wartime history. Last month, Prime Minister Shinzo Abe seemed to question whether Japan was the aggressor during the war, saying the definition of “invasion” was relative. That was reported on May 13th, and there was some backlash, but yesterday, he basically doubled down on his feelings by saying many other countries had done this too. 
"The issue existed in the armed forces of the U.S.A., the UK, France, Germany and the former Soviet Union among others during World War II. It also existed in the armed forces of the Republic of Korea during the Korean War and the Vietnam War. If only Japan is blamed, because of the widely held view that the state authority of Japan was intentionally involved in the abduction and trafficking of women, I will have to inform you that this view is incorrect." He went on to question whether the Japanese Government was involved in the system of taking, holding, and enslaving the Comfort Women. His popularity numbers have 'slumped', but if OUR political setup is any bellwether for how idiot politicians get treated or reelected based on 'numbers', he'll eventually become Prime Minister. Given that last week or the week before Germany was touted as the Happiest Industrialized Country, and that PM Merkel has repeatedly referred to Germany as THE leader and Economic Engine of the EU whose opinion should therefore carry more weight, hearing defenses of past events and remarks from countries that once wanted to rule the world is a little off-putting. If these people are at or near the top, and they think like this, what does Hans the Plumber think in Hamburg, or Yoshi the Plumber in Osaka? Listen, I don't dislike the Japanese nor the German people as 'entities'. But I'm starting to question whether we used enough atom bombs during WWII.
Parakeets or kākāriki (little kākā) are slender green parrots with long tails. Like other parrots, they have broad, curved beaks and are zygodactyl – they have two toes pointing forward and two backwards. The ancestor of New Zealand’s kākāriki species came from New Caledonia within the last 500,000 years, and evolved into six species spread between the subtropical Kermadec Islands and the subantarctic islands. All are now endemic – found only in New Zealand. They belong to the Cyanoramphus genus, which also includes other South Pacific parakeets. Kākāriki make a chattering call as they fly and while feeding. They often hold food up to their mouth with one claw. In autumn and winter they search for food in flocks, but are more solitary during the breeding season. The Māori saying ‘ko te rua porete hai whakarite’, meaning ‘just like a nest of kākāriki’, was used to describe a group of people gossiping excitedly. The Antipodes Island parakeet is the largest. Males measure 32 centimetres from head to tail and weigh 130 grams. The smallest is the yellow-crowned parakeet. It is shorter – males are 25 centimetres long, females 23 centimetres – and much slighter. Males weigh just 50 grams and females 40 grams. The other parakeet species are within this range. Parakeets were reasonably common when European settlers arrived in the 1840s, and were shot for feathers to fill pillows. Now they are legally protected, but introduced rats, cats and stoats have taken a heavy toll. None are common, having disappeared from much of their former range. Two mainland species – the red-crowned and yellow-crowned parakeet – are also quite abundant on some predator-free islands, and the orange-fronted parakeet has been moved to others. Red-crowned parakeets (Cyanoramphus novaezelandiae) are green, with red from bill to crown, a thin red band past each eye and small red flank patches. 
They mainly eat the seeds of beech, tussock and flax, as well as fruits, flowers, leaves, shoots and invertebrates. They nest in trunks or crevices and burrows, laying about seven white eggs. Feeding and nesting close to the ground means they are very vulnerable to predators. They are now absent from most of the South Island, and sparse in larger forested areas of the Ruahine Range, central North Island, and Northland. They are still doing well on stoat-free Stewart Island and some smaller islands as far south as the Auckland Islands. Separate subspecies are found on the Kermadec and Chatham islands. Yellow-crowned parakeets (Cyanoramphus auriceps) are green, with yellow feathers on the crown meeting a red band above the bill. They live in conifer–broadleaf and beech forest as well as scrub, in both the North and South islands. They mainly feed in the treetops, eating scale insects, leaf miners and aphids, the buds or flowers of kānuka, rātā and beech, and beech seeds. They usually nest in holes in old trees, laying five or more white eggs. Some remain in South Island native forests and the larger forests of the central North Island. On the mainland they are more widespread and common than red-crowned parakeets, but on most predator-free islands the red-crowned species dominates. The least common species is the orange-fronted parakeet (Cyanoramphus malherbi), which is green with a yellow crown meeting orange above the bill. It once lived in the South Island, on Stewart Island and as far north as Hen (Taranga) Island, but is now only found in a few North Canterbury valleys of the Southern Alps. In 2001, its population – already low at 700 – fell below 200 when a bumper crop of beech seed (its main food) also triggered a huge increase in the number of rats, which prey on the eggs and chicks. 
Intensive efforts to prevent extinction were made: better predator control, cross-fostering eggs to other birds, and moving some birds to predator-free islands in Fiordland (Chalky Island), the Marlborough Sounds (Maud and Blumine islands) and Tūhua (Mayor Island) off the coast of Tauranga.

Offshore island parakeets

The mainland red- and yellow-crowned parakeets occur at sites as distant from New Zealand as the Auckland Islands. A subspecies of red-crowned parakeet is found in the Kermadec island group, and another on the Chatham Islands. The Chatham Islands have a local endemic species, Forbes’ parakeet (Cyanoramphus forbesi). Subantarctic Antipodes Island has two – the all-green Antipodes Island parakeet (Cyanoramphus unicolor) and the red-crowned Reischek’s parakeet (Cyanoramphus hochstetteri). Both live in a treeless habitat, nesting in metre-deep burrows at the base of tussock clumps. They feed on tussock leaves, seeds and other plant material. The Antipodes Island parakeet scavenges fat from dead chicks at penguin colonies, and sometimes seeks out and kills storm petrel chicks in their burrows.
EMU’s Stress Test

The promise of a hard currency union

Most Germans watched the creation of EMU with highly mixed feelings. Many economists and the highly respected Bundesbank warned that a stable monetary union was only possible as the crowning achievement of full European economic integration and a political union. On the other hand, many politicians, including Chancellor Kohl, in the early 1990s saw the need to establish a firm timetable for EMU so as to anchor the united Germany in Europe and help to appease fears of neighbouring countries, especially France, that Germany would turn its back on the EU. To reduce German concerns about the stability of an EMU that would come before, or even without, political union and ahead of a deeper integration of European economies, politicians constructed the ECB in the image of the Bundesbank (including putting it physically into Frankfurt) and reassured the German public that the euro would be as hard and stable as the D-Mark. Moreover, politicians promised that German taxpayers would never have to give financial support to other EMU countries in financial trouble. The principles of a hard currency union, where members would be held fully liable for their government finances, were enshrined in the Maastricht Treaty, on which EMU was based. In addition, governments concluded a “Stability and Growth Pact” (SGP), which was to ensure fiscal discipline of EMU members. The successful launch of EMU came at a time when interest rates globally fell to record lows and investors developed a strong appetite for fixed income securities. As a result, yields on government bonds of EMU participants converged at a low level, close to that of Germany. Initially, ECB officials expressed concern about the markets’ ignorance of the different default risks of EMU sovereign debt, but these concerns were ignored and subsequently no longer voiced by the ECB. 
In fact, the ECB’s uniform acceptance of EMU sovereign debt as collateral in its refinancing operations with banks, together with regulators’ uniform zero risk-weighting of sovereign debt on banks’ balance sheets and its exemption from prudential lending limits, promoted the convergence of yields. Eventually, ECB and government officials saw the narrowing of yield spreads as a welcome sign of successful financial integration in the EMU.

The sweet poison of low interest rates

However, the decline in yields created a false sense of security among EMU member governments and seduced them into careless borrowing. Article 125 of the EU Treaty, which prohibits the EU and individual countries from assuming the liabilities of other countries, was seen as irrelevant in practice and hence lacking any credibility. The violation and subsequent bending of the SGP by Germany and France in 2003–04 unmasked this instrument for fiscal discipline as a paper tiger and demolished the last remaining weak safeguards against over-borrowing. When the real estate price and credit bubble burst in 2007, it was only a matter of time until markets exposed the over-borrowing of some EMU member countries. The admission by the Greek government in the autumn of 2009 of a huge deficit and of false reporting of fiscal data for many years then triggered a public debt crisis in the euro area that has mutated into a full-blown crisis of EMU. Belatedly, markets are treating almost all euro area sovereign debt, with the exception of that of Germany, like private credit, i.e., subject to the possibility of default, and are demanding corresponding risk premia. But debt levels of almost all EMU countries are too high to allow default without potentially severe implications for the financial system. Hence, when private sector funding of maturing government debt and deficits dried up, the public sector had to step in. 
As long as only small, peripheral EMU member countries were cut off from private capital markets, financial support could be arranged from other, larger countries with more solid government finances. However, when markets began to avoid Italian sovereign debt, only the ECB had enough financial firepower to provide a credible backstop against a seizing-up of the Italian bond market.
Researchers from the University of Minnesota analyzed the gut bacteria of 514 women of Southeast Asian origin, divided into three groups: women still residing in Southeast Asia; women who had recently immigrated from Southeast Asia to the U.S.; and U.S.-born children of Southeast Asian immigrants. The analysis revealed a sudden decrease in the diversity and abundance of the recent immigrants’ essential gut bacteria responsible for digestion and immunity. Their native gut bacteria declined immediately and considerably after they moved to the U.S., while the numbers of non-native microbes, of the kinds usually found in European-American people, steadily increased. The effect was most pronounced among obese participants and the U.S.-born children of immigrants. The initially dominant gut bacterial genus, Prevotella, was replaced by Bacteroides. The obesity problem among Southeast Asians who have immigrated to the U.S. may stem from a change in diet: Asian diets include more vegetables and carbohydrates, while American diets are higher in fat. The variation in gut microbiomes may also reflect other factors, such as exposure to different antibiotics or consumption of water of different quality. According to the study, however, the gut bacterial variation and obesity are not directly linked. Further research on the bacterial strains found in Southeast Asian and American guts is needed to determine whether the strains common in Southeast Asians protect against obesity or whether those common in Americans may promote it. Future research may also lead to treatments that improve the health of obese immigrants; researchers may develop probiotics for immigrants that could compensate for the decline in their native gut microbes.
Oral health is an indicator of a person’s overall well-being because oral problems often give the first indication of a wider disorder. People generally avoid going to the dentist out of fear of pain. The common perception is that a dentist needs to be visited only in case of pain and discomfort. What people fail to realise is that visiting a dentist regularly can prevent that pain and discomfort completely, because any dental problem is nipped in the bud. So people should go for regular oral checkups at any dental clinic in Melbourne to avoid dental problems.

Common dental problems faced by people

Many people develop cavities at a very young age but keep ignoring them until the cavities become so deep that only fillings or a root canal procedure can save the situation. In extreme cases, the tooth needs to be extracted. Other problems that people face are yellow teeth and bad breath. These result from not brushing your teeth properly or brushing them only once a day. Tartar also develops in people who do not take proper care of their teeth, and it can lead to serious problems like cavities and gum infection. Nowadays, many people are particular about having white teeth so that they can smile confidently and create a good impression on others. Sometimes, pus forms in the gums due to infection caused by germs. This problem needs immediate attention; left untreated, such an infection can spread and cause serious complications. Some other common dental problems include sensitive teeth, abscessed teeth, teeth grinding, enamel erosion, broken teeth, wisdom tooth pain, swollen or bleeding gums, mouth lesions, dry mouth, and tongue and lip sores. You can visit any dental clinic in Melbourne to get a solution to all your dental problems.

How to maintain oral hygiene

A daily oral health routine should be followed. Special routines should be followed in case of diabetes or pregnancy.
Fluoride-based toothpaste should be used to prevent germ build-up and decay. Brush twice a day, floss once, and rinse your mouth after every meal as well as in the morning and at night. Eating right is equally important for oral health. Healthy food helps prevent germ build-up and gives the teeth the exercise they need. Junk food and soft drinks lead to an accumulation of sugar particles in the gaps between the teeth. This triggers tooth decay and cavity formation, which weaken the tooth and ultimately destroy it. Daily care and correct eating habits help in maintaining oral hygiene.

Finding a dentist in Melbourne

You can easily find a dental clinic in Melbourne by visiting websites like http://www.docklandsdentalstudio.com.au/. These dental clinics carry out general, restorative and cosmetic dentistry. X-rays, periodic checkups, tooth fillings, extractions, root canal treatment, placing of crowns and bridges, making dentures and braces, and teeth whitening and shaping are all done at these clinics. The Australian government provides assistance for basic dental services for children between the ages of twelve and seventeen. Adult public dental services are also provided to make dentistry affordable for all. Therefore, Australians need not worry about the expense of dental treatment.
Contextualizing Bill Gates: Addressing the Global Climate Disaster

Bill Gates’ How to Avoid a Climate Disaster is wide-ranging, fairly inclusive, and certainly accessible. It does not necessarily lay blame for climate change at the feet of any one country but puts pressure on corporations, their methods, and their strategic cultures. It has something of a plan of action and admits quite freely the need for government intervention to address climate change. But it goes to some trouble to single out China as the culprit, the one whose fast and unrestrained growth is threatening the rest of us. Gates’ implication is that growth there must be curbed or thoroughly overhauled. Thus, for Gates, the story of coal is dominated by the growth of Chinese consumption since 2000. There is, of course, a relevant historical story to be told. Coal has been with us for centuries, but its usage became pronounced in Europe and the United States around the 1760s. Industrial revolutions in Europe and the United States were centred on the absorption of coal, iron, select chemicals and minerals, water resources, and natural growths such as cotton or wool. Shortly before the 1890s, the economies of North America and Europe were joined by the military-centred heavy industrial economies of Russia and Japan, built on steel, railways, shipping, and preparations for expansive warfare. Perhaps a dozen or so industrialised nations were dependent on the expansive exploitation of raw materials, energy, and working skills that led the way into the enormous global depression of the 1930s. From that time this phalanx of industrial warriors has been joined by smaller East Asian economies, again adopting basic industrial models but following emerging technologies: South Korea with shipbuilding, Taiwan with microelectronics, then China, and now India and perhaps even Brazil.
But the massive achievements of the Chinese system have been almost smuggled into the story – far less expansionism in reality, almost no military conflict with other major powers, no formal imperialism, and leeway at its frontiers. Chinese military expenditure remains around one-quarter of that of the United States and is spent on equipment that is not as effective as that of the U.S. Yet the GDP of China, depending on the precise form of estimation, is now quite close to that of the United States. By some estimates, within the next decade, China will overtake the U.S. economy. The central point is that, invariably, classic industrialisation has intrinsically involved the use of coal, iron, and select chemicals, and this has forged the modern world of high environmental exploitation. The capitalist and technological underpinnings are undeniable and fundamental to any sensible visions of who we are, where we are, and where we are going. High per capita income growth means two inescapable things. First, developed economies have long ago passed the stage where manufacturing was an essential base of growth. Second, they have slower rates of GDP growth but fast rates of civil society growth, strong rhetoric of identity, and free-choice consumption by large wealthy populations increasingly dominated by middle-class lifestyles. These inescapable historical trends cannot be talked away in Paris, and they cannot be fought into the ground by Greenpeace, Friends of the Earth, or Extinction Rebellion. But it is possible to distract from them, and this appears to be the role of Gates’ book. We may look at the broad environmental meaning of this history by considering the ten largest economies at present. The richest of these have long shed their dependency on industrialisation – manufacturing sector output as a proportion of GDP is 12% or less in the U.S., UK, France, Canada, and Brazil; 16-20% in Japan, Italy, Germany, and India; but possibly 30% in China.
Of these, the two highest growth economies have been China and India, with annual per capita coal consumption of 3,055 and 729 cubic feet respectively; this compares with Germany at 3,132, the U.S. at 2,263, Japan at 1,648, and the UK at 625. China here looks high if not beyond the pale. But when we switch to kilograms of oil per capita, the United States stands at 6,804, the UK at 2,764, Germany at 3,818, and Japan at 3,429 – with China at 2,237 and India at 637. The newer, brasher, supposedly altogether cruder economies of both China and India are doing very well, despite operating at the lower or heavier end of the global manufacturing regime, using older technologies and producing massive producer goods. Indeed, one reason for high coal consumption in China is that a portion of its fast-growing, earlier manufacturing and mining enterprises absorbed coal more readily than oil, and this has been inherited as the coal-based energy infrastructure of the Belt and Road economy. Even more telling is the comparative figure which captures the major impact of all energy sources: per capita annual CO₂ emissions. The European Commission Joint Research Centre emissions database for 2019 gives the following estimates: 15.5 for the U.S., 9.1 for Japan, 8.5 for Germany, 5.5 for the UK, 4.8 for France, 15.7 for Canada, and 8.1 for China. This is not a result we might expect from the global rhetoric or from the Gates approach. Although the underlying data can be pieced together from his book, Gates prefers the allusion to China, thus his “China is the best example – its transportation emissions have doubled over the past decade and gone up a factor of 10 since 1990.” You must strain to find that Chinese transport emissions actually remain well below those of the U.S., the EU, and most low and middle-income nations, and that these figures are not in per capita terms.
When the historical dynamics are considered there is every reason to conclude that China is an exemplar of industrial development at its earlier phases. Yet in contrast to the Western and Japanese models, its energy usage is generally low in comparison to far more mature industrialised systems, and it is likely that the economy will reduce energy usage as a proportion of its GDP as it becomes more mature. China will move further towards services at a faster pace, and as a supplier of products, it will be replaced by industrial newcomers. Very clearly, the great wealth gainers in our world are wedded to the existing global pattern of energy and raw material usage by structural factors that cannot be shifted by rhetoric, regulation, or by governance. But they can be and are continually abstracted from in the service of a global distraction. Ironically, further falls in growth in older manufacturing systems clustered around the Atlantic and higher growth in the Pacific economies would for a transitional period almost certainly dampen the environmental crisis.
Browned on the outside, cold in the middle: the holiday turkey still wasn’t done. The 1960s pre-convection oven Andy Erickson’s mom had grappled with for years had produced yet another undercooked bird. Eyes wide with excitement, Andy’s dad, George Erickson, ransacked his toolbox, producing a steel tube that he jammed into the turkey before shoving it back in the oven. A well-roasted Butterball emerged minutes later, signaling a culinary triumph that hinged on a homespun version of a fascinating new device George called a “heat pipe.”

Turkey physics: Here's how it worked

George’s heat pipe held liquid and a wicklike material running from top to bottom. As the liquid heated up, it vaporized, condensing at the other end of the pipe and releasing heat into the turkey’s core before traveling back via the wick to restart the journey. This process delivered a consistent flow of heat to the turkey’s core, cooking it evenly.

Dinner to deep space: A truly 'universal' technology

Today, the heat pipe is one of the Laboratory’s most widely used products, with copious applications large and small, on domestic, industrial and extraterrestrial scales. In fact, your laptop likely uses a heat pipe to carry heat away from the microchips under your keyboard. More than 120,000 heat pipes are used along the 800-mile Trans-Alaska Pipeline to create additional ground cooling during winter. In this design, heat pipes facilitate a natural convection process in which heat is absorbed from soil under the pipeline and ejected into the atmosphere. This sustains the permafrost around the pipeline’s support pylons and prevents the line from sagging in warmer temperatures. Heat pipes also work well in zero-gravity environments and have been used to manage temperatures inside spacecraft, where heat generated by electronics can build up and damage equipment. In 1996, the space shuttle Endeavour carried three Laboratory heat pipes that operated at temperatures above 900°F.
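The evaporation-condensation cycle described above can be made concrete with a little arithmetic: nearly all of the heat a pipe moves is carried as the latent heat of its working fluid, not by conduction. The sketch below is purely illustrative; it assumes water as the working fluid and a round latent-heat value, not any Laboratory design data.

```python
# Illustrative sketch: heat transported by a heat pipe is roughly the
# mass flow of circulating working fluid times its latent heat of
# vaporization. Water at ~100 degrees C is assumed here.

H_VAP_WATER = 2.26e6  # latent heat of vaporization of water, J/kg


def heat_transported(mass_flow_kg_s: float, h_vap: float = H_VAP_WATER) -> float:
    """Watts carried by vapor evaporating at the hot end and condensing at the cold end."""
    return mass_flow_kg_s * h_vap


# Even a tiny circulation rate moves a great deal of heat:
watts = heat_transported(mass_flow_kg_s=1e-4)  # 0.1 g/s of water
print(f"{watts:.0f} W")
```

Because the latent heat is so large, even a trickle of circulating fluid outperforms a solid metal bar of the same size, which is the whole trick of the device.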
Over the past two decades, the Laboratory has also worked with NASA’s Marshall Space Flight Center in developing heat pipes to generate electricity and propulsion in spacecraft designed to journey to the solar system’s outer limits. Recently, the Laboratory pioneered a new Kilopower reactor, which leverages heat pipes to create a versatile power source in remote locations, like Mars. Early practical heat pipes used mostly low-temperature working fluids like water, but more recent practical applications, like Kilopower, use liquid metals such as sodium.

Science maverick's DIY — an innovation for the ages

While Lab physicists George Grover and Ted Cotter are largely credited with propelling the heat pipe into the science mainstream, Andy points out that it was his father’s hands-on production of the first prototype that formed the material basis for the device’s eventual widespread use. “Grover had the notion, but Dad put the concept into practice,” he said. “I have proof he built the first demonstrated heat pipes because the original blank is hanging on my wall.” Andy said he often marvels at his father’s industriousness, noting that unlike many of his Laboratory peers, George did not have a college degree. “Dad didn’t agree with math, and math didn’t agree with Dad,” he added. “He was not a good student in the traditional sense.” The lack of formal education certainly never stifled George’s creative ingenuity. Bob Reid, of Applied Engineering Technology (AET-1) at the time of this interview, said the elder Erickson was a science “maverick.” The story goes that George whipped up the first heat pipe virtually on the spot after Cotter casually mentioned the concept in passing one afternoon. Grover’s personal notebook, which Bob has on file, outlines the first experiment with a diagram drawn by hand. “George built the first heat pipe in less than a day and tested it using heat lamps,” Bob said.
“He went out, blew the glass, sealed it and put it all together, and Grover wasn’t even aware he was working on it. My understanding is that Cotter got nervous because he wasn’t supposed to share the information, but Grover was pleased the idea worked.”

Lighting the fuse for knowledge

Growing up around Chicago, George was what Andy calls a “basement bomber,” obsessing over homemade explosives and ham radios. He spent hours at home, mixing chemicals, building electronic contraptions and testing his creations in empty lots or at Lake Michigan. Those early infatuations eventually led George to a job at Argonne National Laboratory and finally Los Alamos in 1957. George was quick to instill those childhood passions in his own offspring. “When I was 6 years old my dad bought me Tenney L. Davis’s book, ‘The Chemistry of Powder and Explosives,’” Andy said. “He told me, ‘We will build anything you want as long as you demonstrate proficiency first. Then you can blow your own stuff up.’ He encouraged that inquisitiveness but wanted things done safely.” Not a typical gift for a 6-year-old, Andy admits. But in those days in the Los Alamos science community, it was “just normal.” “He was just Dad, and he encouraged thought,” Andy said. “He always had 25 ideas and projects going simultaneously.” George’s projects sometimes bore scientific fruit, like the time Andy saw him offhandedly build the Laboratory’s first carbon dioxide laser. A short time later, he used the same concept to rig up a 40-watt carbon dioxide laser with household parts for a neighbor’s kid’s science fair project. “They just set it up on the kitchen table,” Andy said. “When Dad turned it on, a fire brick fell over and the laser shot right through the wall, narrowly missing the phone line. Mom was always proud of that hole.”

The curious inventor

At the Laboratory, George was an out-of-the-box thinker with an innate ability to create from the metaphysical.
“He’s what we used to call an ‘inventor’ at the Lab,” Bob Reid said, referring to George’s remarkable dexterity. “George is the most effective inventor I’ve ever known.” According to Bob, it was George’s radical curiosity — the need to explore ideas at an intrepid pace — that made him a great innovator. “He’s one of a kind — that’s all there is to it,” Bob said. “I’m trying to grow my own crop of Georges to think his way and be curiosity-driven. We need more people in the Laboratory like George.” George retired from the Laboratory in 1992. For many years after, he reveled in lending his expertise to the Los Alamos community, running the local Kiwanis Club’s July Fourth fireworks show every summer. Editor’s note: George Erickson was 89 years old when this story was first published. He died on March 5, 2019. G. Andrew (Andy) Erickson was the director of Global Security Programs when this story was first published; he retired in 2022.
City streets and sidewalks in the United States have been engineered for decades to keep vehicle occupants and pedestrians safe. If streets include trees at all, they might be planted in small sidewalk pits, where, if constrained and with little water, they live only three to 10 years on average. Until recently, U.S. streets have also lacked cycle tracks – paths exclusively for bicycles between the road and the sidewalk, protected from cars by some type of barrier. Today there is growing support for bicycling in many U.S. cities for both commuting and recreation. Research is also showing that urban trees provide many benefits, from absorbing air pollutants to cooling neighborhoods. As an academic who has focused on the bicycle for 37 years, I am interested in helping planners integrate cycle tracks and trees into busy streets. Street design in the United States has been guided for decades by the American Association of State Highway and Transportation Officials, whose guidelines for developing bicycle facilities long excluded cycle tracks. Now the National Association of City Transportation Officials, the Federal Highway Administration and the American Association of State Highway and Transportation Officials have produced guidelines that support cycle tracks. But even these updated references do not specify how and where to plant trees in relation to cycle tracks and sidewalks. In a study newly published in the journal Cities and spotlighted in a podcast from the Harvard T. H. Chan School of Public Health, I worked with colleagues from the University of Sao Paulo to learn whether pedestrians and bicyclists on five cycle tracks in the Boston area liked having trees, where they preferred the trees to be placed and whether they thought the trees provided any benefits. We found that they liked having trees, preferably between the cycle track and the street. Such additions could greatly improve street environments for all users. 
Separating pedestrians and cyclists from cars

To assess views about cycle tracks and trees, we showed 836 pedestrians and bicyclists on five existing cycle tracks photomontages of the area they were using and asked them to rank whether they liked the images or not. The images included configurations such as a row of trees separating the cycle track from the street or trees in planters extending into the street between parked cars. We also asked how effectively they thought the trees a) blocked perceptions of traffic; b) lessened perceptions of pollution exposure; and c) made pedestrians and bicyclists feel cooler. Respondents strongly preferred photomontages that included trees. The most popular options were to have trees and bushes, or just trees, between the cycle track and the street. This is different from current U.S. cycle tracks, which typically are separated from moving cars by white plastic delineator posts, low concrete islands or a row of parallel parked cars. Though perception is not reality, respondents also stated that having trees and bushes between the cycle track and the street was the option that best blocked their view of traffic, lessened their feeling of being exposed to pollution and made them feel cooler.

Factoring in climate change

Many city leaders are looking for ways to combat climate change, such as reducing the number of cars on the road. These goals should be factored into cycle track design. For example, highway engineers should ensure that cycle tracks are wide enough for bicyclists to pass one another, including riders of wide cargo bikes, bikes carrying children or the newer three-wheeled electric bikes used by seniors. Climate change is increasing stress on street trees, but better street design can help trees flourish. Planting trees in continuous earth strips, instead of isolated wells in the sidewalk, would enable their roots to share nutrients, improving the trees’ chances of reaching maturity and their ability to cool the street.
Drought weakens trees and makes them more likely to lose limbs or be uprooted. Street drainage systems could be redesigned to direct water to trees’ root systems. Hollow sidewalk benches could store water routed down from rooftops. If these benches had removable caps, public works departments could add antibacterial or anti-mosquito agents to the water. Gray water could also be piped to underground holding tanks to replenish water supplies for trees.

Thinking more broadly about street design

The central argument against adding cycle tracks with trees to urban streets asserts that cities need this space for parallel-parked cars. But cars do not have to be stored on the side of the road. They can also be stored vertically – for example, in garages, or stacked in mechanical racks on urban lots. Parking garages could increase occupancy by selling deeded parking spaces to residents who live nearby. Those spaces could provide car owners with a benefit the street lacks: outlets for charging electric vehicles, which rarely are available to people who rent apartments. Bus rapid transit proponents might suggest that the best use of street width is dedicated bus lanes, not cycle tracks or street trees. But all of these options can coexist. For example, a design could feature a sidewalk, then a cycle track, then street trees planted between the cycle track and the bus lane and in island bus stops. The trees would reduce heat island effects from the expansive hardscape of the bus lane, and bus riders would have a better view. More urban trees could lead to more tree limbs knocking down power lines during storms. The ultimate solution to this problem could be burying power lines to protect them from high winds and ice storms. This costs money, but earlier solutions included only the conduit for the buried power lines. When digging trenches to bury power lines, a parallel trench could be dug to bury pipes that would supply water and nutrients to the trees.
The trees would then grow to maturity, cooling the city and reducing the need for air conditioning.

Climate street guidelines for US cities

To steer U.S. cities toward this kind of greener streetscape, urban scholars and planning experts need to develop what I call climate street guidelines. Such standards would offer design guidance that focuses on providing physiological and psychological benefits to all street users. Developers in the United States have been coaxed into green thinking through tax credits, expedited review and permitting, design/height bonuses, fee reductions and waivers, revolving loan funds and the U.S. Green Building Council’s Leadership in Energy and Environmental Design rating system. It is time to put equal effort into designing green streets for bicyclists, pedestrians, bus riders and residents who live on transit routes, as well as for drivers.
Plans are underway in the United Kingdom to create a digital pound that would offer a stable alternative to bitcoin or ether. The Bank of England is leading the charge on this project, which could see a digital pound launched within the next few years. The central bank and the UK Treasury announced on Monday that an official digital currency is “likely to be needed in the future.” They added that the government is currently exploring the possibility of issuing a digital currency and that there are many benefits to doing so. Among these benefits are increased efficiency and reduced costs.

Could a digital pound be the new way to pay?

This is something that UK finance minister Jeremy Hunt is exploring, as he believes that it would be a more trusted, accessible and easy way to use money. However, ensuring the financial stability of the country is always a top priority. The Bank of England and the Treasury are considering launching a digital pound, nicknamed “Britcoin,” in the next few years. Central banks around the world are considering issuing their own digital currencies. Unlike the cryptocurrencies currently available, these coins would have official backing, which would give them a stable value and mean they could be used for everyday spending. In the United Kingdom, £10 of digital pounds would always be worth the same as £10 in cash. The Bank of England would provide the foundational public infrastructure — or a “core ledger” — while private companies would issue digital wallets that could be accessed via smartphones or smartcards. This would allow people to use their phones or smartcards to pay for goods and services, without having to carry any cash around with them. Central bank digital currencies could make online spending more convenient, increase cross-border transactions and boost competition among providers of digital financial assets.

What if there was no digital pound?
In a speech in November, Sir Jon Cunliffe, deputy governor of the Bank of England for financial stability, said that without a digital pound, a few big players could “dominate and perhaps control innovation in payment services.” This could have a negative impact on the economy, as it would be harder for new businesses to enter the market.
In June 2000, a press conference was held in the White House to announce an extraordinary feat: the completion of a draft of the human genome. For the first time, researchers had read all 3 billion of the chemical “letters” that make up a human DNA molecule, which would allow geneticists to investigate how that chemical sequence codes for a human being. In his remarks, President Bill Clinton recalled the moment nearly 50 years prior when Francis Crick and James Watson first discovered the double-helix structure of DNA. “How far we have come since that day,” Clinton said. But the president’s comment applies equally well to what has happened in the ensuing years. In little more than a decade, the cost of sequencing one human genome has dropped from hundreds of millions of dollars to just a few thousand dollars. Instead of taking years to sequence a single human genome, it now takes about 10 days to sequence a half dozen at a time using a high-capacity sequencing machine. Scientists have built rich catalogs of genomes from people around the world and have studied the genomes of individuals suffering from diseases; they are also making inventories of the genomes of microbes, plants, and animals. Sequencing is no longer something only wealthy companies and international consortia can afford to do. Now, thousands of benchtop sequencers sit in laboratories and hospitals across the globe. DNA sequencing is on the path to becoming an everyday tool in life-science research and medicine. Institutions such as the Mayo Clinic and the New York Genome Center are beginning to sequence patients’ genomes in order to customize care according to their genetics. For example, sequencing can be used in the diagnosis and treatment of cancer, because the pattern of genetic abnormalities in a tumor can suggest a particular course of action, such as a certain chemotherapy drug and the appropriate dose. 
Many doctors hope that this kind of personalized medicine will lead to substantially improved outcomes and lower health-care costs. But while much of the attention is focused on sequencing, that’s just the first step. A DNA sequencer doesn’t produce a complete genome that researchers can read like a book, nor does it highlight the most important stretches of the vast sequence. Instead, it generates something like an enormous stack of shredded newspapers, without any organization of the fragments. The stack is far too large to deal with manually, so the problem of sifting through all the fragments is delegated to computer programs. A sequencer, like a computer, is useless without software. But there’s the catch. As sequencing machines improve and appear in more laboratories, the total computing burden is growing. It’s a problem that threatens to hold back this revolutionary technology. Computing, not sequencing, is now the slower and more costly aspect of genomics research. Consider this: Between 2008 and 2013, the performance of a single DNA sequencer increased about three- to fivefold per year. Using Moore’s Law as a benchmark, we might estimate that computer processors basically doubled in speed every two years over that same period. Sequencers are improving at a faster rate than computers are. Something must be done now, or else we’ll need to put vital research on hold while the necessary computational techniques catch up—or are invented. How can we help scientists and doctors cope with the onslaught of data? This is a hot question among researchers in computational genomics, and there is no definitive answer yet. What is clear is that it will involve both better algorithms and a renewed focus on such “big data” approaches as parallelization, distributed data storage, fault tolerance, and economies of scale. In our own research, we’ve adapted tools and techniques used in text compression to create algorithms that can better package reams of genomic data. 
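The sequencer-versus-processor comparison above can be made concrete with a quick compound-growth calculation. The figures below follow the article: a 4x-per-year sequencer improvement is an assumed midpoint of the stated 3- to 5-fold range, and processors are assumed to double every two years per Moore's Law.

```python
# Compound five years (2008-2013) of improvement for sequencers vs. CPUs.
# The 4x/year sequencer figure is an assumed midpoint of the 3-5x range.
years = 5
seq_multiplier = 4.0 ** years        # sequencer throughput growth
cpu_multiplier = 2.0 ** (years / 2)  # Moore's-law doubling every two years

print(f"sequencers: {seq_multiplier:.0f}x")                 # 1024x
print(f"processors: {cpu_multiplier:.1f}x")                 # 5.7x
print(f"gap: {seq_multiplier / cpu_multiplier:.0f}x")
```

Even under these rough assumptions, per-machine data output outruns per-processor compute by roughly two orders of magnitude over five years, which is why the bottleneck migrates from the sequencer to the software.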
And to search through that information, we’ve borrowed a cloud computing model from companies that know their way around big data—companies like Google, Amazon.com, and Facebook. Think of a DNA molecule as a string of beads. Each bead is one of four different nucleotides: adenine, thymine, cytosine, or guanine, which biologists refer to by the letters A, T, C, and G. Strings of these nucleotides encode the building instructions and control switches for proteins and other molecules that do the work of maintaining life. A specific string of nucleotides that encodes the instructions for a single protein is called a gene. Your body has about 22 000 genes that collectively determine your genetic makeup—including your eye color, body structure, susceptibility to diseases, and even some aspects of your personality. Thus, many of an organism’s traits, abilities, and vulnerabilities hinge on the exact sequence of letters that make up the organism’s DNA molecule. For instance, if we know your unique DNA sequence, we can look up information about what diseases you’re predisposed to, or how you will respond to certain medicines. The Human Genome Project’s goal was to sequence the 3 billion letters that make up the genome of a human being. Because humans are more than 99 percent genetically identical, this first genome has been used as a “reference” to guide future analyses. A larger, ongoing project is the 1000 Genomes Project, aimed at compiling a more comprehensive picture of how genomes vary among individuals and ethnic groups. For the U.S. National Institutes of Health’s Cancer Genome Atlas, researchers are sequencing samples from more than 20 different types of tumors to study how the mutated genomes present in cancer cells differ from normal genomes, and how they vary among different types of cancer. Ideally, a DNA sequencer would simply take a biological sample and churn out, in order, the complete nucleotide sequence of the DNA molecule contained therein. 
At the moment, though, no sequencing technology is capable of this. Instead, modern sequencers produce a vast number of short strings of letters from the DNA. Each string is called a sequencing read, or "read" for short. A modern sequencer produces reads that are a few hundred or perhaps a few thousand nucleotides long. The aggregate of the millions of reads generated by the sequencer covers the person's entire genome many times over. For example, the HiSeq 2000 machine, made by the San Diego–based biotech company Illumina, is one of the most powerful sequencers available. It can sequence roughly 600 billion nucleotides in about a week—in the form of 6 billion reads of 100 nucleotides each. For comparison, an entire human genome contains 3 billion nucleotides. And the human genome isn't a particularly long one—a pine tree genome has 24 billion nucleotides. Thus our first daunting task upon receiving the reads is to stitch them together into longer, more interpretable units, such as genes. For an organism that has never been fully sequenced before, like the pine tree, it's a massive challenge to assemble the genome from scratch, or de novo. How can we assemble a genome for the first time if we have no knowledge of what the finished product should look like? Imagine taking 100 copies of the Charles Dickens novel A Tale of Two Cities and dropping them all into a paper shredder, yielding a huge number of snippets the size of fortune-cookie slips. The first step to reassembling the novel would be to find snippets that overlap: "It was the best" and "the best of times," for example. A de novo assembly algorithm for DNA data does something analogous. It finds reads whose sequences "overlap" and records those overlaps in a huge diagram called an assembly graph. For a large genome, this graph can occupy many terabytes of RAM, and completing the genome sequence can require weeks or months of computation on a world-class supercomputer.
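The shredded-novel analogy can be made concrete with a toy sketch in Python. Everything here is invented for illustration—the reads, the helper names, and the minimum-overlap cutoff—and real assemblers use indexed data structures rather than this all-pairs scan, but the shape of the computation is the same: find suffix-prefix overlaps, record them as graph edges.

```python
def suffix_prefix_overlap(a, b, min_len=3):
    """Length of the longest suffix of read `a` that matches a prefix of read `b`."""
    start = 0
    while True:
        # Look for the next place where b's first min_len letters appear in a.
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0  # no overlap of at least min_len
        # If the rest of a from this point is a prefix of b, we found the overlap.
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def overlap_graph(reads, min_len=3):
    """Record every suffix-prefix overlap of at least min_len between read pairs."""
    edges = {}
    for a in reads:
        for b in reads:
            if a != b:
                olen = suffix_prefix_overlap(a, b, min_len)
                if olen >= min_len:
                    edges[(a, b)] = olen
    return edges

# Three "snippets" of the Dickens opening, overlapping like shredded strips.
reads = ["ITWASTHEBEST", "THEBESTOFTIMES", "OFTIMESITWAS"]
print(overlap_graph(reads, min_len=5))
```

Following the overlap edges in order reconstructs the original text; a genome-scale version of this graph, built from billions of reads, is what can occupy terabytes of RAM.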
We have an easier job when we're studying a species whose genome has already been assembled. If we're examining mutations in human cancer genomes, for example, we can download the previously assembled human genome from the National Institutes of Health website and use it as a reference. For each read, we find the point where that string of letters best matches the genome, using an approximate matching algorithm; the process is similar to how your spell-check program finds the correct spelling based on your misspelled word. The place where the read sequence most closely matches the reference sequence is our best guess as to where it belongs. Thanks to the Human Genome Project and similar projects for other species (mouse, fruit fly, chicken, cow, and thousands of microbial species, for example), many assembled genomes are available for use as references for this task, which is called read alignment. In general, these reference genomes are far too long for brute-force scanning algorithms—those that simply start at the beginning of the sequence and work their way through the entire genome, looking for the part that best matches the read in question. Instead, researchers have lately focused on building an effective genome index, which allows them to rapidly home in on only those portions of the reference genome that contain good matches. Just like an index at the back of a book, a genome index is a list of all the places in the genome where a certain string of letters appears—for example, the roughly 697 000 occurrences of the sequence "GATTACA" in the human genome. One powerful recent invention is a genome index based on the Burrows-Wheeler transform—an algorithm originally developed for text compression. This efficient index allows us to align many thousands of 100-nucleotide reads per second.
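A book-style genome index can be sketched as a simple k-mer lookup table. This is an illustrative toy (the reference string is invented, and it does exact matching only); production aligners handle mismatches and use compressed indexes, but the jump-to-candidates-then-verify pattern is the same.

```python
from collections import defaultdict

def build_index(genome, k=7):
    """Map every length-k substring of the genome to its list of positions."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)
    return index

def align(read, genome, index, k=7):
    """Use the read's first k letters to jump to candidate spots, then verify."""
    candidates = index.get(read[:k], [])
    return [pos for pos in candidates if genome.startswith(read, pos)]

genome = "CCGATTACATTGATTACAGG"  # invented toy reference
idx = build_index(genome)
print(align("GATTACA", genome, idx))    # both occurrences
print(align("GATTACAGG", genome, idx))  # only the second spot extends to a full match
```

The index lookup replaces a scan of the whole reference with a jump straight to the handful of plausible positions—the same reason the real human-genome index can report all 697 000 "GATTACA" sites without reading 3 billion letters.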
The algorithm works by carefully changing the order of a sequence of letters into one that's more compressible—and doing so in a way that's reversible. So, for example, let's say the reordering produces a run of 21 consecutive As in your string of As, Ts, Gs, and Cs. That part of the string could then be compressed into A21, thus using 3 characters instead of 21—a sevenfold savings. By compiling a genome index of sequences reordered in this way, the search algorithm can scroll through the entire genome much more quickly, looking for a read's best match. Once we have the best algorithms and data structures, we arrive at the next massive challenge: scaling up, and getting many computers to divvy up the work of parsing a genome. The roughly 2000 sequencing instruments in labs and hospitals around the world can collectively sequence 15 quadrillion nucleotides per year, which equals about 15 petabytes of compressed genetic data. A petabyte is 2^50 bytes, or in round numbers, 1000 terabytes. To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall. And with sequencing capacity increasing at a rate of around three- to fivefold per year, next year the stack would be around 6 to 10 miles tall. At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station. Clearly, we're dealing with a data deluge in genomics. This data is vital for the advancement of biology and medicine, but storing, analyzing, and sharing such vast quantities is an immense challenge. Still, it's not an unprecedented one: Other fields, notably high-energy physics and astronomy, have already encountered this problem. For example, the four main detectors at the Large Hadron Collider produced around 13 petabytes of data in 2010, and when the Large Synoptic Survey Telescope comes on line in 2016, it's anticipated to produce around 10 petabytes per year.
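The reorder-then-compress idea behind the Burrows-Wheeler transform can be sketched in a few lines of Python. This is the textbook construction, shown only to make the run-length "A21" example concrete; real indexes never materialize all the rotations, and they add auxiliary structures so the transform can be searched as well as reversed.

```python
def bwt(s):
    """Burrows-Wheeler transform: sort all rotations of s, keep the last column."""
    s = s + "$"  # unique sentinel marks the end of the string (makes the BWT reversible)
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def rle(s):
    """Run-length encode: a run of 21 As becomes 'A21', 3 characters instead of 21."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(s[i] + (str(j - i) if j - i > 1 else ""))
        i = j
    return "".join(out)

print(bwt("GATTACA"))  # a reversible reordering of the same letters
print(rle("A" * 21))   # the sevenfold savings described in the text
```

The transform only permutes the letters (plus the sentinel), but on repetitive genomic text the permutation tends to gather identical letters into long runs, which is exactly what run-length encoding rewards.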
The crucial difference is that these physics and astronomy data deluges pour forth from just a few major instruments. The DNA data deluge comes from thousands—and soon, tens of thousands—of sources. After all, almost any life-science laboratory can now afford to own and operate a sequencer. Major centers like the Broad Institute, in Cambridge, Mass., or BGI, in Shenzhen, China, have more than 100 high-capacity instruments on site, but smaller institutions like the Malaysia Genome Institute or the International Livestock Research Institute, in Kenya, also have their own instruments. In all these facilities, researchers are struggling to analyze the sequencing data for a wide variety of applications, such as investigations into human health and disease, plant and animal breeding, and monitoring microbial ecology and pathogen outbreaks. The only hope for these overwhelmed researchers lies in advanced computing technologies. Genomics researchers are investigating a range of options, including very powerful but conventional servers, specialized hardware, and cloud computing. Each has strengths and weaknesses depending on the specific application and analysis. But for many, cloud computing is increasingly the best option, because it allows the close integration of powerful computational resources with extremely high-volume data storage. One promising solution comes from Google, a company with plenty of experience searching vast troves of data. Google doesn’t regularly release information on how much data it processes, but in May 2010 it reported searching 946 petabytes per month. Today, three years later, it’s safe to assume that figure is at least an order of magnitude larger. To mine the Internet, Google developed a parallel computing framework called MapReduce. Outside of Google, an open-source alternative to MapReduce called Apache Hadoop is emerging as a standard platform for analyzing huge data sets in genomics and other fields. 
Hadoop’s two main advantages are its programming model, which harnesses the power of many computers in tandem, and its smart integration of storage and computational power. While Hadoop and MapReduce are simple by design, their ability to coordinate the activity of many computers makes them powerful. Essentially, they divide a large computational task into small pieces that are distributed to many computers across the network. Those computers perform their jobs (the “map” step), and then communicate with each other to aggregate the results (the “reduce” step). This process can be repeated many times over, and the repetition of computation and aggregation steps quickly produces results. This framework is much more powerful than basic “queue system” software packages like the widely used HTCondor and Grid Engine. These systems also divide up large tasks among many computers but make no provision for the computers to exchange information. Hadoop has another advantage: It uses the computer cluster’s computational nodes for data storage as well. This means that Hadoop can often execute programs on the nodes themselves, thus moving the code to the data rather than having to access data in a comparatively slow file server. This structure also brings a reliability bonus, even on off-the-shelf servers and disks. Google created MapReduce to run in data centers packed with cheap commodity computers, some of which were expected to fail every day, so fault tolerance was built into the system. When a data set is loaded into the program, it’s split up into manageable chunks, and each chunk is replicated and sent to several computer nodes. If one fails, the others go on. This model also works well in a flexible setting such as the Amazon Elastic Compute Cloud, where nodes can be provisioned for an application as needed, on the fly, and leased on a per-hour basis. 
We’re still a long way from having anything as powerful as a Web search engine for sequencing data, but our research groups are trying to exploit what we already know about cloud computing and text indexing to make vast sequencing data archives more usable. Right now, agencies like the National Institutes of Health maintain public archives containing petabytes of genetic data. But without easy search methods, such databases are significantly underused, and all that valuable data is essentially dead. We need to develop tools that make each archive a useful living entity the way that Google makes the Web a useful living entity. If we can make these archives more searchable, we will empower researchers to pose scientific questions over much larger collections of data, enabling greater insights. This year, genomics researchers may reach a remarkable milestone: the US $1000 genome. Experts have long said that when the cost of sequencing a human genome falls to that level, the technology can be used routinely in biological research and medical care. The high-capacity Illumina systems are nearing this price point, as is the Ion Proton machine from San Diego–based Life Technologies. Such sequencing capacity is already enabling projects that can reinvent major sectors of technology, science, and medicine. For example, the U.S. Department of Energy recently launched KBase, a knowledge base for biofuel research that integrates hundreds of terabytes of genomic and other biological data inside its own compute cloud. KBase will use state-of-the-art machine learning and data-mining techniques to build predictive models of how genome variations influence the growth of plants and microbes in different environments. Researchers can then select which plants and microbes should be bred or genetically engineered to become more robust, or to produce more usable oils. This scenario is just a hint of what is to come if we can figure out how to channel the data deluge in genomics. 
As sequencing machines spew out floods of As, Ts, Cs, and Gs, software and hardware will determine how much we all benefit. About the Authors Michael C. Schatz is an assistant professor of quantitative biology at Cold Spring Harbor Laboratory, in New York state. Schatz knows firsthand how much genetics research depends on computer science. His first postcollege job was working as a software engineer in the trenches of a major genomics research institute, where he wrote programs for analyzing all the genetic data generated by high-tech sequencing machines. Ben Langmead began collaborating with Schatz when the two were Ph.D. students in bioinformatics and computational biology. Now an assistant professor of computer science at Johns Hopkins University, Langmead relishes the contributions his programs can make to medical and scientific research. “When I’m helping life scientists design an experiment so the data they get can be analyzed, it answers the ‘What’s the point?’ question,” he says.
Report on Colima (Mexico) — February 1995 Bulletin of the Global Volcanism Network, vol. 20, no. 2 (February 1995) Managing Editor: Richard Wunderman. Colima (Mexico) Summit temperatures, gas measurements, and July 1994 explosion crater description Please cite this report as: Global Volcanism Program, 1995. Report on Colima (Mexico). In: Wunderman, R. (ed.), Bulletin of the Global Volcanism Network, 20:2. Smithsonian Institution. https://doi.org/10.5479/si.GVP.BGVN199502-341040. 19.514°N, 103.62°W; summit elev. 3850 m All times are local (unless otherwise noted) Scientists from the geologic group of CUICT (Centro Universitario de Investigaciones en Ciencias de la Tierra), RESCO (Red Sismologica Telemetrica de Colima), and the Colima Volcano Observatory at the University of Colima visited the summit on 4 and 15 February 1995. During a previous ascent on 20 May 1994, temperature measurements of fumaroles were taken at 21 locations in two areas, E and NE of the summit; values were in the 274-304°C range. A gas sampling experiment (SO2 and CO2) used an aspirating pump (Matheson-Kitagawa toxic gas detector system) with 100-ml precision detector tubes and 1-5 minute collection times. SO2 values of 200 ppm were measured at both sites; CO2 was 0.2 and 0.3%, respectively. Low temperatures (<60°C) were required at the gas sampling sites. A second ascent later in 1994 was not undertaken because of increased seismicity following a phreatic explosion in July. During February 1995, the group visited the same points as in May 1994, as well as the bottom of the July 1994 crater. On 4 February, fumarole temperatures measured at 17 locations in the E summit area averaged 372°C, with a high value of 504°C. Temperatures in the NE sector averaged 398°C. Gas sampling (HF, HCl, SO2, and CO2) was again conducted at almost the same sites.
Values in the E and NE sectors, respectively, were as follows for each gas: HF, 17.4 and 78.3 ppm; HCl, 8.0 and 63.3 ppm; SO2, 180 and 460 ppm; CO2, 0.25 and 0.85%. On 15 February, temperatures taken inside the E rim of the July 1994 crater averaged 230°C. A survey showed the crater to have a rim diameter of 135 m, a depth of 40 m, a floor diameter of 37 m, and an internal slope of 30° on the E side (figure 21).
Figure 21. Sketch map and topographic profiles of the summit of Colima, February 1995. Courtesy of Andrea Csillag Tirelli, Universidad de Colima.
A flight was made during clear weather on 11 February with a correlation spectrometer (COSPEC) to measure the SO2 flux. Ten traverses at 3,050 m altitude were made between two navigational benchmarks using the aircraft global positioning system (GPS), assuming that the traverses were perpendicular to the plume axis. Wind speed and direction were computed using GPS at two points beneath the plume as well as before and after the traverses above the summit. Wind direction was 289° with an average velocity of 10.9 m/s. The SO2 flux was determined to be 386 ± 160 metric tons/day, and was calculated according to instructions provided by S. Williams during a June 1994 workshop at UNAM in México City. Geologic Background. The Colima volcanic complex is the most prominent volcanic center of the western Mexican Volcanic Belt. It consists of two southward-younging volcanoes, Nevado de Colima (the 4320 m high point of the complex) on the north and the 3850-m-high historically active Volcán de Colima at the south. A group of cinder cones of late-Pleistocene age is located on the floor of the Colima graben west and east of the Colima complex. Volcán de Colima (also known as Volcán Fuego) is a youthful stratovolcano constructed within a 5-km-wide caldera, breached to the south, that has been the source of large debris avalanches.
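The report does not reproduce the flux calculation itself, but the arithmetic behind a COSPEC-style estimate can be sketched as follows. Everything here except the 10.9 m/s wind speed is hypothetical: the traverse burdens and their spacing are invented numbers, and the ppm-m-to-mass constant is an approximate, temperature-dependent assumption, not a value from the workshop instructions cited in the text.

```python
# Approximate SO2 mass in a 1 m^2 column per ppm-m of burden, near 20°C (assumed value).
PPM_M_TO_KG_PER_M2 = 2.66e-6

def so2_flux_tonnes_per_day(burdens_ppm_m, step_m, wind_speed_m_s):
    """Integrate the overhead SO2 burden across the plume, then multiply by wind speed."""
    # Cross-plume integral of the burden: ppm-m readings spaced step_m apart.
    cross_section = sum(b * step_m for b in burdens_ppm_m)
    kg_per_s = cross_section * PPM_M_TO_KG_PER_M2 * wind_speed_m_s
    return kg_per_s * 86400 / 1000  # kg/s -> metric tons/day

# Hypothetical traverse: five burden readings 200 m apart; 10.9 m/s wind from the report.
print(round(so2_flux_tonnes_per_day([20, 60, 90, 55, 15], 200.0, 10.9), 1))
```

The reported 386 ± 160 t/day figure would come from averaging such estimates over the ten traverses, with the wind term contributing much of the quoted uncertainty.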
Major slope failures have occurred repeatedly from both the Nevado and Colima cones, and have produced a thick apron of debris-avalanche deposits on three sides of the complex. Frequent historical eruptions date back to the 16th century. Occasional major explosive eruptions (most recently in 1913) have destroyed the summit and left a deep, steep-sided crater that was slowly refilled and then overtopped by lava dome growth. Information Contacts: Carlos Navarro, Juan-José Ramirez, Abel Cortes, and Juan-Carlos Gavilanes, Colima Volcano Observatory and CUICT, Universidad de Colima; Andrea Csillag Tirelli, RESCO-CICBAS, Universidad de Colima.
2003 invasion of Iraq
From Wikipedia, the free encyclopedia
- This article regards the 2003 invasion of Iraq. For events after May 1, 2003, see Iraq War, and Post-invasion Iraq, 2003–2006
[Image: Black Hawk Helicopters from the 2nd Brigade, 101st Airborne Division (Air Assault) move into Iraq during the opening stages of the 2003 Invasion]
The 2003 invasion of Iraq, codenamed "Operation Iraqi Freedom" by the United States, officially began on March 20, 2003. The stated objective of the invasion was "to disarm Iraq of weapons of mass destruction, to end Saddam Hussein's support for terrorism, and to free the Iraqi people". In preparation, 100,000 US troops were assembled in Kuwait by February 18. The United States supplied the majority of the invading forces. Supporters of the invasion included a coalition force of more than 40 countries, and Kurds in northern Iraq. The 2003 Iraq invasion began the Iraq War.
Prelude to the Invasion
Prior to the invasion, the United States' official position was that Iraq was in violation of UN Security Council Resolution 1441 regarding weapons of mass destruction and had to be disarmed by force. The United Kingdom and United States attempted to get a U.N. Security Council resolution authorizing military force, but withdrew it before it could come to a vote after France, Russia, and later China all signaled that they would use their Security Council veto power against any resolution that would include an ultimatum allowing the use of force against Iraq. On March 20, 2003, the invasion of Iraq began. This was claimed by some to be a violation of international law, breaking the UN Charter (see Legitimacy of the 2003 invasion of Iraq). The Iraqi military was defeated, and Baghdad fell on April 9, 2003. On May 1, 2003, U.S.
President Bush declared the end of major combat operations, terminating the Baath Party's rule and removing Iraqi President Saddam Hussein from office. Coalition forces ultimately captured Saddam Hussein on December 13, 2003.
Political and diplomatic aspects
Since the conclusion of the Gulf War of 1991, Iraq's relations with the UN, the US, and the UK remained poor. In the absence of a Security Council consensus that Iraq had fully complied with the terms of the Persian Gulf War ceasefire, both the UN and the US enforced numerous economic sanctions against Iraq (see Iraq sanctions) throughout the Clinton administration. The U.S. and the UK patrolled Iraqi airspace to enforce Iraqi no-fly zones that they had declared to protect Kurds in northern Iraq and Shi'ites in the south. The no-fly zones were, however, contested by Iraqi military helicopters and planes on numerous occasions. The United States Congress also passed the "Iraq Liberation Act" in October 1998 after Iraq had terminated its cooperation with the U.N. in August, which provided $97 million for Iraqi "democratic opposition organizations" in order to "establish a program to support a transition to democracy in Iraq." This contrasted with the terms set out in U.N. Resolution 687, all of which related to weapons and weapons programs, and made no mention of regime change. Weapons inspectors had been used to gather information on Iraq's WMD (Weapons of Mass Destruction) program and to enforce the terms of the 1991 ceasefire, which forbade Iraq from developing WMD. The information was used in targeting decisions during Operation Desert Fox, a US and UK bombardment of Iraq in December 1998 which was precipitated by lack of cooperation between Iraq and the UN weapons inspection team. The United States Republican Party's campaign platform in the U.S.
presidential election, 2000 called for "full implementation" of the Iraq Liberation Act and removal of Saddam Hussein with a focus on rebuilding a coalition, tougher sanctions, reinstating inspections, and support for the pro-democracy opposition exile group, the Iraqi National Congress, then headed by Ahmed Chalabi. Upon the election of George W. Bush as president, according to former treasury secretary Paul O'Neill, an attack had been planned since the inauguration, and the first National Security Council meeting discussed plans for an invasion of the country. O'Neill later clarified that these discussions were part of a continuation of foreign policy first put into place by the Clinton Administration. Notes from aides who were with Defense Secretary Donald Rumsfeld in the National Military Command Center one year later, on the day of the September 11, 2001 terrorist attacks, reflect that he wanted "best info fast. Judge whether good enough hit Saddam Hussein at same time. Not only Osama bin Laden." The notes also quote him as saying, "Go massive", and "Sweep it all up. Things related and not." Shortly thereafter, the George W. Bush administration announced a War on Terrorism, accompanied by the doctrine of 'pre-emptive' military action, termed the Bush doctrine. From the 1990s onward, U.S. officials repeatedly voiced concerns about ties between the government of Saddam Hussein and terrorist activities, notably in the context of the Israeli-Palestinian conflict. Through the Palestinian Arab Liberation Front (PALF), Saddam had offered $10,000 USD for families of "civilians killed during Israeli military operations" and $25,000 USD for "families of suicide bombers." In 2002 the Iraq disarmament crisis arose primarily as a diplomatic situation. The Bush administration waited until September 2002 to call for action, with White House Chief of Staff Andrew Card saying "From a marketing point of view, you don't introduce new products in August."
In October 2002, with the "Joint Resolution to Authorize the Use of United States Armed Forces Against Iraq", the United States Congress granted President Bush the authority to "use any means necessary" against Iraq, based on repeated Bush Administration statements to Congress and the public, which turned out to be incorrect, that Iraq possessed weapons of mass destruction. The joint resolution allowed the President of the United States to "defend the national security of the United States against the continuing threat posed by Iraq and enforce all relevant United Nations Security Council Resolutions regarding Iraq." In November 2002, United Nations actions regarding Iraq culminated in the unanimous passage of UN Security Council Resolution 1441 and the resumption of weapons inspections. Force was not authorized by resolution 1441 itself, as the language of the resolution mentioned "serious consequences", which the majority of Security Council members argued did not include the use of force to overthrow the government; however the threat of force, as cultivated by the Bush administration, was prominent at the time of the vote. Both the U.S. ambassador to the UN, John Negroponte, and the UK ambassador Jeremy Greenstock, in promoting Resolution 1441, had given assurances that it provided no "automaticity", no "hidden triggers", no step to invasion without consultation of the Security Council. Such consultation was forestalled by the US and UK's abandonment of the Security Council procedure and their invasion of Iraq. The United Kingdom's stated reason for forgoing further UN resolutions was notice supplied by France that it would block any further Security Council resolutions on Iraq. Negroponte was noted as saying "one way or another, Mr. President, Iraq will be disarmed.
If the Security Council fails to act decisively in the event of a further Iraqi violation, this resolution does not constrain any member state from acting to defend itself against the threat posed by Iraq, or to enforce relevant U.N. resolutions and protect world peace and security." There is still considerable disagreement among international lawyers on whether prior resolutions, relating to the 1991 war and later inspections, permitted the invasion. Richard Perle, a senior member of the administration's Defense Policy Board Advisory Committee, argued in November 2003 that the invasion was against international law, but still justified. At the same time, Tony Blair's Attorney General Lord Goldsmith concluded that a reasonable case could be made that resolution 1441 required no further resolution of the UN, but could not guarantee that an invasion in the circumstances would not be challenged on legal grounds. On October 11, 2002, the United States Congress passed the "Authorization for Use of Military Force Against Iraq Resolution of 2002", giving U.S. President George W. Bush the authority, under US law, to attack Iraq if Iraqi President Saddam Hussein did not give up his weapons of mass destruction (WMDs) and abide by previous UN resolutions on human rights, POWs, and terrorism. On November 8, 2002, at the urging of the United States government, the UN Security Council passed United Nations Security Council Resolution 1441, offering Iraq "a final opportunity to comply with its disarmament obligations" that had been set out in several previous resolutions (Resolutions 660, 661, 678, 686, 687, 688, 707, 715, 986, and 1284), notably to provide "an accurate full, final, and complete disclosure, as required by Resolution 687 (1991), of all aspects of its programmes to develop weapons of mass destruction and ballistic missiles".
Resolution 1441 threatened "serious consequences" if these demands were not met, and reasserted its demand that the UN weapons inspectors, who were to report back to the UN Security Council after their inspections, have "immediate, unconditional, and unrestricted access" to sites of their choosing, in order to ascertain compliance. Significantly, the Resolution stated that the UN Security Council shall "remain seized of the matter" (United Nations Security Council Resolution 1441). The Iraqi government did what Resolution 1441 required and presented a declaration of its weapons programs. The US government claimed that the declaration was false because it did not acknowledge the weapons of mass destruction Iraq was alleged to possess, and announced the invasion in the spring of 2003. In his March 17, 2003 address to the nation, Bush demanded that Hussein and his two sons Uday and Qusay surrender and leave Iraq, giving them a 48-hour deadline. This demand was reportedly rejected. Iraq maintained that it had disarmed as required. The UN weapons inspectors (UNMOVIC) headed by Hans Blix, who were sent by the UN Security Council pursuant to Resolution 1441, requested more time to complete their report on whether Iraq had complied with its obligation to disarm (UN Security Council Resolution 1441; UNMOVIC). The International Atomic Energy Agency (IAEA) reported a level of compliance by Iraq with the disarmament requirements (UN Security Council Resolution 1441; IAEA). Hans Blix went on to state the Iraqi government may have been hoping to restart production once sanctions were lifted and inspectors left the country, as speculated by senior Iraqi officials and a prominent defector, Gen. Hussein Kamel. The attempt of the United Kingdom and the United States to obtain a further Resolution authorizing force failed. Thus, the Coalition invasion began without the approval of the United Nations Security Council, which United Nations Secretary-General Kofi Annan regarded as a violation of the UN Charter. (cf.
The UN Security Council and the Iraq war) Several countries protested. United Nations Secretary-General Kofi Annan said in September 2004, "From our point of view and the UN Charter point of view, it was illegal." Proponents of the war claim that the invasion had implicit approval of the Security Council and was therefore not in violation of the UN Charter. Nevertheless, this position taken by the Bush administration and its supporters has been and still is being disputed by numerous legal experts. According to most members of the Security Council, it is up to the council itself, and not individual members, to determine how the body's resolutions are to be enforced. Since 2003, 500 chemical weapons containing mustard or sarin nerve agents have been found. Both of these agents are classified by the United Nations as Weapons of Mass Destruction. According to an unclassified NGIC report, Coalition forces have recovered approximately 500 weapons munitions consisting mainly of mustard and sarin nerve agents. These are suspected to originate from the first Gulf War; however, even degraded chemical warfare agents remain hazardous and potentially lethal. The invasion is claimed to have been a contributing factor to Muammar al-Gaddafi's decision to disclose and give up his nascent nuclear program. However, the existence of such a weapons program is in doubt, and some suspect that it suited all involved to exaggerate, or even invent, both the threat posed by the alleged program and the sacrifice made in abandoning it. In response to the imminent invasion, the February 15th anti-war protests, the largest of their kind since the Vietnam War, took place with 6-10 million people in over 60 countries around the world. Unlike the invasion of Afghanistan, the invasion of Iraq did not have the support of the UN. See Legitimacy of the 2003 invasion of Iraq for a more detailed analysis of these issues.
In the wake of the September 11 attacks and the apparent success of the U.S. invasion of Afghanistan in 2001, the Bush administration felt that it had sufficient military justification and public support in the United States for further operations against perceived threats in the Middle East. Relations between some coalition members and Iraq had never improved since 1991, and the nations remained in a state of low-level conflict marked by American and British air-strikes, sanctions, and threats against Iraq. Iraqi radar had also locked onto coalition airplanes enforcing the northern and southern no-fly zones, which had been implemented after the Gulf War in 1991, and Iraqi anti-aircraft guns and missiles had fired upon them. Throughout 2002, the U.S. administration made it clear that removing Saddam Hussein from power was a major goal, although it offered to accept major changes in Iraqi military and foreign policy in lieu of this. Specifically, the stated justification for the invasion included Iraqi production and use of weapons of mass destruction, alleged links with terrorist organizations, and human rights violations in Iraq under the Saddam Hussein government. Bush and his cabinet repeatedly linked the Hussein government to the September 11th attacks, despite the fact that there was no convincing evidence of Hussein's involvement. Saddam Hussein refused to allow weapons inspectors to search for weapons of mass destruction and prove that the Iraqi government had nothing to hide. Because Hussein reneged on his promise to cooperate with UN weapons inspectors for a second time, the United States and Great Britain began planning air strikes. Giorgio Agamben, the Italian philosopher, has offered a critique of the logic of pre-emptive war. "The Iraq story boiled over last night when the chief U.N. weapons inspector, Richard Butler, said that Iraq had not fully cooperated with inspectors and--as they had promised to do. As a result, the U.N. 
ordered its inspectors to leave Iraq this morning" --Katie Couric, NBC's Today, 12/16/98 "What Mr. Bush is being urged to do by many advisers is focus on the simple fact that Saddam Hussein signed a piece of paper at the end of the Persian Gulf War, promising that the United Nations could have unfettered weapons inspections in Iraq. It has now been several years since those inspectors were kicked out." --John King, CNN, 8/18/02 At the end of 2002, UN inspection teams returned to Iraq. At the time of the invasion, they had searched for the alleged weapons for nearly four months without finding them, and were willing to continue. However, further delay in military action would have posed problems for an invasion due to seasonally rising temperatures, which would have made the use of chemical protection gear unbearable as early as April, with temperatures rising to around 48C (120F) in the summer. President George W. Bush stated that Saddam's weapons of mass destruction needed to be disarmed, and that the Iraqi people were to have control of their own country restored to them. However, 18 months after the invasion, Kofi Annan said in an interview with the BBC: "From our point of view and from the Charter point of view it (the war) was illegal." --Kofi Annan, BBC, September, 2004. Military aspects Approximately 100,000 soldiers and marines from the United States and 30,000 from the United Kingdom, as well as smaller forces from other nations, collectively called the "Coalition of the Willing", were deployed prior to the invasion, primarily to several staging areas in Kuwait. (The numbers when naval, logistics, intelligence, and air force personnel are included were 214,000 Americans, 45,000 British, 2,000 Australians and 2,400 Polish.) Plans for opening a second front in the north were abandoned when Turkey officially refused the use of its territory for such purposes. Coalition forces also supported Iraqi Kurdish militia troops, estimated to number upwards of 50,000. 
Despite the refusal of Turkey, the Coalition conducted parachute operations in the north, dropping the 173rd Airborne Brigade and thereby removing the necessity of any approval from Turkey. (Later on, during the invasion, it was rumored that Turkey itself had sent troops into the Kurdish part of Iraq.) The number of personnel in the Iraqi military prior to the war was uncertain, but the force was believed to have been poorly equipped. The International Institute for Strategic Studies estimated the Iraqi armed forces to number 389,000 (army 350,000, navy 2,000, air force 20,000 and air defense 17,000), the paramilitary Fedayeen Saddam 44,000, and reserves 650,000. Another estimate puts the army and Republican Guard at between 280,000 and 350,000 and between 50,000 and 80,000, respectively, and the paramilitary between 20,000 and 40,000. There were an estimated thirteen infantry divisions and ten mechanized and armored divisions, as well as some special forces units. The Iraqi Air Force and Iraqi Navy played a negligible role in the conflict. Prior to the invasion, US-led Coalition forces involved in the 1991 Persian Gulf War had been engaged in a low-level conflict with Iraq, enforcing the Iraqi no-fly zones. Iraqi air-defense installations were engaged on a fairly regular basis after repeatedly targeting and firing upon US and UK air patrols. In mid-2002, the US began to change its response strategy, more carefully selecting targets in the southern part of the country in order to disrupt the military command structure in Iraq. A change in enforcement tactics was acknowledged at the time, but it was not made public that this was part of a plan known as Operation Southern Focus. The tonnage of US bombs dropped increased from 0 in March 2002 and 0.3 in April 2002 to between 7 and 14 tons per month in May-August, reaching a pre-war peak of 54.6 tons in September - prior to Congress' 11 October authorization of the invasion. 
The September attacks included a 5 September 100-aircraft attack on the main air defense site in western Iraq. According to an editorial in the New Statesman, this site was "Located at the furthest extreme of the southern no-fly zone, far away from the areas that needed to be patrolled to prevent attacks on the Shias, it was destroyed not because it was a threat to the patrols, but to allow allied special forces operating from Jordan to enter Iraq undetected." Opening attack On March 20, 2003 at approximately 02:30 UTC (05:33 local time), about 90 minutes after the lapse of the 48-hour deadline, explosions were heard in Baghdad. There is now evidence that various Special Forces troops from the coalition (led by the Australian SAS but including the British SAS, the U.S. Army's Delta Force, U.S. Navy SEALs, U.S. Marine Corps Force Recon and U.S. Air Force Combat Controllers) crossed the border into Iraq well before the air war commenced, in order to guide strike aircraft in air attacks. At 03:15 UTC, or 10:15 p.m. EST, U.S. President George W. Bush announced that he had ordered the coalition to launch an "attack of opportunity" against targets in Iraq. As soon as this word was given, the troops on standby crossed the border into Iraq. These troops were led by the 4th bomb disposal unit, which at the time had three R.A.F. Regiment soldiers from 15th Squadron on a tour. Before the invasion, many observers had expected a lengthy campaign of aerial bombing in advance of any ground action, taking as examples the Persian Gulf War or the invasion of Afghanistan. In practice, US plans envisioned simultaneous air and ground assaults to decapitate the Iraqi forces as fast as possible (see Shock and Awe), attempting to bypass Iraqi military units and cities in most cases. 
The assumption was that superior Coalition mobility and co-ordination would allow the US-led Coalition to attack the heart of the Iraqi command structure and destroy it in a short time, and that this would minimize civilian deaths and damage to infrastructure. It was expected that the elimination of the leadership would lead to the collapse of the Iraqi Forces and the government, and that much of the population would support the invaders once the government had been weakened. Occupation of cities and attacks on peripheral military units were viewed as undesirable distractions. Following Turkey's decision to deny any official use of its territory, the US-led Coalition was forced to abandon a planned simultaneous attack from north and south, so the primary bases for the invasion were in Kuwait and other Persian Gulf nations. One result of this was that one of the divisions intended for the invasion was forced to relocate and was unable to take part in the invasion until well into the war. Many observers felt that the Coalition devoted insufficient numbers of troops to the invasion, and that this (combined with the failure to occupy cities) put them at a major disadvantage in achieving security and order throughout the country when local support failed to meet expectations. The invasion was swift, with the collapse of the Iraq government and the military of Iraq in about three weeks. The oil infrastructure of Iraq was rapidly secured with limited damage in that time. Securing the oil infrastructure was considered of great importance to funding the rebuilding of Iraq after the invasion ended. In the Persian Gulf War, while retreating from Kuwait, the Iraqi army had set many oil wells on fire, in an attempt to disguise troop movements and to distract Coalition forces. Prior to the 2003 invasion, Iraqi forces had mined some 400 oil wells around Basra and the Al-Faw peninsula with explosives. 
The British 3 Commando Brigade Royal Marines launched an air and amphibious assault on the Al-Faw peninsula, supported by units of the Special Boat Service Royal Marines and US Navy SEALs, during the closing hours of 20 March to secure the oil fields there; the amphibious assault was supported by frigates of the Royal Navy and Royal Australian Navy. The 15th Marine Expeditionary Unit, attached to 3 Commando Brigade, and the Polish GROM attacked the port of Umm Qasr. The British Army's 16 Air Assault Brigade also secured the oilfields in southern Iraq in places like Rumaila, while the Polish commandos captured offshore oil platforms near the port, preventing their destruction. Despite the rapid advance of Coalition forces, some 44 oil wells were destroyed and set ablaze by Iraqi explosives or by incidental fire. However, the wells were quickly capped and the fires put out, preventing the ecological damage and loss of oil that had occurred at the end of the Persian Gulf War. In keeping with the rapid advance plan, the U.S. 3rd Infantry Division moved westward and then northward through the western desert toward Baghdad, while the 1st Marine Expeditionary Force moved along Highway 1 through the center of the country, and 1 (UK) Armoured Division moved northward through the eastern marshland. Initially, the U.S. 1st Marine Division fought through the Rumaila oil fields and moved north to Nasiriyah - a moderate-sized, Shi'ite-dominated city with important strategic significance as a major road junction and for its proximity to Tallil Airfield. The U.S. Army 3rd Infantry Division defeated Iraqi forces entrenched in and around the airfield and bypassed the city to the west. On 23 March, U.S. Marines and Special Forces units pressed the attack in and around Nasiriyah. During the battle an Air Force A-10 was involved in a case of fratricide that resulted in the death of six Marines. 
Because of Nasiriyah's strategic position as a road junction, significant gridlock occurred as U.S. forces moving north converged on the city's surrounding highways. With Nasiriyah and Tallil Airfield secured, U.S. forces gained an important logistical center in southern Iraq, establishing FOB/EAF Jalibah, some 10 miles outside of Nasiriyah, through which additional troops and supplies were brought. The 101st Airborne Division continued its attack north behind the 3rd Infantry Division, and the 82nd Airborne Division began to consolidate in and around Tallil Airfield for further operations. By 27-28 March, a severe sandstorm slowed the U.S. advance as the 3rd Infantry Division fought on the outskirts of Najaf and Kufa, with particularly heavy fighting in and around the bridge adjacent to the town of Kifl, before moving north toward Karbala. Further south, the British 7 Armoured Brigade ('The Desert Rats') fought their way into Iraq's second-largest city, Basra, on 6 April, coming under constant attack by regulars and Fedayeen, while the Parachute Regiment cleared the 'old quarter' of the city that was inaccessible to vehicles. Entering Basra had only been achieved after two weeks of conflict, which included the biggest tank battle by British forces since World War II, when the Royal Scots Dragoon Guards destroyed 14 Iraqi tanks on 27 March. Elements of 1 (UK) Armoured Division began to advance north towards U.S. positions around Al Amarah on 9 April. Pre-existing electrical and water shortages continued throughout the conflict, and looting began as Iraqi forces collapsed. While British forces began working with local Iraqi police to enforce order, the REME (Royal Electrical and Mechanical Engineers) and Royal Engineers of the British Army rapidly set up and repaired dockyard facilities to allow humanitarian aid to begin arriving on ships in the port city of Umm Qasr. After a rapid initial advance, the first major pause occurred in the vicinity of Karbala. There, U.S. 
Army elements met resistance from Iraqi troops defending cities and key bridges along the Euphrates River. These forces threatened to interdict coalition logistical supply routes as U.S. forces moved north. By the end of March, elements of the 82nd Airborne Division, augmented with a mechanized infantry battalion task force of the U.S. 1st Armored Division, began diversionary assaults in and around the city of Samawah in order to divert Iraqi forces that might otherwise have threatened the extended rear of the coalition's lead elements. Meanwhile, the U.S. 101st Airborne Division and infantry elements of the U.S. 1st Marine Division, supported by an armored battalion task force of the 1st Armored Division and U.S. Marine and Army air support, attacked and secured the cities of Najaf and Karbala in order to prevent any Iraqi counterattacks from the east. These attacks effectively protected the eastern flank and rear of the 3rd Infantry Division, which allowed the western flank of the invasion to resupply and continue its advance north through the Karbala Gap and on toward Baghdad, where U.S. Marine and British forces had already begun a preliminary assault on the outskirts of the city. Special Operations In the north, the 10th Special Forces Group (10th SFG) had the mission of aiding the Kurdish parties, the Patriotic Union of Kurdistan and the Kurdistan Democratic Party, de facto rulers of Iraqi Kurdistan since 1991. Turkey had officially forbidden any US troops from using its bases, so lead elements of the 10th had to make certain detours; their journey was supposed to take four hours but instead took ten. However, Turkey did allow the use of its air space, and so the rest of the 10th flew in. The mission was to destroy the bases of the Kurdish Islamist group Ansar al-Islam, believed to be linked to Al Qaida. On March 26, 2003, the 173rd Airborne Brigade augmented the 10th SFG by parachuting into northern Iraq. 
The 173rd would eventually take responsibility for Kirkuk. The target was Sargat, and after heavy fighting with both groups, the Special Forces finally took Sargat and pushed the remaining units out of northern Iraq. After Sargat was taken, Bravo Company, along with their Kurdish allies, pushed south towards Tikrit and the surrounding towns of northern Iraq. During the Battle of the Green Line, Bravo Company and their Kurdish allies pushed back, destroyed, or routed the 13th Iraqi Infantry Division, and Bravo took Tikrit. The invasion of Iraq was the largest deployment of Special Forces since Vietnam. Fall of Baghdad (April 2003) Three weeks into the invasion, U.S. forces moved into Baghdad. Initial plans were for armored units to surround the city and gradually move in, forcing Iraqi armor and ground units to cluster into a central pocket in the city, and then attack with air and artillery forces. This plan soon became unnecessary, as an initial engagement of armor units south of the city saw most of the Republican Guard's armor assets destroyed and much of the southern outskirts of the city occupied. On 5 April a "Thunder Run" of US armored vehicles was launched to test remaining Iraqi defenses, with 29 tanks and 14 Bradley armored fighting vehicles rushing from a staging base to the Baghdad airport. They met heavy resistance, including many suicidal attacks, but were successful in reaching the airport. Two days later another Thunder Run was launched into the palaces of Saddam Hussein, where the troops established a base. Within hours of the palace seizure, and television coverage of this spreading through Iraq, US forces ordered Iraqi forces within Baghdad to surrender, or the city would face a full-scale assault. Iraqi government officials had either disappeared or had conceded defeat, and on April 9, 2003, Baghdad was formally occupied by US forces and the power of Saddam Hussein was declared ended. 
Much of Baghdad remained unsecured, however, and fighting continued within the city and its outskirts well into the period of occupation. Saddam had vanished, and his whereabouts were unknown. Many Iraqis celebrated the downfall of Saddam by vandalizing the many portraits and statues of him, together with other pieces of his personality cult. One widely publicized event was the dramatic toppling of a large statue of Saddam in central Baghdad by a US M88 tank retriever, while a crowd of Iraqis cheered the Marines on. During this incident, the Marines briefly draped an American flag over the statue's face. The flag was replaced with an Iraqi flag and the demolition continued. The fall of Baghdad saw the outbreak of regional violence throughout the country, as Iraqi tribes and cities began to fight each other over old grudges. The Iraqi cities of Al-Kut and Nasiriyah declared war upon each other immediately following the fall of Baghdad in order to establish dominance in the new country, and Coalition forces quickly found themselves embroiled in a potential civil war. U.S. forces ordered the cities to cease hostilities immediately and explained that Baghdad would remain the capital of the new Iraqi government. Nasiriyah responded favorably and quickly backed down; however, Al-Kut placed snipers on the main roadways into town, with orders that Coalition forces were not to enter the city. After several minor skirmishes, the snipers were removed, but tensions and violence between regional, city, tribal, and familial groups continued into the occupation period. General Tommy Franks assumed control of Iraq as the supreme commander of occupation forces. Shortly after the sudden collapse of the defense of Baghdad, rumors circulated in Iraq and elsewhere that a deal had been struck (a "safqua") wherein the US had bribed key members of the Iraqi military elite and/or the Ba'ath party itself to stand down. 
In May 2003, General Franks retired, and confirmed in an interview with Defense Week that the U.S. had paid Iraqi military leaders to defect. The extent of the defections and their effect on the war are unclear. Coalition troops promptly began searching for the key members of Saddam Hussein's government. These individuals were identified by a variety of means, most famously through sets of most-wanted Iraqi playing cards. Other areas In the north, Kurdish forces opposed to Saddam Hussein had already occupied an autonomous area in northern Iraq for years. With the assistance of U.S. Special Forces and air strikes, they were able to rout the Iraqi units near them and to occupy oil-rich Kirkuk on 10 April. U.S. special forces had also been involved in the extreme south of Iraq, attempting to occupy key roads to Syria and airbases. In one case, two armored platoons were used to convince the Iraqi leadership that an entire armored battalion was entrenched in the west of Iraq. On 15 April, U.S. forces took control of Tikrit, the last major outpost in central Iraq, with an attack led by the Marines' Task Force Tripoli. About a week later the Marines were relieved in place by the Army's 4th Infantry Division. Summary of the invasion Coalition forces managed to topple the government and capture the key cities of a large nation in only 21 days, taking minimal losses while also trying to avoid large numbers of civilian deaths and even high numbers of dead Iraqi military forces. The invasion did not require the huge army build-up of the 1991 Gulf War, which had numbered half a million Allied troops. This did prove short-sighted, however, due to the requirement for a much larger force to combat the irregular Iraqi forces in the aftermath of the war. The Saddam-built army, armed mainly with Soviet-built equipment, was overall ill-equipped in comparison to Coalition forces. Missiles launched from Iraq were either interdicted by U.S. 
anti-air batteries or made little to no strategic impact on their targets. Attacks on Coalition supply routes by Fedayeen militiamen were repulsed. Iraqi artillery proved largely ineffective, and the Iraqis were unable to mobilize their air force to attempt a defense. The Iraqi T-72 tanks, the heaviest armored vehicles in the Iraqi Army, were both outdated and ill-maintained, and when they were mobilized they were rapidly destroyed, thanks in part to the Coalition's air superiority. The U.S. Air Force, Marine Corps and Naval Aviation, and the British Royal Air Force operated with impunity throughout the country, pinpointing heavily defended enemy targets and destroying them before ground troops arrived. The main battle tanks (MBTs) of the Coalition forces, the U.S. M1 Abrams and British Challenger 2, proved their worth in the rapid advance across the country. Even with the large number of RPG attacks by irregular Iraqi forces, few Coalition tanks were lost and no tank crewmen were killed by hostile fire. The only tank loss sustained by the British Army was a Challenger 2 of the Queen's Royal Lancers that was hit by another Challenger 2, killing two crewmen. All three British tank crew fatalities were a result of friendly fire. However, US Army sources admitted the inferiority of the Abrams to the Challenger 2: nine Abrams were knocked out by RPG fire alone, and the tank required substantial modification after the end of the campaign. The Iraqi Army suffered from poor morale, even amongst the elite Republican Guard. Entire units disbanded into the crowds upon the approach of Coalition troops, or actually sought Coalition forces out in order to surrender. In one case, a force of roughly 20-30 Iraqis attempted to surrender to a two-man vehicle repair and recovery team, invoking similar instances of Iraqis surrendering to news crews during the Persian Gulf War. Other Iraqi Army officers were bribed by the CIA or coerced into surrendering to Coalition forces. 
Worse, the Iraqi Army had incompetent leadership - reports state that Qusay Hussein, charged with the defense of Baghdad, dramatically shifted the positions of the two main divisions protecting Baghdad several times in the days before the arrival of U.S. forces, and as a result the units within were both confused and further demoralized when U.S. Marine and British forces attacked. By no means did the Coalition invasion force see the entire Iraqi military thrown against it; Coalition units had orders to move to and seize objective target-points, and could only fire upon regular Iraqi military units if first fired upon. This resulted in most regular Iraqi military units emerging from the war fully intact and without ever having been engaged by US forces, especially in southern Iraq. It is assumed that most units disintegrated, with members either joining the growing Iraqi insurgency or returning to their homes. According to the declassified Pentagon report, "The largest contributing factor to the complete defeat of Iraq's military forces was the continued interference by Saddam." The report, designed to help U.S. officials understand in hindsight how Saddam and his military commanders prepared for and fought the war, paints a picture of an Iraqi government blind to the threat it faced, hampered by Saddam's inept military leadership and deceived by its own propaganda. According to the BBC, the report portrays Saddam Hussein as "chronically out of touch with reality - preoccupied with the prevention of domestic unrest and with the threat posed by Iran." Security, looting and war damage Looting took place in the days following the fall of Baghdad. It was reported that the National Museum of Iraq was among the looted sites. The assertion that US forces did not guard the museum because they were guarding the Ministry of Oil and Ministry of Interior is apparently true. According to U.S. 
officials, the "reality of the situation on the ground" was that hospitals, water plants, and ministries with vital intelligence needed security more than other sites. There were only enough US troops on the ground to guard a certain number of the many sites that ideally needed protection, and so, apparently, some "hard choices" were made. It was also reported that many truckloads of purported Iraqi gold and $1.6 billion in bricks of US cash were seized by US forces. The FBI was soon called into Iraq to track down the stolen items. It was found that the initial claims of looting of substantial portions of the collection were heavily exaggerated. Initial reports claimed a near-total looting of the museum, estimated at upwards of 170,000 pieces. The most recent estimate places the number of looted pieces at around 15,000. Over 5,000 looted items have since been recovered. There has been speculation that some objects still missing were not taken by looters after the war, but were taken by Saddam Hussein or his entourage before or during the fighting. There have also been reports that early looters had keys to vaults that held rarer pieces, and some have speculated that there was a premeditated, systematic removal of key artifacts. The National Museum of Iraq was only one of many museums and sites of cultural significance that were affected by the war. Many in the arts and antiquities communities had briefed policy makers in advance on the need to secure Iraqi museums. Despite the looting being lighter than initially feared, the cultural loss of items from ancient Sumer is significant. More serious for the post-war state of Iraq was the looting of cached weaponry and ordnance, which fueled the subsequent insurgency. As many as 250,000 tons of explosives were unaccounted for by October 2004. Disputes within the US Defense Department led to delays in the post-invasion assessment and protection of Iraqi nuclear facilities. 
Tuwaitha, the Iraqi site most scrutinized by UN inspectors since 1991, was left unguarded and may have been looted. Zainab Bahrani, professor of Ancient Near Eastern Art History and Archaeology at Columbia University, reported that a helicopter landing pad was constructed in the heart of the ancient city of Babylon, and "removed layers of archeological earth from the site. The daily flights of the helicopters rattle the ancient walls and the winds created by their rotors blast sand against the fragile bricks. When my colleague at the site, Maryam Moussa, and I asked military personnel in charge that the helipad be shut down, the response was that it had to remain open for security reasons, for the safety of the troops." Bahrani also reported that in the summer of 2004, "the wall of the Temple of Nabu and the roof of the Temple of Ninmah, both sixth century BC, collapsed as a result of the movement of helicopters." Electrical power is scarce in post-war Iraq, Bahrani reported, and some fragile artifacts, including the Ottoman Archive, would not survive the loss of refrigeration. "End of major combat operations" (May 2003) On 1 May 2003 George W. Bush landed on the aircraft carrier USS Abraham Lincoln, in a Lockheed S-3 Viking, where he gave a speech announcing the end of major combat operations in the Iraq war. Bush's landing was criticized by opponents as an overly theatrical and expensive stunt. The ship was returning home off the coast of southern California near the San Diego harbor. Clearly visible in the background was a banner stating "Mission Accomplished." The banner, made by White House staff and supplied by request of the U.S. Navy, was criticized as premature - especially later as the guerrilla war dragged on. The White House subsequently released a statement alleging that the sign and Bush's visit referred to the initial invasion of Iraq and disputing the claim of theatrics. The speech itself noted: "We have difficult work to do in Iraq. 
We are bringing order to parts of that country that remain dangerous." The conclusion of "major combat" did not mean that peace had returned to Iraq. Iraq was subsequently marked by violent conflict between U.S.-led soldiers and forces described by the occupiers as insurgents. The ongoing resistance in Iraq was concentrated in, but not limited to, an area referred to by Western media and the occupying forces as the Sunni triangle and Baghdad. Critics point out that the regions where violence was most common were also the most populated regions. This resistance may be described as guerrilla warfare. The tactics in use included mortars, suicide bombers, roadside bombs, small arms fire, improvised explosive devices (IEDs), and handheld antitank grenade-launchers (RPGs), as well as sabotage against the oil infrastructure. There were also accusations, questioned by some, of attacks on the power and water infrastructure. There is evidence that some of the resistance was organized, perhaps by the fedayeen and other Saddam Hussein or Ba'ath loyalists, religious radicals, Iraqis angered by the occupation, and foreign fighters. Additionally, as noted above, some (if not most) of the violence immediately following the end of "major combat operations" was due to internal conflicts between groups within Iraq, including but not limited to violence between Sunni and Shi'a Muslims over long-standing cultural differences. Military decorations The Medal of Honor was awarded to Sergeant First Class Paul Ray Smith for actions in Operation Iraqi Freedom while serving with B Company, 11th Engineer Battalion, 3rd Infantry Division in Baghdad, Iraq. President Bush announced on November 10 that Cpl. Jason Dunham, who had died more than two years earlier after covering a grenade with his helmet to save fellow Marines, would receive the Medal of Honor. 
The United States military created military awards and decorations related to Operation Iraqi Freedom. NATO also created a military decoration related to Operation Iraqi Freedom: - Non-Article 5 NTM-I (NATO Training Mission-Iraq) NATO Medal Her Majesty Queen Elizabeth II awarded the first Victoria Cross in 23 years - the highest military decoration for valour in the British and Commonwealth armed forces - to Private Johnson Beharry, a Grenadian serving in the 1st Battalion, Princess of Wales's Royal Regiment. Related phrases This campaign featured a variety of new terminology, much of it initially coined by the U.S. government or military. The military's official name for the invasion, "Operation Iraqi Freedom", is rarely used outside the United States. Also notable was the usage of "death squads" to refer to fedayeen paramilitary forces. Members of the Saddam Hussein government were given disparaging nicknames - e.g., "Chemical Ali" (Ali Hassan al-Majid), "Baghdad Bob" or "Comical Ali" (Mohammed Saeed al-Sahaf), and "Mrs. Anthrax" or "Chemical Sally" (Huda Salih Mahdi Ammash). Saddam Hussein was systematically referred to as "Saddam", which some Westerners mistakenly believed to be disparaging. (Although there is no consensus about how to refer to him in English, "Saddam" is acceptable usage, and is how people in Iraq and the Middle East generally refer to him.) Terminology introduced or popularized during the war includes: - "Axis of Evil", originally used by President Bush during a State of the Union address on January 29, 2002 to describe the countries of Iraq, Iran and North Korea. - "Coalition of the willing", a term that originated in the Clinton era (e.g., interview, President Clinton, ABC, June 8, 1994), and was used by the Bush Administration to describe the countries contributing troops to the invasion, of which the U.S. and UK were the primary members. - "Decapitating the regime", a euphemism for either overthrowing the government or killing Saddam Hussein. 
- "Embedding", the United States practice of assigning civilian journalists to U.S. military units. - "Old Europe", Defense Secretary Donald Rumsfeld's term for European governments not supporting the war: "You're thinking of Europe as Germany and France. I don't. I think that's old Europe." - "Regime change", a euphemism for overthrowing a government. - "Shock and Awe", the strategy of reducing an enemy's will to fight through displays of overwhelming force. Many of these slogans and terms came to be used by President Bush's political opponents, or by those opposed to the war. For example, in April 2003 John Kerry, who later became the Democratic candidate in the 2004 presidential election, said at a campaign rally: "What we need now is not just a regime change in Saddam Hussein and Iraq, but we need a regime change in the United States." Media coverage US Coverage The most popular cable network in the United States for news on the war was Fox News, some of whose commentators and anchors made pro-war comments or disparaged detractors of the war, for instance calling them "the great unwashed". Fox News is owned by Rupert Murdoch, a strong supporter of the war. On-screen during all live war coverage by Fox News was a waving flag animation in the upper left corner and the headline "Operation Iraqi Freedom" along the bottom. The network had shown the American flag animation in the upper-left corner since the September 11, 2001, terrorist attacks. Fox News' pro-war commentary stood in contrast to many U.S. newspapers' editorial pages, which were much more hesitant about going to war. On the other hand, Fox, like other Western media outlets, did have a number of regular commentators and anchors who were against the war. Western networks, including Fox, also gave some coverage to anti-war protests and rallies, anti-U.S. protests in Iraq, and celebrities and politicians who were against the war.
Anti-war celebrities appearing frequently on these news networks included actors Tim Robbins, Mike Farrell, Janeane Garofalo, Martin Sheen, Susan Sarandon and director Michael Moore. Most of these celebrities were able to make anti-war comments in the media with little public criticism. However, in a widely publicized story, the country music band Dixie Chicks ignited boycotts and record burnings in the U.S. after their negative remarks about President Bush at a concert in London. Independent Coverage The Media Workers Against the War and the Indymedia network, among many other independent networks including many journalists from the invading countries, provided reports in a way that was difficult for any government, corporation or political party to control. In the United States, Democracy Now!, hosted by Amy Goodman, has been critical of the reasons for the 2003 invasion and of alleged crimes committed by U.S. authorities in Iraq. The war in Iraq marked the first time in history that military personnel on the front lines were able to provide direct, uncensored reportage themselves, thanks to blogging software and the reach of the internet. Dozens of such reporting sites, known as soldier blogs or milblogs, were started during the war. Coverage in other countries In some countries, television journalists' behavior during the conflict differed significantly from that during the 1991 Gulf War. Jean-Marie Charon said most journalists were more cautious, frequently using the conditional and citing their sources. The crew of the HMS Ark Royal, Britain's flagship naval vessel, demanded that the BBC be turned off on the ship because of what they saw as a clear anti-Coalition or "pro-Iraq" bias. One BBC correspondent had been embedded on the ship, but the crew said they had no complaints about his reporting specifically.
The sailors on board the ship claimed that the BBC gave more credence to Iraqi reports than to information coming from British or Allied sources, often questioning and refusing to believe reports from Coalition sources while reporting Iraqi claims of civilian casualties without independent verification. The ship's news feed was replaced with Sky News. Ironically, it later emerged from a study conducted by Professor Justin Lewis of the School of Journalism at Cardiff University that the BBC was the most pro-war of the British networks, a finding confirmed in a separate study by the German newspaper Frankfurter Allgemeine Zeitung. The Arab media network Al Jazeera broadcast many scenes of civilian casualties, usually referring to the dead as "martyrs"; press conferences with Iraqi officials claiming to be winning the war; and footage of American and British POWs that U.S. media refused to run. Most Arab networks also downplayed scenes of Iraqi citizens cheering coalition forces entering their towns. Arab networks consistently referred to U.S. and British forces as "invading forces", while Western media referred to them as "coalition forces". However, the war did not benefit Al-Arabiya, the newest of the Arabic news networks. Created by the Saudi media group MBC to compete with Al-Jazeera (whose tone often displeases Arab leaders), Al-Arabiya was launched on February 19, 2003.
See also - American government position on invasion of Iraq - American popular opinion of invasion of Iraq - Australian contribution to the 2003 invasion of Iraq - British Mandate of Iraq - Casualties of the conflict in Iraq since 2003 - Foreign hostages in Iraq - Governments' pre-war positions on invasion of Iraq - Human rights in post-Saddam Iraq - Iraq disarmament crisis - Iraqi insurgency - Legitimacy of the 2003 invasion of Iraq - List of Coalition aircraft crashes in Iraq - List of killed, threatened or kidnapped Iraqi academics - List of people associated with the 2003 invasion of Iraq - Occupation of Iraq timeline - Polish involvement in the 2003 invasion of Iraq - Popular opposition to the 2003 Iraq War - Post-invasion Iraq, 2003–2006 - Protests against the 2003 Iraq war - Reconstruction of Iraq - Sectarian violence in Iraq - The UN Security Council and the Iraq war - Views on the 2003 invasion of Iraq - War on Terrorism Notes and References - ^ President Discusses Beginning of Operation Iraqi Freedom - ^ "Sectarian divisions change Baghdad’s image", Associated Press, 2006-07-03. Retrieved on 2006-08-06. - ^ Torture in Iraq worse now than under Saddam?. MSNBC (2006-09-21). Retrieved on 2006-10-19. - ^ Conetta, Carl. "The Wages of War: Iraqi Combatant and Noncombatant Fatalities in the 2003 Conflict", Project on Defense Alternatives Research Monograph #8, 20 October 2003. Retrieved on 2006-08-09. - ^ U.S. has 100,000 troops in Kuwait - ^ Powell, Colin (February 5, 2003). U.S. Secretary of State Colin Powell Addresses the U.N. Security Council. Whitehouse.gov. Retrieved on 2006-05-25. - ^ "US, Britain and Spain Abandon Resolution", Associated Press, 2003-03-17. Retrieved on 2006-08-06. - ^ "Bush: Iraq is playing 'willful charade'", CNN, 2003-03-07. Retrieved on 2006-08-06. - ^ Oliver King and Paul Hamilos. "Timeline: the road to war in Iraq", Guardian Unlimited, February 2, 2006. Retrieved on 2006-05-25. 
- ^ "Iraq tests no-fly zone", CNN.com, January 4, 1999. Retrieved on 2006-05-25. - ^ "Coalition planes hit Iraq sites in no-fly zone", CNN.com, November 28, 2002. Retrieved on 2006-05-25. - ^ Iraq Liberation Act of 1998 (Enrolled as Agreed to or Passed by Both House and Senate). Library of Congress. Retrieved on 2006-05-25. - ^ RESOLUTION 687 (1991) (April 8, 1991). Retrieved on 2006-05-25. - ^ Gellman, Barton. "U.S. Spied on Iraq Via UN", Washington Post, March 2, 1999, p. A1. Retrieved on 2006-05-25. - ^ "U.S. Spied on Iraq Under UN Cover, Officials Now Say", The New York Times, January 7, 1999. Retrieved on 2006-05-25. - ^ REPUBLICAN PLATFORM 2000. CNN.com. Retrieved on 2006-05-25. - ^ "O'Neill: 'Frenzy' distorted war plans account", CNN.com, January 14, 2004. Retrieved on 2006-05-26. - ^ "Plans For Iraq Attack Began On 9/11", CBS News, Sept. 4, 2002. Retrieved on 2006-05-26. - ^ "Palestinians get Saddam funds", BBC News, 13 March, 2003. Retrieved on 2006-05-26. - ^ William Schneider. Marketing Iraq: Why now?. Retrieved on 2006-09-04. - ^ U.S. Wants Peaceful Disarmament of Iraq, Says Negroponte. Embassy of the United States in Manila (Nov. 8 2002). Retrieved on 2006-05-26. - ^ Iraq. House of Commons Hansard Debates for 18 Mar 2003 (pt 6). Retrieved on 2006-05-25. - ^ Burkeman, Oliver. "Invasion right but 'illegal', says US hawk", The Age, November 21, 2003. Retrieved on 2006-05-26. - ^ Oliver Burkeman and Julian Borger. "War critics astonished as US hawk admits invasion was illegal", The Guardian, November 20, 2003. Retrieved on 2006-05-26. - ^ Iraq Resolution 1441 (PDF). Number-10.gov.uk (March 7, 2003). Retrieved on 2006-05-26. - ^ Global Message. Whitehouse.gov. Retrieved on 2006-06-07. - ^ "Iraq Rejects US Demand That Hussein Leave", Associated Press, March 18, 2003. Retrieved on 2006-05-25. - ^ "Washington Post: Blix Downgrades Prewar Assessment of Iraqi Weapons", Washington Post, June 22, 2003. Retrieved on 2006-06-01. - ^ Iraq war illegal, says Annan. 
BBC News (2004-09-16). Retrieved on 2006-10-19. - ^ Lynch, Colum. "U.S., Allies Dispute Annan on Iraq War", Washington Post, September 17, 2004, p. A18. Retrieved on 2006-05-25. - ^ "Iraq war illegal, says Annan", BBC News, 16 September, 2004. Retrieved on 2006-05-25. - ^ O'Connell, Mary Ellen (November 21, 2002). UN RESOLUTION 1441: COMPELLING SADDAM, RESTRAINING BUSH. Jurist. Retrieved on 2006-05-25. - ^ Taylor, Rachel S. International Law - War in Iraq - United Nations - Iraq. World Press Review Online. Retrieved on 2006-05-25. - ^ Iraqi Chemical Munitions (PDF). - ^ Nuclear Overview. The Nuclear Threat Initiative (February 2006). Retrieved on 2006-08-06. - ^ UN continues Libya nuclear probe. BBC News (2004-05-28). Retrieved on 2006-08-26. - ^ Millions join global anti-war protests. BBC News (17 February 2003). Retrieved on 2006-05-25. - ^ Section 10.3, The 9/11 Commission Report (2004). http://www.9-11commission.gov/report/911Report_Ch10.htm. Retrieved Sep. 10, 2006. - ^ Left, Sarah. "Blix wants months - and Straw offers 10 days", Guardian Unlimited, March 7, 2003. - ^ "Transcript of Blix's U.N. presentation", CNN.com, March 7, 2003. Retrieved on 2006-05-25. - ^ Attacking Iraq - Countdown Timeline (November 2002). GlobalSecurity.org. Retrieved on 2006-09-06. - ^ President Bush Addresses the Nation. Whitehouse.gov (March 19, 2003). Retrieved on 2006-05-25. - ^ Global Message. Whitehouse.gov (March 17, 2003). Retrieved on 2006-06-07. - ^ President Discusses Operation Iraqi Freedom at Camp Lejeune. The White House press release (2003-04-03). Retrieved on 2006-07-21. - ^ "Saddam's Last Line Of Defense", CBS, 2003-03-26. Retrieved on 2006-08-06. - ^ "Saddam counts on Republican Guard as last chance for defending Baghdad", Associated Press, 2003-03-26. Retrieved on 2006-08-06. - ^ Burgess, Mark. "CDI Primer: Iraqi Military Effectiveness", Center for Defense Information, 2002-11-12. Retrieved on 2006-08-06. - ^ Windle, David.
"Military muscle", New Scientist, 2003-01-29. Retrieved on 2006-08-06. - ^ Iraqi Ground Forces Organization. GlobalSecurity.org. Retrieved on 2006-08-06. - ^ "Most loyal soldiers in Iraq belong to Fedayeen Saddam", The Seattle Times, 2003-03-27. Retrieved on 2006-08-06. - ^ Smith, Michael. "The war before the war", The New Statesman, 2005-05-30. Retrieved on 2006-08-06. - ^ Lowe, Christian. "Stopping Blue-on-Blue", The Daily Standard, 2003-09-08. Retrieved on 2006-08-07. - ^ "Russia denies Iraq secrets claim", BBC News, 2006-03-25. Retrieved on 2006-08-07. - ^ "US shamed by looting of antiquities", The Scotsman, April 19, 2003. - ^ Harms, William. "Archaeologists review loss of valuable artifacts one year after looting", The University of Chicago Chronicle, 2004-04-15. Retrieved on 2006-08-07. - ^ "Pentagon: Some explosives possibly destroyed", Associated Press, 2004-10-29. Retrieved on 2006-08-07. - ^ Gellman, Barton. "U.S. Has Not Inspected Iraqi Nuclear Facility", Washington Post, 2003-04-25, p. A14. Retrieved on 2006-08-07. - ^ a b c Bahrani, Zainab. "Days of plunder", The Guardian, 2004-08-31. Retrieved on 2006-08-07. - ^ Bash, Dana. "White House pressed on 'mission accomplished' sign", CNN, 2003-10-29. Retrieved on 2006-07-21. - ^ "Text Of Bush Speech", Associated Press, 2003-05-01. Retrieved on 2006-07-21. - ^ Operation Iraqi Freedom Maps. GlobalSecurity.org. Retrieved on 2006-07-21. - ^ "Iraqi attacks could signal wide revolt", The Seattle Times, 2003-07-01. Retrieved on 2006-08-07. - ^ Shewchuk, Blair. "Words: Woe and Wonder", CBC News Online, February 2003. Retrieved on 2006-07-21. - ^ The President's State of the Union Address (2002-01-29). Retrieved on 2006-07-21. - ^ Balz, Dan. "Kerry Angers GOP in Calling For 'Regime Change' in U.S.", The Washington Post, 2003-04-03, p. A10. Retrieved on 2006-07-21. - ^ Dixie Chicks singer apologizes for Bush comment. CNN. Additional references - Donnelly, Thomas.
Rebuilding America's Defenses: Strategy, Forces and Resources For a New Century. Report of the Project for the New American Century, September 2000. Available online. - McCain, John. Finishing the Job in Iraq, Air Force Magazine, July 2004. - Paul, U.S. Representative Ron, Office of (2002). Paul Calls for Congressional Declaration of War with Iraq. Accessed on June 6, 2005. Further reading - Masters of Chaos: The Secret History of the Special Forces by Linda Robinson - Heavy Metal: A Tank Company's Battle to Baghdad by Captain Jason Conroy and Ron Martz - Cobra II: The Inside Story of the Invasion and Occupation of Iraq by Michael R. Gordon and Bernard E. Trainor - Dan Thompson (2005). American, Interrupted: 14 Months in Iraq. First Armored Division Museum. Written by a 1st Armored Division corporal while stationed in Iraq from spring 2003 until July 2004. External links - Better World Links on the Invasion of Iraq (over 2000 links) - Morgues so full, bodies turned away - The War In Context news aggregator - ProCon's examination of Iraq Invasion - Informed Comment: Thoughts on the Middle East, History, and Religion by Professor Juan Cole - by Professor Dr. Sedat Laciner, "Ten Impasses of the Resistance in Iraq" - Civilian Death Toll - Amnesty International Report on Iraq - Borgen Project: Cost of the Iraq War - Iraq: Amnesty International seeks clarification on house demolitions by US troops in Iraq - Iraq: full texts of speeches and key documents archived by The Guardian. Retrieved 31 May 2005.
- Iraq: Forcible return of refugees and asylum-seekers is contrary to international law - Iraq: Tribunal established without consultation - Memorandum on concerns related to legislation introduced by the Coalition Provisional Authority - National Priorities Project Cost of the Iraqi War Estimate - Reconstruction must ensure the human rights of Iraqis - Video Seminar on Iraq Coalition Politics: 20 April 2005, sponsored by the Program in Arms Control, Disarmament, and International Security at the University of Illinois. - by Prof. Dr. Ihsan Bal, JTW, "US Fury on ‘Anti Americanism’ in Turkey" - War in Iraq: Day by Day Guide - Iraq War NEWS DIGEST-Iraq and the U.S.A. - Iraq Special Weapons News - Attacks on journalists in Iraq - IFEX - Archaeologists Review Loss of Valuables in Museum Looting - by Emre Ozkan and Murat Sogangoz, "Do Talabani and Barzani prefer Civil-War in Iraq?" - Iraqi Perspectives Report, Joint Center for Operational Analysis at United States Department of Defense, March 2006 - "Bush Jr.'s War on Iraq" HIR - "Frontline: The Dark Side" PBS documentary on Vice President Dick Cheney's remaking of the Executive and infighting leading up to the war in Iraq - Angle, Jim and Liss, Sharon Kehnemui. "Report: Hundreds of WMDs Found in Iraq", Fox News, 2006-06-22. Retrieved on 2006-07-21. - "Hundreds of chemical weapons found in Iraq: US intelligence", AFP, 2006-06-22. Retrieved on 2006-07-21. - Bush admits Iraq did not order 9/11 attack - U.S. Dollar vs. the Euro: Another Reason for the Invasion of Iraq - 1999 Desert Crossing War Game to Plan Invasion of Iraq and to Unseat Saddam Hussein
Life on Land Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss Forests cover 30.7 per cent of the Earth’s surface and, in addition to providing food security and shelter, they are key to combating climate change, protecting biodiversity and sheltering the homes of indigenous populations. By protecting forests, we will also be able to strengthen natural resource management and increase land productivity. At present, thirteen million hectares of forests are being lost every year, while the persistent degradation of drylands has led to the desertification of 3.6 billion hectares. Even though up to 15% of land is currently under protection, biodiversity is still at risk. Deforestation and desertification – caused by human activities and climate change – pose major challenges to sustainable development and have affected the lives and livelihoods of millions of people fighting poverty. Efforts are being made to manage forests and combat desertification: two international agreements currently being implemented promote the equitable use of resources, and financial investments in support of biodiversity are also being provided. This Goal and Architecture The amount of land taken up by buildings, settlements and cities is growing rapidly. Ecosystems and biodiversity are under intense pressure from growing cities and settlements, farming, mining and the changing climate. To protect, restore and support ecosystems and biodiversity, buildings and settlements must include habitats for plants, insects and animals. This means that green-field developments should be kept to a minimum and that the planning and development of all new settlements must ensure sustainable conditions for local ecosystems, flora and fauna.
Nature networks that support plant life should be developed in existing settlements and urban areas, so that insects and animals can co-exist with the built environment. Examples are found at all scales, from pocket parks and insect hotels to large-scale planning projects that establish nature networks in big cities. Furthermore, the building industry can help promote sustainable forestry and combat deforestation by using wood only from sustainable sources and by generally using materials that are renewable and sustainably produced and that do not compromise biodiversity or the natural habitats of flora and fauna. Local flora and fauna must form the basis of landscape design in buildings and settlements, including lawns and interior greenery, so that the plants interact with and support local ecosystems. Finally, buildings carefully placed in vulnerable ecosystems or in wildlife parks can add to their preservation through sustainable tourism and raised public awareness.
View Article in PDF The nuclear detectives at Livermore were suspicious. Samples cannot lie, but the stories told by these particular materials raised too many questions. One sample after another from crustal rocks and mantle reservoirs gathered during the Apollo lunar missions clocked in at a narrow range of ages, between 4.30 and 4.38 billion years old. This surprising result was found no matter where the astronauts collected samples and no matter what isotope ratios the Livermore sleuths measured—rubidium-87/strontium-87, samarium-147/neodymium-143 (147Sm/143Nd), samarium-146/neodymium-142 (146Sm/142Nd), lutetium-176/hafnium-176, or lead-207/lead-206. The theorists said those results could not be right. Some of the rocks had to be much older, because the Moon could not be that young. Moreover, the Moon was thought to have formed over a far longer period. Perhaps the astronauts failed to collect samples in locations where the oldest rocks lay. Or perhaps the theories are wrong. Livermore’s 20 years of work measuring the ages of these lunar samples, summarized in a 2014 paper, points the finger at the latter possibility. Lars Borg, a chemist in Livermore’s Nuclear and Chemical Sciences Division, says, “It’s difficult to see how all of the different events recorded by the Moon rocks could have essentially the same age unless they all reflect the formation age of the Moon.” Planetary geologists are now trying to reconcile Livermore’s findings with a revised theory, first formulated in the 1970s, of how Earth’s Moon developed. According to this origin story, the Earth arose when dust orbiting the Sun began to accrete into solid clumps, then into larger bodies called planetesimals, and finally into Earth itself. Later, some other large body approximately the size of Mars collided with the Earth. Superheated rock and dust scattered, and then re-accreted, forming the Moon in Earth’s orbit. 
Accretion of the Moon is thought to have produced enough heat to completely melt it. Computer modeling suggested that cooling of this ocean of magma was fairly rapid—lasting only a few million years—and resulted in a sequence of rocks in the lunar crust and mantle of approximately the same age. These rocks include the white crustal rocks that can be seen from Earth, as well as areas where dark basaltic rocks form the flat, smoother surfaces of the Moon called “maria,” or seas, by Galileo. If the above theory is correct, then all rocks thought to represent the first solidification products of the lunar magma ocean should yield roughly the same age. The problem is that ages measured since the 1970s span a range of 4.22 billion to 4.56 billion years ago, suggesting that either the model or the ages are incorrect. Most scientists argue that all but the oldest ages are incorrect. However, the detailed chronologic investigations performed by the Livermore team, using newly developed techniques to precisely date individual samples with confidence, indicated that all the samples solidified within a narrow window of time. If these rocks represent the solidification products of a primordial magma ocean, then they record the age of the Moon. The scientific debate continues, but Livermore’s contribution to the evidence has significantly stirred the pot. The radioactive decay of isotopes of certain elements provides scientists with a powerful tool—the ability to measure the age of a material anywhere from seconds to billions of years old. By measuring the abundances of parent and daughter isotopes in rocks and minerals with extremely accurate mass spectrometers, Livermore scientists can measure the age of planetary materials with a margin of error of less than 1 percent. Thus, the ratio of a gradually decaying radioactive parent isotope such as 147Sm to its daughter isotope 143Nd can be used to pinpoint the time, billions of years ago, when a rock solidified.
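The parent–daughter arithmetic behind this kind of dating can be sketched in a few lines. The example below is a minimal, idealized illustration only (a closed system with no initial radiogenic daughter), using the standard 147Sm decay constant; the function name and the sample numbers are illustrative and not taken from the article.

```python
import math

# Decay constant of 147Sm (half-life ~1.06e11 years); standard value,
# used here purely for illustration.
LAMBDA_147SM = 6.54e-12  # per year

def age_from_ratio(daughter_per_parent):
    """Age of an idealized closed system from the radiogenic-daughter/parent ratio.

    D/P = exp(lambda * t) - 1  =>  t = ln(1 + D/P) / lambda
    (assumes no initial daughter and no open-system behavior).
    """
    return math.log(1.0 + daughter_per_parent) / LAMBDA_147SM

# Forward check: a rock that solidified 4.35 billion years ago would show
d_over_p = math.exp(LAMBDA_147SM * 4.35e9) - 1.0
age = age_from_ratio(d_over_p)
print(f"D/P = {d_over_p:.5f} -> age = {age / 1e9:.2f} Gyr")
# prints: D/P = 0.02886 -> age = 4.35 Gyr
```

In practice, initial daughter abundances are unknown, which is why real Sm–Nd chronology uses isochrons across several minerals rather than this single-ratio shortcut.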
At Livermore, these tools serve a dual role, supporting both cosmochemical research and nuclear forensic capabilities. Nuclear forensics is central to the Laboratory’s security missions, and cosmochemical work is key to developing techniques to measure isotopes that can be used to provide clues to the origin of illicit nuclear materials that might be diverted for use in weapons of mass destruction. In fact, many of the same systems used for cosmochemical research also apply to constraining the origins of nuclear forensics samples. Borg explains, “In the 1980s, the Laboratory hired academically trained cosmochemists with expertise in mass spectrometry to contribute to the nuclear test program. They found that neutron reactions taking place in these tests were also relevant to cosmochemistry.” As the Laboratory developed its nuclear forensics capability, a world-class cosmochemistry capability also came about. This cosmochemistry group began to unravel the history of the solar system, the ages when the planets formed, and the relationship of meteorites to the solar system’s primordial matter. Thanks to decades of funding from NASA and Livermore’s Laboratory Directed Research and Development Program, Livermore has become one of the world’s foremost centers of research on the origin and evolution of the solar system, including Earth. Livermore researchers have studied samples relating to the age of the Moon and the timing of lunar impact basins, the age of volcanic activity on Mars, the evolution of the Martian atmosphere, the timing of Martian crust and mantle formation, and the sources of materials that contributed to the dust and gas that later condensed forming the Sun, planets, asteroids, and comets of our solar system. The Livermore cosmochemists excel at throwing doubt on long-established hypotheses. 
A common assumption among planetary geologists, for example, is that the composition of the Earth is equivalent on average to a common variety of stony meteorite called chondrites. Thousands of these meteorites have been found on Earth, and scientists assume their compositions reflect that of the solar system’s primordial rocky matter from which Earth and the other planets formed. Then, beginning in about 2008, investigators elsewhere found that the neodymium-142/neodymium-144 ratio in crustal Earth rocks was higher than in chondrites by about 20 parts per million. Theorists began suggesting that the early Earth was formed from a different reservoir of material than chondrites, or that Earth was enriched in material with a higher Sm/Nd ratio than chondrites, or that a hidden reservoir of material existed within the Earth. All of these theories, although equally plausible, lacked any supporting evidence. In early 2017, a group of Livermore researchers including Borg developed techniques to measure several other stable Nd isotope ratios in various chondrites. The team found that the stable isotopes of Nd were different in chondrites than in Earth, suggesting that terrestrial Nd derived from a slightly different proportion of nucleosynthetic sources than the Nd in chondrites. Heavy elements form through nucleosynthetic processes in the cores of giant stars and when stars explode in a supernova, ejecting the building blocks of new stars and planets. Each type of condition produces material with an identifiable isotopic signature. (See S&TR, July/August 2014, Evidence of a Turbulent Beginning.) Livermore’s research suggested the Earth formed from material that was slightly richer in components from giant stars than in supernova material. 
Thus, the protoplanetary molecular cloud from which the solar system formed need not have been perfectly uniform in composition—material closer to the Sun could have had more giant star–derived Nd, whereas chondritic meteorites forming farther from the Sun would contain a higher proportion of material from supernovae. No special events were needed to alter Earth’s mix. Instead, the protoplanetary molecular cloud simply had to remain heterogeneous during condensation of the first solids. Livermore’s findings are again forcing a rethinking of current notions. Borg summarizes, “We are trying to provide the physical constraints by which the average composition of bulk chondrites represents the composition of individual planets.” Livermore draws on the talents of many world-class cosmochemists—both on the staff and as visiting scientists—as well as exceptional technological capabilities. Borg states, “What makes us unique in the field is that we have the most relevant technologies in one building, as well as the people working in these laboratories under one roof who can solve various aspects of many problems. Teamwork gives us an advantage.” Livermore’s capabilities include several forms of mass spectrometry—inductively coupled plasma, thermal ionization mass spectrometry, and resonance ionization mass spectrometry (see S&TR, January/February 2017, LION Hunts for Nuclear Forensics Clues). The Laboratory also used its Instrument Development Program funds to acquire a nanometer-scale secondary ion mass spectrometry (nanoSIMS) device, one of fewer than two dozen such devices in the world today. NanoSIMS enables Livermore researchers to measure isotope concentrations in submicrometer-sized regions deep inside a sample. A next-generation inductively coupled mass spectrometer is also due in September 2017.
Since coming to the Laboratory after time at NASA and the University of New Mexico, Borg has guided research addressing questions that reach ever farther out into space, even to the search for the perturbations that triggered the solar system’s beginning. Borg, Gregory Brennecka—a former Livermore postdoc now at the University of Münster in Germany—and others recently developed a method for measuring tellurium-126 (126Te) in calcium aluminum–rich inclusions (CAIs), particles thought to be the first solids to form in the solar system. Because 126Te is a decay product of tin-126 (126Sn), which forms only in supernovae, its presence in CAIs would support a decades-old hypothesis that a nearby supernova triggered the gravitational collapse of the molecular cloud that formed the solar system. In 2017, Brennecka, Borg, and colleagues reported a new method for detecting 126Te at parts-per-million concentrations, or 30 times more sensitive than prior techniques. Finding no evidence for 126Sn in the samples they studied, Borg and his team again cast doubt on long-established hypotheses. In addition, the scientific community now had a new way to continue the search. Borg sums up why this research excites him: “What is really cool about this work is that we’re looking at many different aspects of planetary formation, and it is painting a broad picture of how everything works. We’re slowly answering the question, How did the solar system evolve?” Key Words: calcium aluminum–rich inclusion (CAI), chondrite, cosmochemistry, inductively coupled plasma mass spectrometry, Laboratory Directed Research and Development Program, meteorite, nanometer-scale secondary ion mass spectrometry (nanoSIMS), nuclear forensics, planetesimal, resonance ionization mass spectrometry, nucleosynthesis, solar system evolution, thermal ionization mass spectrometry. For further information contact Lars Borg (925) 424-5722 (firstname.lastname@example.org).
|Product #: SSLB155_TQ| Snakes (Resource Book Only) eBook | Grade 4 | Grade 5 | Grade 6 Please Note: This ebook is a digital download, NOT a physical product. After purchase, you will be provided a one-time link to download the ebook to your computer. Orders paid by PayPal require up to 8 business hours to verify payment and release electronic media. For immediate downloads, payment with credit card is required. There are approximately 2,700 species or types of snakes found in the world today. Get all wrapped up in this topic and use the ideas and activities in this resource to supplement your Science or Language Arts program on reptiles. Information topics include the History of Snakes, Mapping Snakes, Snakes on the Move, Snake Charming, Hooded Serpent, Eggs-citing Snakes, Fangs for Nothin', Self Defense, The Big Squeeze, Venomous Villains, Poisonous Snakes, Snake Bite, Importance of Snakes, Introducing the Boidae Family, Snakes Alive, Snake Senses, Snakes in Myths and a Snake Quiz. The cross-curricular activities focus on word study, creativity and math.
The northern Indian state of Punjab is the country’s historic breadbasket, and 60-year-old Harnek Singh is one of the million farmers who work its soil. On a sunny February afternoon in Khunimajra, about 275 kilometers from New Delhi, he is busy repairing his tube well. The tube well is simple: a steel pipe bored into the ground and attached to a cheap electric pump. This rudimentary tool is the engine of Singh’s success as a farmer. But it and millions of others like it are quickly draining away India’s agricultural riches. In dirt-covered shorts and an undershirt, Singh squats amid the thick foliage that feeds his 50-odd cattle, preparing his pump for the day’s most critical event. He’s about to get the 4 hours of free electricity that lets him extract water from a natural reservoir 82 meters underground. These days, that hardly suffices. He needs another 8 to 9 hours of power to finish watering his wheat and rice. So he’ll continue to run the pump with diesel generators, which cost him US $4.50 an hour in fuel—a crushing price, considering that his farm’s annual revenue is just $20 000. "Sometimes the [grid] electricity goes off after 2 or 3 hours, and I want to commit suicide," says Singh. Even so, farmers are emptying Punjab’s aquifers at an alarming rate. Each year, as the groundwater table steadily retreats, they are forced to go half a meter deeper to pump water. Two abandoned wells on Singh’s farm offer proof of the changing conditions. If cultivation continues here as it has, the groundwater—the source of most of Punjab’s irrigation—could be exhausted in 20 years, say researchers at Punjab Agricultural University, in Ludhiana. The situation is not unique to Punjab. Collectively, India’s farmers extract about 212 million megaliters of water each year to irrigate some 35 million hectares. 
That amount of water—enough to submerge London by more than 100 meters—is considerably more than what flows into the aquifers through rainfall and runoff, and plummeting water tables now plague other areas as well. Based on its aquifers’ natural rate of recharge, Punjab can sustainably support at most 1.8 million hectares of rice, according to the state’s director of agriculture, Balwinder Singh Sidhu. At present, it has 2.8 million hectares of rice. If the situation doesn’t change, a food crisis in India seems imminent. A main culprit is grossly underpriced electricity. For decades, it’s allowed farmers to pump groundwater at very low cost. Now, not only is the water running out, but India’s electricity utilities lack the revenue to maintain their infrastructure and provide rural communities with adequate power.
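As a sanity check on the figures above, a quick back-of-the-envelope calculation confirms both the London comparison and the implied average irrigation depth. The area of Greater London (roughly 1,572 km²) is an assumption not given in the article:

```python
# Back-of-the-envelope check of the irrigation figures quoted above.
# Assumption (not in the article): Greater London covers ~1,572 km^2.

annual_extraction_megaliters = 212e6            # megaliters pumped per year
volume_m3 = annual_extraction_megaliters * 1e3  # 1 megaliter = 1,000 m^3

london_area_m2 = 1_572 * 1e6                    # km^2 -> m^2
depth_over_london_m = volume_m3 / london_area_m2
print(f"Depth over Greater London: {depth_over_london_m:.0f} m")  # ~135 m

irrigated_area_m2 = 35e6 * 1e4                  # 35 million hectares -> m^2
avg_depth_m = volume_m3 / irrigated_area_m2
print(f"Average water applied: {avg_depth_m:.2f} m")              # ~0.61 m
```

A depth of about 135 meters over Greater London matches the article's "more than 100 meters," and an average of about 0.6 meters of water applied per irrigated hectare per year is plausible for rice and wheat cultivation.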
Headaches are one of the most common complaints for which people seek acupuncture care. A headache can be the main symptom, or it can accompany other conditions. Most people will experience a headache at some point, and headaches can affect people of any age or gender. What causes headaches? Headaches can be broken down into a few main categories, which also helps narrow down their cause: Tension headaches: one of the most common types, this pain can be felt daily. It can be described as tight or stiff and can occur on either side of the head. Tension headaches are the result of muscle contractions in the head, usually brought on by stress, eye strain, lack of sleep, or dehydration. Migraines: can be very severe, last for hours or days, and be associated with nausea, noise/light/odour sensitivity, or stomach pain. The pain, usually described as pounding or throbbing, is typically felt on one or both sides of the head. While the cause of migraines is not fully understood, a popular theory is that they result from changing brain chemistry. Another proposed cause is a rebounding shift from the sympathetic system (the fight-or-flight state) to the parasympathetic system (the resting state); this shift causes excessive blood flow to the head, which induces heavy pressure with each heartbeat. Cluster headaches: are less common but can be quite severe, with a burning character. This pain is usually felt behind one or both eyes. It occurs when the trigeminal nerve (the main facial nerve) is activated, which leads to the pain felt in the eye. Research has shown cluster headaches to be generated by the hypothalamus. Sinus headaches: are felt as a deep and constant pain in the forehead, the bridge of the nose, and the cheekbones. Associated symptoms may include nasal discharge, a feeling of fullness in the head, and swelling. These headaches can also come with changes in the weather or after a cold.
Sinus headaches are caused when bacteria invade the nasal sinuses. Cervicogenic headaches: are headaches which originate from pain in the neck or spine, which then refers to the head. These are common in patients with whiplash, chronic posture issues, or neck/shoulder problems. How does acupuncture help treat headaches and migraines? Traditional Chinese Medicine and acupuncture have a long history in the treatment of headaches and migraines in all of the above categories. Many patients have found long-term relief through acupuncture that they did not find through medication, most of which offered only temporary relief. Some headaches are actually caused by medications, leading to a “rebound” headache. The advantage of acupuncture over medication is that it does not have any associated side effects and does virtually no harm. Treatment of the headache or migraine is also focused on preventing future attacks rather than reducing the pain temporarily. By inserting fine needles into specific acupuncture points (acupoints) of the body, a healing response is triggered which helps to restore normal function to the head. The treatment varies largely with the type of headache experienced and the factors that cause it to manifest. Typically the needles are inserted into the hands, feet, and head. The needling stimulates the peripheral nervous system to regulate the action of the sympathetic and parasympathetic systems. Essentially, it brings balance to the body. Needling also releases endorphins (a natural painkiller) and reduces Substance P (a pain transmitter).
Papillae are found on the surface of your tongue. The chief organ of taste, the tongue also helps in chewing and swallowing and plays an essential part in forming the sounds of words. Covered with a mucous membrane, the undersurface of the tongue is smooth, but many papillae (small projections) give the top of the tongue a rough surface. There are four kinds of papillae: filiform, fungiform, foliate, and vallate (or circumvallate) papillae, the last of which are found only at the back of the tongue. The taste buds found in these papillae enable us to distinguish between sweet, sour, salty and bitter tastes. THE SENSE OF TASTE IS THE CRUDEST OF OUR FIVE SENSES. IT IS LIMITED IN BOTH RANGE AND VERSATILITY. EACH PAPILLA CONTAINS ONE TO TWO HUNDRED TASTE BUDS What is a papilla? A papilla is a small, nipple-like structure on the surface of the tongue. It is also called a lingual papilla. Papillae give the tongue its rough feel. There are four kinds of papillae on the tongue, with different structures. They are classified below: 1. Filiform papillae These are fine, small, cone-shaped papillae covering the majority of the tongue. Filiform papillae are responsible for the sensation of touch and provide texture to the tongue. They cover most of the front two-thirds of the tongue's surface, and, unlike the other papillae, filiform papillae are not associated with taste buds. They have cylindrical surface projections and are organised in rows which lie parallel to the sulcus terminalis. 2. Fungiform papillae These are club-shaped projections found on the tip and sides of the tongue. Fungiform papillae are generally red in colour. They have taste buds on their upper surface which can differentiate the five flavours, i.e., sour, bitter, sweet, salty, and umami. They are innervated by the seventh cranial nerve, more specifically through the submandibular ganglion, chorda tympani, and geniculate ganglion, ascending to the solitary nucleus in the brainstem. 3.
Foliate papillae Foliate papillae are found on each side of the tongue and on the palatoglossal arch of the fauces. They form about five vertical folds, and their size and shape are variable. They are red in colour, covered with an epithelium that lacks keratin, and so are softer, and they bear many taste buds. They are usually bilaterally symmetrical. Sometimes they appear small and inconspicuous, and sometimes they are prominent. Taste buds, the gustatory sense receptors, are scattered over the mucous membrane of their surface. Serous glands clean the taste buds and drain into the folds. Lingual tonsils are located immediately behind the foliate papillae and, when hyperplastic, cause prominence of the papillae. 4. Circumvallate papillae These are dome-shaped structures on the human tongue that vary in number from 8 to 12. They are located on the surface at the back of the tongue, arranged in two rows that run backwards and meet medially in the midline. Each papilla is made up of a projection of mucous membrane.
The herbaceous plant onion grows in temperate zones. It belongs to the family Liliaceae and has the Latin name Allium cepa. The plant reaches a height of 2-5 ft and bears clusters of greenish-white flowers. Onions develop from the base of the leaves to form underground bulbs. The seeds of the onion are black in colour. The onion is known as Phalandu in Sanskrit. Its medicinal and physical properties are explained in the texts of ayurveda. Ayurveda acharyas have explained the uses of onion in various health conditions and have classified onions by colour: “rakta phalandu” (red onion) and “shweta phalandu” (white onion) are the two varieties. According to ayurveda, onions are heavy to digest and slimy to touch. They taste sweet and are pungent to smell. They increase the fire component of the body (ushna veerya) and acquire a sweet taste after digestion (madhura vipaka). Its seeds and fruit are used in ayurvedic preparations. Medicinal uses of onion in different health conditions: 1. Onion normalizes vata and increases kapha and pitta. Hence it is used in diseases which occur due to vitiation of vata. It acts as an anti-inflammatory and reduces pain. Ayurvedic texts recommend its use in sciatica, arthritis and other diseases which involve the bones, joints and peripheral nervous system. 2. A poultice of onion and other herbs helps to reduce swelling and pain in joints and hard abscesses. 3. Application of onion juice is recommended for pigmentation and dark spots which appear on the face. 4. Onion juice is recommended in ear pain and blurred vision. 5. Onion helps to rejuvenate the liver, normalize digestion, relieve constipation and increase appetite. Hence it is very beneficial in piles (hemorrhoids), constipation, jaundice and indigestion. 6. White onions are recommended in bleeding disorders to reduce bleeding. Hence they are used in conditions like bleeding piles and bleeding through the nose. 7. Onions are used in home remedies to reduce cough.
8. Onion and its seeds are known to help in erectile dysfunction, premature ejaculation and male infertility. They help to increase libido and the quality and quantity of semen. Onion is a very good vajikara. 9. It helps to reduce itching of the skin. 10. As onions increase the rajas and tamas of the mind, they mask analyzing capacity, intelligence and grasping power. Dr. Savitha Suri is an ayurvedic consultant physician with 26 years of experience in the field of ayurveda. The content is copyrighted to Dr. Savitha Suri and may not be reproduced on other websites. You can contact her for free online ayurvedic consultations at email@example.com
Controlling Mealybugs on Orchids Mealybugs attack many plants, including orchids, especially Phalaenopsis. These grey, soft-bodied pests are covered by a white cottony mass. Where to find them 1. They appear as tiny balls of cotton in the leaf axils. You might even find them on the stems, stem joints, or the undersides of the leaves. 2. In the preliminary stages of infestation, they occur at the leaf axils. Damage caused by mealybugs 1. As they suck the nutrient-containing sap from the plant, they weaken it. You will find the leaves drying and falling, and the same will happen to the flowers. 2. They form associations with ants: ants get nectar from the mealybugs, while in turn the ants protect the bugs from predators. These ants can cause extensive damage to the flowers and soft new leaves. How to control the infestation: 1. It is very important to isolate the infested orchid; otherwise the pest will spread to other healthy plants as well. 2. These pests can be controlled biologically by introducing natural enemies like the ladybird (especially the spotless type), parasitic wasps, and lacewings (beware, these can give irritating bites too). These are commercially available. 3. Neem oil is also helpful in controlling these pests. Four to five teaspoons of neem oil can be mixed in four litres of water and sprayed on the infested orchids. I also came across an effective home remedy of spraying a mix of garlic flakes and mineral oil on the mealybugs. 4. Using insecticidal soap or rubbing isopropyl alcohol on affected areas can be helpful. 5. It is difficult to control mealybugs on orchids using chemical pesticides. However, spraying parathion or malathion on mealybugs has shown some results. Please use insecticides only in case of large-scale pest infestations. 6. Repeat these control methods after three to four days, depending upon the scale of infestation. Insecticides should be used according to the instructions provided on the container. 7. In my case, I removed the infested areas of the plant (leaf axils and some portion of the stem) along with the mealybugs. The bugs have not returned for three weeks now (though my fingers are still crossed!). Points to keep in mind 1. Ants are known to form friendly associations with mealybugs. Ants deter the natural enemies of mealybugs, so controlling ant populations is important for effectively controlling mealybugs by biological methods. 2. Use only clean, sterilized tools while working with plants. 3. When watering, do not leave the orchid in water for very long. If you are watering orchids by dipping them in water, then make sure that they dry off, especially the leaves and stems. Dipping in water should also be avoided during hot and humid weather, because extra water on the orchid is an open invitation to pests and bacteria. Note: Restrict the use of chemical pesticides. Many insecticide-producing companies hide the harmful effects of these chemicals on other insects, especially bees! If you plan to use biological and chemical control methods side by side, you might see only limited success, as chemical pesticides do not discriminate between friendly and harmful insects. Content copyright © 2018 by Anu Dharmani. All rights reserved. This content was written by Anu Dharmani. If you wish to use this content in any manner, you need written permission. Contact Anu Dharmani for details.
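For what it's worth, the neem-oil mix described in the control steps above (four to five teaspoons of oil in four litres of water) works out to roughly a 0.5% spray. The quick sketch below assumes a standard 5 mL metric teaspoon, a value not stated in the article:

```python
# Approximate strength of the neem-oil spray described above.
# Assumption (not in the article): 1 teaspoon ~ 5 mL (metric teaspoon).

TEASPOON_ML = 5.0
WATER_ML = 4 * 1000  # four litres of water

for teaspoons in (4, 5):
    oil_ml = teaspoons * TEASPOON_ML
    percent = oil_ml / (WATER_ML + oil_ml) * 100  # % v/v of the total spray
    print(f"{teaspoons} tsp neem oil in 4 L water -> {percent:.2f}% v/v")
```

This gives a spray of about 0.50-0.62% neem oil by volume, in line with the 0.5-1% dilutions commonly suggested for horticultural oil sprays.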
Oil is a product of great utility, the symbolic signification of which harmonizes with its natural uses. It serves to sweeten, to strengthen, to render supple; and the Church employs it for these purposes in its rites. The liturgical blessing of oil is very ancient. It is met with in the fourth century in the "Prayer Book of Serapion", and in the Apostolic Constitutions, also in a Syriac document of the fifth or sixth century entitled "Testamentum Domini Nostri Jesu Christi." The aforesaid book of Bishop Serapion (d. c. 362) contains the formula for the blessing of the oil and chrism for those who had just received baptism, which was in those days followed by confirmation in such a manner that the administration of both sacraments constituted a single ceremony. In the same book is found a separate form of blessing for the oil of the sick, for water, and for bread. It is an invocation to Christ to give His creatures power to cure the sick, to purify the soul, to drive away impure spirits, and to wipe out sins. In the Old Testament oil was used for the consecration of priests and kings, also in all great liturgical functions, e.g., sacrifices, legal purifications, and the consecration of altars (Exodus 30:23, 33; 39:27, 29; 11:9, 15; Leviticus 6:15 sq.) In the primitive Church the oils to be used in the initiation of catechumens were consecrated on Holy Thursday in the Missa Chrismalis. Two different ampullae were used, one containing pure oil, the other oil mixed with balsam. This mixture was made by the pope himself before the Mass, in the sacristy. During the Mass two clerics of lesser rank stood before the altar holding the ampullae. Towards the end of the Canon the faithful were allowed to make use of it themselves (Tertullian, "Ad Scap."
iv.), but the same oil also served for extreme unction. The vessels holding it were placed on the railing surrounding the space reserved for the clergy. The deacons brought some of these vessels to the altar to receive that blessing of the pope which we read today in the Gelasian and Gregorian Sacramentaries. The pope continued the mass while the deacons returned the ampullae to the place whence they had brought them, and a certain number of bishops and priests repeated over those which had not been brought to the altar the formula pronounced by the pope. The large ampullae were presented for consecration by the archdeacon and one of his assistants. The archdeacon presented to the pope the ampulla of perfumed oil, the pontiff breathed on it three times, made the sign of the cross, and recited a prayer which bears a certain resemblance to the Preface of the Mass. The ampulla of pure oil was next presented to the pope, and was consecrated with less solemnity. The consecration and benediction of the holy oils now take place on Holy Thursday at a very solemn ceremony reserved for the bishop. He blesses the oil which is to serve at the anointing of catechumens previous to baptism, next the oil with which the sick are anointed in the Sacrament of Extreme Unction, finally the chrism, which is a mixture of oil and balsam, and which is used in the administration of the Sacrament of Confirmation. The use of oil in Christian antiquity was not, as has been maintained, a medical prescription adopted by the Church. In Apostolic times St. James directed the priests or ancients of the community to pray for the sick man and to anoint him with oil in the name of Jesus (James 5:14). And shortly afterwards, probably in the second century, a gold leaf found at Beyrout, in Syria, contains an exorcism "pronounced in the dwelling of him whom I anointed." This is, after the text of St.
James, the earliest evidence of the use of oil accompanied by a formula in the administration of a sacrament [see Theophilus of Antioch (d. 181), "Ad Autolyc." I, xii, in P.G. VI, 1042]. The oil of the sick might be blessed not only by priests, but also by laymen of high repute for virtue, and even by women. In the sixth century St. Monegundus on his death-bed blessed oil and salt which were afterwards used for the sick ("Vita S. Monegundi", ix, in "Acta SS. Ord. S. Bened." I, 204; Gregory of Tours, "Vita Patr." xix, 4). A similar instance is met with in the life of St. Radegund (Vita Radeg., I, xxxv). In the West, however, the tendency was early manifested to confine the blessing of the oil of the sick to bishops only; about 730 St. Boniface ordered all priests to have recourse to the bishop (Statut., xxiv). In 744 the tendency was not so pronounced in France, but the Council of Châlons (813) imposed on priests the obligation of anointing the sick with oil blessed by the bishop (can. xlviii). In the East the priests retained the right to consecrate the oil. The custom even became established, and has lasted to the present time, of having the oil blessed in the house of the sick person, or in the church, by a priest, or, if possible, by seven priests. During the time of the catechumenate those who were about to become Christians received one or more anointings with holy oil. The oil used on this occasion was that which had received the blessing mentioned in the Apostolic Constitutions (VII, xlii). This anointing of the catechumens is explained by the fact that they were regarded to a certain extent as being possessed by the devil until Christ should enter into them through baptism. The oil of catechumens is also used in the ordination of priests and the coronation of kings and queens. The "Ordo Romanus" (c. 730) shows that in Rome, on Holy Thursday, the archdeacon went very early to St.
John Lateran, where he mixed wax and oil in a large vase, this mixture being used to make the Agnus Deis (Mabillon, "Mus. Ital.", II, 31.) The same document shows that in suburban churches wax was used, while Pseudo-Alcuin (Divin. offic., xix) says that both wax and oil were used. In the Liturgy of the Nestorians and the Syrian Jacobites, the elements presented at the Eucharistic Consecration have been prepared with oil. Among the Nestorians a special rubric prescribes the use of flour, salt, olive oil, and water ("Officium Renovationis fermenti"; Martène, "De antiquis Eccles. ritib.", I, iii, 7; Badger, "Nestorians", II, 162; Lebrun, "Explic. des prieres de la messe", dissert, xi, 9). From the second century the custom was established of administering baptism with water specially blessed for this purpose. Nevertheless, the sacrament was valid if ordinary water was used. We are not well informed as to the nature of the consecration of this baptismal water, but it must be said that the most ancient indications and descriptions say nothing of the use of oil in this consecration. The first witness, Pseudo-Dionysius, does not go beyond the first half of the sixth century; he tells us that the bishop pours oil on the water of the fonts in the form of a cross (De hierarch. eccles., IV, x; cf. II, viii). There is no doubt that this rite was introduced at a comparatively late period. The maintenance of more or less numerous lamps in the churches was a source of expense which the faithful in their generosity hastened to meet by establishing a fund to purchase oil. The Council of Braga (572) decided that a third of the offerings made to the Church should be used for purchasing oil for the light. The quantity of oil thus consumed was greater when the lamp burned before a famous tomb or shrine, in which case it was daily distributed to pilgrims, who venerated it as a relic (Kraus, "Real-Encykl.", II, 522). (See LIGHTS.) SCHROD in Kirchenlex., s.v.
Oele, heilige; BYKOUKAL in Kirchl. Handlex., II (1909), 1205; BARRAUD, Notice sur les saintes huiles et les vases qui servent a les contenir in Bulletin Monumental, VII (1871), 451-505; Revue de l'Art Chrétien, II (1884), 146-53. APA citation. (1910). Holy Oils. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/07421b.htm MLA citation. "Holy Oils." The Catholic Encyclopedia. Vol. 7. New York: Robert Appleton Company, 1910. <http://www.newadvent.org/cathen/07421b.htm>. Transcription. This article was transcribed for New Advent by Beth Ste-Marie. Ecclesiastical approbation. Nihil Obstat. June 1, 1910. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York.
What is Arthritis? Arthritis means an inflamed joint. A joint normally consists of two cartilage-covered bone surfaces that glide smoothly against one another. When joints become inflamed, the joint swells and does not move smoothly. Over time, the gliding surface wears out. There are many types of arthritis. Rheumatoid arthritis is just one type. Wear-and-tear arthritis (osteoarthritis), gouty arthritis, and psoriatic arthritis are three other common types. Rheumatoid arthritis is considered a systemic disease. That is, it can affect many parts of the body. Patients often awaken with stiff and swollen joints. Early on, many patients feel tired. Two thirds of patients with rheumatoid arthritis have wrist and hand problems. Rheumatoid Arthritis of the hand Rheumatoid arthritis affects the cells that lubricate and line joints. This tissue – the synovium – becomes inflamed and swollen. The swollen tissues stretch supporting structures of the joints such as ligaments and tendons. As the support structures stretch out, the joints become deformed and unstable. The joint cartilage and bone erode. Often the joints feel hot and look red. Rheumatoid arthritis of the hand is most common in the wrist and knuckles (see Figure 1). The disease is symmetric, thus what occurs in one hand usually occurs in the other. Signs and symptoms of rheumatoid arthritis of the hand While stiffness, swelling, and pain are symptoms common to all forms of arthritis, there are some symptoms that are classic features of rheumatoid arthritis.
They are: - Firm nodules along fingers or the elbow - Soft lump on the back of the hand that moves as the fingers straighten - Angulation or collapse of fingers (figure 2) - Sudden inability to straighten or bend a finger because of a tendon rupture - Deformity in which the middle finger joint becomes bent (Boutonniere deformity – figure 3) - Deformity where the end of the finger is bent and the middle joint overextends (Swan-neck deformity – figure 3) - Prominent bones in the wrist In addition, patients with rheumatoid arthritis often have problems with numbness and tingling in their hand (carpal tunnel syndrome) because the swelling of the tendons causes pressure on the adjacent nerve. They may make a squeaky sound as they move joints (crepitus) and sometimes the joints snap or lock because of the swelling. How arthritis is diagnosed The diagnosis of rheumatoid arthritis is made based on clinical examination, x-rays, and lab tests. Your doctor will ask questions about your symptoms and how the disease has affected your activities. Rheumatoid arthritis may have a hereditary component, thus your physician will ask whether other family members have had rheumatoid arthritis or symptoms similar to yours. Your doctor will do a detailed examination of your hands. The clinical appearance helps to diagnose the specific type of arthritis. X-rays are often helpful; certain findings are characteristic for rheumatoid arthritis. These findings include swelling of non-bony structures, joint space narrowing, decreased bone density, and erosions near joints. There are several blood tests that are often ordered to confirm the clinical diagnosis. These are the rheumatoid factor, sedimentation rate and sometimes the anti-CCP (cyclic citrullinated peptide). MRI – a special imaging study – has also been used to help confirm the diagnosis. Treatment of Rheumatoid Arthritis Treatment for rheumatoid arthritis aims to decrease inflammation, relieve pain and maintain function.
While there is no cure for rheumatoid arthritis, medications are available that slow the progression of the disease. Optimal care involves a team approach among the patient, physicians, and therapists. The care of the rheumatoid patient requires not only a hand surgeon but also a hand therapist, rheumatologist, and the patient’s primary care physician. The rheumatologist is often the physician that monitors and decides the specific type of medicine that is felt to be the most effective for the patient’s stage in the disease process. The hand therapist will provide instruction on how to use your hands in ways that help relieve pain and protect joints. Therapists also can provide exercises, splints, and adaptive devices to help you cope with activities of daily living. Rheumatoid arthritis can be a progressive disease. Surgical interventions need to be appropriately timed in order to maximize function and minimize deformity. In certain cases, preventive surgery may be recommended. Preventative surgery may include removing nodules, decreasing pressure on joints and tendons by removing inflamed tissue, or removing bone spurs that may rub on tendons or ligaments. If a tendon ruptures, a hand surgeon may be able to repair the tendon with a tendon transfer or graft. There are several types of procedures to treat joints affected by rheumatoid arthritis, including removal of inflamed joint lining, joint replacements, and joint fusions. The specific procedure(s) chosen depends on many factors. These factors include the particular joints involved, the degree of damage present, and the condition of surrounding joints. One of the most important factors in deciding the most appropriate surgical procedure is the needs of the patient. There are often many ways to treat hand deformities in rheumatoid arthritis. Your hand surgeon can help you decide on the most appropriate treatment for you. Content provided by American Society for Surgery of the Hand
|One book, Any style...| So, as I continue to observe schools (see previous post), I look for the kinds of choices which will make the greatest percentage of students "OK," and I look for that in every classroom, in the library, in any dedicated computer space, wherever and whenever kids are "working on learning." I look to see different seating choices, different light levels, different senses of enclosure (why restaurants have booths), different height work surfaces (sit, stand, etc), different options for noise control, and different tools for gaining access to communication and for communication. I look to see if students who need to be standing are standing, if those who need to be sprawled on the floor are sprawled on the floor, if those who need space around them have space around them, if those who need close contact have close contact. I look to see, if it is not large group time, if those who need quiet, have quiet - via a place to hide or just an iPod or mp3 player keeping stray sounds at bay. |Where to work?| a few options at one coffee shop. (also, booths, tables, high tables) I look to see if multiple representations are always available. Are there different math manipulatives, and are some kids using their fingers while others process in their heads and others use pencils and others keep track of their steps through (free) calculators which record their actions. Are kids reading ink-on-paper and via computer reader/web based reader and via audiobook? Are kids writing via pencils and pens (differently shaped), via keyboard (and what kind of keyboard), on phone keypad, or with their voice. I look to see if YouTube is in use, if videos are available to explain, if audio files are available to help connect kids (what did a Babylonian sound like?). I like to see kids online looking up words, finding images, hearing pronunciations of unfamiliar words, using whatever technologies they need to scaffold their own learning. 
And I look to see that in every grade, at every level. It is just as important for a high school physics student to be able to find alternatives to the teacher's explanation and delivery system as it is for a first grader. Do students have collaborative notetaking options (Google Docs)? Are assignments handed in digitally so paper and the handling of it is not an issue? Is homework limited to projects which do not demonstrate parental/home resources more than anything else? Is homework always flexible enough in description to not be ability-centric? I look, from the very entry to the building, to see if the school is technology platform and brand agnostic. No matter how much I might love Apple or Google or Mozilla or Microsoft, I do not ever want to see "branded" schools or publicly "branded" teachers. No student should ever be made to feel 'outside' because of personal or family brand preferences or limited options. And I need to see universal access tools on every school computer, at least all that is free or that which comes with the device's platform. We should not divide access into "ours" and "theirs." Of course I look for complete inclusion of all of these things everywhere. The availability of universal design must be, yes, universal, or it is just another word for "Special Education." There's more... testing, evaluation flexibility - time flexibility - teacher/student matching, things not immediately visible when you walk a school's corridors. But if you see universal design everywhere you look, there's a good chance you'll see it even where you can't quite see. - Ira Socol
The poinsettia (or Mexican Flame Tree or Christmas Star, as it is also known) is synonymous with the festive season, but keeping the plant alive long enough to see Christmas Day can be a challenge. Poinsettias are the second best-selling houseplant in the UK (after the Phalaenopsis orchid). Remove the dead leaves from the pot, and continue to remove any leaves that fall off. If the stems of the plant have started to rot, cut them back far enough that you can remove the dead parts. Place the poinsettia near a bright, south-facing window. Poinsettias are tropical plants that benefit from plenty of light. Monitor how much you are watering your poinsettia. The plant should be moist rather than soggy. How much water it needs will depend on the temperature and humidity level. When in doubt, skip the water. Fertilise the poinsettia once a month after you've pruned it. As a rule of thumb, a poinsettia will require 1 or 2 tablespoons of fertiliser. Cover your poinsettia plant every night or move it to a dark cupboard overnight. For it to bloom again, a poinsettia requires 14 hours of complete darkness every night. Continue to cover your plant until the buds start to appear again.
While there is no surefire formula for success, achieving your goals requires a solid plan of action. Goals without a precise plan often fall through the cracks. Step #1 Set Clear And Specific Goals While there is nothing wrong with setting a goal that challenges you to reach your full potential, you must ensure that your goals are clear and specific. As an example, setting a goal to eat healthier this year is not particularly actionable or measurable. However, setting a goal to eat 5-8 servings of fruits and vegetables a day is a clear and specific goal that you can measure. Step #2 Map It Out Once you have identified a clear and specific goal, you want to break it down into bite-size chunks. Some of the goals you set in your personal and professional life will require you to create strategies, tactics, and step-by-step pieces. As an example, if your goal is to be more organized this year, you could start with organizing your closets, then move on to each room one by one. Step #3 Set A Date Achieving your goals means you need a clear timeframe to work within. Without a set date in mind, it is too easy to tell yourself you will start tomorrow, or next week, or next month. Step #4 Take The First Step Sometimes the first step toward achieving your goals is a small one; however, the first step is often the most difficult and most important. An excellent example is how each New Year many individuals create resolutions. While resolutions are excellent, if they only remain on paper it will be impossible for you to realistically achieve your goals. Step #5 Measure Your Success In Step 2 you already mapped out your goals; now it is time to hold yourself accountable by measuring your success. This will require you to be honest with yourself about your progress, and will help you keep track along the way.
Even if you have setbacks along the way to your goal, your forward momentum will continue if you celebrate your small wins and hold yourself accountable for your areas of opportunity. Achieving your goals begins with you! By following the 5 simple steps above, you will be on your way to success! Question: What is your experience with setting goals? Leave a comment below. Until next time...
Jornal de Pediatria Print version ISSN 0021-7557 TORRES, Marcia R. F. et al. Marcus Gunn Phenomenon: differential diagnosis of palpebral ptoses in children. J. Pediatr. (Rio J.) [online]. 2004, vol.80, n.3, pp. 249-252. ISSN 0021-7557. http://dx.doi.org/10.1590/S0021-75572004000400015. OBJECTIVE: The aim of this paper is to review the existing literature on the subject and to report on and discuss a case of Marcus Gunn Phenomenon. DESCRIPTION: A five-year-old female, an otherwise healthy patient, was seen while still a few months old by a pediatrician, who detected a disorder of the right eye, initially believed to be strabismus, at a follow-up childcare consultation. Several ophthalmologists failed to establish a precise diagnosis. After a pediatric ophthalmologist had examined the child at four years of age, a diagnosis of Marcus Gunn Phenomenon, otherwise known as jaw-winking phenomenon, was confirmed. Apart from this anomaly, physical, ophthalmological, and neurological examinations were normal. Since the ptosis was mild and no association with strabismus, amblyopia or other conditions was established, no surgical procedure has been necessary to date. COMMENTS: This report is an alert to pediatricians regarding this largely unknown phenomenon, making it possible for them to identify it, refer the patient to an ophthalmologist, and establish a differential diagnosis from other, more severe forms of ptosis requiring more aggressive treatment. Keywords: Marcus Gunn Phenomenon; Marcus Gunn; jaw-winking phenomenon; congenital ptosis; eyelid ptosis and differential diagnosis.
As stated on the flyleaf and in the introduction to this book, Blindness and Visual Handicap: The Facts is intended to increase the understanding (by social workers, community leaders, employers, and those in close contact with the blind) of the causes of blindness and its effects on blind persons; it also gives examples of ways of alleviating their disability. The first 109 pages are written by a sighted ophthalmologist who outlines the causes of blindness in lay terms. The remainder of the book is written by a blind international leader of work for the blind. The hopeful nature of productive adjustment to blindness is exemplified by a chapter entitled "Trail Blazers," which gives short biographies of famous blind persons. Hughes WF. Blindness and Visual Handicap: The Facts. Arch Ophthalmol. 1983;101(5):830. doi:10.1001/archopht.1983.01040010830033
Financing means obtaining financial support from banking institutions. Both start-up companies and established businesses require ongoing finance. Some companies need financial support to run day-to-day operations; others require financing to expand their services, open more branches, and grow. Interest rates on business financing are fairly high, and lending institutions such as banks provide loans to business owners. The borrowed money and interest are repaid in installments. When financing, you should be careful: the amount borrowed and the amount you repay will not be the same, since you must also pay the interest, which could be 15%-20%. Suppose you take out a loan for 100,000 dollars; the amount repayable could then be 125,000, but the good part is that you can repay in installments over a period of time. When financing, you must check the interest rate, the monthly repayment amount, the finance terms, and the repayment term. You should first evaluate how much money is required and estimate the returns that would be produced by the investment. You should also calculate how many years it would take for the investment to earn money for the company. The loan amount should be sufficient, and it should help in growth. Banks or financial institutions that provide financing receive the financed amount back in installments, including interest. They make profits, and they normally finance against fixed assets held as collateral. Collateral is a guarantee that the borrower will repay the amount; if the borrower does not repay on time, the lenders have the right to sell the collateral.
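The installment arithmetic above can be sanity-checked in a few lines. This is only an illustration: the 100,000 principal and 125,000 total repayable come from the example in the text, while the 24-month term is an assumed figure, not something the article specifies.

```python
# Figures from the article's example; the 24-month term is assumed for illustration.
principal = 100_000
total_repayable = 125_000
months = 24  # assumed repayment term

total_interest = total_repayable - principal      # cost of borrowing
flat_rate = total_interest / principal            # interest as a fraction of principal
monthly_installment = total_repayable / months    # equal installments over the term

print(f"Total interest: {total_interest}")
print(f"Interest over the term: {flat_rate:.0%}")
print(f"Monthly installment: {monthly_installment:,.2f}")
```

A quick check like this makes it easy to weigh the monthly repayment amount against the finance terms before committing.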
For small business owners, the government provides financing schemes that help promote small and medium-sized companies. Small and medium-sized companies can also get loans from the U.S. Small Business Administration (SBA), and its financing schemes are simple and flexible. It is also easier to obtain a loan under an SBA scheme than from banks and other financial institutions. If you apply for an SBA small business loan program, the SBA stands as a guarantor for the borrower. Another financing option is equity financing from family, employees, and others, who are provided with shares of the company in exchange for money. A business may also consider financing in the form of venture capital. Venture capitalists invest in the company and take a risk when they believe the company will grow and provide sufficient returns. Financing through venture capitalists is a real struggle: there are many strict guidelines to be followed by management, and proper accounting procedures have to be adopted. Venture capitalists will also take part in management, and while taking decisions their role has to be kept in mind.
Copyright © University of Cambridge. All rights reserved. Mathematics is critical to the study of any STEM subject; indeed, historically the development of science, technology, engineering and mathematics has often gone hand in hand. The scientist or engineer needs to embrace mathematics in order to get the most from their studies. Unfortunately, students often struggle with the mathematical aspects of their scientific degree courses. In this article we explore some of the main mathematical problems arising. Far from being simply a lack of content knowledge, we believe that the main area of concern is in mathematical process skills. Problem: Students don't know Whilst preparing stemNRICH it was clear that sometimes certain content knowledge was lacking: those teaching biology, chemistry, physics and engineering courses often claimed that students didn't know enough about various topics in mathematics. Sometimes this lack of content knowledge was obvious: students in engineering need to know about complex numbers. At other times it was subtler or a matter of degree: biologists needed to know more about graphs and equations. Whilst these topics obviously varied across universities and courses, interestingly, there was a surprisingly large overlap between the mathematical needs: the same core topics seemed to emerge across many courses. Solution: We designed stemNRICH around the content areas needed for university STEM courses. Problem: Students can't apply Beneath any issues which might arise in knowledge of content, many students with good grades in mathematics seem to find it difficult to apply the mathematical knowledge that they might have. Why would this be the case? It seems that there are several main reasons, common to all: - Overly procedural thinking. Mathematics exams can often be passed by learning the content procedurally. This means that students can answer certain types of question by following a recipe.
The problems in scientific mathematics arise because even minor deviations from the precise recipe cause the student to fail to know what to do. - Lack of ability to translate mathematical meaning to real contexts. Students who are very skilled at mathematics might have trouble seeing how to relate the mathematical process to a real-world context; this hampers the use of common sense, so valuable in science. - Lack of ability to make approximations or estimations. Real scientific contexts are rarely simple. In order to apply mathematics predictively, approximations or estimations will need to be made. To make approximations or estimations requires the student to really understand the meaning and structure of the mathematics, along with the underlying scientific meaning. - Lack of multi-step problem solving. Scientific mathematics problems are not usually clearly 'signposted' from a mathematical point of view. The student must assess the physical situation, decide how to represent it mathematically, decide what needs to be solved and then solve the problem. Students who are not well versed in solving 'multi-step' problems in mathematics are very likely to struggle with the application of their mathematical knowledge. - Lack of practice. There are two ways in which lack of practice can impact mathematical activity in the sciences. First is a lack of skill at basic numerical or symbolic manipulation. This leads to errors and hold-ups regardless of whether the student understands what they are trying to do. Second is a lack of practice at thinking mathematically in a scientific context. - Lack of confidence. Lack of confidence builds with uncertainty and failure, leading to more problems. Students who freeze at the sight of numbers or equations will most certainly underperform. - Lack of mathematical interest. Students are hopefully strongly driven by their interest in science.
If mathematics is studied in an environment independent of this interest, then mathematics often never finds meaning and remains abstract, dull and difficult. Solution: Our stemNRICH problems target these critical mathematical process skills. To make the most of and enjoy a university STEM course, students need a solid base of content coupled with an equivalently strong set of mathematical process skills allowing them to apply their knowledge successfully. Insufficient levels in either area will cause students trouble. NRICH problems differ from many standard textbook questions or interventions because there is always a focus on a mathematical process; stemNRICH takes this well-developed NRICH philosophy and applies it to scientific contexts. It is hoped that by supplementing standard, traditional preparations with material from stemNRICH, students will arrive at university well equipped for a happy, productive and successful time.
MEDICINES IN CONSTIPATION - In an enema, a locally active agent is introduced through the anus into the rectum and colon, helping the evacuation of the stool either by drawing water into the rectum (saline enema, glycerine enema) and/or by lubricating the stool (glycerine enema, oil enema). - Enemas, by causing rectal distension (a volume effect), stimulate the large intestine and rectum into action and thus help in emptying the distal bowel. - Enemas form the backbone of bowel training programmes in children who suffer intractable constipation following anorectal surgery or due to a weak nerve supply to the rectum. - Commercially packaged enema bags are easily available in the market. These bags have a tip that releases the liquid inside the rectum once the tip is placed just inside the rectum and the bag is squeezed. - A simple saline enema suffices most of the time. Oil enemas and glycerine enemas are required in a few cases. In stubborn cases, a peroxide enema can be used (1% hydrogen peroxide with olive oil can act as a slow-acting enema for a hardened rectal plug). A soap water enema is usually not preferred as it can irritate the gut. - Administration of an enema, however, requires a number of precautions. Hence, it is advisable that a patient first learns the technique of enema administration, or he might face undesirable consequences. Precautions while administering an enema: Beware of loading too much water into a patient with heart or kidney disease, as it can be dangerous. Pushing in the fluid under great pressure can cause colonic rupture. Pushing in too much enema fluid, especially a plain water enema, in the presence of an inflamed colon can cause water intoxication (fluid overload on the heart, kidneys, and other organs). Too frequent administration of enemas can cause soreness of the anus. In younger children, suppositories are preferred over enemas.
They are bullet-shaped pills that can be pushed up the rectum and induce defecation by their local actions (glycerin suppository, Dulcolax suppository). - Laxatives are agents taken by mouth in liquid, tablet, gum, powder or granular form that facilitate the passage of stools in various ways. - They can be: - Bulk-forming laxatives: These are the safest. They are fiber supplements that swell up after absorbing water from the intestine and make the stool softer, e.g. psyllium (flea seed) husk. - Stimulants: They cause rhythmic muscle contractions in the intestines, e.g. bisacodyl. - Lubricants: Grease enables stools to move easily through the intestine, e.g. mineral oil taken at bedtime. They should never be taken with meals, as the oil can prevent the absorption of vitamins from food. - Saline laxatives: They draw water into the colon and facilitate easy passage of the stool, e.g. milk of magnesia. - Laxatives are indicated when a person fails to pass stools for 4 days or more and has no stomach pain. For temporary relief of constipation, an enema is a favoured option, as it is unfair to trouble the whole gut when the problem lies in its lower part.
Tea is the most popular beverage in Japan and an important part of Japanese food culture. Various types of tea are widely available and consumed at any point of the day. Green tea is the most common type of tea, and when someone mentions "tea" (お茶, ocha) without specifying the type, it is green tea that is being referred to. Green tea is also the central element of the tea ceremony. Among the most well-known places for tea cultivation are Shizuoka, Kagoshima and Uji.
Introduction to Pre-eclampsia Date Uploaded: 05/02/2019 Pre-eclampsia (PE) is a disorder of pregnancy characterized by the onset of high blood pressure and often a significant amount of protein in the urine. When it arises, the condition begins after 20 weeks of pregnancy. In severe disease there may be red blood cell breakdown, a low blood platelet count, impaired liver function, kidney dysfunction, swelling, shortness of breath due to fluid in the lungs, or visual disturbances. Pre-eclampsia increases the risk of poor outcomes for both the mother and the baby. If left untreated, it may result in seizures, at which point it is known as eclampsia. Risk factors for pre-eclampsia include obesity, prior hypertension, older age, and diabetes mellitus. It is also more frequent in a woman's first pregnancy and if she is carrying twins. The underlying mechanism involves abnormal formation of blood vessels in the placenta, amongst other factors. Most cases are diagnosed before delivery. Rarely, pre-eclampsia may begin in the period after delivery. While historically both high blood pressure and protein in the urine were required to make the diagnosis, some definitions also include those with hypertension and any associated organ dysfunction. Blood pressure is defined as high when it is greater than 140 mmHg systolic or 90 mmHg diastolic at two separate times, more than four hours apart, in a woman after twenty weeks of pregnancy. Pre-eclampsia is routinely screened for during prenatal care. Recommendations for prevention include: aspirin in those at high risk, calcium supplementation in areas with low intake, and treatment of prior hypertension with medications. In those with pre-eclampsia, delivery of the baby and placenta is an effective treatment. When delivery becomes recommended depends on how severe the pre-eclampsia is and how far along in pregnancy a woman is.
Blood pressure medication, such as labetalol and methyldopa, may be used to improve the mother's condition before delivery. Magnesium sulfate may be used to prevent eclampsia in those with severe disease. Bedrest and salt intake have not been found to be useful for either treatment or prevention. Pre-eclampsia affects 2–8% of pregnancies worldwide. Hypertensive disorders of pregnancy (which include pre-eclampsia) are one of the most common causes of death due to pregnancy. They resulted in 46,900 deaths in 2015. Pre-eclampsia usually occurs after 32 weeks; however, if it occurs earlier it is associated with worse outcomes. Women who have had pre-eclampsia are at increased risk of heart disease and stroke later in life. The word "eclampsia" is from the Greek term for lightning. The first known description of the condition was by Hippocrates in the 5th century BC.
For writers, editing is an essential part of the writing process. It not only helps to improve the quality and clarity of your work but also allows you to make sure that all aspects of your piece are consistent and accurate. But how do writers go about editing their own work? In this blog post, we’ll look at how experienced writers use effective techniques to edit their own writing. Why Editing Matters Editing is at the heart of any successful writing process, regardless of medium or genre. It allows a writer to perfect and refine their work, while also ensuring that it meets expectations in terms of grammar, punctuation, accuracy, tone, and clarity. For writers, this process of self-editing can be a laborious and painstaking one. But it is also essential if they are to produce work that meets the highest standards. Whether you are writing for yourself, for magazines or newspapers, or for online publications, having an effective editing process in place will save you time and effort in the long run. Developing a Concept Brainstorming is the key to getting started with any writing project; it helps a writer to explore different ideas and perspectives before committing to one particular angle or approach. Once an idea has been chosen, the next step is creating a first draft – this should be done without worrying about grammar or punctuation so that the writer can simply express their ideas clearly. After the initial draft is complete, it’s important for a writer to take some time away from the writing piece. Sometimes this can mean setting it aside for a few days and coming back to it with fresh eyes, or simply reading through it one more time to make sure nothing has been forgotten. During this stage of editing, writers should pay attention to how their ideas are presented: is there a clear structure? Is the message in line with the original concept? Does any part of the text feel out of place? These questions can help writers identify areas that need further work. 
By taking an objective view of their work, writers will be able to give themselves an honest critique and create stronger material overall. Drafting and Revising Once a first draft has been created, the next step is revising the content. This includes proofreading for errors in grammar and punctuation as well as ensuring that the tone of voice is cohesive throughout the piece. Reading it aloud can help to identify any awkward phrasing or areas where clarification might be needed; this should be followed by soliciting feedback from others who can provide an objective opinion on how to improve the work. Some common mistakes made during this stage include not allowing enough time for editing and not being aware of how one’s word choice might be perceived by the reader. To avoid making these errors, it is important to set aside adequate time for editing and to read through the work several times in order to identify any areas that require further clarification or improvement. Time management is often essential for content creators and freelance writers to ensure their work is polished and of the highest quality. To successfully edit one’s writing, it is important to plan ahead and allow enough time for multiple rounds of editing. One way to do this is by setting a timeline that incorporates the steps needed to complete the editing process including proofreading, revising sentence structure, and checking grammar and spelling. Additionally, taking breaks between each round of edits can be beneficial in order to maintain focus and spot mistakes that may have been missed otherwise. Final Steps Before Publishing or Sharing Content As content creators and freelance writers, it is important to take the time to properly edit your work before publishing or sharing it. 
The final steps of editing involve ensuring accuracy of facts and references, formatting your product in a way that makes it easier for readers to digest, and optimizing the content so that it can be easily discovered by search engines. Following these steps will allow you to create high-quality content that accurately conveys your message. Ensuring Accuracy of Facts and References The process of editing should include checking all facts and references used in an article. This stage helps ensure accuracy and builds credibility with readers. When conducting research for a project, double-check the sources used for any relevant information. If quoting someone, be sure to include the source and provide any necessary links. After you have double-checked your facts and references, you can move on to formatting. Formatting Your Final Product It is important to format an article in a way that will make it easier for readers to digest. This includes how the content is broken up into sections and how those sections relate to each other. Utilizing internal links allows readers to easily navigate between different parts of the article. Additionally, breaking up text with images or videos can make it more visually appealing and engaging for readers. Optimizing Content for SEO In order for your work to be discovered by search engines, it needs to be properly optimized with relevant keywords, meta tags, and descriptions. It is important to research the right keywords for your article so you know how to incorporate them naturally into the content. Additionally, providing relevant meta tags and descriptions can help search engines understand more about what your content is about. Following these steps will help improve the visibility of your work online. By following each step of the editing process, content creators and freelance writers can create high-quality work that accurately conveys their message and reaches a wider audience. 
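The meta tags and descriptions mentioned above can also be checked mechanically before publishing. As a minimal sketch (the sample page string is hypothetical, and the 160-character cap below is a commonly cited guideline rather than an official limit), Python's standard html.parser can confirm that a page carries a meta description:

```python
from html.parser import HTMLParser

class MetaDescriptionChecker(HTMLParser):
    """Collects the content of a <meta name="description"> tag, if present."""

    def __init__(self):
        super().__init__()
        self.description = None  # stays None when no description tag is found

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

# Hypothetical sample page, just for illustration.
page = '<head><meta name="description" content="How writers edit their own work."></head>'

checker = MetaDescriptionChecker()
checker.feed(page)

print(checker.description)
# ~160 characters is a common guideline for descriptions, not a hard rule.
print(len(checker.description) <= 160)
```

A check like this is no substitute for keyword research, but it catches the simplest omission: shipping a page with no description at all.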
As with any project, taking the time to properly edit an article before publishing or sharing it leads to better results in the long run. Discover the tools and expertise you need to succeed in freelance writing or content creation. Or if you’re a prospective client in need of high-quality content, we’re ready to help. - Buy my book “$100,000 per Year as a Freelance Writer: It’s Possible, and Here’s How” on Amazon for Kindle, Books2Read for Apple, Barnes & Noble, Kobo, Scribd, and more in ebook and print editions, and Payhip as a PDF - Subscribe to my premium weekly newsletter - Contact me directly for freelance writing and content creation services With our expert guidance, achieving all your content creation goals is within reach, so why not let us help you succeed?
The thrill of oceanic wildlife encounters and a trip to yesteryear await you in this gorgeous seaside region WORDS BY KARYN FANOUS, PHOTOGRAPHY BY JOSEPH AND AARON FANOUS “There she blows!” was the excited cry as the humpback whale surfaced and sent a spout of water shooting high into the sky. And then another appeared, this time breaching and performing a spectacular back slap, sending out huge sprays of water. It seemed as though the whales enjoyed putting on a show for us inquisitive onlookers. At times they came very close to the boat, and appeared to be just as interested in us as we were in them. At other times, they did ‘backstroke’ as they floated belly up and slapped their giant fins on the water’s surface. They are such enormous, majestic creatures. We were aboard a whale watching and diving tour with Narooma Charters. Having boarded at the jetty in the sheltered harbour waters, we were joined by a pod of dolphins as we cruised out towards Montague Island, 9km off the coast. Our main aim was to scuba dive and snorkel with seals, so whales and dolphins were an added bonus! Narooma was originally known as ‘Noorooma’, the Aboriginal word meaning ‘clear, blue water’, with colonial settlement beginning in the early 1800s. Around this time, dairy cattle and cheese factories were established, and Wagonga, the small town at the head of the inlet, was used as a port to transport gold from nearby Nerrigundah and the slopes of Mt. Dromedary (also known as Gulaga). This impressive mountain was named by Captain Cook because of its resemblance to a camel’s hump. Today, dairying continues along with timber, fishing, oyster farming and tourism as the region’s main industries. Narooma is a favourite holiday spot for us because of the chance to get up close to the seals. Spring is our preferred time of year, so we can meet up with humpback whales as they migrate south to Antarctic waters.
Here at Narooma, the edge of the continental shelf drops off into the deep sea quite quickly, so it is not uncommon for whales to come close to shore. A variety of whales visit Narooma’s waters: one year, we were surprised by the sight of an Orca (Killer) Whale! There is a range of snorkelling and diving sites available. Our favourite is to dive in the shallow waters off Montague Island amongst the colony of Australian Fur Seals. These cute and playful seals love to frolic around underwater with scuba divers: twisting, turning and somersaulting with ease. Back on the surface, they often lie on their sides with one flipper raised in order to cool down. This is strange to us land-dwellers, as in spring, we find the water to be breathtakingly brisk! It’s no surprise that Montague Island is a Nature Reserve. It provides a home for NSW’s largest colony of Australian and New Zealand Fur Seals, and is a breeding ground for over 40,000 sea birds. Around 12,000 Little Penguins, Australia’s sole native penguin, are amongst its residents. These adorable birds feed at sea during the day and return to shore at dusk. The NSW National Parks & Wildlife Service conduct a variety of day and evening island tours covering Montague Island’s wildlife and Aboriginal and European history, including the lighthouse with its spiral staircase and stunning 360-degree view. If you are keen to see the Little Penguins waddling back to their burrows, then the evening tour is the one for you. Montague Island Lighthouse was built from granite and began operation in 1881. It was manually lit for 105 years until it was automated in 1986. The original light is now the centrepiece of the Narooma Lighthouse Museum, housed in the Visitor Centre. An essential part of the Narooma experience is Surf Beach, adorned at its southern end by Glasshouse Rocks. These striking, jagged pinnacles are a dramatic feature in the bright blue waters.
Little Lake sits just behind the beach and makes a great tranquil swimming and kayaking spot. On the next beach to the south, there are more fascinating rocky outcrops: some folded into grey and white-banded zigzags, and one that rears up like a shark’s open mouth. We’ve nicknamed this one ‘Shark Rock’. In Narooma Harbour you’ll find lovely little beaches. There’s also a very patriotic rock with a natural hole shaped much like Australia, not surprisingly named ‘Australia Rock’. It’s right next to Bar Rock Lookout on the southern headland at the harbour’s entrance. A very pleasant way to explore the harbour precinct is via the 850m-long Mill Bay boardwalk, which takes you along the pretty foreshores of Narooma. For those keen to wet a line, Narooma is well equipped for fishing. Wagonga Inlet and Montague Island are popular locations with fishing charters, with rock and beach fishing also on offer. There are numerous boat ramps and cleaning areas dotted around the town’s waterways. Flocks of hungry pelicans are likely to pay a visit whenever the day’s catch is being cleaned. The Wagonga Inlet sprawls quietly behind the town, providing lots of water sports fun. A fabulous way to explore the inlet is on the Wagonga Princess, a 100-year-old electric boat originally designed as a ferry. Made from Tasmanian Huon pine, she is full of character and charm. You’ll cruise around the beautiful inlet with skipper Charlie at the helm, learn about its history, step ashore for a rainforest walk, and enjoy Devonshire and billy teas. There are plenty of land-based activities as well. The Narooma golf course, with its picturesque cliff-side location, has some legendary holes to challenge its golfers. Alternatively, you can take a nostalgic horse and carriage ride around town, or go on a scenic picnic tour. We thoroughly enjoyed the Wagonga Scenic Tourist Drive (27km; 1 hour return) through the forests around the inlet west of Narooma.
The mostly unsealed roads are not suitable for caravans, so leave the van behind and take a picnic instead. We found a beautiful, secluded picnic spot along the drive and had a lovely time kayaking on the inlet. Along the way there’s a lush rainforest walk (30 minutes return). Grants Lookout gazes over the pretty Wagonga Inlet and Narooma, with Montague Island as the backdrop. Another beautiful drive is to head south along the Princes Highway. After 9km, turn east towards Mystery Bay, an excellent snorkelling location surrounded by Eurobodalla National Park. There is a lookout in the national park with an adjacent picnic area amongst wattles and banksias. The Mystery Bay Camping Area, right next to the beach, is a great spot to get back to nature. Campsites are unpowered and set in bushland. Back on the Princes Highway, head southwest to the National Trust listed Central Tilba and Tilba Tilba. These quaint heritage villages, around 20km south of Narooma, were founded on dairying and gold mining. Pastel-coloured period buildings line the main street of Central Tilba, housing antiques, art galleries, craft stores, cafes, an old-fashioned general store, and the Dromedary Hotel. There are a few shops that we find essential to visit each time we are in Central Tilba. We can never resist dropping in to ‘The Tilba Sweet Spot’, an old-fashioned lolly shop; ‘Mrs Jamieson’s Tilba Fudge’ with its creamy homemade delicacies; and ‘South Coast Cheese’ to stock up on vacuum-sealed cheeses to take home. Located in the old ABC Cheese Factory, South Coast Cheese is a destination in itself. Their specialty is handmade cheeses with enticing names such as Kalamata Garlic, Firecracker, Pickled Onion and Vintage Smoked. Not all of our purchases make it home because they are just too delicious! You’ll also need to have a milkshake made straight from the farm’s own Jersey cows. Cheese making and milk bottling can be viewed through the factory’s large windows.
There are even cheese making courses available – so much to see and experience! Just 4km to the north, Tilba Valley Wines produces a range of white and red wines from its vineyard overlooking Lake Corunna. Wine tasting, cellar door sales, and light meals are on offer. Seating options add a country feel. Guests can choose to enjoy lunch in the restaurant or a gazebo, on the verandah, by a pond, or by a log fire in winter. Tilba Tilba is a small village just 2 kilometres south of Central Tilba. This is where you’ll find the delightful Foxglove Spires Garden. Described as having “fairy tale like charm and quaint English style elegance”, this glorious garden is filled to overflowing with gorgeous plants, flowers and hidden paths. Antiques, collectables, homewares and a café are also available. Mt. Dromedary, also known by the Aboriginal name ‘Gulaga’, meaning ‘Mother Mountain’, sits imposingly behind the Tilba towns. Its cloak of rainforests protects sacred Aboriginal sites. This mountain is of great significance to the local Yuin people. Najanuga (Little Dromedary), a smaller rocky outcrop, sits proudly to the east. The mountain’s walking trail leaves from behind Tilba Tilba’s old-fashioned general store, ‘Pam’s Store’. Once a huge active volcano, Gulaga now has a magnificent rainforest up top. Allow half a day to explore this striking landform. Back at the Surf Beach Holiday Park in Narooma, we enjoyed happy hour with our friends as we gazed over the ocean, breathed in the salty air, and discussed what adventures the next day may hold. There’s a lot to love about Narooma. Like us, you’ll want to visit again and again!
European scientists at RoboEarth have created the first Internet for robots, called Rapyuta. Not because robots need a place to look at porn and tweet their bad pope jokes, but to help robots get along in this strange, confusing world in which they play an increasingly prominent role. "Instead of every robot building up its own idiosyncratic catalogue of how to deal with the objects and situations it encounters," BBC reports, "Rapyuta would be the place they ask for help when confronted with a novel situation, place or thing." It sounds fine in theory — if you trust robots. But for those convinced that providing robots with a common brain will only hasten the arrival of the robot uprising against mankind, Rapyuta is more like a dark harbinger of the apocalypse. We happen to be among those people, so we reached out to Dr. Heico Sandee, RoboEarth's program manager at Eindhoven University of Technology in the Netherlands, to reassure us that Rapyuta will not lead to our destruction. "That is indeed an important point to be addressed," Sandee acknowledged in an email. But he assured us that robots will use Rapyuta for no such thing. "The cloud computing systems we are developing are great for helping the robots for instance with recognizing specific objects, [but] they are far from making decisions on the scale you mentioned," he insisted. Sure. For now. But what about 10, 20, 30 years into the future, when robot chefs lose interest in identifying fruits and vegetables, and decide, you know what, let's nuke the moon? What will stop them then? "In order for that to happen, the robots need to become incredibly smart," Sandee told us, patiently. "Even smarter than the people that can circumvent those situations, to become realistic. Personally, I don’t believe that that will ever happen. In the end, it is us humans that also program the robotic systems to behave the way we want them to." We did not find this totally convincing.
Couldn't the robots simply bribe or threaten their human overlords? After all, humans, unlike robots, are emotionally frail creatures. "I’m afraid you are watching too many movies," Sandee replied. That's probably what Miles Dyson would have said, too.
Seven Sacraments Coloring Pages for Kids

The Diocese of Honolulu YouTube page offers a great video series on sacraments. The One 'Ohana series filmed in Hawaii presents a beautiful sense of Catholic family. "Part of Hawaiian culture, 'Ohana means family (in an extended sense of the term, including blood-related, adoptive or intentional). The concept emphasizes that families are bound together and members must cooperate and remember one another." (Wikipedia)

Crafts for Catholic Kids (Catholic Icing)
Guidelines for the Celebration of the Sacraments with Persons with Disabilities (USCCB)
Post Sacrament Evangelization Powerpoints (Archdiocese of Los Angeles)
The Seven Catholic Sacraments (American Catholic)

If you learn more about the sacraments, you can celebrate them more fully. You'll find easy-to-understand articles and a good sample of common questions and answers.

On Sacraments – from the USCCB

Click on the Faith Formation tab in the index below to see videos like The Creed, The Sacraments, Morality and Prayer. Each of the videos introduces the basic themes of each section of the Catechism.

Video: Faith Formation (USCCB) – videos and other resources on sacraments from the United States Conference of Catholic Bishops.
Prostate cancer is a relatively common type of cancer among men above 50 years of age. The prostate is a small, walnut-sized gland located below the urinary bladder, surrounding the upper part of the urethra. This form of cancer usually remains unnoticed as no symptoms appear in the early stages of the disease. As the disease progresses, the cancer may spread to other areas of the pelvis and may cause obstruction of urinary outflow or bone pain. Prostate cancer is reported to be the second leading cause of cancer death among men. Among all the diagnostic tests commonly carried out to screen for prostate cancer, the Prostate-Specific Antigen (PSA) blood test is the most effective. It detects up to eighty percent of prostate cancers, but the chance of a false positive is high with this test. A new form of the PSA test, called Complexed PSA or cPSA, is now available to screen for prostate cancer. Some organizations are touting cPSA as a more accurate form of PSA blood testing for the detection of early-stage prostate cancer, but an abnormal result does not constitute a conclusive diagnosis of cancer. PSA exists in three major forms in the blood: 1) free, 2) bound to a protein called alpha-1-antichymotrypsin and 3) bound to another protein called alpha-2-macroglobulin. People with prostate cancer have more of the form bound to alpha-1-antichymotrypsin and less of the free form compared to healthy men or those with benign diseases of the prostate. Initial screening for prostate cancer commonly uses the total PSA test, which measures all but the third form of PSA. If results are slightly elevated, a follow-up test to measure free PSA is recommended. This two-step approach, however, leads to many false-positive and false-negative results. The new cPSA test would replace it, so that only a single screening test would be needed.
According to a large-scale study that included researchers from Johns Hopkins and the New York University School of Medicine, the cPSA test demonstrated improved specificity over the total PSA assay, reducing the number of false positives and leading the investigators to conclude that cPSA could be used as a first-line test for prostate cancer screening. Doctors mostly recommend that men above 50 years of age get themselves screened for prostate cancer even if they do not have any symptoms. However, one must consult a doctor before undergoing this test.
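The arithmetic behind the forms of PSA described above can be sketched in a few lines. This is an illustrative sketch only: the PSA values and the 25% cut-off below are hypothetical placeholders chosen to mirror the relationship the article describes (men with prostate cancer tend to have a lower free-to-total ratio, since more of their PSA circulates bound to alpha-1-antichymotrypsin), not clinical thresholds.

```python
def free_to_total_ratio(free_psa, act_bound_psa):
    """Ratio of free PSA to total PSA.

    Total PSA as commonly assayed = free + ACT-bound; the
    alpha-2-macroglobulin-bound form is not detected by the test.
    Values are in ng/mL.
    """
    total = free_psa + act_bound_psa
    return free_psa / total

# Two hypothetical men with the same total PSA of 6.0 ng/mL:
healthy = free_to_total_ratio(free_psa=2.4, act_bound_psa=3.6)  # 0.40
suspect = free_to_total_ratio(free_psa=0.9, act_bound_psa=5.1)  # 0.15

CUTOFF = 0.25  # hypothetical follow-up threshold, for illustration only
print(healthy < CUTOFF, suspect < CUTOFF)  # False True
```

The point of the sketch is that the same total PSA can hide very different proportions of the complexed form, which is why measuring the complexed fraction directly (as cPSA does) could collapse the two-step screen into one test.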
Moon and Jupiter

ASTRONAUTS: Hey! There is orange soil. It's all over. Orange! Hey, it is! I can see it from here! It's orange! Crazy.

Apollo 17 astronauts Jack Schmitt and Gene Cernan got a little excited when they stumbled on a patch of bright orange soil while hopping across the lunar surface in December of 1972. It was the only splotch of color in the otherwise gray landscape. And now, that same soil has gotten planetary scientists a little excited, too. A recent analysis found that it contains a lot of water. Combined with observations by robotic spacecraft, the finding shows that the Moon is a lot wetter than anyone had expected just a few years ago. That's an exciting prospect for future lunar explorers, because water is a precious resource. But it's a bit of a scary prospect for lunar scientists, because it makes it tougher to explain how the Moon was born. The leading theory says the Moon coalesced from the debris of a powerful collision between Earth and another planet-sized body. But that scenario should have left the Moon high and dry. The Moon could have picked up some water from impacts by comets and asteroids, but not as much as the study of the orange soil suggests. The theory of the "giant impact" isn't dead, but it'll need some tweaking. The Moon rises after midnight tonight, with the planet Jupiter trailing behind it. Jupiter looks like a brilliant star. We'll talk about one of its moons tomorrow. Script by Damond Benningfield, Copyright 2011
The human body

1 What are the four main blood groups?
2 In degrees Celsius, what is the accepted normal temperature for the human body?
3 In centimetres, what length of head hair does the average person grow in a year?
4 Which is the longest bone in the human body?
5 Which organ produces insulin?
6 A typical adult skeleton has how many bones: a) 206 b) 216 c) 226?
7 Where on your body is the hallux?
8 Which part of the body is affected by lumbago?
9 By what term is the laryngeal prominence better known?
10 Which part of a human being's brain processes visual information received from the eyes?

Solution:
1 A, B, AB and O
2 Thirty-seven degrees Celsius
3 15cm
4 The femur, or thigh bone
5 The pancreas
6 a) 206
7 In the foot (it's the big toe)
8 The lower back
9 Adam's apple
10 The occipital lobe.
Questions about the sport of baseball. The [mlb] tag should also be used if the question concerns Major League Baseball directly. Baseball is a bat-and-ball sport in the same family as cricket, with origins in 18th-century England and first played in the United States in 1791. The game as currently known was primarily developed from its original form in the United States, with professional national leagues originating between 1869 and 1876, and has since spread to other countries including Japan, China, and Central America. The game is played by two teams of nine players each, with some leagues, including most professional leagues, allowing for substitutions. The game is played on a purpose-built level field called a "baseball diamond" or simply a "baseball field". Exact measurements vary depending on the age range and skill level (and some measurements can even vary from field to field), but the central feature is a square arrangement of four "bases", spaced 90 feet apart for major league play, with a raised mound in the center. Three bases are square sand- or fiber-filled white bags anchored to the ground, numbered counter-clockwise from the right as first, second and third base, while the fourth is a house-shaped white hard plate embedded in the ground, called "home plate". The ground between and around each of the bases is bare dirt, with white chalk "base lines" emanating from home plate out to first and third base and then beyond. The central "pitcher's mound" is also bare dirt, elevated ten inches above the level of home plate, with a white hard stripe embedded on it (called the "rubber") exactly 60 feet, 6 inches from home plate in the major leagues.
Beyond this "infield" lies the "outfield", bounded on the sides by the "base lines" which continue from home plate beyond the first and third bases, covered primarily with grass and with a fence or wall along its rear border (at a distance to home plate varying between 300 and 450 feet in the majors). At the end of each base line, at the rear fence, is traditionally a pole, called the "foul pole", used to help judge whether a ball hit into play is "fair" or "foul". The primary and most recognizable pieces of equipment used are the baseball itself, the baseball bat, and the baseball glove. The ball is 9 inches in circumference, roughly spherical, and traditionally constructed of a cork core, wound tightly with thin yarn and skinned in two pieces of white leather stitched together with red lacing. The bat is a club-shaped implement made of hardwood that can be no longer than 42" and no wider than 2.75" at its widest. The glove is worn on the player's non-throwing hand, is made of leather or synthetic fiber, oversized to provide padding and additional catching area, and typically has a web or solid layer of leather between the thumb and forefinger again to assist in catching. Players also wear myriad protective equipment at different times; all players typically wear an athletic cup or pelvic protector as well as athletic cleats. Batters typically wear a hard-shelled helmet that protects the cranium and ear, and often wear a shin guard on the outside of their leading leg and/or a forearm guard on their leading arm to guard against hits. The catcher, a specialized position, uses a more padded glove, and wears shin and knee pads, a chest protector and a padded face mask or face cage. Play of a game is divided into 9 innings, which are further subdivided into halves, and then into outs. At the start of an inning, one team (typically the visitor) is "batting", and the other team is "fielding". 
The fielding team occupies various traditional positions around the infield and outfield; typically there is a "baseman" on each base, a "shortstop" behind and to the right of second base (often the second baseman and shortstop take mirror-image positions behind and to each side of second base), three "outfielders" dividing the outfield into "left", "center" and "right field", and the "pitcher" (on the pitcher's mound) and "catcher" (behind home plate), which together are called the "battery". The batting team sends players one at a time to home plate, where each batter receives a series of "pitches" (throws of the ball) from the fielding team's pitcher, and attempts to hit the ball with their bat so that it flies out into the field of play. A pitch that crosses over home plate, between the batter's knees and belt buckle, is a "strike"; a batter that receives three strikes has "struck out". A pitch that bounces off the ground or does not pass through this "strike zone" as it crosses the front edge of home plate is a "ball"; if the pitcher delivers four balls to a batter, the batter "walks" to first base. Regardless of the location of the ball, if the batter swings at a pitch and misses, it is a strike. If a pitch hits the batter's body while the batter is standing in the "batter's box" to the side of home plate, the batter immediately advances to first base. If the batter hits a pitch with the bat and it flies backwards, crosses either base line before reaching the base on that side, or carries in the air to the outside of either foul pole, it is a "foul ball" and counts as a strike (but the third strike must be a "called" or "swinging" strike, not a foul ball). A foul ball can also be caught by a fielder before it hits the ground, in which case the batter is out. If and when the batter hits the ball into the field between the base lines and/or the foul poles, it is a "fair ball" and the batter attempts to reach first base.
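The ball/strike count rules described above form a small state machine, which can be sketched in Python. This is an illustrative sketch of our own, not part of the tag wiki; the function and outcome names are invented for the example. Note how a foul ball increments the strike count only below two strikes, so it can never be the third strike.

```python
def at_bat(pitches):
    """Resolve an at-bat from a sequence of pitch outcomes.

    Each outcome is one of: 'ball', 'strike', 'foul',
    'in_play', 'hit_by_pitch'.
    Returns 'walk', 'strikeout', 'in_play', or 'hit_by_pitch'.
    """
    balls = strikes = 0
    for pitch in pitches:
        if pitch in ('in_play', 'hit_by_pitch'):
            return pitch                    # ball batted fair, or batter hit
        if pitch == 'ball':
            balls += 1
            if balls == 4:
                return 'walk'               # four balls: batter walks
        elif pitch == 'strike':
            strikes += 1
            if strikes == 3:
                return 'strikeout'          # three strikes: batter is out
        elif pitch == 'foul':
            if strikes < 2:                 # a foul can never be the third strike
                strikes += 1
    raise ValueError('at-bat did not resolve')

# A batter fouling off pitches with two strikes stays alive:
print(at_bat(['strike', 'foul', 'foul', 'foul', 'ball', 'in_play']))  # in_play
```

The example at the bottom shows the rule in action: after the first strike and one foul the count is two strikes, the next two fouls change nothing, and the at-bat only ends when the ball is put in play.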
To reach it safely, the ball he batted into play must hit the ground in bounds before being caught, and the batter must reach first base before the fielding team can tag first base with a hand or foot while holding the ball. If the ball is caught before it lands, the batter is "caught out"; if the batter fails to reach first base before the fielding team can tag first base while holding the ball, the batter is "forced out". If the batter reaches first base without either of these happening, he is "safe", and becomes a "base runner", attempting to advance from first to second to third base, and then home to score a "run", as subsequent batters put the ball into play. The runner may only try to advance on a batted ball if the batter is not caught out, though if the batter is caught out the runner may "tag up" by touching their current base, and then attempt to advance (this is typically seen when the ball is hit deep to the outfield, and is known as a "sacrifice fly"). Only one runner may occupy a base, and if a runner is on a base and a batter or another runner is advancing to that base after the ball is put in play, the runner must vacate the base and advance, and can be "forced out" if the fielding team simply tags the base the runner must get to. A runner may also, at their option, attempt to "steal" a base by running from their current base to the next base without the ball having been batted into play. This is risky, as the pitcher or catcher can throw the ball to a baseman that can then tag the runner out. A batter that hits the ball over the rear fence of the outfield advances safely, without fear of being tagged out. If the ball carries over the fence without bouncing, it is a "home run", and the batter and any runners already on base advance to home plate and score (a "grand slam" is a home run hit with the bases "loaded" - all occupied by a runner - and so scores four runs). 
If the ball bounces at least once and then ends up over the rear fence, that is a "ground rule double"; the batter advances to second base and all runners advance two bases (with any runner on second or third scoring a run). Certain other situations are also called "ground rule doubles", most commonly when a member of the fielding team not in active play or a spectator interferes with a batted ball on the field. After three players on the batting team have been called out, the half is over, and the teams switch sides; the fielding team comes in to bat, and vice versa. After two halves, the inning is over, and after 9 innings, if one team has scored more runs than the other, they win the game (if the home team, which bats second in each inning, leads at any time after the top of the 9th inning, the game is called at that point even if one or more outs are remaining for the home team; this is known as a "walk-off"). If the score is tied after 9 innings, one or more "extra innings" may be played, until one team scores more runs in their half of the inning than the other team can. There is no "sudden death" in most leagues; both teams will have the same opportunity to score runs in each inning. Youth and adult recreational leagues often do not play extra innings due to time constraints, but major league games cannot end in a tie except in rare circumstances where the outcome of the game will not affect overall standings for either team. Play of the game is untimed, although an informal pace of play is enforced by the umpire; players and coaches may not unduly delay the play of the game. Despite this, games can last over four hours for the normal nine innings of play. The longest game ever played in the major leagues was between the Chicago White Sox and the Milwaukee Brewers in 1984, which required 25 innings to decide and lasted 8 hours and 6 minutes. 
Baseball is played primarily by boys and young men in the United States (with girls, women and older men playing the related but slower game of softball), at age levels beginning at 4 years old with "tee ball" (the ball is not pitched to the batter but instead simply placed on a tee above home plate), and progressing through "Little League" (for various age ranges from young children through high school), interscholastic high school, college (NCAA), and minor and major professional leagues. The two major professional baseball leagues in the United States are the American League and the National League, which together form Major League Baseball (MLB). Major League Baseball is the third most popular spectator sport in the United States, behind NFL professional football and NASCAR motor racing. The two leagues of MLB have slightly different rules, the primary one being whether the pitcher must also bat (the National League requires this), or if instead the pitcher may be replaced in the batting order with a "designated hitter" (the American League, as well as most amateur, college and international leagues, allow designated hitters, although if the pitcher wishes it and the manager allows it, the pitcher may bat in any league). Baseball is also popular in some non-US countries and territories; Japan in particular has embraced the game with Nippon Professional Baseball, divided into Central and Pacific leagues much like the NL and AL in MLB, and the game also has a devoted following in other Asian countries, and in many Caribbean and Central American territories, which produce many star players in the U.S. major leagues. The game is part of the Pan-American Games, and was formerly an Olympic sport starting in 1904, but was discontinued after the 2008 Beijing Olympics primarily due to a lack of universal following and thus regional dominance by North American and Asian teams.
The "Big League World Series", the championship tournament of Little League's high-school-age league structure, is among the most geographically diverse baseball tournaments currently in operation, attracting teams from every continent. Baseball's parent sport, cricket, is more globally popular, having been introduced to nations of the former British Empire such as Canada, Australia, New Zealand, India, the Virgin Islands, Hong Kong, Egypt, and many African nations.
It seems pretty unbelievable, but can you be overweight and malnourished? That sounds like a very contradictory situation. On one hand, we see being overweight as having too much nutrition on a consistent basis with not enough exercise. On the other, we usually think of malnourishment as someone who is extremely skinny and hasn’t eaten in days. The fact is that you can be overweight and malnourished very easily. It has everything to do with diet. To understand this contradictory situation you must understand what the types of nutrients are, and how they should be consumed. Then you can see how the lack of certain nutrients can create such a precarious situation. A macronutrient is a carbohydrate, protein, or fat. These are the main providers of calories in the diet. This type of nutrient provides the body with energy to function and grow. Basically, if you take in lots of macronutrients with no other micro- or phytonutrients, it is very possible to become overweight. The nutrient deficiency comes into play when too much of this is consumed, and not enough foods rich in vitamins and minerals. Micronutrients are vitamins and minerals found in vegetables, whole grains, and fruits. Unlike macronutrients, you only need them in trace amounts. Hence the prefix “micro”. Without these nutrients the body can grow and move, but its functionality will suffer tremendously. Deficiency can even cause severe sickness and death. These nutrients are often just an afterthought when we make our food selections. When creating a healthy diet they really should be the number one priority. Micronutrients play a much bigger role in weight loss than you would ever think possible. Similar to micronutrients are phytonutrients. These, like micronutrients, are only consumed in very small amounts. They are found in bright fruits and vegetables, and are basically what provides those foods with their bright colors.
Phytonutrients are not strictly a necessity, because you won’t die from a deficiency. Their importance lies in the huge effect they can have on whether you keep that fat packed on, or lose it. Let’s take a look at some benefits of these trace nutrients and how they affect weight loss.
- Muscle Maintenance: The trace nutrients maintain muscle mass, strength, and function. The more muscle mass you have, the higher your metabolism will be, even at rest. That means more calories are burning while you’re just sitting there.
- Decrease Inflammation: Some experts are now saying that obesity involves a chronic state of inflammation. The problem is that with inflammation it’s harder to lose fat and build muscle. It’s like a cycle from some dietician’s nightmare. With more fat mass comes increased inflammation, so it gets harder and harder to lose. Phytonutrients are anti-inflammatory and make it easier to lose weight.
- Don’t Eat As Much: Your body obviously needs a certain amount of vitamins, minerals and phytonutrients. Some experts are starting to think that if you eat lots of calorie-dense foods that are low in these nutrients, your body will tell you to keep eating until you get what you need. In other words, if you don’t eat foods with enough good nutritional value, your body will have a harder time sensing when it has had enough.
The bottom line is that it is very possible to be overweight and still be malnourished. It simply comes down to the ratio of macronutrients to micro- and phytonutrients that you consume. If you only focus on how many calories you are consuming, chances are you aren’t getting enough vitamins and minerals. Not getting enough of those nutrients can cause you to be malnourished, no matter what you look like. A nutritious diet is the number one priority to lose weight. Your plate should always be filled with lots of fruits and veggies.
Some basic diet hints can be found in the article, All You Need To Know About The Practical Diet. If you don’t think you are getting enough vitamins through your diet then a supplement may be a good option. If you have trouble eating vitamins or fruits you should definitely check out, 7 Reasons Why Juicing Can Change Your Life. Juicing can make it much easier to get those food groups into your diet.
The Black Hole of Empire History of a Global Practice of Power When Siraj, the ruler of Bengal, overran the British settlement of Calcutta in 1756, he allegedly jailed 146 European prisoners overnight in a cramped prison. Of the group, 123 died of suffocation. While this episode was never independently confirmed, the story of “the black hole of Calcutta” was widely circulated and seen by the British public as an atrocity committed by savage colonial subjects. The Black Hole of Empire follows the ever-changing representations of this historical event and founding myth of the British Empire in India, from the eighteenth century to the present. Partha Chatterjee explores how a supposed tragedy paved the ideological foundations for the “civilizing” force of British imperial rule and territorial control in India. Chatterjee takes a close look at the justifications of modern empire by liberal thinkers, international lawyers, and conservative traditionalists, and examines the intellectual and political responses of the colonized, including those of Bengali nationalists. The two sides of empire's entwined history are brought together in the story of the Black Hole memorial: set up in Calcutta in 1760, demolished in 1821, restored by Lord Curzon in 1902, and removed in 1940 to a neglected churchyard. Challenging conventional truisms of imperial history, nationalist scholarship, and liberal visions of globalization, Chatterjee argues that empire is a necessary and continuing part of the history of the modern state. PARTHA CHATTERJEE is professor of anthropology and of Middle Eastern, South Asian, and African Studies at Columbia University; and honorary professor at the Centre for Studies in Social Sciences, Calcutta. His books include The Politics of the Governed and Lineages of Political Society. “This is a powerfully argued account of the origins and subsequent justification of British rule in India, and an exploration of the response by Bengali elites to colonialism. 
A work of classic history, this book carries an intellectual power and brilliance of insight that will excite much interest and comment.”—Thomas Metcalf “Moving skillfully between gripping narrative and thoughtful interpretation, The Black Hole of Empire is a deeply researched, brilliantly crafted, and exquisitely written work on the British empire in India. Ambitious and complex, it richly resonates with contemporary political and ethical concerns. A masterly work by one of the finest intellectuals of our times.”—Sugata Bose Hardback / 440pp / Rs 795 / ISBN 81-7824-356-3 / South Asia rights / May 2012 Copublished with Princeton University Press
As you can see in the chart above, there is a significant divergence in the price of natural gas across the US, Europe and Japan. This divergence is evidence of the limited integration of natural gas markets. Natural gas markets are much less integrated than oil markets because of the cost and logistical hurdles in trading gas across borders. Transportation of natural gas requires either pipeline networks or liquefaction infrastructure and equipment at the source, and then re-gasification infrastructure at the destination. The costs and the time involved in building this infrastructure have created global market dislocations, causing price divergence across regions. Prices in the US have dropped in recent times due to the ongoing shale boom, whereas the Fukushima disaster in Japan has seen prices there shoot through the roof, as natural gas plants have been roped in to meet energy shortages arising from inoperative nuclear plants. Had there been full integration of the gas market, there would have been uninhibited trade of supplies, resulting in some convergence in prices across regions, contrary to what we are seeing today. Apart from integration hurdles, the regions have divergent pricing mechanisms too. In the US, the prices of gas are determined in the spot markets, but in Asia the prices are indexed to crude-oil prices. In Europe, the prices are determined by a combination of spot prices and indexation. The different pricing mechanisms have added to the divergence, resulting in segmented markets. The recent shale boom in the US has made it the largest natural gas producer in the world. Surging supply and weak demand have caused prices in the US to fall sharply without having any bearing on other markets, where prices remain relatively elevated, partially due to the reasons ascribed earlier. Prior to the boom, the US was a net importer of gas and therefore has tremendous import regasification capabilities.
With the boom, the unused regasification facilities have become redundant; they cannot simply be converted to liquefaction facilities, because the equipment required for liquefaction is different from that for regasification. Apart from infrastructure hurdles, there are regulatory hurdles (for a detailed discussion read here) too. Firms in the US are required to obtain authorization to export natural gas (except to Canada and Mexico). In the medium term, the regulatory hurdles are expected to be removed, triggering the building and reconversion of liquefied natural gas infrastructure for export purposes. The US House of Representatives passed a bill, H.R. 6 – the Domestic Prosperity and Global Freedom Act – in June of this year, removing the regulatory hurdles that prevent export of natural gas to non-FTA countries and directing the Department of Energy to streamline the process of issuing decisions on applications to export natural gas. The new make-up of the Republican-majority Senate should help H.R. 6 pass through the upper house without any hassles, and it should be on the President's desk to sign sometime next year. The Fukushima Daiichi nuclear disaster in March 2011 induced a sharp increase in natural gas usage. Before the disaster, one-quarter of Japan's energy was generated by nuclear plants. Following the disaster, the Japanese government shut down production at all nuclear reactors in the country, and to make up for the resulting loss of electricity generation, Japanese power companies increased the use of fossil-fuel power stations and added natural gas turbines to existing plants. As a result, Japan's liquefied natural gas imports have increased dramatically, by about 40% since the disaster. This sharp increase in demand from Japan has caused higher prices in Asia, and particularly in Japan. Japan is thus now the world's largest importer of liquefied natural gas; in 2013 it imported 119 billion cubic meters, more than one-third of the world total.
Increased natural gas demand from Japan has helped offset reduced imports by the US, and countries like Australia, Brunei, Indonesia, Malaysia and Qatar have seen their liquefied natural gas exports to Japan rise rapidly. In the medium term, prices in Japan are expected to decline as the nuclear reactors resume power generation. European gas prices are bound to edge lower as European countries move away from indexation to crude oil. The geopolitical tensions between Russia and Ukraine do not pose any risk to prices in continental Europe, as was evidenced when, as recently as January 2009, the Russian energy giant Gazprom cut off all supply to Europe via Ukraine. Since January 2009, Europe's dependence on natural gas transiting through Ukraine has decreased from 80 percent to roughly 50 percent with the opening of Nord Stream. In the medium term, prices in the US are bound to increase with rapidly rising exports, but will remain markedly lower than in Europe and Asia.
Outline and evaluate the effect of media on anti-social behaviour (24 marks)

Definition of anti-social behaviour: Anti-social behaviour is behaviour that lacks consideration for others and may cause damage to society, whether intentionally or through negligence.

Commentary on the relationship between the media and the production of anti-social behaviour draws on findings from recent years of studies. Many studies have demonstrated an association between television viewing and subsequent aggression amongst viewers, yet no clear-cut answers have emerged about how television/media has such an influence.

Cultivation theory (Gerbner et al., 1986): The idea that TV provides a systematically distorted view of reality (e.g. it shows a disproportionate amount of negative news). As a result, long-term TV viewing causes viewers' attitudes to become similarly distorted (e.g. the viewer believes that more violent crime exists and that they are in more danger of suffering the effects of that crime than is actually the case). This distortion is the cultivation effect.

Cognitive priming: The activation of existing aggressive thoughts and feelings (e.g. it explains why children observe one kind of aggression on TV and commit another kind of aggressive act afterwards). The idea is that immediately after a violent programme the viewer is primed to respond aggressively because a network of memories involving that aggressive feeling is retrieved. As a result, frequent exposure to such material can lead to children storing scripts for aggressive behaviour in their memories, which can be recalled later on (though not everyone will suffer from this effect).

Desensitisation: The idea that media violence may stimulate aggressive behaviour by desensitising children to the effects of violence through frequent exposure to such violent material. Through frequent exposure to violence, the child becomes less sensitive to it, and aggressive behaviour comes to seem more acceptable. A child who becomes desensitised to violence in this way is more likely to engage in violence themselves, i.e. anti-social behaviour.
In far flung corners of the globe, where tedious matters of grim reality tend to be of greater concern than the theoretical possibility of the ravages of global warming, there seems to be a growing realisation that, to generate interest from the western media in stuff that is actually happening, it’s necessary to frame stories in terms of climatastrophe. The BBC, for example, did not report on the recent wildfires in Nepal while they were actually burning. But given the excuse to rummage through the embers for signs that climate change is real and is happening, they’re right onto it: Climate change ‘fans Nepal fires’ The forest fires that flared unusually viciously in many of Nepal’s national parks and conserved areas this dry season have left conservationists worrying if climate change played a role. At least four protected areas were on fire for an unusually long time until just a few days ago. The BBC’s entire case hangs on comments from two interviewees. First, there’s Department of Hydrology and Meteorology chief Nirmal Rajbhandari, who lists a string of undesirable weather events in the region before lumping the wildfires into the mix and blaming it all on global warming: “Seeing all these changes happening in recent years, we can contend that this dryness that led to so much fire is one of the effects of climate change,” said Mr Rajbhandari. You can hardly blame Nepalese officials for jumping on the climate change wagon if it’s all that will make the western media prick up their ears. But it’s hard to forgive professional catastrophists WWF, who provide the BBC with its second line of evidence: Anil Manandhar, head of WWF Nepal, had this to ask: Are we waiting for a bigger disaster to admit that it is climate change? “The weather pattern has changed, and we know that there are certain impacts of climate change.” He might have intended his question to be rhetorical, but if sanity is to be maintained, it demands an answer: No. 
How can the size of a disaster possibly be indicative of the strength of its connection to climate change? What we are waiting for is evidence that climate change is causing more frequent and/or more serious disasters. While opportunist NGOs and business interests are happy to push their climate disaster-porn at any opportunity, they do so without a scientific basis. And that is true globally, let alone on the local scales being discussed in the BBC story, as the one scientific expert quoted is only too aware: However, climate change expert Arun Bhakta Shrestha of the Kathmandu-based International Centre for Integrated Mountain Development (ICIMOD) was cautious about drawing conclusions. “The prolonged dryness this year, like other extreme events in recent years, could be related to climate change but there is no proper basis to confirm that. “The reason (why there is no confirmation) is lack of studies, observation and data that could have helped to reach into some conclusion regarding the changes.” But two against one is plenty for a climate-change scare story. If the rest of the report is to be believed, forest fires are not uncommon in Nepal at this time of year. But this year, they have been more serious than usual: Most of the fires come about as a consequence of the “slash and burn” practice that farmers employ for better vegetation and agricultural yields. But this time the fires remained out of control even in the national parks in the Himalayan region where the slash and burn practice is uncommon. In some of the protected areas, the fires flared up even after locals and officials tried to put them out for several days. And Nepal has experienced an unusually dry winter: For nearly six months, no precipitation has fallen across most of the country – the longest dry spell in recent history, according to meteorologists. “This winter was exceptionally dry,” says Department of Hydrology and Meteorology chief Nirmal Rajbhandari.
“We have seen winter becoming drier and drier in the last three or four years, but this year has set the record.” Rivers are running at their lowest, and because most of Nepal’s electricity comes from hydropower, the country has been suffering power cuts up to 20 hours a day. It can’t come as much of a surprise, even to the BBC, that drier conditions make a landscape more fire-prone. And nowhere a mention of the role of natural variation. But then natural variation comes in two varieties. There is the type that is ignored by ‘deniers’ asking awkward questions about recent temperature plateaus. And there’s the type that is to be disregarded for the sake of alarmist stories about single, aberrant weather events. Had it not been for recent drizzles, conservationists say some of the national parks would still be on fire. Drizzles caused by climate change, perhaps?
The start of the financial crisis in 2007 has often been described as a ‘Minsky moment’, named after the late American economist Hyman Minsky. Minsky challenged mainstream theory about the way business cycles worked, and the impact of the financial sector. Outside the world of academic economics it may be a commonplace cliché to say that ‘the banks caused the crisis,’ but most mainstream models in fact largely ignore banks. Minsky, by contrast, placed money, banks and debt at the centre of his view of the economy. And whilst most contemporary economic theory revolves around the concept of equilibrium, Minsky saw the economy as inherently unstable, with periods of stability leading to companies and households taking on ever higher levels of debt, creating instability and eventually increasing the risk of a crisis. That, in a nutshell, is Minsky’s instability hypothesis. Minsky was also a great supporter of job guarantee schemes, and Labour’s Future Jobs Fund was inspired by his ideas. Australian economist Steve Keen had been following debt levels in developed economies with Minsky’s ideas in mind for some years before 2007, and became alarmed at the explosion of private debt. Keen was one of only a handful of economists who saw the crisis coming and warned about it – to no avail. Over the past few years Steve Keen has been building a powerful economic software package, named Minsky, that will help economists to take into account money and financial institutions when modeling the economy. Though it will never be possible to predict future developments accurately (this is a sophisticated piece of software, not a crystal ball), such a tool would help experts in economics departments, central banks and treasuries to better understand how a modern capitalist economy works, and hopefully enable them to see potential disasters before it is too late.
Besides taking into account money and banks, Minsky draws on and adapts the ideas of economists such as Wynne Godley, software applications from engineering and the sophisticated mathematical concepts that are used in modeling complex systems such as climate. Thanks to a grant from the Institute for New Economic Thinking (a think tank founded in the immediate aftermath of the crisis with funding from George Soros), Steve Keen has been able to develop Minsky with the help of computer scientist Russell Standish. He has now launched a campaign through the crowdfunding platform Kickstarter to raise funds to be able to continue working with Russell on the next stage of Minsky. Minsky is a software package, not a model, so there will be the flexibility to incorporate different views and different circumstances. I personally would love to see a role-playing game, combining Minsky with fantasy, giving everyone a chance to learn about the economy in a fun way. The aim of this current Kickstarter campaign is to raise a total of $50,000, and anyone can make a contribution. All you need to do is go onto the Kickstarter site: If you click the ‘Manage Your Pledge’ button on the top right-hand of the page, you can determine your pledge. Once you decide your pledge amount, you will be taken to Amazon.com where you need to confirm the pledge. Only if the total reaches $50,000 by 17 March will the money actually be charged. Steve’s actual ambition is much higher: to raise $1 million to be able to put several person-years of programming time into developing Minsky fully. Almost all the money will go to developing Minsky, with around 10% being deducted for admin charges by Amazon and Kickstarter. And if your pledge is large enough you will be eligible to receive a free copy of Steve Keen’s book ‘Debunking Economics’, which I would strongly recommend.
Steve Keen was one of the few economists in the world who saw the crisis coming, and his work is an important contribution to developing a new vision for how economies could be managed, in the aftermath of the wreckage of the largest financial crisis and the longest downturn since the Great Depression. I strongly encourage you to help with this effort. Tanweer Ali is a finance lecturer with the State University of New York, currently based in Prague.
- The Australian Coat of Arms - Find a picture. What does each part represent? The shield contains the badge of each state of Australia (Queensland, New South Wales, Victoria, Western Australia, Tasmania and South Australia); the territories (ACT and Northern Territory) are not represented. The Red Kangaroo and Emu are the unofficial animal emblems of Australia. On the seven-pointed star (Commonwealth Star), six points represent the six states while the seventh point represents the combined territories.
- The Australian flag - Find a picture. What does it represent? Blue - vigilance, truth and loyalty, perseverance & justice. White - peace and honesty. Red - hardiness, bravery, strength & valour. The basic style shown in the picture of the Australian flag is described as Canton.
- Immigration - When and why did people begin to migrate to Australia? The immigration history of Australia began with the initial human migration to the continent around 50,000 years ago, when the ancestors of Australian Aborigines arrived on the continent via the islands of Maritime Southeast Asia and New Guinea. From the early 17th century onwards, the continent experienced the first coastal landings and exploration by European explorers. Permanent European settlement began in 1788 with the establishment of the British Crown colony of New South Wales.
- Aboriginal Culture - Find out what the Aboriginal Flag represents. Black: Represents the Aboriginal people of Australia. Red: Represents the red earth, the red ochre and a spiritual relation to the land. Yellow: Represents the Sun, the giver of life and protector.
How to grow a bigger Christmas cactus? Usually, Christmas cacti stay small, but sometimes they may grow bigger. To make them grow bigger and stronger, you need to know what kind of care practices to follow. A Christmas cactus will typically grow up to two feet within a short span of its lifetime, and it will keep developing new growth during its entire life. The more mature these plants get, the larger they become. Christmas cacti are a special set of plants in that they never stop growing: they can keep blooming and keep growing stronger and more vigorously. To be more precise, they do not have a definite maximum size. With that in mind, there are some care tips you may need to practice to make them grow bigger and stronger. To briefly introduce the plants first: many people tend to grow them as houseplants due to their unique looks. If you live in USDA hardiness zones 10-12, you may grow them as outdoor plants as well. They produce colorful blooms which usually arise on the thick, scalloped stem sections of the plants, and they blossom from late November to late January. In fact, they are quite famous as a gift item. How to grow a bigger Christmas cactus? If you wish to have a Christmas cactus that grows bigger, ensure that you have not planted it in an excessively large pot. If you accidentally grow them in too big a pot, it will slow the growth of the plants. So, if you experience any slow growth, consider repotting them into a smaller pot than the one in which you grew them before. Besides, that pot needs to have sufficient drainage holes too. Further, I encourage you to use a well-draining soil mix, which you can make by blending equal parts of potting soil, leaf mold and sand. That will also provide fresh growing conditions for the plants. You need to repot the Christmas cactus every four years.
You need to choose a pot which is at least two inches bigger than the initial pot when you repot them. It is very important that you water these plants properly, as this has a great impact on making the Christmas cactus grow bigger. Keep in mind that you should never overwater the plants at any point. If you do, it will badly affect their growth and make them prone to diseases such as root rot. To be precise, you should water them only if their soil is dry. When you do water them, water to a depth of 1 inch. Watering once a week is generally adequate while they are actively growing; it will be sufficient for them to thrive and have healthy growth. Never water them when they are dormant, as they rest during winter. Once you finish watering, make sure you allow the soil to dry out between two watering sessions. More importantly, check that excess water is draining out of the pot rather than being retained. If you stick to a consistent watering schedule, you can make the plants grow faster too. Further, ensure you use soft water for this purpose. You should also make sure there is proper air circulation around the pot. You may also use Epsom salt to enhance the growth of the Christmas cactus: take one teaspoon of the salt and blend it with one gallon of water once a month during the growing season. Besides, you may also feed them with a water-soluble fertilizer: add ½ teaspoon of 20-20-20 water-soluble fertilizer to one gallon of water and then apply it. You may feed them with this just once a year. However, do not feed them Epsom salt in the same week you apply the regular fertilizer. Suspend feeding during dormancy.
Best is to keep the plants at a room temperature of 60-68 degrees Fahrenheit during the colder months. On the other hand, they prefer a temperature of around 70-80 degrees Fahrenheit during spring and summer. Keep in mind that they should not get bright sunlight for more than eight to ten hours per day during fall and winter; this will stimulate the bud formation and flowering of the plants. Will my Christmas cactus get bigger? As explained above, Christmas cacti are a set of plants which do not stop growing at a certain point in their life. Instead, they keep growing throughout their lifetime and do not have any maximum size as such. They keep adding new growth to their foliage annually, depending on the growing conditions they get. To reiterate, they can reach about 2 feet within a very short period. How large can a Christmas cactus get? A Christmas cactus can reach about 2 feet in spread within about two years, and it can stay alive for decades as well. They grow faster than many other cactus species. To wind up, Christmas cacti are a versatile, fast-growing set of plants which are quite interesting to grow. It is truly fulfilling to watch them growing bigger and faster. Read Next: When To Propagate Christmas Cactus?
Indemnity agreements were not always governed by such terms at the inception of this form of insurance. In the 1800s, the agreement was not as well established and detailed as it is today. To ensure cooperation and the delivery of promised results, governments, businesses and even individuals made contracts as legal documents in case any party, intentionally or unintentionally, did not fulfill the contract. According to an article in the New York Times, Haiti had to pay France an independence debt in 1825, when it got freedom from French rule. This was to compensate the losses that French plantation businesses and individual owners suffered in slaves and land. It is one of the most historic cases of indemnity known. Another recorded case of indemnity happened in England in 1884, when Dr. David Bradley was wrongly accused and sentenced to 2 years in prison. In 1885, the Medical Defence Union was formed when other medical practitioners revolted against the decision. It was found that the apparent victim was a woman suffering from erotic delusions during and after epileptic seizures. The falsely accused doctor was granted a pardon and served only eight months in prison. Awareness of the need to defend doctors' livelihoods and reputations rose among other medical practitioners. Indemnity is commonly seen in post-war situations, where the country that wins the war demands payment from the losing country; the indemnity may take years or decades to pay off. A classic example is the indemnity Germany paid after its role in World War I. The extent and amount of the indemnity were so high that it was finally paid off only in 2010. The word ‘indemnity’ comes from the Latin word indemnis, which means ‘unhurt, undamaged, without loss.’ Indemnity took the form of monetary payments, or replacement and repairs were offered.
After World War I, over the following decades, professional indemnity insurance in Singapore underwent drastic changes and revisions to improve insurance provisions for traders and governments. Due to globalization and privatization, the growth and development of large-scale industries affects the government directly. Professional indemnity insurance has been able to provide clients exactly what they expect and what they need for the smooth functioning of their business. When insurance can protect individuals despite unintentional errors in judgment or mishaps, those individuals can take a leap of faith by venturing into a new market as they learn the tricks of the trade.
On 23rd June 2016 the British public voted to leave the European Union, and for various reasons we are trying to stop this from actually happening and undo Brexit. What can I do to stop Brexit? If you voted to leave, let them know you've changed your mind. As above, but more succinct. Take part in a protest in London: Saturday, 2nd July 2016, 11am. Why ‘undo Brexit’?
- Many of the arguments of the leave campaign were based on things that were untrue, and so many who voted to leave felt tricked and regret their decision.
- The vast majority of young people who must face the consequences of this voted to remain, or could not vote because, unlike in the Scottish independence vote, 16-17 year olds were not allowed to take part. It isn't fair on them.
- Many groups also greatly affected by the result were unfairly excluded from taking part, for example British people living abroad and Europeans living and working in the UK.
- We're scared about our futures and we don't know what else we can do.
Spelling mistake? Other ideas of things to do? This is an open source project, which means anyone can contribute code or ideas on how to stop Brexit.
What can the butterfly teach us? “Social ecology is an appeal not only for moral regeneration but also, and above all, for social reconstruction along ecological lines.” – Murray Bookchin Social ecology seeks to philosophically fuse the natural world (first nature) with that of human society (second nature), saturating the latter in the roots of the former. By appealing for moral regeneration, social ecology strives to socially reconstruct present-day society along ecological lines. Society is in need of identifying and replacing forms of social domination associated with our economic system. Social ecology presents such a case. It claims that the environmental crisis is a result of the hierarchical organization of power and the authoritarian mentality rooted in the structures of our society. The Western ideology of dominating the natural world arises from these social relationships. As Bookchin argues, if we are to change human society, our relationship with the rest of nature will inevitably become transformed. With respect to animal rights, the welfarist reform differs in kind, not degree, from the abolitionist reform. The former seeks quantitative measures, arguing degree of exploitation. The latter, the abolitionist reform, seeks qualitative measures, arguing moral inconsistencies. Francione writes: “We have historically justified our exploitation of nonhumans on the ground that there is a qualitative distinction between the minds of humans and other animals.” The following briefly describes Professor Francione’s Theory of Animal Rights, in his words. For a thorough explanation of Francione’s abolitionist theory of animal rights, visit his website, Animal Rights: The Abolitionist Approach. “We ought to abolish animal exploitation and not seek merely to regulate it.” “Our only justification for the pain, suffering, and death inflicted on these billions of nonhumans is that we enjoy the taste of meat and dairy products. 
And if we really do take seriously that it is wrong to inflict unnecessary suffering on nonhumans, our enjoyment in eating animal products cannot be a morally acceptable justification. Our only use of animals that is not transparently trivial is the use of animals in experiments intended to find cures for serious human illnesses. But even in this context, there are serious questions about the necessity of animal use. Because of the biological differences between humans and other animals, there is always a problem extrapolating the results of animal experiments to humans. The data produced by animal use are often unreliable. For example, results from toxicity tests using animals can vary dramatically depending on the method that is used. Considerable empirical evidence indicates that, in many instances, reliance on animal models in experiments has actually been counterproductive. For example, the failure to create an animal model of lung cancer led researchers to ignore evidence of a strong correlation of smoking and lung cancer in humans.” “We kill billions of nonhumans every year for reasons that cannot plausibly be considered as “necessary” even though we maintain that we accept that it is wrong to inflict “unnecessary” suffering on animals. When it comes to other animals, we humans exhibit what can best be described as moral schizophrenia. We say one thing about how animals should be treated, and we turn right around and do another.” “If we recognized that all sentient beings had a basic, moral right not to be treated as property and that we had a moral duty to stop treating sentient beings as resources, we would stop bringing domestic animals into existence for our use. Recognizing “animal rights” does not mean letting all domestic animals run free in the streets. It means caring for those whom we have caused to come into existence. And not bringing anymore into existence to use for food, clothing, entertainment, or experiments. 
If we took the interests of animals seriously, we would stop bringing domestic animals into existence. There is no reason-other than our pleasure, amusement, or convenience-to eat animal meat or dairy, wear animals, hunt animals, or use animals in entertainment.” An abolitionist and a welfarist: Professor Francione and Erik Marcus debate. Note: All thoughts in quotations are those of Gary L. Francione. Fishing is not in keeping with the subject-of-a-life philosophy (Regan) you, the reader, are most likely familiar with. Fish, like other sentient beings, are an end in themselves and thus not a means to be exploited. They, like humans, possess inherent value. The abolitionist would argue that one (a human being) is never justified in performing an act such as this (among other things, the usually violent removal of a sentient being from its catered oxygenated environment and the subsequent allowance of maximum pain via suffocation upon removal, what we call fishing), no matter the circumstance. Since I indeed subscribe to this philosophy, I therefore think that fishing and ethical discussion have a qualitative difference and not a quantitative one (differs in kind, not degree). Ethicality is thus nonexistent when one engages in the act of fishing. We have seen this avenue of thinking applied to free-range (nonbattery-cage) chickens, pastured cows and the like. These attempts to bring ethics into the discussion are simply welfarist-based; concern is rooted in treatment, not in the reason as to why this sort of thing exists in the first place. Slavery was/is a perfect example: By philosophical standards, was slavery ever justifiable? Of course not. As Gary Francione, Distinguished Professor of Law and Nicholas deB. 
Katzenbach Scholar of Law and Philosophy at Rutgers University School of Law-Newark, contends, “Many heinous practices and traditions, including slavery and sexism, have been justified by appeals to arguments that assume that certain people are naturally superior and others are naturally inferior.” It seems as though ethics is abandoned, or severely wounded (enter the welfarist), when the subject of our thought/action is nonhuman. Fishing is exploitative and immoral at its very root and thus is deemed unacceptable. Fish protein clogs the arteries and is damaging to both kidneys and bone. 15 to 30% of fish fat is saturated fat, which is particularly problematic for our species. Also, Omega-3 fatty acids can be obtained from plants (which is where fish get them). Humans require 0.5 to 2 g per day, which can easily be obtained from plant foods. In our toxic food environment, fish is labelled as a healthy item, but in reality it is just healthier than a cheeseburger. Some fish have more cholesterol per calorie than beef. Our waters have become our sewer systems, and fish are now loaded with environmental contaminants. Even wild Alaskan salmon have detectable levels of mercury. People who eat the most fish (Greenlanders and Eskimos) have a low life expectancy and high rates of osteoporosis (the highest levels on the planet). Does this sound like a food intended for human beings to subsist on? I think not. Architect William McDonough and chemist Michael Braungart are the authors of Cradle to Cradle: Remaking the Way We Make Things (2002), which has served as a personal inspiration ever since a close friend of mine introduced me to it in the first quarter of 2007. “The book itself is a polymer. It is not a tree. With so much polymer, what we really need is technical nutrition and to use something as elegant as a tree.
Imagine this design challenge: design something that makes oxygen, sequesters carbon, fixes nitrogen, distills water, accrues solar energy as fuel, makes complex sugars and food, creates microclimates, changes colors with the seasons and self-replicates… why don't we knock that down and write on it?" – William McDonough

Aldo Leopold (1887–1948), a lifelong fisherman and hunter, was employed by the U.S. Forest Service before becoming the first professor of Wildlife Management at the University of Wisconsin. He died of an acute myocardial infarction at the age of 61.

Leopold's Community Concept: "…the individual is a member of a community of interdependent parts. His instincts prompt him to compete for his place in the community, but his ethics prompt him also to cooperate (perhaps in order that there may be a place to compete for)."

"Man, he is constantly growing, and when he is bound by a set pattern of ideas or way of doing things, that is when he stops growing." – Bruce Lee

Photo: "Nurture the Spirit" by Michelle Boey

"We all follow an unknown path and an uncharted stream, and it takes great courage to move ahead with our eyes and our hearts open. When we look with deep compassion, we may find it necessary to change our life again and again, to let go of unwise parts of ourselves or to extend our compassion in new ways to the world around us." – Jack Kornfield
Canned food is dead food. I don't recommend it. However, I am human, and like everyone else I have a busy schedule that sometimes prevents me from planning ahead and soaking my own beans. So, every now and then, for convenience, I crack open a can or two of something that will help make my life easier. It's not the end of the world. However, if your can is lined with BPA (bisphenol A), it may be! OK, maybe not the end of the world, but certainly not a healthy move to make. BPA is a chemical found in the lining of most canned foods, in plastic bottles, plastic containers, plastic wrap, children's toys, plastic utensils, and many other places. BPA disrupts our hormones, badly. When possible, only purchase BPA-free cans. Trader Joe's has many BPA-free cans! Other companies are realizing that we're onto them about this issue and are labeling their cans like you see above. Other ways to reduce your exposure to BPA are to avoid plastics: use glass jars or glass Tupperware, be sure your dentist uses BPA-free composite fillings, and look for plastic toys (or other products) that are BPA-free. Your hormones are precious and affect everything about your health. Take care of them and protect them!
From Leeds WIKI

Welcome to LeedsWIKI

A wiki is a web technology that allows a web site to be collaboratively constructed and edited with no specialist tools and very little technical know-how. This is of interest in learning and teaching, as a wiki can offer students and educators a more active, participative relationship with web-based materials. All users with ISS user accounts can edit this wiki: just use your ISS userid and password to log in. For more ideas on how to use a wiki, visit the SDDU resources at http://www.sddu.leeds.ac.uk/online_resources/wikis/index.php

Initial notes and discussion on this page have been moved to the Main Page discussion; click on the discussion tab above. Please use the discussion page for suggestions and discussion of the content and structure of this page, and for any other issues concerning this wiki: uses, policy, support, etc.

A good place to start is to try text entry and editing in the SandBox, the practice area, and to look at the help documents in the Getting Started section below. That section contains links to instructions on how to edit and create new pages, and on how to make links from one page to another and to external web sites and other resources. To practise any of these things before adding or contributing to the main pages, you may prefer to visit the SandBox, an area to practise and experiment. Please note: the following links will take you to external websites; you will need to return to the LeedsWIKI to create or edit your pages.
- Help:basic Editing
- Help:Starting a new page
- Help:Wiki Markup
- Help:Wikitext Examples
- Getting started with Leeds Wiki

SandBox: a place to try things out.

Technical resources on the MediaWiki main site: consult the User's Guide for information on using the wiki software.
Synonyms of resemblance: proportion, uniformness, equilibrium, simile, parallelism, comparability, affinity, convergence, comparison, coincidence, correspondence, commonality, congruity, common denominator, likeness, sameness, uniformity, community, overlap, relation, parity.
- duplicate (part of speech: noun)
- similarity (part of speech: noun)
- imitation (part of speech: noun)
Usage examples drawn from:
- "The Project Gutenberg Memoirs of Napoleon Bonaparte", Bourrienne, Constant, and Stewarton.
- "The Ghost-Seer (or The Apparitionist), and Sport of Destiny", Friedrich Schiller.
- "Logic, Inductive and Deductive", William Minto.
Seems like a complicated and unfair way to conduct an election, but I guess I'm not a politician. I recommend visiting the CommonCraft site to see all the other explanatory videos available.

The Commonwealth of Learning (COL) "Media for Learning" programme aims to make media an effective part of the larger Open and Distance Learning process, especially at the community level and particularly in relation to COL's mandate to enable learning for development. Learning about what? Whatever a community's needs and priorities are. For some, this means health issues like HIV/AIDS, malaria or diabetes. For others, it means supplementing secondary school education in English, math and science. How do communities learn using media? By engaging with media to design innovative programmes to address specific needs, e.g. improving agricultural practices, and by linking to groups, both in the community and externally, to access useful and appropriate knowledge sources. What does COL focus on in this area?
- Building the capacities of media
- Developing effective learning programmes; Good practice: Community Radio Madanpokhara; Recent activity: Jet FM in Jeffrey Town
- Strengthening organisation: community ownership and participation, policies, sustainability planning; Upcoming activity: Radio Mang'elete
- Smart technology choices
- Open sourcing community media; Good practice: KRUU FM
- Supporting the establishment and growth of knowledge and learning networks

Here is an eight-minute Business Africa/CTA video production documenting actual cases of the use of Web 2.0 applications in the development sector, specifically among farmers in Africa. CTA is an ACP-EU institution working in the field of information for development. It was set up in 1984 with the task of improving the flow of information among stakeholders in agricultural and rural development in African, Caribbean and Pacific (ACP) countries.
Its work focuses on three key areas: - providing information products and services (e.g., publications, question-and-answer services and database services) - promoting the integrated use of communication channels, old and new, to improve the flow of information (e.g., e-communities, web portals, seminars, and study visits) - building ACP capacity in information and communication management (ICM), mainly through training and partnerships with ACP bodies
The Wild and Wonderful
Posted by Susan Stoltz
Even though Sumatran tigers are the smallest tigers in the world, they're still pretty big cats. Most people know that tigers are endangered, and most also know that no two tigers have the same stripe pattern, much like your fingerprints. But did you know…
Fossils of tiger remains from China show that tigers have existed for over two million years! Unlike most cats, tigers love the water and are very good swimmers. In fact, they're such strong swimmers that they'll often chase their prey into the water in order to catch it. Sumatran tigers have the narrowest stripes of any tiger, so they can ambush prey from amongst thick vegetation. Although you may wonder how something that is so...
Charles Darwin had formulated his theory of evolution by 1838. In 1858, he still had not written any papers or books on the theory. Darwin was very concerned about public reaction to a theory that disputed the biblical story of creation. He remembered clearly what had happened to Italian scientist Galileo under similar circumstances. He once said publishing his idea would be like “confessing a murder.” When he finally published On the Origin of Species in 1859, he made a point to leave references to humans and religion out of it, but it didn’t work. The book was banned in many places. Leaders of the Christian church were enraged and attacked the notion that man descended from monkeys, something Darwin was careful not to say in the book. Darwin stayed out of the fight and allowed the famous zoologist Thomas Huxley to lead the debate for him. During one debate at Oxford University, Huxley defended evolution by saying he would rather be descended from a monkey than a bishop of the Church of England. Many scientists came to Darwin’s defense, and the debate eventually subsided. Darwin saw the controversy coming, but he was still hurt by the personal attacks. He once said, “I have never been an atheist. This grand and wondrous universe seems to me the chief argument for the existence of God.” Astronomy had changed man’s place in the universe; now biology had changed his place on Earth. In the 20 years Darwin delayed presenting his theory on evolution, he spent eight years studying barnacles. After Darwin returned from his travels aboard the Beagle, he was in constant ill health. He never traveled outside England again and seldom even left his home.
Form of Government The City of Kearney adopted the Council-Manager form of government in 1950. Under this system, the City Council acts as Kearney’s legislative and policy-making body and appoints the City Manager as their chief executive officer. The City Manager is the administrative official responsible for implementing Council policies. The City Manager: - Directs the City work force - Prepares the annual budget for Council action - Recommends to the Council measures considered important - Appoints department heads and staff - Undertakes a wide range of other necessary tasks Additionally, the City Manager attends all the City Council meetings and advises the Council on the technical implications of its decisions. Under the Council-Manager form of government, the City Council acts as Kearney’s legislative and policy-making body. Its five members are elected at large on a staggered basis for four-year terms. From within the Council, one member is elected as President of the Council and is recognized as Mayor. The Mayor presides over Council meetings and represents the City in ceremonial functions. Budget Review and Adoption One of the most important actions the Council undertakes is the review and adoption of the annual budget. The budget is a thoroughly researched plan developed by the City Manager that represents in detail the priorities of the City. Through the budget, the City guides its operations to meet the demands and requirements of the community.
We can apply the concept of parallel lines to the topic of vectors, especially in the addition and subtraction of vectors.

Addition of vectors

The sum of two vectors is their resultant. When two vectors are parallel, the resultant has the same direction as the originals. There are many examples that relate the concept of parallel lines to the addition and subtraction of vectors.

WHAT IS THE RELATIONSHIP BETWEEN ALGEBRAIC EXPRESSIONS AND VECTORS?

We use algebraic expressions to solve problems, especially in the addition and subtraction of vectors. Obviously, algebraic expressions are used whenever we express one vector in terms of others: multiplying an algebraic expression by a numerator, handling a denominator with one term, and expanding and multiplying two algebraic terms with fractions all appear in vector work. Thus, the subtopic of addition and subtraction of vectors is connected to algebraic expressions, which we always use when solving problems relating to vectors.

A straight line is a line that does not curve; in geometry, a line is always straight. In coordinate geometry, lines in a Cartesian plane can be described algebraically by linear equations and linear functions. In two dimensions, the characteristic equation is often given by the slope-intercept form y = mx + c. The concept of the straight line is used in the topic of vectors. WHY? Because a vector is also a straight line and does not curve. That is the main property of a vector: it is a STRAIGHT LINE. How does it relate to the subtopic of addition and subtraction of vectors?
Let's take a look at it now. [The original slide shows a diagram of vectors between the points E, F, G and H, asks the reader to express one vector in terms of the others, and works out the resultant vector and the subtraction of vectors on the diagram.] Thus, the straight line is the basic concept of the vector. By knowing the properties of straight lines, we know the direction and magnitude of a vector, which can then be used in the subtopic of addition and subtraction of vectors. That is why the STRAIGHT LINE is important to vectors.
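The head-to-tail addition and subtraction of vectors described above can be sketched in code. This is a minimal illustration only; the `Vec2` class, the point names, and the coordinates are my own assumptions, not taken from the original slides:

```python
# Minimal sketch of 2-D vector addition and subtraction.
# Vec2 and the coordinates below are illustrative assumptions,
# not taken from the original slides.
from dataclasses import dataclass


@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

    def __add__(self, other: "Vec2") -> "Vec2":
        # Head-to-tail addition: add the components.
        return Vec2(self.x + other.x, self.y + other.y)

    def __sub__(self, other: "Vec2") -> "Vec2":
        # Subtraction reverses the second vector.
        return Vec2(self.x - other.x, self.y - other.y)


# Resultant of two head-to-tail vectors, e.g. EF + FG = EG:
ef = Vec2(3, 1)
fg = Vec2(1, 2)
eg = ef + fg        # Vec2(x=4, y=3)

# Subtracting FG from the resultant recovers EF:
assert eg - fg == ef
```

Expressing one vector in terms of others, as the slides ask, reduces to exactly these component-wise sums and differences.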
Quarter-of-a-century plant blooming takes centre stage in Leicester

A rare plant that flowers once every 25 to 30 years has finally bloomed in the University of Leicester's Botanic Garden. The Agave succulent, otherwise known as the century plant, is over 15 feet tall and even had to be cut back at one point for safety reasons, as it was pushing on the roof of the greenhouse. It is believed it would have grown at least another metre if it had been allowed to reach full height. "Now its flower buds have finally opened into large yellow pom-poms," said gardener Rachel Benskin. "It really is a wonderful sight to behold." Generally, Agaves grow, flower, then die. It was originally thought that the plant took 100 years to flower, hence its common name, the century plant. This particular plant self-seeded after the previous Agave flowered and died in 1998. It has been kept frost-free throughout the winter, and generally dry, with some light watering during sunny spells and weekly soaks over the past few weeks. Rachel said: "The Agave is native to Mexico and southern North America and naturalised through much of the Mediterranean range. Its fantastic flower spike can grow between five and eight metres tall, so it's exciting to see this one bloom after so long, despite us cutting it back a little. It will die by the end of the year, so visitors should make the most of this wonderful plant while they can." The Botanic Garden is home to hundreds of different plants from around the world. Originally founded in 1921 with the assistance of the Leicester Literary and Philosophical Society, it was established on its present site in Oadby in 1947 and comprises 16 acres of grounds and greenhouses, with an arboretum, herb garden, woodland and herbaceous borders, rock gardens, a water garden, a collection of hardy Fuchsia, and a series of glasshouses displaying temperate and tropical plants, alpines and succulents.
Why enrol for our Science Revision programme? Exam technique is an important skill to master for all of your science exams. Unfortunately, knowing the curriculum by heart isn't enough to gain all the marks available on the Biology, Chemistry and Physics papers. Many students fail to understand the importance of using command words to structure their answers. Learning to think like an examiner will help you understand how to gain full marks on longer questions, yet this requires you to be able to interpret the cryptic mark scheme. Our revision sessions are designed to target the most challenging areas of the curriculum, focusing on exam technique and practice questions. You will learn what to look out for in the questions and how to avoid common pitfalls. You will learn how to analyse your answers critically, ensuring you achieve your highest potential! Come to our Science Exam Technique course! You will cover:
- Command words such as: compare, explain, describe
- An appropriate structure scaffold for 6-mark questions
- Understanding the mark scheme
- Interpreting data
- Comparing data
- Understanding the mark scheme: the difference between Level 1, Level 2 and Level 3 answers
Please submit your details here for this course. A member of our team will get back to you soon.
There were two distinct regional styles of traditional Japanese woodblock printmaking: the dominant Edo school (Edo was the former name for Tokyo) and the Kamigata style (for the region including the cities of Osaka and Kyoto; see Kamigata-e Links below). The most obvious difference involved the range of subject matter. Edo prints (Edo-e) included beautiful women, geisha, courtesans, young lovers, erotica, domestic scenes, cityscapes, landscapes, nature scenes (especially birds and flowers), actors, military scenes, historical allegories, parodies, ghosts and demons, genre scenes, and still life. In stark contrast to this wide variety of subjects, Kamigata prints (kamigata-e, 上方絵) portrayed actors almost exclusively, with rare exceptions for a few of the previously mentioned genres. The two regional styles of actor prints (yakusha-e) were derived, in part, from different methods of acting. The great Ichikawa Danjûrô I (1660-1704) created the aragoto ("wild business") manner of Edo acting, often featuring larger-than-life characters of prodigious strength and courage involved in tales of bravado and heroism that seemed well suited to the temperament of Edo, the center of shogunate and military power since 1603. In contrast, Sakata Tôjûrô I (1647-1709) developed an acting methodology that combined a gentle and nuanced sensuousness with a refined and somewhat effeminate persona that became a standard for wagoto ("soft style") acting in Osaka, the mercantile center of Japan. Both styles of play writing and acting were found in Edo and Kamigata, and certain stage characters were especially popular because they were performed with a mixture of the two styles. Nevertheless, the tendencies in each region were evident and were generally reflected in their printmaking. There were other differences between the two domains.
Among these was the sheer volume of the Edo printmaking industry: prints from Edo outnumbered those published in Osaka by a margin of at least twenty to one. In addition, quite a number of Edo artists were able to make their living primarily from their printmaking, whereas in Osaka only a tiny handful of artists were able to do so. The "amateur" status of the highly skilled Osaka designers, coupled with the small number of prints, made for a unique genre of printmaking. © 1999-2019 by John Fiorillo. In addition to the links below, more information and images can be found at www.OsakaPrints.com.
We take them for granted: bicycles, cars and other vehicles that move effortlessly thanks to the wheel, a revolutionary invention dating back thousands of years. This wheel is the oldest in the Netherlands: it dates from around 2750 BC. Together with a second wheel it was found in a small peat bog near Ubbena. This wheel has a diameter of seventy centimetres and was made from one solid piece of oak. It was part of a two-wheel cart pulled by oxen and probably loaded with merchandise. Its owner left the wheels in the bog, as a sacrifice to supernatural powers.
When we talk about guts or fearlessness, we refer to a person’s ability to face challenges and overcome them without feeling scared or anxious. But what does having guts actually mean? Is it a product of nature, or can it be developed? What are the benefits of having guts, and how can we acquire it? In this article, we will explore the concept of guts and its significance in various areas of life. The Definition of Guts Guts can be defined as the mental or emotional strength that allows you to face danger, difficulty, or pain without showing fear or giving up. It is the ability to take risks and act boldly in the face of uncertainty or adversity, often at the expense of personal comfort or safety. The term “guts” is often used interchangeably with other words like courage, bravery, or grit, but it has its own connotations and nuances. While courage implies a willingness to confront fear, guts imply a more visceral and instinctual reaction. Guts are often associated with physical actions, such as jumping off a cliff, fighting a dragon, or standing up to a bully. However, guts can also manifest in mental or emotional challenges, such as public speaking, starting a business, or expressing your feelings. The Origins of Guts There is no definitive answer to where guts come from, but most researchers agree that it is a mixture of nature and nurture. Some people are born with a naturally fearless or risk-taking personality, while others develop it through life experiences or training. For example, soldiers, firefighters, or extreme sports athletes often develop a high tolerance for danger and stress through repeated exposure and training. However, guts are not solely determined by genetics or training. Your environment, culture, and upbringing also play a significant role in shaping your mindset and behavior. 
People who grow up in environments that value risk-taking, independence, or facing challenges often develop more guts than those who grow up in sheltered or risk-averse environments. Parents, mentors, or peers who encourage or model risk-taking behavior can also influence the development of guts. The Benefits of Having Guts Having guts can bring many personal benefits, such as: - Increased self-confidence and self-esteem: When you face and overcome challenges, you feel more capable and empowered, which can boost your confidence and self-esteem. - Enhanced resilience: When you develop a high tolerance for stress or failure, you become more resilient and can bounce back from setbacks more easily. - Greater sense of purpose: When you take risks to pursue your goals or values, you feel more fulfilled and purposeful in your life. - More fulfilling experiences: When you step out of your comfort zone and try new things, you create more memorable and enriching experiences that can enhance your quality of life. Having guts can also benefit your social and professional life, such as: - Improved leadership skills: When you take charge and lead by example, you inspire and motivate others to follow your lead. - Better problem-solving skills: When you face complex or ambiguous situations, you develop critical thinking, creative problem-solving, and adaptability skills. - Positive influence on others: When you model or encourage gutsy behavior, you can inspire or empower others to do the same. - Greater opportunities and connections: When you take risks and explore new avenues, you create more opportunities to meet new people, learn new skills, and gain new perspectives. The Challenges of Having Guts While having guts can bring many benefits, it also comes with its own set of challenges and risks. 
Some of the challenges of having guts include: - Exposure to danger or harm: When you take risks, you expose yourself to various hazards or injuries, depending on the nature of the risk. - Fear of failure or rejection: When you try something new or challenging, there is always a risk of failure or rejection, which can be emotionally or psychologically painful. - Overconfidence or recklessness: When you become too comfortable or confident in your ability to handle risks, you may become careless or reckless, which can cause harm to yourself or others. - Disapproval or criticism from others: When you challenge social norms or expectations, you may encounter disapproval, criticism, or opposition from others who do not share your values or beliefs. How to Develop Guts If you want to develop more guts, there are various strategies you can try, such as: - Gradual exposure: Start with small or manageable risks and gradually increase the difficulty or intensity as you gain more confidence and tolerance. - Visualization or affirmation: Use mental imagery or positive affirmations to visualize yourself as a fearless or confident person, and reinforce that self-image through practice and feedback. - Training or practice: Join a training or practice group that specializes in a high-risk or challenging activity, such as martial arts, rock climbing, or public speaking, and learn from experienced instructors or mentors. - Learning from failure: Embrace failure as a natural part of the learning process, and use it as an opportunity to reflect, learn, and improve your strategy or skills. - Surrounding yourself with supportive people: Seek out people who share your values or beliefs, and who encourage or inspire you to take risks and pursue your goals. The Connection Between Guts and Success Guts have long been associated with success in various domains, such as sports, business, art, and politics. 
However, the relationship between guts and success is not straightforward, and depends on many factors, such as: - The nature of the risk or challenge: Some risks or challenges are more conducive to success than others, depending on the degree of complexity, novelty, or opportunity they offer. - The skills or resources required to succeed: Guts alone may not be enough to ensure success, especially if the risk or challenge requires a specific set of skills, knowledge, or resources. - The context or environment: Success or failure may depend on the social, cultural, or economic context in which the risk or challenge occurs, and may be influenced by factors such as luck, timing, or support from others. The Role of Mindset in Developing Guts One of the key determinants of guts is mindset, which refers to your beliefs, values, and attitudes towards risk and challenge. People who have a growth mindset, which is the belief that one’s abilities and intelligence can be developed through effort and practice, are more likely to develop guts than those who have a fixed mindset, which is the belief that one’s abilities and intelligence are innate and fixed. People with a growth mindset are more resilient, persistent, and willing to take risks than those with a fixed mindset, because they view challenges as opportunities for learning and growth rather than as threats to their identity or security. By fostering a growth mindset, you can develop more guts and enhance your personal and professional success. In conclusion, guts are a vital aspect of human psychology that enables us to face challenges and overcome them with courage, resilience, and confidence. Guts are not a fixed trait, but can be developed through various strategies such as gradual exposure, mental imagery, training or practice, learning from failure, and surrounding oneself with supportive people. 
While guts can bring many benefits, they also come with their own set of challenges and risks, such as exposure to danger, fear of failure, overconfidence or recklessness, and disapproval or criticism from others. Ultimately, the development of guts depends on many factors, such as genetics, environment, training, social support, mindset, and luck, and its relationship with success is complex and multi-faceted. What is the difference between guts and courage? While courage implies a willingness to confront fear or danger, guts imply a more visceral and instinctual reaction. Guts are often associated with physical actions or challenges, while courage can also manifest in moral or ethical challenges such as standing up for one's beliefs, speaking up against injustice, or admitting one's faults. Can guts be developed? Yes, guts can be developed through various strategies such as gradual exposure, mental imagery, training or practice, learning from failure, and surrounding oneself with supportive people. Guts are not solely determined by genetics or training, but also depend on environmental, social, and cultural factors such as upbringing, role models, and exposure to risk. What are the benefits of having guts? Having guts can bring many personal and social benefits, such as increased self-confidence and self-esteem, enhanced resilience, a greater sense of purpose, more fulfilling experiences, improved leadership and problem-solving skills, positive influence on others, and greater opportunities and connections. What are the challenges of having guts? While having guts can bring many benefits, it also comes with its own set of challenges, such as exposure to danger or harm, fear of failure or rejection, overconfidence or recklessness, and disapproval or criticism from others. How is mindset related to guts?
Mindset plays a vital role in developing guts, as people with a growth mindset, which is the belief that one’s abilities and intelligence can be developed through effort and practice, are more likely to develop guts than those with a fixed mindset, which is the belief that one’s abilities and intelligence are innate and fixed. By fostering a growth mindset, you can develop more guts and enhance your personal and professional success. Can guts guarantee success? No, having guts alone may not guarantee success, as success depends on many factors such as the nature of the risk or challenge, the skills or resources required to succeed, the context or environment, and luck. However, having guts can increase your chances of success by enabling you to take more risks, face more challenges, and learn from failure. Kerns, C. D., & Singh, M. (2021). Addressing Grit and Resilience in Sport Science and Performance Psychology: A Systematic Review. Frontiers in Psychology, 12, 657382. Dweck, C. S. (2008). Mindset: The New Psychology of Success. Random House Incorporated. Brown, B. (2012). Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead. Penguin.
In 2012, Cornell Garden Based Learning revived the tradition of focusing on a yearly garden education theme through our CCE county network. The 2013, "Beneficial Insects," theme focuses on the fact that most insects found in and around residential homes and landscapes are either benign or beneficial. Less than 1 percent of insects in the world are pest problems, and we can minimize most pest problems by designing our gardens and landscapes to support healthy beneficial insect populations. When you garden ecologically, the goal is to keep insect pests below population levels where they will cause unacceptable damage, rather than try to get rid of all of them. Inviting beneficial insects into your garden may be the most important and readily available biological control practice you can undertake in your battle against insect pests. As you rely on natural enemies to help you, you need to foster them by providing their needs. One method consists of increasing the diversity of plants in or near the garden to attract more beneficial insects to the area. Beneficial insects play an important role in reducing and controlling populations of both plant and insect pests by acting as predators or parasitoids to these detrimental organisms. There are also insects that are innately beneficial because they act as pollinators or produce products (such as bees that pollinate and produce honey) that are useful to humans. Predators such as lady beetles and lacewings are mainly free-living species that consume many prey during their lifetime. Parasitoids, which include many wasps and flies, are more specialized than predators; the immature stage actually develops within the body of a single insect, ultimately killing it. The adults are free living and often visit flowers for nectar and pollen. Select plants for your garden that are known to lure beneficial insects to help you attract and conserve these garden helpers. 
Two large groups, or families of plants, are excellent "lures": the parsley family (Umbelliferae) and the sunflower or daisy family (Compositae). You can spot members of the Umbelliferae family by their umbrella-shaped clusters of small 5-petaled flowers. The overall appearance is often a large flat head of white or yellow flowers; Queen Anne's lace is a good example. The flower head provides a place to land for many insects, especially beneficial wasps. Using a variety of these plants that bloom at different times can make your garden look attractive, too. A number of culinary herbs are in this plant family, including parsley, dill, caraway, cilantro or coriander, and fennel. Some of these herbs are very attractive to syrphid and tachinid flies, assassin bugs, lacewings and parasitic wasps. One caution - these plants will spread quickly if left to go to seed, so remove flower heads after they stop producing nectar, but before seeds mature. Also, some are biennials, so you won't see flowers appear for a year. The Compositae family is characterized by flower heads that are actually made up of many small flowers growing together. Many flowers are composed of rays around a disk-like center. Many well-known ornamental flowers including marigolds, dahlias, daisies, asters, cosmos, calendula, coreopsis, tansy, yarrow, zinnia and sunflowers are in this family. Flowering often lasts over a long period of time, and there is usually more than one flower per plant. This provides a slow flow of nectar over a long period for the insects. Ladybugs, lacewings, parasitic wasps and some predaceous wasps are attracted to plants in this family. Soldier beetles, flower beetles and some lady beetles will feed on pollen in addition to feeding on insects. Dandelions offer early spring pollen to some of these insect predators. Legumes such as clovers and vetch also attract beneficials.
They add nitrogen to the soil, provide good shelter and moisture for insects, and may even serve as a source of alternative prey for natural enemies. Beneficial insects such as ground beetles, rove beetles and robber fly larvae are often found in the soil. Cover crops offer protection to beneficial insects when our annual garden plants are not actively growing. Often, beneficial insects move over from the cover crops as these crops begin to die back, feeding on "bad" insects that are in turn consuming the desirable garden plants. Buckwheat is a good choice because it not only provides shelter but has flowers that attract flies, ladybugs and pollinating bees. One caution, however, is that it does self-seed readily. A small permanent planting of buckwheat near the garden allows immature natural enemies to complete development without seeding up your garden. If you would like to attract - and keep - the "good guys" in your garden, try planting a few of the "lure" plants from the parsley and sunflower families this year. For more information on beneficial insects, visit: blogs.cornell.edu/horticulture/insects/. The mission of the Chautauqua County Master Gardener Program is to educate and serve the community, utilizing university and research-based horticultural information. Volunteers are community members who have successfully completed 50-plus hours of Cornell-approved training and volunteer a minimum of 50 hours per year. For more information on the Master Gardener Program, contact Betsy Burgeson, Master Gardener coordinator, at 664-9502, ext. 204, or email Emh92@cornell.edu.
Presentation by Nick Theodore

History of the Saint George Greek Orthodox Cathedral in Greenville, South Carolina

From the Book of Acts we are told, "...and your young men shall see visions; and your old men shall dream dreams." The dream began many years ago in a small, far-off land that has provided a beacon of hope and inspiration to mankind from early history. In the late 1800s, life in Greece was full and meaningful, but also difficult, and the young men there dreamed dreams of hope, prosperity and opportunity. Many stories had filtered back into tiny Greece telling of a rich "promised land" for everyone. From different parts of Greece, these young men began their pursuit of this dream. Despite the great hardships facing them, and the fear of challenging this unknown faraway land, with courage received from God and their families, they began their quest. Among these young dreamers were George Konduros and Sotiros Maurogeanis. They settled into a small community in the foothills of beautiful mountains with the dream of finding prosperity. It was in 1894 that these two men became the first permanent Greek residents in Greenville. They were followed by more aspiring immigrants seeking their destiny. By 1931 there were enough Greeks in the area wanting to sustain their religion and language that they applied for a state charter to form a church. In 1936, the Greek community bought a house on Decamp Street and converted it for use as their new Greek church and fellowship hall. Father Michael Mekouris was brought here as a permanent priest on September 14, 1936 to serve the community. A Greek ladies circle was organized by Eugenia Manos. Jimmy Petropoulos was the first president of the Greenville AHEPA men's organization. Many members established businesses that were successful. Jimmy Petropoulos had the savory restaurant on Washington and Main Streets. George Paouris owned the People Bakery on Buncombe Street. There was Pete's Restaurant and many others.
In 1942 a new church that would seat 260 people was built. Under the leadership of Father Pouleropoulos, the church was designed by Mr. Cunningham, and materials were purchased from the McKinney & Blueridge Lumber Company. Due to the scarcity of steel during the Second World War, the church was built entirely of wood. It cost $28,500. The first liturgy was celebrated on Christmas that year. Archbishop Athenagoras visited the community with Bishop Nyssis and helped choose the site for the new church. The first baby to be baptized in the church was Louis Manios. The area around the church was home to many Greek families, who could be seen taking afternoon strolls together. In the fifties Greek businesses thrived: Charlie's Steakhouse, and the Open Hearth, where Mike Melehes prepared his famous shish kebabs. There were the Carolina, Palmetto and Clock Drive-Ins, forerunners of modern-day franchises such as McDonald's and Hardee's. Today, many of our members carry on this kind of service to the greater Greenville community. Saint George is a growing community. The Hellenic Center was built in the early 1980s, providing new facilities for a growing Sunday school and Greek school, and a place for bake sales and other fundraising events. Our first Greek Festival was held in 1986. Construction began in August 1993 for the current Cathedral building. It was completed in late fall 1995, and the first Divine Liturgy was celebrated on December 10, 1995.

Manios Home, 1938

Our Parish Priests
Father Michael Mekouris
Father Dimitrios Lolakas
Father Aemil Pouleropoulos
Father Peter Koskores
Father Constantine Economou
Father Charles Goumenis
Father Andrew Vasilas
Father George Alexson
Father Tom Pistolis

Please contact Deacon Charles Joiner at 254-0150 or firstname.lastname@example.org if you have any information or pictures to add to our history section.
Articles on the History of the Eastern Orthodox Church
Orthodox Church History, Part 1
Orthodox Church History, Part 2
History of the Orthodox Church by Bishop Kallistos Ware - Excerpts from the book, The Orthodox Church (pdf)
Greek Orthodoxy - From Apostolic Times to the Present Day by Demetrios Constantelos
Links immediately following the image of the American Flag are links to other POTUS sites. All other links lead to sites elsewhere on the Web. Jump to: Presidential Election Results | Cabinet Members | Notable Events | Internet Biographies | Historical Documents | Other Internet Resources | Points of Interest

James Knox Polk
11th President of the United States (March 4, 1845 to March 3, 1849)
Nickname: "Young Hickory"
Born: November 2, 1795, in Mecklenburg County, North Carolina
Died: June 15, 1849, in Nashville, Tennessee
Father: Samuel Polk
Mother: Jane Knox Polk
Married: Sarah Childress (1803-1891), on January 1, 1824
Education: Graduated from the University of North Carolina (1818)
Political Party: Democrat
Other Government Positions:
- Member of Tennessee House of Representatives, 1823-25
- Member of U.S. House of Representatives, 1825-39
- Speaker of the House, 1835-39
- Governor of Tennessee, 1839-41
Presidential Salary: $25,000/year

Presidential Election Results: James K. Polk

Vice President: George M. Dallas (1845-1849)

Cabinet Members:
- Secretary of State - James Buchanan (1845-1849)
- Secretary of the Treasury - Robert J. Walker (1845-1849)
- Secretary of War - William L. Marcy (1845-1849)
- Attorney General - John Y. Mason (1845-46), Nathan Clifford (1846-48), Isaac Toucey (1848-49)
- Postmaster General - Cave Johnson (1845-1849)
- Secretary of the Navy - George Bancroft (1845-46), John Y. Mason (1846-49)

Notable Events:
- A large crack in the Liberty Bell proves too large to permit the bell to be rung any more.
- Dispute with Britain over the Oregon Territory settled. Both nations get a part of the territory.
- Treaty of 1848 with Mexico gave the U.S. control over California, New Mexico, Arizona, Nevada, Utah and parts of Colorado and Wyoming.
- Gold discovered in California in December.

Internet Biographies:
- James K. Polk -- from The Presidents of the United States of America - Compiled by the White House.
- James Polk -- from Table of Presidents and Vice Presidents of the United States - MSN Encarta
- Grolier Online has created this resource from its collection of print articles in Encyclopedia Americana. Contains a full biography, written by Edwin A. Miles, along with suggestions for further reading.
- James Polk -- from The American President - From the Miller Center of Public Affairs at the University of Virginia. In addition to information on the Presidents themselves, they have first lady and cabinet member biographies, listings of presidential staff and advisers, and timelines detailing significant events in the lives of each administration.
- James Knox Polk -- from People in THE WEST - Based on the documentary THE WEST by Ken Burns and Stephen Ives, this biographical sketch focuses on Polk's role in expanding the U.S. borders westward.
- James Knox Polk -- from the Hall of Forgotten Presidents - A case for considering Polk as one of the "near-great" presidents.
- James K. Polk -- from the North Carolina Encyclopedia - A very text-rich biography on this North Carolina native.

Historical Documents:
- Inaugural Address (1845)

Other Internet Resources:

Points of Interest:
- A week before he died, Polk was baptized a Methodist.
- Gaslights were installed in the White House while Polk was a resident.
- Polk survived a gallstone operation at age 17 without anesthesia or antiseptics, which were not in use at the time.
- The first annual White House Thanksgiving dinner was hosted by Sarah Polk.
- Sarah Polk was a devout Presbyterian. She banned dancing, card-playing and alcoholic beverages in the White House.
- News of Polk's nomination was widely disseminated using the telegraph, the first time this had been done.

Previous President: John Tyler | Next President: Zachary Taylor
Last Updated January 26, 2014
©1996-2008 Robert S. Summers. All rights reserved.
Narcotic, drug that produces analgesia (pain relief), narcosis (state of stupor or sleep), and addiction (physical dependence on the drug). In some people narcotics also produce euphoria (a feeling of great elation). A brief treatment of narcotics follows. For full treatment, see drug use. The main therapeutic use of narcotics is for pain relief, and hence they are often called narcotic analgesics. The best-known narcotics are the opiates—i.e., compounds found in or derived from opium. Opium is obtained as the dried milky juice of the seed pods of the opium poppy (Papaver somniferum). Of the 20 or more alkaloids found in opium, the most important is morphine, which is primarily responsible for opium's narcotic properties. Drugs with actions similar to morphine that are produced synthetically are known as opioids; the terms opiate, opioid, and narcotic are often used interchangeably. In most countries the production, trade in, and use of narcotics are limited because of their addictive properties, detrimental effects, and the incidence of narcotic drug abuse. Narcotics occurring naturally in the opium poppy have been used since ancient Greek times, both for relieving pain and for producing euphoria. Extracts of the opium poppy were smoked, eaten, or drunk (as laudanum, a crude mixture of alcohol and opium). The pharmacologically active components of opium were isolated during the first half of the 19th century. The first was morphine, isolated by a young German pharmacist, F.W.A. Sertürner, in about 1804. A much milder narcotic, codeine, was in turn isolated from morphine. The invention of the hypodermic needle in the mid-19th century allowed morphine to be administered by injection, which is useful in medicine because injections of morphine produce much greater effects than taking the same amount of the drug orally.
However, the availability of morphine injections led to serious problems of abuse, and laws were introduced to control the use, production, and trade of narcotics and other dangerous drugs. Such laws now exist in most countries of the world. In 1898 heroin, or diacetylmorphine, was developed from morphine by the Bayer Company in Germany. Heroin is 5 to 10 times as potent as morphine itself and is used by most narcotic addicts. Because heroin proved to be even more addictive than morphine, a search for synthetic substitutes was undertaken that resulted in such opioids as meperidine (Demerol), methadone, and levorphanol (Levo-Dromoran). Most persistent users of heroin or other narcotics follow a classic progression from inhaling the drug to injecting it subcutaneously and then to injecting it intravenously; each of these stages usually brings a greater likelihood of addiction with it. With increasing use of the drug, euphoria and relaxation eventually give way to drug tolerance and physical dependence; the addict must use progressively larger doses to achieve the same pleasurable effects, and once the drug wears off he must endure painful symptoms of physical and psychological withdrawal. An overdose of narcotics can severely depress the central nervous system, with respiratory failure and death as a consequence. Probably the most effective therapy for narcotics addicts involves the synthetic opiate methadone, which, though itself addictive, blocks the addict’s craving for heroin and provides no disruptive euphoric effects of its own. Medically speaking, narcotics are some of the most powerful painkillers available, but they are used with great caution because of their addictive properties. They are often given to patients who are dying from cancer and are in great pain. Narcotics not only relieve pain but also seem to reduce suffering, worry, fear, and panic associated with severe pain. 
As terminal cancer patients often do not have long to live and the provision of an acceptable quality of life may be the paramount issue, problems of addiction are largely irrelevant. Substances known as narcotic antagonists block the actions of the narcotics and reverse their effects. Examples include naloxone, naltrexone, and nalorphine. They are used to reverse the effects of overdoses of narcotics, and they can often save the life of the victim. Narcotic receptors have been identified in the brain. Narcotics act at these receptors to produce their many effects, whereas narcotic antagonists block these receptors and prevent narcotics from reaching them and exerting their actions.
Despite consumer safety warnings, a new study shows that children under 16 continue to ride all-terrain vehicles (ATVs), even after suffering serious injuries. "Although ATVs have surged in popularity over the past several years, they pose significant dangers for children 16 and under who simply do not have the physical strength, cognitive skills, maturity or judgment to safely operate ATVs," says Rebeccah Brown, MD, associate director of Trauma Services at Cincinnati Children's and the study's main author. "These are hefty motorized vehicles that weigh up to 600 pounds and are capable of reaching speeds of up to 85 miles per hour." The study was presented Oct. 22 at the 2012 American Academy of Pediatrics national convention in New Orleans. From 1982 to 2010, more than 11,000 people died in ATV crashes, according to the US Consumer Product Safety Commission (CPSC). Children under 16 accounted for one of every four deaths. Of the children who died, 43 percent were under age 12. In addition, more than 28,000 children received emergency care after being injured in ATV crashes. Brown and colleagues studied five years' worth of ATV-related admissions to Cincinnati Children's Level 1 Trauma Center. Among the findings:
- About 60 percent of children injured in an ATV crash continued to ride.
- Children sustained multiple, serious injuries, with 23 percent requiring intensive care and 48 percent requiring surgery.
- Only 40 percent of children reported wearing a helmet at the time of the crash. Of those who didn't wear helmets, 67 percent sustained significant head and neck injuries.
- More than 60 percent acknowledged the presence of a label on the ATV warning against operation by children 16 and under and against carrying passengers.
- Not a single injured child had undergone formal training for safe ATV operation. Only five were offered free training by the ATV dealer. All five declined.
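The CPSC ratios quoted above imply rough absolute counts, which a few lines of arithmetic make explicit. These derived numbers are estimates implied by the stated ratios, not figures reported by the study:

```python
# Rough arithmetic from the CPSC figures quoted above (1982-2010).
# The derived counts are estimates implied by the stated ratios,
# not numbers reported by the study itself.
total_deaths = 11_000    # "more than 11,000 people died"
share_under_16 = 1 / 4   # "one of every four deaths"
share_under_12 = 0.43    # "43 percent were under age 12" (of child deaths)

deaths_under_16 = total_deaths * share_under_16
deaths_under_12 = deaths_under_16 * share_under_12

print(f"estimated ATV deaths under 16: ~{deaths_under_16:,.0f}")  # ~2,750
print(f"estimated ATV deaths under 12: ~{deaths_under_12:,.0f}")
```

Because the 11,000 figure is itself a floor ("more than"), the derived counts are lower bounds at best.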
“ATV manufacturer warning labels are largely ineffective, and ATV training is infrequently offered to most ATV users,” says Brown. “Mandatory safety courses and licensing, and enforceable helmet legislation, are needed to reduce ATV use by children.”
What are Jingle Cones?

Jingle Cones are metal discs that are rolled to create a cone shape, with one end narrower than the other. They have traditionally been used by Native Americans to sew on women's dresses that are worn for the Jingle Dress Dance. These metal cones make a jingling sound when the dancer moves. The original Jingle Cones were created by Native Americans from the lids of tobacco tins. These lids were rolled into cones and sewn on a fabric dress as a decoration. There are several versions of the story of how the original Jingle Dress and the dance it was made for came to be. The Jingle Dress and the accompanying dance were inspired by an Ojibwa Native American who dreamed about them around the turn of the 20th century. He or she provided instructions for how the dress was to be made and how to perform the dance. Some versions of the story say that the originator's daughter or granddaughter was ill, and that the girl was healed during the performance of the new Jingle Dress Dance. Thus the Jingle Dress Dance was at first considered a healing dance.

Jingle Dance Today

The Jingle Dress and the dance have spread across the Native American community and are popular at Powwows today. The original tobacco-lid cones have given way to commercially created metal cones that are already rolled and ready to sew on a dress. The number of jingle cones on a dress can vary depending on the pattern in which they are attached. Some women use 365 cones, one for each day of the year. The popularity of the Jingle Dress Dance waned among Native Americans in the 1940s, but in the 1970s it experienced a resurgence and has continued to be popular at Native American Powwows. The dance has also moved away from a healing dance with prescribed steps. Today, Jingle Dancers exhibit very fast, intricate footwork, and dresses are cut to allow the dancer to make quick direction changes and foot-crossing dance steps.
Today, you can buy Jingle Cones in traditional silver as well as gold and copper colors. They are also available in three sizes: Adult (2.75" long), Child (2" long) and Toddler (1.75" long). The Wandering Bull offers our own Jingle Cones with our One Feather Design in all three sizes and all three colors.
A language is an essential tool for communication. It is used to promote peace and order in society, to display authority and power, and to attain goals and objectives. However, language can also be destructive to society if it is used inappropriately. According to Graddol (2010), "Throughout India, there is an extraordinary belief, among almost all castes and classes, in both rural and urban areas, in the transformative power of English. English is seen not just as a useful skill, but as a symbol of a better life, a pathway out of poverty and oppression." He further added, "The challenges of providing universal access to English are significant, and many are bound to feel frustrated at the speed of progress. But we cannot ignore the way that the English language has emerged as a powerful agent for change in India."

English as a Link Language in National and Global Contexts

English is important for accessing employment opportunities in India, and a good command of it pays off well beyond the country's borders: as a de facto universal language, English gives learners access to quality education, greater chances of professional success, and global opportunities.

The national language policy for school education, the three-language formula recommended by the National Commission on Education 1964-1966, was incorporated into the national education policies of 1968 and 1986. The language policy in Indian education is still a work in progress. The three-language formula was introduced as a policy to safeguard national and regional interests.
The Central Advisory Board of Education (CABE) also raised some serious concerns about language, such as the number of languages that need to be taught, the role of Hindi and English, the teaching of Sanskrit, and more. In 1961, the Conference of Chief Ministers simplified the three-language formula and approved the following:
1. The regional language must be used when it is different from the mother tongue.
2. Hindi must be used in Hindi-speaking areas and other Indian languages in non-Hindi-speaking areas.
3. English or some other common European language can be used.
According to the framework of NCERT (2005), "English in India today is a symbol of people's aspirations for quality in education and fuller participation in national and international life … The level of introduction of English has now become a matter of political response to people's aspirations, rendering almost irrelevant an academic debate on the merits of a very early introduction." Using the three-language formula has been seen as a convenient strategy to incorporate all the significant languages in education.
An insightful and comprehensive biography of the great twentieth-century sculptor Alberto Giacometti. Alberto Giacometti was one of the most enigmatic and memorable personalities of the twentieth century. Born in Switzerland in 1901, he settled in Paris in 1922, where he lived and worked for the rest of his life. Giacometti's early influences included his father, the painter Giovanni Giacometti; the sculptor Antoine Bourdelle; and his mentors Zadkine, Lipchitz, and Laurens. When he turned from cubism to surrealism, his work earned near-instantaneous recognition, but Giacometti quickly moved on, pioneering a solitary path on the margin of contemporary trends. Today, his signature elongated sculptures are icons of art history. The fruit of new research, this riveting biography offers a chronological look at the whole of the artist's life and work. It is published in coordination with the Giacometti exhibition at the Guggenheim in New York (June 8 - September 16, 2018).
When India became independent in 1947 after two centuries of colonial rule, it immediately adopted a firmly democratic political system, with multiple parties, freedom of speech, and extensive political rights. The famines of the British era disappeared, and steady economic growth replaced the economic stagnation of the Raj. The growth of the Indian economy quickened further over the last three decades and became the second fastest among large economies. Despite a recent dip, it is still one of the highest in the world. Maintaining rapid as well as environmentally sustainable growth remains an important and achievable goal for India. In An Uncertain Glory, two of India's leading economists argue that the country's main problems lie in the lack of attention paid to the essential needs of the people, especially of the poor, and often of women. There have been major failures both to foster participatory growth and to make good use of the public resources generated by economic growth to enhance people's living conditions. There is also a continued inadequacy of social services such as schooling and medical care as well as of physical services such as safe water, electricity, drainage, transportation, and sanitation. In the long run, even the feasibility of high economic growth is threatened by the underdevelopment of social and physical infrastructure and the neglect of human capabilities, in contrast with the Asian approach of simultaneous pursuit of economic growth and human development, as pioneered by Japan, South Korea, and China. In a democratic system, which India has great reason to value, addressing these failures requires not only significant policy rethinking by the government, but also a clearer public understanding of the abysmal extent of social and economic deprivations in the country. The deep inequalities in Indian society tend to constrict public discussion, confining it largely to the lives and concerns of the relatively affluent. 
Drèze and Sen present a powerful analysis of these deprivations and inequalities as well as the possibility of change through democratic practice. Jean Drèze has lived in India since 1979 and became an Indian citizen in 2002. He has taught at the London School of Economics and the Delhi School of Economics, and he is now a visiting professor at Allahabad University. He is the coauthor (with Amartya Sen) of Hunger and Public Action and India: Development and Participation. Amartya Sen is the Thomas W. Lamont University Professor and professor of economics and philosophy at Harvard University. He won the Nobel Prize in Economics in 1998. His many books include Development as Freedom, Rationality and Freedom, The Argumentative Indian, Identity and Violence, and The Idea of Justice. "It's an urgent, passionate, political work that makes the case that India cannot move forward without investing significantly--as every other major industrialized country has already done--in public services. . . . This book is . . . a heartfelt plea to rethink what progress in a poor country ought to look like."--Jyoti Thottam, New York Times Book Review "Sen and Drèze carefully explain such issues as health care, education, corruption, lack of accountability, growing inequality, and their suppression in India's elite-dominated public space. . . . 
Sen and Drèze also reveal how democracy in its simplest manifestation, the scramble for votes, can drive successful implementation of welfare programs such as the Public Distribution System."--Pankaj Mishra, New York Review of Books

"After three decades of trawling the data compiled by central and state governments, Indian nongovernmental organizations, and international bodies, these longtime collaborators know--possibly better than any other commentators--how Indian governments since the 1980s have failed the vast majority of Indians, especially in health care, education, poverty reduction, and the justice system."--Andrew Robinson, Science

"[A]n excellent but unsettling new book."--The Economist

"[E]legant and restrained prose, and with an array of fresh examples."--Ramachandra Guha, Financial Times

"Sen and Dreze are right to draw attention to the limits of India's success and how much remains to be done. They are exemplary scholars, and everything they say is worth careful study."--Clive Crook, Bloomberg News

"Economists Dreze and Nobel laureate Sen compellingly argue that Indian policy makers have ignored the basic needs of people, especially those of the poor and women."--Choice

Table of Contents:
1. A New India?
2. Integrating Growth and Development
3. India in Comparative Perspective
4. Accountability and Corruption
5. The Centrality of Education
6. India's Health Care Crisis
7. Poverty and Social Support
8. The Grip of Inequality
9. Democracy, Inequality and Public Reasoning
10. The Need for Impatience
Statistical Appendix
Table A.1: Economic and Social Indicators in India and Selected Asian Countries, 2011
Table A.2: India in Comparative Perspective, 2011
Table A.3: Selected Indicators for Major Indian States
Table A.4: Selected Indicators for the North-Eastern States
Table A.5: Time Trends

Hardcover: For sale only in the United States and Canada
Paperback: For sale only in the United States and Canada
Why do we have a CSO problem? By the close of the 19th century, Lynchburg was prosperous enough to follow the lead of urban centers like Boston and San Francisco in the construction of sanitary sewer systems. When our sewer system was first built more than 100 years ago, it was among the finest in the nation, utilizing state-of-the-art technology. Unfortunately, the “state-of-the-art” at the time was to pipe sewage away from densely populated areas ... and into the nearest creekbed or stream. As a result, most of Lynchburg’s untreated sewage eventually made its way downstream into the James River. By the mid-1900s, cities like Lynchburg had recognized the shortcomings of the old sewer technology. So in 1955 Lynchburg built a wastewater treatment plant to reduce the pollution of area rivers and streams. This wastewater treatment plant was fed by a network of large underground pipes called interceptors, which transported sanitary wastewater to the treatment plant from the city’s 21 neighborhood drainage basins. This system worked well enough in dry weather, but lacked the capacity to handle the combination of storm water and sanitary wastewater that would result from heavy rainfall. Therefore, the design of this combined sewer system incorporated “overflow outfalls,” which allowed excess stormwater and sanitary wastewater to be quickly diverted through pipes into nearby streams or ditches. These outfalls reduced the chances that untreated sewage would back up and flood into streets and homes when the combined flow of rainwater and wastewater exceeded the sewer system’s capacity. The antiquated “combined sewer” system we have inherited still serves more than 11 square miles of the City.
Although many of the overflow outfalls have been eliminated in the first stages of the CSO construction work, many others remain active in the system. As a result, during heavy rainfall, Lynchburg's untreated sewage is still being discharged into the James River, and occasionally into streets and yards.

How did we choose a solution? For more than a decade, Lynchburg studied ways to upgrade the outdated parts of our wastewater treatment system. An exhaustive study conducted by the City between 1974 and 1979 (the Infiltration/Inflow Evaluation Survey Report) helped to provide strategic direction for the upgrade efforts. This report utilized the U.S. Army Corps of Engineers' storm computer model to help quantify the extent of the overflow problem. It also identified corrective alternatives and made recommendations, which provided direction for the city's initial sewer improvement efforts. As a result of the study's recommendations, Lynchburg began work, investing over $4 million during the 1980s to close several CSO points and to gather the data needed for a long-term control plan to eliminate combined sewer overflows. In 1989, the 1974-79 study was updated, using more sophisticated computer modeling techniques to estimate the combined sewer system inflow and the frequency, volume, and pollutant loads of each overflow under various rainfall conditions, all information required by the Virginia State Water Control Board. The 1989 studies further prioritized Lynchburg's combined sewer regions into 59 different project areas. These projects were ranked based on a matrix approved by the Virginia Department of Health and the Department of Environmental Quality, which incorporates criteria such as aesthetics, public health considerations, environmental characteristics, water quality, and impact on the James River. Over the years, the City evaluated many possible ways to correct our sewer problems, ranging from the construction of huge retention basins to the introduction of elaborate water filtration systems.
Lynchburg’s current three-part plan offered the most technically and financially sound options. The Environmental Protection Agency (EPA), Virginia’s Department of Environmental Quality (DEQ), and the City agreed that local funding for this federally mandated program would come from usage fees and not taxes. Recognizing that sewer customers would bear most of the costs, the City secured a Special Consent Order from the state. This agreement was the first of its kind to be approved by the EPA and allowed the City to keep sewer rates in line with Lynchburg’s median household income. Future increases will also match median household income growth. In the early 1990s, three representative neighborhoods were the sites of test projects (Franklin Street, Fairway Place, Woodland Avenue), and lessons learned in those pilot-project areas were used to establish policies and procedures for the current City-wide work. Recent state and federal laws (most notably the Clean Water Act) have further added momentum to Lynchburg’s CSO efforts—and also to the efforts of many other cities nationwide, such as Richmond, Alexandria, and Boston, with similar sewer problems.
FAO projects that, under current production and consumption trends, global food production must increase 60% by 2050 in order to meet the demands of the growing world population. Yet, more than one third of the food produced today is lost or wasted. Food loss refers to the decrease in edible food mass at the production, post-harvest and processing stages of the food chain, mostly in developing countries. Food waste, a symptom of developed countries’ consumeristic lifestyles, refers to the discard of foods at the retail and consumer levels. This food wastage represents a missed opportunity for food security and comes at a steep environmental price. Reducing food wastage would yield gains well beyond a mere reduction in its ‘footprint’. For instance, more efficient systems that reduce either losses or waste would likely result in additionally reduced GHG emissions, partly directly, since wastage typically generates methane emissions during food disposal, and partly indirectly, given that reducing wastage may lead to critical redesign of supply chains and retail models, which may result in less energy use along the food chain and thus lower associated GHG emissions. Generally, less wastage is associated with more efficiency, more effective recycling of resources, and less need for transport and storage across long distances – all leading to savings in natural capital, less resource use and lower GHG emissions. With regard to increased food security, including availability, access and utilization, reductions in wastage can also be achieved by reducing certain loss factors, for instance, by increasing local supplies in Least Developed Countries or by promoting programmes where food saved from an otherwise waste pathway in retailing is specifically accounted for and used as food aid. To date, no study has analyzed the environmental impact of global food wastage.
This project produced the first global Food Wastage Footprint (FWF) in order to quantify the impact of the food grown, but not eaten – both loss and waste – on the environment and the economy, with a view to assist decision-making along the food chain. This project offers, to the extent possible, a picture of the environmental footprint of global food wastage, with a particular emphasis on impacts on soil, water, biodiversity, and climate change. The aim is to bring more precision to the debate on the environmental impacts of food wastage, by providing a more consistent knowledge base, which can be used to underpin future policy debate in this area. The main project outcomes are:
- The first phase of the FWF project estimated the embedded carbon, water, soil and biodiversity in food wastage by using the best available data.
- A Toolkit on Food Wastage Footprint Reduction has been produced, with tips for reducing, reusing and recycling food wastage.
- Best practices are collected through a public database, to which organizations and individual experts with expertise on food wastage contribute directly.
- The second phase of the FWF project defined methods for the economic valuation of environmental and related social costs of food wastage.
- A Full-Cost Accounting (FCA) framework was first developed with stakeholder inputs through the E-Forum on full-cost accounting of food wastage, held from 21 October to 24 November 2013. Although 245 persons registered, discussions confirmed the methodological challenges involved in full-cost accounting. Expert knowledge was also gathered through a face-to-face meeting organized on the occasion of events held by the Natural Capital Coalition in January 2014.
- The project’s results were launched on the occasion of the FAO Regional Conference for Europe, 2 April 2014, which included on its agenda a ministerial roundtable on food loss and waste.
The publications featuring detailed results of Full-Cost Accounting of Food Wastage and Full Costs and Benefits of Food Wastage Reduction Measures are forthcoming. The ultimate objective of this project is to communicate that investments in food wastage reduction are the most logical step in the pursuit of sustainable production and consumption, and in addressing food insecurity, climate change and other adverse environmental effects. See also: Food Wastage Footprint 2
Posted by Em Innovations on 9/16/2014

Infection is a constant worry, especially if you are working in a medical care facility. Working with sick people is very dangerous. If they are suffering from a contagious condition then you are putting yourself and the rest of the facility at risk if you do not take the proper precautions. The CDC estimates that almost two million people get hospital infections every year and ninety thousand of those people die of the same infections. It may sound like a frightening statistic, and it is, but there are steps that you can take to ensure your facility remains safe. Here is a list of the top 5 things you can do to limit the risk of infection.

Manage Wound Sites Carefully
Change wound dressings whenever you feel they might be getting wet or loose, and ensure that the dressing is dry and clean at all times. It is also a good idea to treat a catheter as a wound site and ensure that it gets the same care and attention.

Manage Visitors
If your patient is highly infectious, ask the family to send a card rather than visit; that way you avoid the disease not only spreading through the hospital but getting back into the outside world.

Constantly Clean
Make sure that you are washing your hands after handling any kind of dirty material, especially after using the restroom, and do not be afraid to remind your co-workers to do the same.

Take Steps to Lower Risk
If your patient is afraid of infection, let them know that little things like losing some weight or cutting back on cigarettes can make a difference, particularly in regard to surgery.

Immunizations
The importance of staying up to date on immunizations, both for you and your patients, cannot be overstated. The latest vaccines available should always find their way to your patients and to you, so that nothing slips through the cracks and you are able to continue caring for your patients in a safe and clean environment free from infectious risks.
Most of Papua New Guinea’s population of almost 8 million people live in rural communities and are faced with significant challenges in health, education and economic opportunity. PNG’s current WASH indicators are some of the worst in the world. Access to basic-level sanitation and water services is 8% and 35% respectively in rural PNG, and 48% and 86% in urban areas. Low access to WASH is reflected in Papua New Guinea’s ranking at the bottom of Pacific countries for all WASH-related health statistics. Live & Learn is working with Plan International to address this through the Water for Women: Resilient WASH in the Islands of PNG program. WfW PNG aims to improve the health and wellbeing of approximately 60,000 rural people by increasing the quality and accessibility of resilient WASH services in rural schools, health care facilities and communities, and by strengthening WASH sector systems. The project will seek improved gender and social inclusion in rural areas and contribute to an enhanced evidence base relating to GSI and WASH. There are four main project outcomes:

Outcome 1: Sustainable sub-national government structures (in 2 Provinces, 3 Districts, 29 Wards) supporting, resourcing and monitoring implementation of inclusive WASH aligned with the National WASH Policy

Outcome 2: Resilient, safe and inclusive WASH infrastructure and practices established and used in communities, schools and health facilities

Outcome 3: Improved understanding of gender and social inclusion in WASH contributing to changed behaviour responses in households, communities and institutions, resulting in more equitable WASH roles and decision making

This program is working in conjunction with the PNG Government’s National WASH Policy (2015-2030), which is providing guidance for the WASH sector and setting ambitious targets for water supply and sanitation, as well as emphasising the importance of gender, disability and other social inclusion in water management.
Water for Women is funded by the Australian Department of Foreign Affairs and Trade and implemented by Plan International Australia and Live & Learn.
Photo Credit: Steve Morgan (licensed under Creative Commons)

April 23, 1933

The first electric trolleybus system in Ohio began regular operations in Dayton. These trackless trolleybuses — powered by overhead electric wires rather than gasoline — started their initial runs early that Sunday morning. The day’s edition of the Dayton Daily News outlined one of the key activities undertaken by the Dayton Street Railway Company (DSR) to help promote its trolleybuses. “The company will usher in its new service with a charitable slant, in that all of the fares and contributions received in exchange for transportation between 9 a.m. and 5 p.m. will be donated to the Babies Milk Fund,” reported the newspaper. “Almost a score of young women will be in charge of the collections and all monies which they receive will be given to this charitable purpose.” The trolleybuses replaced DSR’s streetcars that had been serving passengers along the Salem Avenue-Lorain Avenue line in Dayton. This change in transit options was due to a fire the previous year at DSR’s maintenance and storage facility. The fire destroyed not only that building but also 16 streetcars and two gasoline-powered buses housed there. DSR president Philip H. Worman and general manager William L. Smith, seeking to come up with a viable alternative to those now-decimated vehicles, led a months-long effort to closely examine various trolleybus systems. This extensive research resulted in DSR placing an order with the J.G. Brill Company for a dozen trolleybuses for a new electrically operated public transportation network within Dayton. This network of trolleybuses became the latest incarnation of electric transit service that has been available in one form or another continuously in Dayton longer than any other city in the nation. (The city’s first electric streetcar service had started back in 1888.) A dedication ceremony was held for the trolleybuses on the day before they officially went into service.
About 80 invited guests were on hand for the event, and one of the highlights was when Worman’s wife Kathleen performed a role more commonly associated with ship launches. She broke a bottle on one of the new vehicles while proclaiming, “I christen this railless electric trolley and inaugurate this new system of transportation in the name of and for the city of Dayton.” DSR, which was renamed the Dayton Street Transit Company sometime around 1935 and acquired by the City Railway Company in 1941, was just the first of several private companies to provide trolleybus service in Dayton throughout the years. In 1972, the Miami Valley Regional Transit Authority (the present-day Greater Dayton Regional Transit Authority) took over public transit services in Dayton and continues to operate all of the city’s trolleybuses. At present, this system encompasses seven lines altogether and a fleet of 54 trolleybuses. Dayton’s trolleybus system is one of only five still in existence in the United States. The other systems can be found in Cambridge, Massachusetts; Philadelphia; San Francisco; and Seattle. In addition, Dayton’s network of trolleybuses has the distinction of being second only to the one in Philadelphia as the oldest still around in the entire Western Hemisphere. For more information on the trolleybus system in Dayton, please check out https://en.wikipedia.org/wiki/Trolleybuses_in_Dayton.
FOCUS ON INNOVATION

Elementary! For a camera to work it must have a lens! Simple as that! No waiting around for something else to develop! (Couldn’t resist.) The purpose of that small piece of glass is to direct light to the camera’s sensor so that it can produce an image. It’s worked fine since cameras were invented. Researchers, ever focusing their attention on what might be possible, ever reaching beyond their grasp, have found a way to capture images and even video without what has long been thought an essential component. Picture this! Researchers at the University of Utah published their study in the journal Optics Express. I quote:

A NEW PERSPECTIVE. If you were to remove the lens from a standard camera — like, say, the one in your smartphone — you’d still be able to take a photo, but there’d be pretty much no point because it would look like a pixelated blob that no amount of Photoshop magic could fix. The researchers behind this study suspected that they could train an algorithm to decipher the image and make it look like one taken using a lens. “Why don’t we think from the ground up to design cameras that are optimized for machines and not humans? That’s my philosophical point,” said study author Rajesh Menon in a news release. To do this, the researchers started by attaching a digital camera sensor to the edge of a sheet of plexiglass, pointing it toward the glass. Then they wrapped reflective tape around the remaining edges of the plexiglass to direct light back towards the sensor. Using the sensor, the researchers snapped pictures and video of several images displayed on an LED screen in front of the plexiglass (and at a 90-degree angle to the sensor). Finally, they had their algorithm interpret the image to produce one that looks like a slightly lower-resolution version of what they displayed on the LED screen. Why should we even spend time making a lens-less camera? Turns out, there are lots of uses.
We could use them to turn the windows of autonomous vehicles into cameras, or turn the windows of our homes into a security system. A lens-free camera would make augmented reality (AR) glasses a lot less bulky, since you wouldn’t have to point heavy cameras at the wearer’s eyes — we could instead place sleeker sensors on the edges of the glasses. “It’s not a one-size-fits-all solution, but it opens up an interesting way to think about imaging systems,” said Menon in the release. For now, he and his colleagues plan to continue developing their lens-free camera system to effectively capture 3-D images and objects bathed in normal light, the not-insanely-bright kind you probably encounter at home. If their work goes as the researchers hope, the lens may no longer be a necessary component of a camera — maybe it’ll just be an optional one. Read on about this amazing eye-opening topic.

OSA Recommended Articles
- Stephen J. Olivas, Ashkan Arianpour, Igor Stamenov, Rick Morrison, Ron A. Stack, Adam R. Johnson, Ilya P. Agurok, and Joseph E. Ford, Appl. Opt. 54(5) 1124-1137 (2015)
- Peng Wang and Rajesh Menon, J. Opt. Soc. Am. A 35(1) 189-199 (2018)
- Ganghun Kim, Kyle Isaacson, Rachael Palmer, and Rajesh Menon, Appl. Opt. 56(23) 6450-6456 (2017)

We’ve selected a few more videos which are very informative. Watch!
‘Young people should NOT drink’: Landmark study warns under-40s should avoid ALL alcohol for the sake of their health – but a small glass of red can cut risk of heart disease, stroke and diabetes in older adults
- People under the age of 40 should not consume any alcohol, a new study finds
- Researchers find that those who drink are more likely to suffer injury, die by suicide or be the victim of murder
- Around 60% of the global population that drinks an unhealthy amount falls between the ages of 15 and 39
- There are some benefits to drinking for older people, as it can reduce their risk of stroke, heart disease and other conditions

Put the keg away. People under 40 years old should never consume alcohol, as it provides them no health benefits while increasing their risk of injury and death, a new study finds. Researchers at the University of Washington, in Seattle, found that people under 40 who drink are more likely to be injured in a car accident, die by suicide or be murdered than their peers who avoid alcohol. There could be some benefit to drinking for people over 40, though, as a glass of red wine each day could help reduce the risk of developing heart disease, stroke or diabetes. According to the most recent data from the Centers for Disease Control and Prevention (CDC), 66 percent of adults in the U.S. consume alcohol every year, and five percent are heavy drinkers. ‘Our message is simple: young people should not drink, but older people may benefit from drinking small amounts,’ Dr Emmanuela Gakidou said in a statement. Researchers, who published their findings Thursday in the Lancet, gathered data from over 200 countries for their Global Burden of Diseases study. They compared alcohol use to 22 health conditions or outcomes, like injury, heart disease, cancer and more.
While the general public is likely aware of some dangers of alcohol – and particularly the damage it can cause to a person’s liver – researchers set out to determine who suffered what level of risk and for what conditions. They found that around 1.3 billion people consumed a harmful amount of alcohol in 2020, or around 15 percent of the global population. Nearly three-in-five people who engaged in risky drinking were between the ages of 15 and 39 – age groups that are not recommended to consume any alcohol. Around 75 percent of that group were males. People in that age group have nothing to gain from drinking, the researchers find, and are most likely to end up hurting themselves as a result of alcohol consumption. There is some leeway a person has with drinking before it can cause long-term damage to their health, though. Researchers found that a man aged 15 to 39 years old could drink an average of 0.136 drinks per day – or just under one drink a week – and have nothing to worry about. For women under 40, the tolerance is increased to 0.273 per day, or just under two drinks per week. Convincing younger people not to drink is a tough task, though, especially in the U.S. and across Europe where alcohol has become ingrained in younger cultures. ‘While it may not be realistic to think young adults will abstain from drinking, we do think it’s important to communicate the latest evidence so that everyone can make informed decisions about their health,’ Gakidou said. Elderly people have more of a cushion with drinking and their health, and can even benefit from a drink every now-and-then.
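The per-day thresholds quoted in the article translate to the weekly figures via simple arithmetic; a quick sketch (the thresholds are the figures reported above, the group labels are mine):

```python
DAYS_PER_WEEK = 7

# Per-day thresholds reported in the article for ages 15-39.
thresholds_per_day = {
    "men 15-39": 0.136,
    "women 15-39": 0.273,
}

for group, per_day in thresholds_per_day.items():
    per_week = per_day * DAYS_PER_WEEK
    print(f"{group}: {per_day} drinks/day is about {per_week:.2f} drinks/week")
```

This gives roughly 0.95 and 1.91 drinks per week, matching the article's "just under one" and "just under two."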
Researchers found that people aged 40 to 64 years old could drink up to two drinks daily and maintain their health. The elderly, people 65 and older, can stretch that all the way to three drinks per day. This difference comes from the potential health benefits of drinking experienced by older people. While youngsters have little to gain, older people who drink red wine generally have better heart health and brain health than their peers.
Is your community ready for climate change? If so, you’re fortunate. A growing number of communities are preparing for unavoidable disruptions, but the 2014 National Climate Assessment reports that “current implementation efforts are insufficient to avoid negative consequences” caused by slow-moving hazards like rising sea level and drought or sudden shocks caused by severe storms, floods, wildfire, and heat waves. What’s it mean to be climate ready? At a minimum, it includes a clear-eyed assessment of a community’s vulnerability to the stresses of long-term climate change and exposure to more sudden shocks such as extreme weather events. And it means developing and implementing a plan to reduce those risks, which might include infrastructure failures, water and resource shortages, threats to public health, destruction of homes, social dislocation and unrest. The U.S. Government’s 2014 National Climate Assessment defines community resilience as the capability to anticipate, prepare for, respond to, and recover from significant multi-hazard threats with minimum damage to social well-being, the economy, and the environment. The concept is sometimes defined as the capacity of a community to bounce back. But others note that mere recovery no longer will address the new challenges of climate change. The goal should be to “bounce forward,” to build from the proverbial ashes a stronger community or neighborhood better positioned for the future. The United States is fast becoming a bubbling laboratory of creative strategies to boost community readiness. Some are primarily citizen-driven, non-governmental initiatives such as the Transition Town movement. Others are driven primarily by local governments. The most effective approaches, perhaps, are those where the civil society sector and private businesses partner with local governments to tackle the climate challenge. And then there are the majority of U.S.
communities where there simply is no discussion about the local implications of climate disruption, let alone the implementation of smart climate readiness plans. I guess we’ll see how that works out for them. Here are a few examples of community-based efforts to assess and address the risks of accelerating global warming and climate disruptions. Asheville, North Carolina is one of 157 U.S. communities where citizens have rallied to create a “transition town” focused primarily on creating community momentum for local food and energy self-reliance. Transition Asheville anticipates a necessary shift away from extractive industries and fossil fuel dependence. A diverse group of citizens seeks to “re-localize and thrive” by building a green economy and “bringing together the head, heart and hands of community.” The City of New York has perhaps done more than any other large U.S. city to recognize and tackle its climate challenges, including surging seas and tropical storms, heat waves, and crumbling transportation. Strong leadership by elected officials has produced plans to invest billions of dollars in community resiliency, including construction of armored levees, restoration of coastal ecosystems, and reduction of greenhouse gas emissions by 80 percent by 2050. In many communities, the non-profit sector, including philanthropic foundations, has initiated partnerships with local governments. The Rockefeller Foundation, for example, rewards communities that participate in its global network of forward-looking urban areas with funding through its 100 Resilient Cities Initiative. STAR Communities is a nonprofit organization that works with local leaders to evaluate, improve and certify sustainable communities. Nearly 100 cities, towns and counties use STAR to measure their progress across social, economic and environmental performance areas. Public-private partnerships also can be found outside urban hubs.
In Missoula, Montana, local governments have teamed up with conservation groups, state and federal agencies, the local university, hospitals and businesses to launch Climate Smart Missoula. Their vision: A vibrant and resilient Missoula community that has a zero carbon footprint and has the crucial community networks to address future climate-related issues in an equitable way. Despite the many examples of engaged communities who actively assess and address risks posed by climate change, it’s safe to say that most local governments and even many states fail altogether to consider the issue. A 2013 analysis by Columbia University, for example, found that State Hazard Mitigation Plans in 18 states fail to address climate change or inaccurately discuss the likely impacts. Those numbers are likely to change, however. In 2015, President Obama issued a directive that federal emergency preparedness funding may only be used in states whose hazard plans factor climate change into their future. No climate planning? No money for you!
Spam and phishing attacks delivered over social networks are a growing problem, says Don DeBolt, director of threat research for IT software firm CA Technologies. For example, a phishing scam operating over Twitter recently stole the iTunes accounts of some users. “People immediately trust these applications because it is how they communicate with friends,” DeBolt explains. “Because people are sending much less text than an e-mail, and URL shorteners are often used, it is harder for people to realize a message may not be real.” DeBolt’s team maintains honeypot profiles of its own, and monitors them manually to look for new spammer tactics. “We have to take great care, though, in curating them as research profiles that don’t impersonate a real person,” he says. The fact that social network honeypots must be part of a community is a fundamental difference from the conventional approach, says Azer Bestavros, a networking specialist at Boston University who has, in the past, worked on analyzing blog spam. A honeypot computer on a network is typically allocated to “dark” address space so that it would never legitimately be contacted by another machine. “Other users could consider our honeypot a real person,” Lee acknowledges. “But we do not have friends or contact other people, and on Twitter our profiles posted random messages so a normal user would not think to contact us.” Some messages and friend requests sent to a social honeypot may be from legitimate users, so information collected from them needs to be treated carefully, says Bestavros. Lee and colleagues are experimenting with varying the output and demographic characteristics of their honeypots to find out what most attracts spammers–for example, varying the dummy user’s age and location, or the frequency of their updates. “Most of the spammers present themselves as college-age females,” says Lee.
Data from MySpace honeypots shows that most claim to be located in California, and so far it seems that college-age males are the preferred target. Lee and colleagues are also interested in trying the approach on the world’s largest social network: Facebook. “It is a more private network, but if we were able to get permission from them it would be interesting to try it there,” he says.
Passive TV viewing related to children's sleeping difficulties

A recent Finnish randomized population-based study shows that TV-viewing, and particularly exposure to adult-targeted programs, such as current affairs programs, TV series and police series and movies, markedly increases the risk of sleeping difficulties in 5-6 year old children. Passive exposure to TV also increases sleeping difficulties. Questionnaires concerning TV viewing, sleep disturbances, and psychiatric symptoms were administered to 321 parents of children aged 5-6 years, representing the typical urban population in three university cities in Finland. The results of the study have been published recently in the Journal of Sleep Research.

1. All the families that participated in the study had at least one TV set. In 21% of families, there was a TV set in the children's room. On average, the TV was switched on for 4.2 h a day. Children actively watched TV for a mean of 1.4 h a day and were passively exposed to TV for 1.4 h a day.
2. Both active TV viewing and passive TV exposure were related to shorter sleep duration and sleeping difficulties, especially sleep-wake transition disorders and overall sleep disturbances.
3. There was also a clear association between the contents of actively viewed TV programs and the sleep problem scores. Watching adult-targeted programs, such as current affairs programs, police series, movies and other series, was related to an increased frequency of various sleeping difficulties.
4. Watching TV alone was related to sleep onset problems.
5. Watching TV at bedtime was also associated with various sleeping problems, especially sleep-wake transition disorders and daytime somnolence.
6. Particularly high passive exposure to TV (>2.1 h/day) and viewing adult-targeted TV programs were strongly related to sleep disturbances.
The association remained highly significant when socio-economic status, family income, family conflicts, the father's work schedule, and the child's psychiatric symptoms were controlled for statistically. The adjusted odds ratios were 2.91 (95% CI 1.03-8.17) and 3.01 (95% CI 1.13-8.05), respectively. There was also an almost significant interaction between passive TV exposure and active viewing of adult programs (AOR 10.14, 95% CI 0.81-127.04, p=0.07). By contrast, active TV viewing time and the viewing of children's programs were not correlated with sleep problems. Most of the previous research has concentrated on active TV viewing while passive TV exposure has only rarely been considered. Passive TV exposure can be particularly harmful to young children because it increases the risk of children coming into contact with programs intended for adults. Quality sleep is essential for children's wellbeing and health. Therefore reducing the quantity of passive TV exposure and limiting children's opportunities to watch adult-targeted programs might help to reduce children's sleeping problems and increase average sleep duration, which could further lead to beneficial changes in children's daytime behavior. Parents should be advised to control the quantity of TV viewing, to monitor the program content viewed, and to limit children's exposure to passive TV. Watching TV at bedtime should be discouraged. Paavonen E Juulia, Pennonen Marjo, Roine Mira, Valkonen Satu and Lahikainen Anja Riitta: TV exposure associated with sleep disturbances in 5-to 6-year-old children. J Sleep Research (2006) 15, 154-161. This study is a part of the research project "Children's Well-being and Media in Cultural and Social Context", led by Professor Anja Riitta Lahikainen, University of Tampere, Finland. Last reviewed: By John M. Grohol, Psy.D. on 21 Feb 2009 Published on PsychCentral.com. All rights reserved.
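Adjusted odds ratios with 95% confidence intervals, like those reported in the study above, are commonly derived from exposure-outcome counts. A minimal sketch of the unadjusted calculation from a 2x2 table, using the standard Wald interval; the counts here are purely hypothetical, chosen only to illustrate the formula (they are not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 80, 10, 110)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI whose lower bound exceeds 1 (as with the study's 2.91, 95% CI 1.03-8.17) indicates a statistically significant positive association; one that straddles 1 (as with the 10.14, 95% CI 0.81-127.04 interaction) does not.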
"Central pattern generators (CPGs) can be defined as neural networks that can endogenously (i.e. without rhythmic sensory or central input) produce rhythmic patterned outputs" or as "neural circuits that generate periodic motor commands for rhythmic movements such as locomotion." CPGs have been shown to produce rhythmic outputs resembling normal "rhythmic motor pattern production" even in isolation from motor and sensory feedback from limbs and other muscle targets. To be classified as a rhythmic generator, a CPG requires: 1. "two or more processes that interact such that each process sequentially increases and decreases, and 2. that, as a result of this interaction, the system repeatedly returns to its starting condition."

Anatomy and Physiology of CPGs

Although anatomical details of CPGs are specifically known in only a few cases, they have been shown to originate from the spinal cords of various vertebrates and to depend on relatively small and autonomous neural networks (rather than the entire nervous system) to generate rhythmic patterns. Neural rhythmicity can arise in two ways: "through interactions among neurons (network-based rhythmicity) or through interactions among currents in individual neurons (endogenous oscillator neurons)." A key to understanding rhythm generation is the concept of a half-center oscillator (HCO). "A half-centre oscillator consists of two neurons that individually have no rhythmogenic ability, but which produce rhythmic outputs when reciprocally coupled." Half-center oscillators can function in a variety of ways.
First, the two neurons may not necessarily fire in antiphase and can fire in any relative phasing, even synchrony, depending on the synaptic release. Second, half-centers can also function in an "escape" mode or a "release" mode. Escape and release refer to the way in which the off-neuron turns on: by escaping from, or being released from, inhibition. Half-center oscillators can also be altered by intrinsic and network properties and can have dramatically different functionality based on variations in synaptic properties. For a more detailed description of the neural circuitry underlying the leech heartbeat rhythm generator and the pyloric network of decapod crustacea, see Hooper's review of central pattern generators. Organisms must adapt their behavior to meet the needs of their internal and external environments. Central pattern generators, as part of the neural circuitry of an organism, can be modulated to adapt to the organism's needs and surroundings. Three roles of modulation have been found for CPG circuits:
- Modulation in CPG as Part of Normal Activity
- Modulation Changes the Functional Configuration of CPGs to Produce Different Motor Outputs
- Modulation Alters CPG Neuron Complement by Switching Neurons Between Networks and Fusing Formerly Separate Networks into Larger Entities

Modulation in CPG as Part of Normal Activity

For example, the Tritonia diomedea swimming CPG can produce reflexive withdrawal in response to weak sensory input, escape swimming in response to strong sensory input, and crawling after escape swimming has ceased. The dorsal swim interneurons (DSIs) of the swim CPG not only cause the rhythmic escape swimming, but also connect to cilia-activating efferent neurons. Experimental evidence confirms that both behaviors are mediated by the DSIs. "Given the extreme differences between these behaviors—rhythmic versus tonic, muscular versus ciliary, and brief versus prolonged—these findings reveal a striking versatility for a small multifunctional network."
"Part of this flexibility is caused by the release of serotonin from the DSIs, which causes cerebral cell 2 (C2) to release more transmitter and strengthen its network synapses. Application of serotonergic antagonists prevents the network from producing the swimming pattern, and hence this intranetwork modulation appears essential for network oscillation."

Modulation Changes the Functional Configuration of CPGs to Produce Different Motor Outputs

Data from experiments by Harris-Warrick in 1991 and Hooper and Marder in 1987 suggest that the functional target of modulation is the entire CPG network. This idea was first observed through experiments with the neuromodulator proctolin in the lobster. The effect of proctolin could not be understood by looking only at the neurons it directly affected. "Instead, neurons that are not directly affected both alter the response of the directly affected neurons and help to transmit the changes in the activity of these neurons throughout the network," allowing the entire network to change in a consistent and synchronized way. Harris-Warrick and colleagues have conducted many studies over the years on the effects of neuromodulators on CPG neural networks. For example, a 1998 study showed the distributed nature of neuromodulation and that neuromodulators can reconfigure a motor network to allow a family of related movements. Specifically, dopamine was shown to affect both individual neurons and synapses between neurons. Dopamine strengthens some synapses and weakens others by acting pre- and post-synaptically throughout the crustacean stomatogastric ganglion. These responses, as well as other effects of dopamine, can be opposite in sign in different locations, showing that the overall network effect is the sum of the individual effects, which can cause the CPG to produce related families of different motor outputs.
Modulation Alters CPG Neuron Complement by Switching Neurons Between Networks and Fusing Formerly Separate Networks into Larger Entities

A single neuronal network, such as a central pattern generator, can be modulated moment-to-moment to produce several different physical actions depending on the needs of the animal. Such networks were first termed "polymorphic networks" by Getting and Dekin in 1985. An example of one such polymorphic central pattern generator is the multifunctional network of the mollusk Tritonia diomedea. As described by Hooper, weak sensory input to the swimming CPG produces reflexive withdrawal, while strong input produces swimming. The dorsal swim interneurons (DSIs) of the circuit release serotonin to convert the network to "swim mode," while application of serotonergic antagonists prevents the swim pattern. Additionally, the same interneuronal network has been found to produce not only "rhythmic, muscle-based escape swimming," but also "nonrhythmic, cilia-mediated crawling." Evidence also suggests that although the CPG controls related but separate functions, neuromodulation of one function can occur without affecting the other. For example, the swim mode can be sensitized by serotonin without affecting the crawl mode. Thus, the CPG circuit can control many separate functions with the appropriate neuromodulation.

Although the theory of central pattern generation calls for basic rhythmicity and patterning to be centrally generated, CPGs can respond to sensory feedback to alter the patterning in behaviorally appropriate ways. Alteration of the pattern is difficult because feedback received during only one phase may require changed movement in the other parts of the patterned cycle to preserve certain coordination relationships. For example, walking with a pebble in the right shoe will alter the entire gait, even though the stimulus is only present while standing on the right foot.
Even during the time when the left foot is down and the sensory feedback is inactive, action is taken to prolong the right leg swing and extend the time on the left foot, leading to limping. This effect could be due to widespread and long-lasting effects of the sensory feedback on the CPG, or due to short-term effects on a few neurons that in turn modulate nearby neurons and spread the feedback through the entire CPG in that way. Some degree of modulation is required to allow one CPG to assume multiple states in response to feedback. Additionally, the effect of the sensory input will vary depending on the phase of the pattern in which it occurs. For example, during walking, resistance to the top of the swinging foot (e.g. by a horizontal stick) causes the foot to be lifted higher to move over the stick. However, the same input to the standing foot cannot cause the foot to lift, or the person will collapse. Thus, depending on the phase, the same sensory input can cause the foot to be lifted higher or held more firmly to the ground. "This change in motor response as a function of motor pattern phase is called reflex reversal, and has been observed in invertebrates (DiCaprio and Clarac, 1981) and vertebrates (Forssberg et al., 1977). How this process occurs is poorly understood, but again two possibilities exist. One is that sensory input is appropriately routed to different CPG neurons as a function of motor pattern phase. The other is that the input reaches the same neurons at all phases, but that, as a consequence of the way in which the network transforms the input, network response varies appropriately as a function of motor pattern phase." A recent study by Gottschall and Nichols examined the hindlimb of a decerebrate cat during walking (a CPG-controlled function) in response to changes in head pitch. This study describes the differences in gait and body position of cats walking uphill, downhill and on level surfaces.
Proprioceptive (Golgi tendon organs and muscle spindles) and exteroceptive (optic, vestibular and cutaneous) receptors work alone or in combination to adjust the CPG to sensory feedback. The study explored the effects of neck proprioceptors (giving information about the relative location of the head and body) and vestibular receptors (giving information about the orientation of the head relative to gravity). Decerebrate cats were made to walk on a level surface with their heads level, tilted up or tilted down. Comparing the decerebrate cats to normal cats showed similar EMG patterns during level walking, and EMG patterns that reflected downhill walking with the head tilted up and uphill walking with the head tilted down. This study demonstrated that neck proprioceptors and vestibular receptors contribute sensory feedback that alters the gait of the animal. This information may be useful for treatment of gait disorders.

Functions of Central Pattern Generators

Central pattern generators can serve many functions in vertebrate animals. CPGs can play roles in movement, breathing, rhythm generation and other oscillatory functions. The sections below will focus on specific examples of locomotion and rhythm generation, two key functions of CPGs. The first modern evidence of the central pattern generator was produced by isolating the locust nervous system and showing that it could produce a rhythmic output in isolation resembling that of the locust in flight, a discovery made by Wilson in 1961. Since that time, evidence has arisen for the presence of central pattern generators in vertebrate animals. This section will address the role of the central pattern generator in locomotion for the lamprey and humans. The lamprey has been used as a model for vertebrate CPGs because, while its nervous system has a vertebrate organization, it shares many experimentally useful characteristics with invertebrates. When removed from the lamprey, the intact spinal cord can survive for days in vitro.
It also has very few neurons and can be easily stimulated to produce a fictive swimming motion indicative of a central pattern generator. As early as 1983, Ayers, Carpenter, Currie and Kinch proposed that there was a basal CPG responsible for most undulating movements in the lamprey, including swimming forward and backward, burrowing in the mud and crawling on a solid surface. The different movements have been found to be altered by neuromodulators, including serotonin in a study by Harris-Warrick and Cohen in 1985 and tachykinin in a study by Perez et al. in 2007. The lamprey model of the CPG for locomotion has been very important to the study of CPGs and is now being used in the creation of artificial CPGs. For example, Ijspeert and Kodjabachian used Ekeberg's model for the lamprey to create artificial CPGs and simulate swimming movements in a lamprey-like substrate, using controllers based on an SGOCE encoding. Essentially, these are the first steps toward the use of CPGs to control locomotion in robots. Central pattern generators also contribute to locomotion in higher animals and humans. In 1994, Calancie et al. claimed to have witnessed the "first well-defined example of a central rhythm generator for stepping in the adult human." The subject was a 37-year-old male who had suffered an injury to the cervical spinal cord 17 years prior. After initial total paralysis below the neck, the subject eventually regained some movement of the arms and fingers and limited movement in the lower limbs, but he had not recovered sufficiently to support his own weight. After 17 years, the subject found that when lying supine and extending his hips, his lower extremities underwent step-like movements for as long as he remained lying down.
"The movements (i) involved alternating flexion and extension of his hips, knees, and ankles; (ii) were smooth and rhythmic; (iii) were forceful enough that the subject soon became uncomfortable due to excessive muscle 'tightness' and an elevated body temperature; and (iv) could not be stopped by voluntary effort." After extensive study of the subject, the experimenters concluded that "these data represent the clearest evidence to date that such a [CPG] network does exist in man." As described in the section on neuromodulation, the human locomotor CPG is very adaptable and can respond to sensory input. It receives input from the brainstem as well as from the environment to keep the network regulated. Newer studies have not only confirmed the presence of the CPG for human locomotion, but also its robustness and adaptability. For example, Choi and Bastian showed that the networks responsible for human walking are adaptable on short and long timescales. They showed adaptation to different gait patterns and different walking contexts. Also, they showed that different motor patterns can adapt independently. Adults could even walk on treadmills going in a different direction for each leg. This study showed that independent networks control forward and backward walking and that the networks controlling each leg can adapt independently and be trained to walk independently. Thus, humans also possess a central pattern generator for locomotion that is capable not only of rhythmic pattern generation but also of remarkable adaptation and usefulness in a wide variety of situations. Central pattern generators can also play a role in rhythm generation for other functions in vertebrate animals. For example, the rat vibrissa system uses an unconventional CPG for whisking movements. "Like other CPGs, the whisking generator can operate without cortical input or sensory feedback.
However, unlike other CPGs, vibrissa motoneurons actively participate in rhythmogenesis by converting tonic serotonergic inputs into the patterned motor output responsible for movement of the vibrissae." Breathing is another non-locomotor function of central pattern generators. For example, larval amphibians accomplish gas exchange largely through rhythmic ventilation of the gills. A study by Broch et al. showed that lung ventilation in the tadpole brainstem may be driven by a pacemaker-like mechanism, whereas the respiratory CPG changes as the bullfrog matures into adulthood. Thus, CPGs serve a broad range of functions in vertebrate animals and are widely adaptable, varying with age, environment and behavior.

Functions in Invertebrates

As described earlier, CPGs can also function in a variety of ways in invertebrate animals. In the mollusc Tritonia, a CPG modulates reflexive withdrawal, escape swimming and crawling. CPGs are also used in flight in locusts and in the respiratory systems of other insects. Central pattern generators play a broad role in all animals and show remarkable variability and adaptability in almost all cases.

- Hooper, Scott L. "Central Pattern Generators." Embryonic ELS (1999). http://www.els.net/elsonline/figpage/I0000206.html. Accessed 27 November 2007.
- Kuo, Arthur D. "The relative roles of feedforward and feedback in the control of rhythmic movements." Motor Control (2002): 6, 129-145. Accessed 27 November 2007.
- Popescu, Ion R. and William N. Frost. "Highly Dissimilar Behaviors Mediated by a Multifunctional Network in the Marine Mollusk Tritonia diomedea." The Journal of Neuroscience (2002): 22(5), 1985–1993.
- Harris-Warrick, R.M., et al. "Distributed effects of dopamine modulation in the crustacean pyloric network."
In: Neuronal Mechanisms for Generating Locomotor Activity, Annals of the New York Academy of Sciences (1998): 860, 155-167.
- Harris-Warrick, R.M. and Eve Marder. "Modulation of Neural Networks for Behavior." Annu. Rev. Neurosci. (1991): 14, 39-57.
- Gottschall, Jinger S. and T. Richard Nichols. "Head pitch affects muscle activity in the decerebrate cat hindlimb during walking." Exp Brain Res (2007): 182, 131–135.
- Harris-Warrick, R.M. and A.H. Cohen. "Serotonin Modulates the Central Pattern Generator for Locomotion in the Isolated Lamprey Spinal Cord." J. Exp. Biol. (1985): 116, 27-46.
- Ijspeert, Auke Jan and Jerome Kodjabachian. "Evolution and development of a central pattern generator for the swimming of a lamprey." Research Paper No 926, Dept. of Artificial Intelligence, University of Edinburgh, 1998.
- Calancie, Blair, et al. "Involuntary stepping after chronic spinal cord injury: Evidence for a central rhythm generator for locomotion in a man." Brain (1994): 117(5), 1143.
- Choi, Julia T. and Amy J. Bastian. "Adaptation reveals independent control networks for human walking." Nature Neuroscience (2007): 10, 1055–1062.
- Cramer, N.P., Ying Li and Asaf Keller. "The whisking rhythm generator: A novel mammalian network for the generation of movement." Journal of Neurophysiology (2007): 97(3), 2148-2158.
- Broch, Lise, et al. "Regulation of the respiratory central pattern generator by chloride-dependent inhibition during development in the bullfrog (Rana catesbeiana)." The Journal of Experimental Biology (2002): 205, 1161–1169.
- How Do Central Pattern Generators Work? (www.bio.brandeis.edu)
- Dimitrijevic, M.R., Gerasimenko, Y., Pinter, M.M. "Evidence for a spinal central pattern generator in humans." Ann N Y Acad Sci.
1998 Nov 16; 860: 360-76.
- Katz, Paul S., David J. Fickbohm, and Christina P. Lynn-Bullock. "Evidence that the Central Pattern Generator for Swimming in Tritonia Arose from a Non-Rhythmic Neuromodulatory Arousal System: Implications for the Evolution of Specialized Behavior." Amer. Zool. 41: 962-975. doi:10.1093/icb/41.4.962
- Carew, T.J. (2000) Behavioral Neurobiology. Sinauer Associates, Inc., Sunderland, MA: 155-163.
This page uses Creative Commons Licensed content from Wikipedia.
Ehlers-Danlos syndrome (EDS) is a genetic disorder that can negatively influence the mouth’s function, resulting in a decreased quality of life. The teeth and the gums, as well as the temporomandibular joint, can be affected by this connective tissue disorder, but many people with EDS do not have any noticeable oral issues as a result of their condition. Furthermore, the systemic problems of EDS may occasionally make it difficult to provide basic dental treatment.

Tooth and Gum Issues in EDS

To date, researchers have not conducted comprehensive surveys of individuals with EDS to establish the impact of the condition on their oral and dental health. A single study, based on a survey of individuals with classical and hypermobile EDS, found a higher prevalence of oral issues in patients with EDS, including discomfort, difficult tooth extractions, gum disease, and spontaneous tooth breakage. Because EDS patients tend to have easily dislocated joints, the jaw may dislocate as well. Many people have delicate, easily damaged skin, which means they are more likely to sustain mouth injuries. Other complications include bleeding or infections as a result of the longer time it takes for wounds to close and heal. Patients suffering from periodontal EDS are more likely to develop severe gum disease, resulting in tooth loss. The majority of oral issues experienced by patients with EDS are likely to be comparable to those experienced by healthy individuals (e.g., tooth decay (caries) and gum disease (gingivitis and periodontitis)) resulting from the impact of dental plaque on the oral tissues. Because there have been few studies including large numbers of individuals with well-characterized EDS, it is difficult to determine the precise frequency of oral and facial abnormalities caused by EDS.
Each form of EDS has its own set of oral and facial characteristics, but in general, the greater the laxity of a patient’s skin and mucosa, the greater the likelihood of orofacial characteristics. According to the available evidence, the hemorrhagic forms of EDS are the most likely to cause gingival (gum) bleeding.

Pain and Laxity of the Mucosal Lining of the Mouth

The jaw (temporomandibular, or TMJ) joint may become painful and dislocate in the classical, hypermobile, and vascular forms of EDS, among others. Because of the joint’s laxity, the lower jaw (mandible) is more mobile and is more likely to dislocate from its fossa in the temporal bone (the base of the skull), causing the jaw to deviate away from the side of displacement and leaving the patient unable to close the mouth. Sometimes the mandible will relocate on its own, and other times patients may devise a way of easily moving the lower jaw back into the fossa themselves. However, it is essential to remember that pain in and around the TMJ is not always a direct reflection of EDS. For example, some people with EDS may experience symptoms of a much more common problem known as temporomandibular joint disorder (TMD), which causes pain in the joint and surrounding muscles and a possible limitation of mouth opening. This disorder is widespread, and it does not appear to result from any anatomical abnormalities in the joints or muscles but instead appears to be associated with some form of psychological distress. According to some studies, TMD is most common in young people and may be more common in women than in men. The symptoms are frequently responsive to analgesics, and they are commonly alleviated when the distress of the individual experiencing them subsides. A well-defined procedure for managing joint laxity in EDS does not exist at the time of writing.
Preventing subluxation or dislocation is primarily accomplished by not opening the mouth wide, whereas treatments for pain have included splints, local ultrasound, low-intensity laser, exercises, and acupuncture. Other treatments have included cognitive-behavioral therapy (CBT), transcutaneous electrical nerve stimulation (TENS), and antidepressant medication. Generally, the gums (gingivae) and periodontal tissues (the tissues that connect the teeth to the bones) are not impacted by EDS in the same way that other tissues are. On the other hand, Type VIII EDS has been linked to an increased risk of gingivitis and periodontitis, resulting in non-painful red bleeding gums, bad breath (halitosis), tooth mobility, and tooth loss at an early age. Periodontal disease has also been linked to both classical and vascular EDS, according to some researchers.

Anomalies of the Teeth

Dental anomalies have been described in EDS, particularly in the classical and hypermobile types. These include high cusps and deep fissures in premolar and molar teeth, short or abnormally shaped roots with stones in the pulp of the crowns, and enamel hypoplasia (under-development) with microscopic evidence of various enamel and dentine defects. Enamel defects, together with reduced enamel calcification, may predispose a patient to easy wear of the crown tissue (attrition) and an increased risk of caries.

Jaw Bone Anomalies

Aside from the possibility of harm to the jaw joint described above, there is no compelling evidence that EDS leads to abnormalities in the jawbones or other structural problems. Multiple odontogenic keratocysts (which can cause local bone destruction of the jaws) have been reported in patients with vascular EDS. Having poor oral health has negative consequences for one’s overall health.
Negative Consequences of Poor Oral Health

Dental decay (caries) is a painful condition that limits one’s ability to eat certain foods and may eventually result in excruciatingly painful abscesses. Gum disease (gingivitis) may cause bad breath and make people feel self-conscious when they socialize, while periodontal disease can cause teeth to drift, change the smile, and interfere with the ability to chew food. In the case of EDS, there is the possibility of an additional negative impact from the physical and psychological effects of the laxity of the jaw joint. There is some evidence that EDS might lower nutritional intake and raise the risk of developing eating disorders for various reasons. As a result, reducing the risk of common oral illness is essential, since such illness may add to the burden of issues associated with EDS and its complications. Everyone needs to practice good oral hygiene to avoid tooth decay and gum disease, since this will help to prevent discomfort and the other symptoms listed above. Furthermore, the need for extensive dental treatment can be expensive both financially and in terms of time (e.g., children miss school, and adults and carers have to take time away from work or other activities). It is therefore essential for all people who have EDS to consume a diet that prevents the formation of caries and to maintain a standard of dental hygiene that will reduce the chance of developing caries and gum disease.

Advice on How to Keep Teeth in Good Condition

Dental decay is caused by plaque bacteria, which produce acids from carbohydrates that eat away at the teeth. As a result, the three most important principles for reducing caries are as follows:
- Clean the teeth to eliminate plaque
- Reduce the consumption of sweets, which contribute to the formation of dental plaque, and
- Use fluoride mouthwashes and toothpaste to protect the surfaces of teeth from the effects of acids.
Avoid Sugary Agents

The consumption of sweet, sticky foods should be avoided at all times. Snacking on sweets between meals should also be avoided, and sweet foods should be consumed only at mealtimes. Candy and foods containing sweeteners other than sugar are less cariogenic than sugar, although they might induce gastrointestinal discomfort in certain persons. Dieting does not have to be tedious: although sugars need not be avoided altogether, sensible eating combined with good dental hygiene will lower the chance of developing caries in most cases. Crisps, nuts (as long as they are not too hard and do not induce TMJ discomfort), and a variety of other flavourful foods provide only modest quantities of sugar and may also promote saliva production, which can help to neutralize the effects of acids.

Brush Teeth Thoroughly

Using fluoride-containing toothpaste and an appropriate toothbrush, teeth should be cleaned at least twice a day to keep them healthy. A variety of tooth-brushing techniques can be used (for example, a gentle up-and-down rolling motion or a figure-of-eight motion), but it is essential to remember that the teeth should never be scrubbed in a horizontal direction, because this increases the risk of damaging the gums and any exposed root surfaces. Brushing should involve a moderate massage of the gum margin, as this will aid in the removal of any plaque that may have become trapped at this location over the day. Because toothbrushes only remove plaque and debris from the top and exposed (smooth) surfaces of teeth, it is necessary to clean the regions between teeth (interdental sites) in addition to the surfaces of the teeth themselves. Many different interdental products, such as floss, interdental brushes, and interdental sticks, are available to help clean between the teeth. Care must be taken when using floss to avoid traumatizing the gums.
In some cases, floss holders can make flossing more convenient, particularly for people who have difficulty reaching the rear teeth. EDS is unlikely to have any substantial consequences for interdental cleaning, other than the need to avoid trauma and to avoid opening the mouth excessively wide. Fluoride in toothpaste and mouthwashes increases the decay resistance of the surface layer of enamel, whereas fluoride in drinking water is incorporated into the enamel as the teeth develop. It is suggested that you use fluoride-containing toothpaste twice daily. Fluoride mouthwashes can also be beneficial; however, they are unlikely to be required if a patient is already using fluoridated toothpaste. Antimicrobial mouthwashes have been shown to lower the risk of gingivitis and periodontitis and to reduce bad breath. Mouthwashes are available in various flavors and strengths, and they should be used regularly. There is no conclusive evidence that using mouthwashes containing alcohol increases the risk of developing oral cancer.

Visit the Dentist Regularly

Dentists are trained to treat common dental diseases. A patient with a complicated condition or probable oral symptoms of EDS, for example, will be referred to an appropriate specialist, who will be able to arrange further investigation or treatment. NHS Direct is a good source of information on the availability of dentists near you. Dentists with a poor understanding of EDS and its implications for oral health and dental care should refer the patient to an appropriate expert for further evaluation and treatment.

Considerations for Different Dental Issues

When teeth are removed, bacteria from the gums can enter the bloodstream. In individuals who have cardiac valve defects, the bacteria may adhere to the valve(s), resulting in inflammation of the valve(s) (endocarditis). It was therefore once recommended that all patients with valvular abnormalities receive antibiotics before tooth extractions to prevent bacterial infection.
However, the National Institute for Clinical Excellence (NICE) has since advised that the risk of endocarditis following tooth extractions is minimal and that antibiotic prophylaxis is not routinely necessary. The decision on whether such a precaution is essential or recommended will most likely be made on a case-by-case basis after the dentist consults the patient’s doctor or cardiologist.

Post-surgical Bleeding and Healing

Patients with hemorrhagic forms of EDS may experience excessive post-extraction bleeding. The dentist will generally place a hemostatic substance in the socket, gently stitch the gum, and perhaps provide a mouthrinse (tranexamic acid) to prevent the clot from dissolving. There is minimal data to suggest that extraction sites heal poorly in EDS patients. Any signs of abnormal healing (such as persistent pain, swelling, or a bad taste) should be reported to a specialist in oral and maxillofacial surgery, who will clean the region and provide local or systemic antibiotics if necessary.

The Efficacy of Local Anesthetics

On occasion, it has been reported that the efficacy of local anesthetics may be diminished in patients with EDS. When this happens, patients will almost certainly be directed to an oral and maxillofacial surgery specialist, who will ensure that the most appropriate method or agent is used to achieve successful anesthesia.

Gum Disease (Gingivitis and Periodontitis)

Periodontal disease is less likely to occur if good dental hygiene is maintained. Individuals suffering from periodontal disease (regardless of their medical condition) should seek treatment from a periodontologist, who will provide professional cleaning of the teeth and gums and, when necessary, surgical intervention to improve the gum status of the affected teeth.
Because certain patients with EDS are more susceptible than others to developing mouth ulcers from the rubbing of a loose denture, dentures must be properly fitted and checked frequently by a dentist. The presence of pulp stones or an unusual root form may make root canal therapy (endodontics) more difficult in the EDS setting. Endodontic problems may be best handled by a qualified professional (an endodontist) in certain situations. Although there are no thorough data on the use of dental implants in individuals with EDS, minimal adverse side effects are expected. Given that implant placement is a surgical operation, the same attention should be paid to antibiotic prophylaxis and post-surgical bleeding as in dental surgery. Orthodontic therapy for people with EDS may need to be adjusted in some cases, since the teeth move more quickly than would be expected in some patients. Patients may need to wear an appliance for several months after their teeth have been appropriately positioned to guarantee that the teeth remain in the correct place. The friction of any orthodontic device may cause mouth ulcers in certain patients with EDS, and this is especially true for children. This can be mitigated by applying protective wax to the brace and, if necessary, applying an occlusive paste to any ulcerated areas. Mouth ulcers – Some individuals with EDS are more prone to developing ulcers in their mouths due to damage from their teeth or their dentures. These can be minimized by ensuring that no rough or sharp teeth or dental restorations are present and that dentures are well fitted and secure. If ulcers develop, a protective occlusive paste can be applied to the region where the damage is most likely to occur. However, a professional should evaluate any mouth ulcer that does not heal within two weeks and that does not appear to have a local cause.
When society leaves new mothers alone and overburdened, Rose Volz-Schmidt helps them cope with the chaotic challenges of motherhood in the first months. She has created a broad network of volunteers and professionals to bridge the gap between families and the welfare system, and in doing so is strengthening young families and changing society's attitude towards motherhood. The New Idea Rose understood a crucial problem: The German cultural notion of the happy, self-sufficient mother is often far removed from women’s experience after childbirth. The majority of young, urban mothers are alone and overburdened, especially in the first three to four months, without family nearby, community support, or entitlement to official help (which is granted only to sick or teenage mothers). Rose is changing this scenario on several levels. By franchising her organization, “wellcome,” across Germany, she fills in the gap left by state welfare institutions and strengthens young mothers and families. She equips carefully selected partner organizations within the welfare system with the how-tos to build and coordinate local networks of volunteer caretakers—most are older women with grown children—to help young families during the critical period after childbirth. Rose taps a huge, unused resource with these older mothers, who welcome the opportunity to help their younger counterparts, and are proud to impart their knowledge. She helps the families and significantly lowers the risk of postpartum depression, stress, divorce, and infant health complications. She also improves existing welfare organizations by helping them offer a much-needed service to families in their region. Rose has built extensive networks between the regional wellcome hubs and partnerships with doctors, midwives, and nurses—who are with the mother near the time of childbirth—to spread the word about her service. 
Rose is ultimately challenging the perception of motherhood in German society, which implicitly assumes mothers are happy after childbirth and society should not hear from them. Rose teaches that childrearing is a collective societal endeavor, and shows how to assume this responsibility in a way that is rewarding for everyone involved. The birth of a child is often happy and momentous, but also a significant life-changing event. Mothers and fathers are often insecure in their new role and need support. A few decades ago—and in some rural areas today—families had access to a network of support during the early, challenging months after childbirth. Extended families and friends supported young mothers by cooking, babysitting, shopping, and generally helping to relieve them from some of their household burdens while they adjusted to their new responsibilities. The increased mobility of young Germans on all economic levels has fundamentally changed this traditional pattern of support. Urban parents are increasingly isolated from their extended families, who may be scattered across the country, and have only a small circle of close friends. German social mores have not adjusted to this reality. Mothers are not expected to reach out to strangers for support during the early months after childbirth. There is a strong cultural expectation that mothers are aglow with maternal bliss and find fulfillment in their unique bond with their child. But this is not the experience of many mothers immediately after childbirth. They feel overwhelmed, regardless of class background. This has resulted in a set of lasting problems for families and society. Up to 40 percent of young mothers today suffer from some level of postpartum depression, which is compounded by shame. Mothers are ashamed that they are not happy and feel selfish or deficient. The cultural notion of "happy mothers" prevents them from seeking help.
Shame and depression also affect mothers' relationships with their children, at a time when the mother's stability and empathy are crucial for healthy child development. They may also affect women's relationships with their partners: studies show that every fifth marriage in Germany fails during the year after childbirth. The German welfare state offers no support for young mothers during the transition time after birth, unless they suffer from health problems or are teens, and there is no effective distribution of information to put young mothers in contact with the range of existing services for them and their families. Only after the first three to four months are there mother-child groups, breast-feeding support groups, and child care organizations. Rose discovered that there is also a vast, untapped resource in older mothers, many of whose children have left home. They have more time and are happily willing to put their expertise to good use, but so far this resource sits idle. Motherhood and childrearing are often perceived as individual tasks. Contrary to the African proverb that it takes a village to raise a child, in modern society the responsibility to raise children lies exclusively with the parents—and more often with a single parent. Rose's first challenge is to reach young mothers who are reluctant to ask for help. In each community she covers through her franchise, wellcome builds a referral network among the professionals who coach mothers before and after childbirth. These professionals, including doctors, birth clinics, nurses, and midwives, distribute information about Rose's organization to mothers after childbirth. They let mothers know that many other "normal" parents—not just delinquents (an immediate prejudice of many parents)—have benefited from wellcome's support, and present this support as a normal part of early motherhood.
When mothers approach the regional coordinator of Rose’s organization, they are assigned a volunteer from their neighborhood to help them cope with the challenges of motherhood. The volunteer’s role depends on the family’s situation. Sometimes she helps supervise the infant or older siblings, does housework, shops, or when asked, works with the mother to solve early parenting problems by sharing her experience. The volunteer puts the mothers in contact with a range of available services in her neighborhood, including local crèches and childcare facilities. Mothers pay a fee for volunteer services, which ranges from €1 to €4 per hour (depending on the family’s income), and helps to cover wellcome’s operating expenses. wellcome’s volunteers are primarily older women with older children. They are drawn from a large pool of interested mothers who contact wellcome after having read about it in the newspaper, heard about it by word of mouth, or are approached through the local wellcome franchise network. Volunteers work with one family at a time, and offer an average of six hours of support per week. The local wellcome coordinator tries to match volunteers with young parents in the same neighborhood, so that the volunteer has an intimate knowledge of the available local resources. Rose’s training sessions for volunteers are succinct and she insists on two qualities in her volunteers. First, they must be reliable; young mothers must know they can count on their volunteers at an appointed time. Second, Rose teaches volunteers to enter families as “angels”—present for a short time to do what is needed, as long as needed, and then disappear. She does not want her volunteers to come in as “coaches” or trainers, or as overbearing “grandmothers” or surrogate family members. She understands that young mothers do not need additional complications or to feel inadequate. Most simply need support and encouragement. 
Instead of investing a lot of time and money in training her volunteers, Rose has found it more effective to draw a "short line" between volunteers and the local wellcome franchise's professional coordinator; this way volunteers can ask for help and support in difficult situations. She also offers volunteer meetings and trainings requested by volunteers. The team coordinators form the heart of each local wellcome chapter. The coordinator is responsible for building the referral network and supervising the volunteers, maintaining a comprehensive database of parenting and child support services in the region, and setting up a referral hotline. Every mother or father who calls and asks for help is referred to the relevant institutions, even if what wellcome offers is not applicable to their specific problem. Rose uses her program as an interface between young parents and the state system to overcome the fragmentation of the support sector. Before providing support services, new coordinators must recruit at least fifteen volunteers and make wellcome known to a required set of important referral network partners. After the program is operating, coordinators organize and facilitate three annual meetings for their volunteers, where they can share experiences and discuss best practices. Coordinators are employed and paid by Rose's carefully selected franchisees, typically welfare or social organizations. For example, in areas with large immigrant populations she seeks organizations that work specifically with immigrants and in their language. There are many advantages to this franchise strategy. First, Rose's program adds value to the franchisees' work (a new way to reach their target group and attract publicity), so they find it worthwhile to employ and pay a coordinator. Second, existing organizations have local legitimacy and local contacts, which allows them to implement Rose's program more quickly and effectively and to reach more families.
Third, it is a cost-efficient way to expand. Partner organizations must agree to Rose's franchise standards and rules. For instance, they must raise between €7,500 and €10,000 to set up a wellcome hub under the direction of a trained wellcome coordinator. This covers the wellcome training and most of the personnel costs for a working period of one year. They must also agree to strict evaluation rules and quality control. In return, the franchisees can incorporate the wellcome program into their homepage and receive media attention. Rose provides continual training and support to set up and manage the program. Since wellcome can train only a limited number of new coordinators per year, Rose is creating an intermediate level of trainers nationwide to whom wellcome can delegate much of the coordinator training. These trainers will enable wellcome to process new applications more quickly and expand faster throughout the country. At present, wellcome has trained seventy coordinators in twelve of the German states, and there are correspondingly seventy local franchisees with 700 volunteers—reaching 1,000 families a year. In 2006 an academic study was conducted to assess the effects of her work. It examined twenty-five mothers who received support from wellcome alongside a control group that received no support. The difference was substantial: in Rose's group, women's self-reported well-being was higher, and rates of postpartum depression were lower. December 2007 marked a key turning point for wellcome. In one week in Germany, five infants were killed or died of neglect, and there was public uproar about parental abuse and neglect. In response, Chancellor Merkel issued a public endorsement of Rose and wellcome's work, which Merkel presented as a way to detect parental problems early and help young parents avoid violence and neglect. After this endorsement, Rose seized the initiative and approached regional health ministries throughout the country for funding.
Many of these ministries had previously given her informal encouragement, but now began giving money. This money is enabling her to fund the trainers who will greatly expand wellcome's capacity. Her current operating budget is €300,000 per year, but she would like to grow it to €500,000 within the next year. In the future, Rose will use the networks she has built to deliver more services to young parents. She is now running a pilot project in Hamburg that targets "problem families"—identified by doctors or the social welfare system—and approaches them in a positive way. Her referral system gives out vouchers for a "birthday fairy": families can call and "order" a fairy (a social worker trained in early child development) who visits the family on the child's first birthday. The fairy arrives with a gift for both the mother and child, and begins to build trust. She offers to help the mother with any child-rearing problems she has and quickly takes stock of the domestic situation to determine whether serious domestic problems need to be addressed. The program is designed to target "at-risk" children, for whom early intervention is especially consequential. Rose would also like to offer more extensive childcare by linking parents who can occasionally attend to each other's children. When Rose has expanded wellcome's support network throughout the country, she will begin to deepen and broaden its range of services. Rose was born to a large family in a village of 600 people in the Black Forest. Her family had lived in the village since 1650, but neither she nor any of her five siblings stayed to make a life there. All moved to cities in different parts of the country, leaving the supportive umbrella of the family and the village neighborhood behind. When Rose's first daughter was born, her husband was busy at work, and her family was 600 kilometers away. Rose felt alone and afraid.
Though she had eagerly anticipated her daughter's birth and was a professional social worker specializing in child care, she found the early months of motherhood more difficult than she had expected. There had been plenty of support before and during childbirth, but now there was none. Rose has worked in family education for many years and has launched a variety of innovative approaches. She founded a support group for fathers—the first of its kind in Germany—to help educate them about how to contribute to early child-rearing. Eventually, she felt the program should be managed by a man, and found a male successor. She also founded a day care network to bring parents together to coordinate a system of mutual caretaking: parents supervised each other's children for certain amounts of time and helped overburdened mothers. Having been through early motherhood herself, she knew there was a huge gap in the welfare system that leaves mothers and families alone during the critical time after childbirth. Rose founded wellcome in 2002. The organization won a social venture competition, earning consulting support from McKinsey. She describes this as a crucial period, when she began to think more ambitiously about systems change.
The Sarine river running through the medieval Swiss town of Fribourg acts as a language border between its inhabitants, with German speakers living on the east bank and French speakers on the west. Fribourg (Freiburg in German) is one of several towns that straddle Switzerland's language divide. It is officially bilingual and as such its river also goes by its German name, the Saane. Switzerland's multilingual heritage sets it apart in Europe, with the four national languages – German, French, Italian and the little-spoken Romansch – contributing about 10% of the country's gross domestic product, according to a 2008 study. English has entered the mix over the last two decades. Its influence has been spread by the numerous international firms headquartered in tax-friendly Swiss municipalities, its increasing use in academia and its general acceptance as an additional language in wider communication. "Over the last 20 years English has made quite a lot of inroads in Switzerland," said Daniel Stotz, a trainer of English teachers in Zurich and a researcher on language and Swiss identity. "In most cases now English is used in wider communication among non-native speakers. Quite a lot of Swiss adults have experienced the fact that English has become a company language. Sometimes it was forced upon them as well. I think some of this interest and perhaps pressure has trickled down to family life. "It is connected a lot to young people's life chances. There is a perception that English is important, that it allows you to get better jobs. It has a highly symbolic value as well," Stotz said. In a ruling last year, the government decided that the most important Swiss laws should be translated into English in response to growing demand for translation of legislation. Strong demand for English lessons in schools has also undermined the priority given to national languages in the curriculum.
Switzerland's 26 cantons have agreed to introduce measures over the next few years whereby English will be taught in all primary schools alongside a second national language. Eight- and nine-year-olds are already learning it as their first foreign language – ahead of another national language – in 10 cantons. Swiss multilingualism has been the subject of a four-year research programme by the National Science Foundation that aims to understand the role of language and help the government to map out "a new equilibrium", according to Walter Haas, president of the steering committee. The programme is currently compiling a final report from 26 research projects, which is due for review by the government at the end of 2009. The findings show English has a place in Swiss culture, although not necessarily a dominant one. In one Bern University study, Swiss people viewed English as the most useful foreign language, although most opted to use one of the other national languages when first trying to communicate with someone from a different part of the country. Another study, by the University of Teacher Education, found that early English teaching later helped German-speaking pupils to learn French, while a third project, by lawyers, proposed making English a semi-official language in order to attract more foreign professionals to the country. Another contributor, University of Geneva economics professor François Grin, calculated that Switzerland's multilingual heritage gave it a competitive advantage worth $42bn – a tenth of GDP. "If society is going to invest money anywhere, investing in foreign languages, which in Switzerland means essentially one other national language and English, the rate of return is simply fantastic. By and large, we find that multilingualism is a very well paying asset," Grin said. Past research by Grin also pinpointed that English was more valued in German-speaking parts of Switzerland.
As German is the majority language spoken by 63% of the population, it was more advantageous for Swiss Germans to know English than French or Italian. It was different in French-speaking regions. The 1997 study established that while English added 18% to salaries in German-speaking regions, it equated to a 10% pay difference in French areas, compared to 14% increases with German or Italian as a second language. Between 1990 and 2000 the use of English increased in the workplace by about 28% and overall use rose in line with other languages, according to census reports. According to Grin, this shows that multilingualism is expanding as a whole. "English is a very frequently used language but it is not replacing national languages. It plays a supplementary and complementary role," he said. One area where English is gaining prominence is within academia. Switzerland backs the 1999 Bologna Declaration, which aims to create a European space for higher education, and the Rectors' Conference of Swiss Universities has in the past acknowledged English as the "language of academia". It supports offering more courses in English as the best way of attracting foreign students. Grin says use of English in academia has grown significantly, but as an advocate for linguistic diversity, he warns that the dominance of any one language in intellectual circles risks "eroding creativity". "I believe we are better off with diversity than without, and that it is important to develop language policies that are conducive to the maintenance of diversity. This means if a hegemonic language becomes too overbearing, you have to keep this in check. "Switzerland defines itself not despite its multilingualism, but as a product of its multilingualism. It's a very deeply rooted cultural value. Without multilingualism, [there is] no Switzerland," he said. It is a view shared by the cross-cantonal educational authority, the Swiss Conference of Cantonal Education Directors. 
"In a multilingual state, the coordination and development of language teaching is particularly important," a spokeswoman said. "Therefore the notion of a 'lingua franca' will not be limited to English, but rather to an ensemble of languages used within a real context in order to achieve a linguistic exchange." She said under Swiss linguistic strategy English had and would continue to have "an important status as an international language". But, she added, it is still only part of a bigger picture in which Switzerland shares goals set by the Council of Europe to prioritise multilingualism by ensuring a range of languages, including English, are taught.
Summary and Analysis After learning the art of scientific rhetoric for four months, the narrator receives an invitation over the phone to go for a ride from Brother Jack. Expecting to go to the Chthonian, the narrator is disappointed when Brother Jack takes him to the El Toro Bar instead. But the narrator is excited to hear Brother Jack tell him that he has been appointed chief spokesman of the Brotherhood's Harlem District. Brother Jack takes the narrator to visit his new office, and introduces him to Brother Tarp, an elderly black man who seems genuinely glad to meet the narrator. The next morning at a Brotherhood meeting, the narrator is introduced to the other members of the Brotherhood as the new spokesman. Meeting Brother Tod Clifton, Harlem's youth director, the narrator senses that he might be a competitor for his new leadership position. Later, realizing that Brother Clifton is not interested in power or politics, he begins to relax and the two young men discuss their strategies for working with the Harlem community. Leaving the Brotherhood meeting, Brother Clifton and the narrator are attacked by a group of black men led by Ras the Exhorter. The narrator sees Ras strike Brother Clifton and raise his knife threateningly, then lower it and walk away. As the narrator and Brother Clifton start to leave, Ras accuses Brother Clifton of being a traitor. Furious at this accusation, Brother Clifton turns on Ras and knocks him out. Brother Clifton and the narrator walk away, determined to ignore Ras and rededicate themselves to the Brotherhood. The events in this chapter create a growing sense of danger and foreboding, prompting the reader to feel that things are out of place and contrary to expectations. To begin with, Brother Jack calls the narrator at midnight (the witching hour) and takes him not to the Chthonian, but to the El Toro (Spanish for "The Bull"), a Harlem bar that caters not to blacks, but to a Spanish-speaking clientele. 
At the El Toro, as the narrator studies the scenes of a bullfight on the wall panels behind the bar, he notices a calendar with a picture of a white girl in a beer ad, indicating the date as April 1 (April Fool's Day). At this point, the narrator is indeed being taken for a ride or, to put it another way, he is being played for a fool and fed a lot of bull. Another example of things being unexpected and out of place is the set of wall panels behind the bar. Where one would expect a mirror, they display bullfight scenes and a gored matador. Instead of seeing his own reflection, the narrator sees the matador's image — foreshadowing his own fate. The scene also raises several issues that the narrator might question, especially after spending four months studying logic and scientific rhetoric. Why doesn't Brother Jack congratulate him on his new position, or announce his new position to the other Brotherhood members? What kind of a spokesman will he be if he is told what he can and cannot say? Why, if he is to speak for the people of Harlem, did Brother Jack move him to an apartment outside his district? Most of all, he might consider the irony of having a white man assign him to be a spokesman for black people. But once again, the narrator fails to ask questions that might help him make sense of this situation. The encounter that the narrator and Brother Clifton have with Ras and his men places their position in a new perspective, for while both men see themselves as leaders of the black community, Ras and his men see them as sellouts and Uncle Toms. Although Ras' ravings seem illogical and even racist, he does raise some significant issues, especially concerning the concept of selling out. In the black community, a sellout is a black person who accepts money or other personal gain by working for the system (the white power structure).
This chapter raises the question: Is the narrator a sellout, or is he simply accepting a job that will enable him to earn a living by using his public speaking skills? A convincing case could probably be made for either side. Although Ras's argument appears to be purely emotional, he makes several valid points concerning the tactics whites use to manipulate blacks. However, by focusing purely on race, his speech loses power. His remark that "all brothers are the same color" doesn't ring true. So far, the narrator has suffered his most bitter betrayals at the hands of his black brothers, such as Lucius Brockway and Dr. Bledsoe. Representing socialism and Black Nationalism, respectively, Brother Jack and Ras embody the contrast between the Brotherhood and Ras's followers. The Brotherhood supposedly advocates nonviolence and focuses on integration and cooperation as the only means by which people — both black and white — will be able to work together for the good of society as a whole, especially the poor and oppressed. In contrast, Ras's followers advocate freedom and equality even if it means fighting for these rights. The Brotherhood focuses on issues of both race and class, whereas Ras's followers emphasize race as the deciding factor. Although Ellison insisted in a later essay that the Brotherhood does not represent Communism, the striking resemblance between communist philosophy and the Brotherhood can't be ignored. Both emphasize the rights of the group over those of the individual. By contrast, Ras's Black Nationalist philosophy, although rooted in racism and separatism, stresses independence, self-reliance, and individual rights. The Brotherhood may also represent the National Association for the Advancement of Colored People (NAACP), because it has been fraught with the same kinds of internal conflicts. Ellison undoubtedly knew that W.E.B.
Du Bois, one of the NAACP's founders, eventually left the group because he felt it no longer fulfilled its mission as an active civil rights organization dedicated to fighting for equality and equal opportunity. Another important development in Chapter 17 concerns the relationship between the narrator and Brother Clifton. Although Tod Clifton is the darker brother, he has distinctly European features. He has also already attained a leadership role within the Brotherhood. Conversely, the narrator, whom Emma describes as "not black enough" to represent the black community, is less steeped in Brotherhood philosophy and even admits that he has some doubts and misgivings about the organization. But like Brother Clifton, he sees the Brotherhood as a supportive organization that will help him hone his leadership skills and achieve his goal of becoming a renowned and respected speaker. On a more practical level, he also sees his work with the Brotherhood as a means of economic survival and an opportunity for a new life, as symbolized by his new clothes, new job, and new apartment, all of which he owes to the Brotherhood. However, because both men are keenly aware that they have had to sacrifice many of their personal and cultural values to work for the Brotherhood, their encounter with Ras — who reminds them of their identity and responsibility to their African ancestors and the black community — is unsettling, especially for Brother Clifton. Another key character introduced in this chapter is Brother Tarp, who gives the narrator a portrait of Frederick Douglass for his office, demonstrating his faith in the narrator, whom he sees as a potential leader of the black community with the makings of another Douglass. A former slave, Douglass (1817-95) went on to become one of the most famous nineteenth-century orators and statesmen.
His act also indicates that he views the narrator not as another Booker T. Washington, whom many blacks felt compromised his values to gain the financial and political support of influential whites, but as another Douglass, a man who freed himself from the mental and physical bonds of slavery to become a renowned and respected spokesman for freedom and equality. The narrator's initiation/indoctrination into the Brotherhood illustrates the process educated blacks (like Dr. Bledsoe) go through to be accepted into the system. Those who resist and refuse to play the game are often forced to the margins of society — such as Jim Trueblood, Mary, and the cart-man — or they are perceived as insane — such as the vet, the narrator's grandfather, and Ras the Exhorter. The narrator is, in fact, becoming Dr. Bledsoe, because the Brotherhood wants to make him the new Booker T. Washington.

sectarianism  narrow-minded, limited, parochial thinking.
Uncle Tom  a term of contempt for a black person whose behavior toward whites is regarded as fawning or servile.
perfidy  betrayal of trust; treachery.
Central venous catheters are routinely used, but with a complication rate exceeding 15%. Other types of venous catheters have therefore been introduced, such as the midline catheter. The purpose of the present study is to assess the efficacy and safety of midline catheters compared to the standard of care, a peripherally inserted central catheter. Patients with an indication for intravenous fluids or medicines for 5 to 28 days will be included in the study. In the United States more than 5 million patients each year receive a central venous catheter. Indications for central venous catheterization include infusion of irritant drugs such as chemotherapy or total parenteral nutrition, poor peripheral venous access, and long-term administration of drugs such as antibiotics. This ubiquitous procedure has many associated complications that result in morbidity, mortality, and increased healthcare costs. The overall complication rate is more than 15%, and major preventive efforts are being made. Catheter-related bloodstream infection (CRBSI) is a serious and feared complication associated with prolonged hospital stays, increased costs, and a risk of mortality. CRBSI is defined as the presence of bacteremia originating from an intravenous catheter. Much effort has gone into reducing CRBSI, including heightened attention to hygiene in placement and care, improved education and training, and the establishment of teams with specialized skills. Adherence to best practice for central line placement has been shown to reduce the risk of CRBSI. Moreover, central venous catheters (CVCs) are associated with deep vein thrombosis (DVT) and pulmonary embolism. Besides interrupting treatment, catheter-related DVT increases morbidity and mortality. Cancer and admission to intensive care are independent risk factors.
Existing data report wide estimates of this adverse outcome, ranging from less than 1% to as high as 38.5%, depending on the population studied, the method of diagnosis, and the use of prophylactic measures. The peripherally inserted central catheter (PICC) is a CVC whose placement and use have become widespread since it was first described in 1975. It is a well-established alternative to CVCs placed via the subclavian or jugular veins, and it is easy to place, safe, and cost-effective compared to other commonly used central lines. A PICC is inserted via a peripheral vein in the upper arm and, like other central lines, terminates in the superior vena cava. Placement and use are associated with few complications. Another peripherally inserted catheter is the midline, which by definition is 7.5 to 20 centimeters (3-8 inches) long and thus not a CVC. The midline catheter was introduced in the 1950s and has since undergone major improvements in material technology and in techniques for achieving vascular access. It is inserted in the same peripheral veins as the PICC, but the tip is advanced no further than the distal axillary vein; it is therefore classified as a peripheral intravenous catheter, with the corresponding advantages and disadvantages. The midline cannot be used for vesicants or irritants such as most chemotherapy, vasoactive agents, or medications with extremely low or high pH values. The midline is suitable for use from 5 days up to 4 weeks for drugs and solutions that can safely be administered through a peripheral venous catheter. Severe complications from placement and use of midlines are rare, but because of earlier problems, primarily related to the midline catheter material, its use has been limited. In a large review from 2006, the incidence of CRBSI among in- and outpatients with a PICC or a midline was estimated at 3.1% (95% confidence interval (CI) 2.6-3.7) and 0.4% (95% CI 0.0-0.9), respectively. It has been demonstrated that CVC use can be decreased through the use of midline catheters.
A retrospective descriptive review from two hospitals in the United States showed that implementing a midline program resulted in a 78% reduction in CVC line-associated bloodstream infections. In a similar Australian retrospective cohort study in a ventilator unit population, a significant decrease in the rate of CVC line-associated bloodstream infections was found after the introduction of midlines. It appears that the introduction and regular use of midlines, when warranted, may reduce the overall incidence of CRBSI and its sequelae in certain hospital environments. In a meta-analysis including 11,476 hospitalized patients with a PICC, DVT was found in 3.44% (95% CI 2.46-4.43). DVT occurrence in relation to midlines is understudied but is reported at a low incidence of 0-2%. The overall incidence of DVT and potentially related secondary complications for both catheters seems low. The risk of minor complications such as pain, leakage, or phlebitis was found in a retrospective comparison study to be 11.5% for midlines and 1.5% for PICCs, respectively (P<0.001). The efficacy of the PICC is well studied, the incidence of side effects is known, and its use is implemented all over the world. However, the efficacy of midlines compared to PICCs has not been evaluated prospectively. The present study aims to examine the efficacy and safety of midline catheters, using standard care with a PICC as the reference. Patients eligible for screening for inclusion are identified among all patients for whom staff from a general ward request a central line. Randomization is 1:1 between the PICC (control group) and the midline catheter (intervention group). After placement, patients will be followed closely until the day of catheter removal. To obtain information on length of hospital stay and mortality, the electronic medical record will be checked until 90 days after catheter removal. The incidence of complications will be registered, and the two catheter groups will be compared.
Risks and benefits of participating in the trial: In patients with poor peripheral venous access or a need for long-term administration of medicine or fluids, a central line is routinely placed. The puncture site on the upper arm and the placement technique are the same for both catheter types. Therefore, complications during placement are expected to be independent of catheter type. PICCs require x-ray confirmation of tip placement, leading to additional costs and exposing the patient to unnecessary radiation. X-ray verification of tip position is not necessary when placing the midline catheter. By participating in this study, patients benefit from closer observation, and patients in the intervention group have the possible benefit of a reduced risk of CRBSI. In the intervention group, the risk of minor complications such as pain during fluid or medicine administration, infiltration, or phlebitis is expected to be higher than in the control group. The discomfort related to these minor complications is expected to resolve without any persistent sequelae. Design: Single-center randomised controlled trial with in- and outpatients from medical and surgical departments. Catheter placement: Informed consent and randomization are performed prior to placement of the catheter. With the patient supine, ultrasound is used to identify the desired vein on the relevant upper extremity. Full sterile technique is used, with the operator wearing a sterile gown, mask, cap, and sterile gloves. The area is then prepared with chlorhexidine, followed by adequate sterile draping. The Seldinger technique is used to insert the catheter. Successful placement in a vein is confirmed by aspirating blood from the catheter. The catheter is then flushed with a minimum of 20 mL of saline. Depending on the randomization, either a PICC is placed and the tip position in the superior vena cava is verified by a chest x-ray, or a midline is placed without the need for x-ray verification.
The patient is then returned to the medical or surgical ward. Recruitment procedure: Subjects are recruited among patients for whom staff from a general ward request a central line. The anesthesiologist or specially trained nurse responsible for inclusion will comply with the national regulations regarding informed consent to participation in a clinical trial. Hence, in addition to oral information, the potential participant will receive written information including both the specific information on the current study and the general information pamphlet on participant rights when entering a clinical trial. All information and inclusion will be handled by physicians or specially trained nurses who possess the professional prerequisites required to be authorized by the sponsor for direct involvement in the project. Information will be given in private, and the participant will be allowed to have a companion present. The participant will be given a brief reflection period before making their decision. As always, participation is voluntary, and subjects can withdraw their consent at any time. Baseline variables: Date of birth, age, gender, ethnicity, height, weight, and medical history. Procedure-related variables: Date of randomization, date of placement, time used for placement, number of skin punctures, type and length of the placed catheter, name of the access vein, accidental arterial puncture, bleeding complications, and tip placement on chest x-ray (control group only, assessed from an anteroposterior chest x-ray). Registrations at the wards: CRBSI, deep vein thrombosis, catheter removal date, and cause.
Phlebitis scale score:
0 No symptoms;
1 Erythema at access site, with or without pain;
2 Pain at access site with erythema or edema;
3 Pain at access site with erythema or edema; streak formation; palpable venous cord;
4 Pain at access site with erythema or edema; streak formation; palpable venous cord > 2.5 cm.
Infiltration scale score:
0 No symptoms;
1 Skin blanched; edema < 2.5 cm in any direction; cool to touch; with or without pain;
2 Skin blanched; edema 2.5 to 15 cm in any direction; cool to touch; with or without pain;
3 Skin blanched, translucent; gross edema > 15 cm in any direction; cool to touch; mild-to-moderate pain; possible numbness;
4 Skin blanched, translucent; skin tight; leaking; skin discoloured; bruised; swollen; gross edema > 15 cm in any direction; deep pitting tissue edema; circulatory impairment; moderate-to-severe pain; infiltration of any amount of blood product, irritant, or vesicant.
Sample size estimation and power calculation: The primary outcome is CRBSI. The power calculation is based on an expected incidence of 5% in the PICC group, with reference to the literature, and an expected incidence of 0% in the midline group, with reference to a follow-up of the first 107 midline catheters inserted in patients at Aalborg University Hospital from the 5th of October 2017 to the 26th of February 2018. With an alpha of 0.05 and a beta of 0.2 (power 0.8), the sample size is 304, with 152 patients in each group. Statistical methods: Descriptive data will be presented in a baseline table according to catheter type. For normally distributed measurements, differences between groups will be compared using Student's t-test. Variables considered not to be normally distributed will be analysed with the Mann-Whitney U-test. The results for primary and secondary endpoints will be presented in a separate table, also according to catheter type.
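As a sketch, the stated figure of 152 patients per arm can be reproduced with the standard normal-approximation sample-size formula for comparing two proportions; the protocol does not name the formula actually used, so the function below is illustrative:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Patients per arm to detect a difference between two proportions
    (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.8
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Expected CRBSI incidence: 5% with PICC, 0% with midline.
n = n_per_group(0.05, 0.0)
print(n, 2 * n)  # 152 per group, 304 in total
```

With the protocol's assumptions (5% vs. 0%, alpha 0.05, power 0.8) this yields 152 per group, matching the stated total of 304.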
The differences between groups will be compared using the Wilcoxon two-sample test/Fisher's exact test or the unpaired t-test. Statistical analyses will be performed using Stata software (version 14; StataCorp, College Station, TX). A two-sided P value of less than 0.05 will be considered statistically significant.
Interventions: Midline catheter, PICC-line catheter. Sponsor: Aalborg University Hospital. Published on BioPortfolio: 2019-10-31.
Illustration: Chaitanya Dinesh Surpur
Sovereign debt can only be reduced through strong growth, inflation, a debt default or, in the case of foreign borrowings, currency devaluation. Apart from growth, all the other strategies involve some transfer of value from savers, either by way of a reduction in the nominal value returned or decreased purchasing power. There are practical constraints in deploying these strategies too. Growth and inflation are currently low in developed economies; devaluation is difficult if every nation pursues the same policy of currency weakening; and a debt default, on the scale required, would destroy a large portion of the world's savings as well as affect the solvency of the financial system, triggering a collapse of economic activity. As a result, policymakers refuse to allow write-downs of trillions of dollars worth of debt. In the absence of any politically acceptable and economically manageable solution, policymakers must rely on extend-and-pretend strategies combined with financial repression. Low rates and quantitative easing (QE) allow borrowings to be maintained to avoid a solvency crisis. Central banks are covertly using negative interest rates to reduce excessive debt levels by transferring wealth from savers to borrowers through the slow confiscation of capital. In the US, near-zero interest rates have reduced the interest cost of the $15 trillion US banking system. The loss of annual interest income for savers is around $450 billion, from roughly $500 billion to just $50 billion a year. Negative interest rates reduce the principal of the debt directly.
Investing and Nothingness
The greatest puzzle is why investors would accept negative interest rates. There are several possible explanations. First, the need for security and safety may dictate investments in government bonds or insured bank deposits. These instruments are backed by a sovereign nation that has the ability to issue currency to make repayments.
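The "slow confiscation" works by compounding: even a small negative rate steadily erodes principal. A minimal sketch, where the -0.5% rate and ten-year horizon are illustrative assumptions rather than figures from the article:

```python
def balance_after(principal: float, annual_rate: float, years: int) -> float:
    """Compound a deposit at a (possibly negative) annual rate."""
    return principal * (1 + annual_rate) ** years

# 100 units held for a decade at -0.5% a year
remaining = balance_after(100.0, -0.005, 10)
print(f"{remaining:.2f}")  # ~95.11: roughly 5% of principal quietly eroded
```

The saver never sees a dramatic write-down, only a balance that shrinks a fraction of a percent each year.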
Second, returns are relative. In Europe, purchasing bonds yielding more than the official rate of the central bank, even if it is negative, is still the best alternative. Third, investors may be attracted by the opportunity for capital gains from price appreciation if they expect yields to become more negative. Fourth, foreign investors may be attracted by possible currency appreciation. Fifth, investors may be driven by real rather than nominal returns. Bonds with low or negative nominal returns may preserve or increase purchasing power where expected deflation is greater than the negative yield, providing positive real yields. In Japan, deflationary pressures support investment in zero- or low-yielding cash and government bonds. Sixth, investment mandates force fund managers to purchase negative-yielding bonds, irrespective of the fact that doing so locks in a loss. Seventh, banks and insurance companies are forced to purchase negative-yielding securities. Liquidity regulations require these entities to hold high-quality securities. Banks have cash flow timing mismatches, or gaps between deposits and loans, which must be invested, usually in short-dated government bonds. Eighth, central banks with restricted investment choices are also buyers of negative-yielding securities. For example, the European Central Bank's QE allows it to purchase bonds with negative yields provided it can fund the purchases at a lower official deposit rate, giving it a positive carry.
However, large and persistent negative interest rates would meet significant resistance, triggering a wide variety of behaviours designed to avoid losses.
Negative Adaptations
Motivated by the desire to avoid an effective tax on savings in the form of negative interest rates, investors may resort to various strategies to preserve wealth. First, they can physically withdraw cash and hold it.
In Japan in the 1990s, low interest rates and concerns about bank failures drove significant withdrawals of cash and a rapid growth in the sales of safety lockers. Though theoretically feasible, this is unlikely to be a realistic option for businesses, governments and wealthy individuals. The modest size of the largest denominations of notes ($100 or €500) is one constraint, besides security, transport and insurance concerns. Second, investors may avoid negative rates by resorting to a variety of near-cash instruments. One option would be bank cheques, which are transferable. Investors could withdraw their savings, or creditors could obtain payment, by bank cheques that would not be banked until needed or could be negotiated to pay for goods and services. Third, investors could hold savings in foreign currencies, only converting them into a negative-yielding currency when needed. Fourth, real assets such as land, property and commodities, especially precious metals and collectibles, would be favoured as a store of value. Businesses may over-invest in inventories of production inputs which can be used later. Fifth, alternative payment behaviours offer a means of avoiding negative yields. There would be an inherent incentive to make payments quickly and defer receipt of funds due. This could be extended to pre-payments, where parties could pay for future obligations in advance. Pre-payment of taxes, suppliers or employees would be encouraged. Reversing normal practice, holders of credit cards could pre-pay, running down the credit balance as required over time. Pre-paid instruments such as gift vouchers, transport passes or mobile phone cards can act as stores of value and negotiable instruments. These strategies avoid the effect of negative yields, but entail increased credit or performance risk. These innovations are socially and economically destructive. Funds become tied up in unproductive assets.
Savings do not circulate to provide essential financing of social and industrial investment, perversely reducing growth. Capital allocation is distorted by the desire to avoid negative rates. New behaviours create new systemic risks. Payment systems and products, designed for positive interest rates, will alter the flow of funds and exposures within the economy when used in an unintended manner. The shift out of banking deposits affects the funding of banks. Ironically, this is inconsistent with bank regulations which favour retail deposit financing of financial institutions. The reduction and instability of funding as liabilities shift to certified cheques or pre-payments may reduce the ability of the financial system to extend credit, further hampering economic activity. In effect, the disruption from negative interest rates may damage the very arrangements they are designed to preserve.
Positive Action, Negative Reaction
Effective negative rates would require the abolition of cash itself. To date, the case for banning cash has been couched in terms of deterring criminal acts like terrorism, eliminating tax avoidance, enhancing efficiency by faster funds flows, reducing costs or even improving hygiene by preventing contact with soiled notes. In September 2015, Andrew Haldane, chief economist at the Bank of England, argued that the presence of cash constrained central banks from setting negative rates to stimulate a depressed economy. In a future economic crisis, current low rates would restrict the effectiveness of monetary policy. Enhancing the ability to use negative rates would provide central banks with additional flexibility and tools to deal with a slowdown. It would be an imaginative mechanism for levying negative rates to confiscate savings. Abolishing cash requires radical change. Despite increasing reliance on electronic payments, cash is still extensively used, particularly for small-value transactions and especially in emerging markets.
In effect, currency remains an important means of payment for legitimate, legal transactions. Elimination of currency has implications for social and financial exclusion. The cost of converting these users to digital payments is non-trivial. Central banks would lose financially. There would be a fall in seigniorage revenue, which is the difference between the minimal cost of creating currency and the investment return on government bonds. The amounts lost are significant. It would reduce the loss-absorption capacity of central banks and undermine a source of revenue, affecting public finances. An exclusively digital or electronic payment system increases security and operational risks significantly. In his speech, Haldane accepted that public support for banishing cash was uncertain. Any such action is social and political. Citizens are likely to resist the loss of privacy. Where the elimination of cash is linked to negative rates, it would be seen as a tax on savers and state confiscation of savings. The intrusion of the state and authorities on this scale would become an explosive political issue. Negative rates point to the fact that the global economic system cannot generate sufficient income to service, let alone repay, current debt levels.
(This is the concluding part of a two-part series)
Satyajit Das is a former banker. His latest book is Age of Stagnation. He is also the author of Traders, Guns and Money and Extreme Money.
A number of educational conferences on fire safety are available for all ages, from children to adults. Conferences in day care centres give children the chance to learn safe and preventive behaviours. The "Create Your Plan" conference empowers primary school pupils to become fire safety ambassadors. Another conference teaches participants how to safely use a portable extinguisher at the start of a fire. A further conference is for groups who want to learn about safe behaviours before and during a fire. A presentation for seniors covers preventive behaviours and how to react safely during a fire. Finally, a three-hour session trains participants to use a portable extinguisher at the start of a fire.
By: José Ignacio Hernández
To begin with, the "illegal migrant" label cannot justify forced transportation. And, in any case, that label does not adequately characterize the situation of the Venezuelan migrants. As the International Organization for Migration has concluded, "illegal migration" is a dehumanizing term that stigmatizes human mobility and could pave the way for degrading treatment. It is therefore preferable to speak of "irregular migration" to describe human mobility conducted outside the regular migration channels of the host country. U.S. law uses a similar expression, undocumented migrants, to describe people who have entered the U.S. without passing through the regular migration controls. From that perspective, the Venezuelans entering the U.S. through informal channels (like crossing the Rio Grande) are undocumented migrants, which does not prejudge their legal status. The legal status of undocumented migrants should be determined according to the nature of the flows from Venezuela. In that sense, the Inter-American Commission on Human Rights has concluded that Venezuelan human flows should be treated as a humanitarian crisis due to the complex humanitarian emergency in Venezuela. Consequently, the host states, including the U.S., have the duty to provide humanitarian assistance to the Venezuelan people. From that perspective, Venezuelan migrants are undocumented precisely because they are part of a humanitarian crisis that has forced them to leave Venezuela, facing perils such as the Darién Gap. A possible legal path is to consider the Venezuelan people as refugees. But the refugee concept in the U.S. is very narrow. It covers people outside the U.S. who are "unable or unwilling to return to" their home country, in this case, Venezuela.
The impossibility of return must be based on "persecution or a well-founded fear of persecution on account of race, religion, nationality, membership in a particular social group, or political opinion". This restrictive concept does not match the expanded refugee definition adopted in Inter-American law, which includes people who cannot return to their home country because of severe political and economic crises. That is the situation of the Venezuelan people. On January 19, 2021, the U.S. Government adopted deferred enforced departure (DED) measures for certain Venezuelans, considering that the Venezuelan migration was caused by "the worst humanitarian crisis in the Western Hemisphere in recent memory". A few months later, the Government granted temporary protected status (TPS) because "Venezuela is currently facing a severe humanitarian emergency". If Venezuelan migrants are undocumented, it is because they are escaping from a humanitarian emergency, as the U.S. Government has recognized. Consequently, they should be treated according to humanitarian standards, following the guidelines of the Inter-American Commission on Human Rights, which apply to the U.S. The forced relocation of Venezuelans is not humanitarian treatment. This relocation was implemented with deceit and with political use of the Venezuelan people. It is worth recalling that even undocumented migrants are human beings vested with inalienable rights, including human dignity. More particularly, U.S. law prohibits "cruel, inhuman, or degrading treatment or punishment of persons under custody or control of the United States Government". That prohibition, of course, applies to undocumented migrants. Humanitarian law encourages the resettlement of refugees and migrants as a policy aimed at reinforcing their human dignity. But forcibly transporting Venezuelans to Martha's Vineyard or to the Vice President's home in Washington, D.C., is far from humanitarian resettlement.
It is degrading because Venezuelans are used as instruments for whatever political purpose lies behind the forced transportation policies. But we need to consider the other side of the problem. If there are undocumented migrants, it is because there is a failure in the U.S. migration system, which cannot provide regular pathways of entry. As the Los Angeles Declaration recently reaffirmed, U.S. migration policy must provide regular avenues of access. At the same time, it must adopt a holistic policy to tackle the root causes of the Venezuelan migration crisis, that is, the criminal and predatory policies of Nicolás Maduro. Weakening Maduro's capacity to conduct his predatory policies is, therefore, a way to address those root causes. What to do, then? The U.S. needs to design and implement a policy based on the humanitarian nature of the Venezuelan crisis. That policy could facilitate economic integration, solving some of the problems the U.S. economy faces due to a lack of workforce. The policy towards the Venezuelan crisis must be regional, though. This is not only a U.S. problem but a problem for the whole region. It is necessary to reinforce regional cooperation to provide humanitarian assistance and tackle the root causes of the Venezuelan humanitarian emergency. Forcibly moving migrants from Texas and Florida to Washington, D.C., and Massachusetts under the label of illegal migrants will not solve the problem. Quite the contrary, it will aggravate it. The Los Angeles Declaration contains the blueprint of the holistic policy the U.S. must implement. It's time for action. José Ignacio Hernández is a Venezuelan lawyer. He is a fellow at the Growth Lab, Harvard Kennedy School.
(h/t David) Sunny skies sound like a positive for energy production, but this week’s heat wave in California isn’t a boon for solar power. That’s because solar panels actually become less efficient as the mercury rises. CivicSolar, a solar-power systems distributor with offices in Oakland, Boston and Austin, Texas, says high temperatures can decrease a photovoltaic cell’s output by between 10 and 25 percent. The reason is illuminating: Photovoltaic cells work when energy-filled photons from the sun activate electrons on the solar panels. The electrons go from a resting state to an excited state, and the cells capture the resulting energy. At high temperatures, the resting state of the electrons goes up. As a result, the difference between the resting state and excited state is smaller, producing less power. The effect is more pronounced for homeowners who have installed rooftop solar arrays since those rarely have built-in cooling. “If you take a glass solar shingle and lay it on the roof, there’s no air going behind it, so it might get a lot hotter — it might get to 140 or 160 degrees Fahrenheit,” said Stuart Fox, CivicSolar’s vice president of technical sales. A study in the United Kingdom found that once a panel exceeds 107 degrees, its output drops by 1.1 percent for every 1.8-degree rise in temperature. Read rest at SF Chronicle
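The UK figures quoted above imply a simple linear derating rule. A rough sketch of that rule, where the function name and the linear extrapolation above the threshold are assumptions (real panels publish their own temperature coefficients on their datasheets):

```python
def solar_output_factor(panel_temp_f: float,
                        threshold_f: float = 107.0,
                        drop_per_step: float = 0.011,
                        step_f: float = 1.8) -> float:
    """Fraction of rated output at a given panel temperature, using the
    quoted rule: above 107 F, output falls 1.1% per 1.8-degree F rise."""
    if panel_temp_f <= threshold_f:
        return 1.0
    return max(0.0, 1.0 - drop_per_step * (panel_temp_f - threshold_f) / step_f)

# A rooftop shingle at 125 F sits 18 F over the threshold, i.e. 10 steps of 1.8 F
print(round(solar_output_factor(125.0), 3))  # ~0.89, roughly an 11% output loss
```

By this rule, a poorly ventilated shingle reaching the 140-160 F range mentioned above would lose on the order of 20-32% of its rated output, consistent with the 10-25% range CivicSolar cites for more typical conditions.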