What's worse than an upset stomach? You've probably had an upset stomach. Most likely it was due to something you ate. But imagine bleeding from your stomach. That's a little different from your stomach just being upset. Stomach ulcers can be very serious.

Diseases of the Digestive System
Many diseases can affect the digestive system. Three of the most common diseases that affect the digestive system are food allergies, ulcers, and heartburn.
- Food allergies occur when the immune system reacts to substances in food as though they were harmful "foreign invaders." Foods that are most likely to cause allergies are pictured in Figure below. These foods are the most common causes of food allergies. Symptoms of food allergies often include vomiting and diarrhea.
- Ulcers are sores in the lining of the stomach or duodenum that are usually caused by bacterial infections. They may also be caused by the acidic environment of the stomach; stomach acids may damage the lining of the stomach. Symptoms typically include abdominal pain and bleeding.
- Heartburn is a painful burning sensation in the chest caused by stomach acid backing up into the esophagus. The stomach acid may eventually cause serious damage to the esophagus unless the problem is corrected.

- Digestive system diseases include food allergies, ulcers, and heartburn.

Use this resource to answer the questions that follow.
- National Digestive Diseases Information Clearinghouse at http://digestive.niddk.nih.gov/ddiseases/a-z.aspx
Describe the following digestive diseases:
- Crohn's Disease

1. Describe two diseases of the digestive system.
Source: http://www.ck12.org/biology/Digestive-System-Diseases/lesson/Digestive-System-Diseases/r35/
The Threemile Gulch Prehistoric Archaeological District in Colorado is historically important due to the minimally disturbed and distinctive record of prehistoric human settlement found here. This area was repeatedly reoccupied from the Late Paleoindian period (a term for the first peoples who entered and inhabited North and South America during the final glacial episodes of the Ice Age) through the Late Prehistoric period, also called the Pre-Contact period. Special techniques were used to make petrified wood useful for stone tool manufacture. Numerous quarry sites of the archaic peoples, for example, appear as broken-up petrified wood logs or dug-out pits surrounded by hundreds of pieces of petrified wood debris. The most distinctive site types within the district are the petrified wood quarries.

Archeologists have long used the distribution of distinctive raw materials to trace the movements of human populations in prehistory. Here an association was made between the lithic raw material and the ethnic identity of the mountain-dwelling hunters and gatherers who worked the quarry sites in seasonal cycles, and those of another location, suggesting that this group may have moved into the area from the Palmer Divide area of Colorado.

To see more photographs of this property, go to our photostream on Flickr.
Read the full file.
See our Weekly List (with previous highlights).
Source: http://www.nps.gov/nr/feature/weekly_features/11_09_23_Threemile_Gulch.htm
What would you think of someone you noticed cheating during an exam? My guess is you'd say he was a dishonest person. He would be more willing to lie to get what he wants or steal something if he couldn't get caught. After all, dishonesty is a part of personality, and honest people don't cheat. But what if you were wrong? What if honest people do cheat, depending on the situation? What if honesty, laziness, patience and extroversion were more about context and less about character?

Why Character Doesn't Matter as Much as You Think

Psychologists even have a name for it: the fundamental attribution error. It's the tendency to assume a person's actions are the result of the person's character. Seeing someone cheat and assuming he is, in general, a cheater, is the perfect example. Malcolm Gladwell shares research showing why this tendency is often wrong:

"In the late 1920s, in a famous study, the psychologist Theodore Newcomb analyzed extroversion among adolescent boys at a summer camp. He found that how talkative a boy was in one setting—say, at lunch—was highly predictive of how talkative that boy would be in the same setting in the future. … But his behavior told you almost nothing about how he would behave in a different setting: from how someone behaved at lunch, you couldn't predict how he would behave during, say, afternoon playtime." [emphasis mine]

The experiment has been repeated beyond summer camps. Gladwell points to a study at Carleton College which showed that how neat and orderly a student was in his assignments told you almost nothing about how well-ordered his room or personal appearance was. This is a difficult realization to make. We're hardwired to assume that our personalities are fixed, pervasive aspects of our lives. We think of ourselves as disciplined, extroverted, scrupulous or stoic—it's frightening to discover these parts of our identity may matter less than we thought.
The Upside of Attribution Error

The positive implication of this bias is that while our positive traits may be less solid than we imagine, our negative ones are less stubbornly entrenched as well. Take shyness, for example. Do you consider yourself an introvert or extrovert? Most people confidently declare they are one or the other, as if it were an immutable part of their hardwiring. However, if the previous psychological study has any bearing, introversion and extroversion aren't as solid as we believe. They're aggregate descriptions of how you behave in a wide variety of different contexts. They indicate the average; they don't constrain the range.

If you see past attribution error, then, it's easy to believe that someone could be both extremely extroverted and extremely introverted, depending on the context. I've definitely been described as both in different situations. If character matters less than we believe, why let labels define you? Yes, they may describe your behavior as an aggregate, but they aren't your destiny for every situation.

Tweaking the Specifics, Not the Big Picture

A big mistake of a lot of self-help, in my opinion, is that it puts too much focus on the big picture of personality change. Authors declare you need to change your life by first becoming a better person. The problem is that becoming a new person, at the core, is enormously difficult. Going from lazy to productive, shy to charismatic, or pessimistic to optimistic takes years of work and often fails. We become the story we tell ourselves, and that narrative is deeply entrenched.

However, if our behavior is more situation than disposition, it may be possible to change how we behave in one area of our life at a time. Instead of going from lazy to productive in one identity-shifting epiphany, you can become productive in one routine or task at a time. This is the reason I've been obsessed with habit changes over the last seven years.
Because changing a habit allows you to incrementally change who you are, how you live and the quality of your life. It doesn't require rare spontaneous transformations, just step-by-step progress.

The reason honest people cheat is that the adjective isn't perfect. Each of us is honest and dishonest, lazy and disciplined, clumsy and adept, insecure and confident. The more we accept plurality in our personality, the less we let those labels bind us.

Image thanks to comedy_nose
Source: https://www.scotthyoung.com/blog/2011/04/26/why-honest-people-cheat/
What Makes Florida Special

Florida bass fishing can be very different from bass fishing found in other regions of the country. Florida's lakes are typically natural, shallow sinkholes formed thousands of years ago in the limestone bedrock found throughout the state. And these waters often have few natural offshore bottom structures. Florida largemouth bass lakes and rivers also frequently have limited visibility of a few feet or less. Different types of fishing lures and techniques are often required depending upon conditions and where an angler spends a day on the water in Florida. Bass will often use any cover they can find to lay in wait for prey, including small clumps of grass, boat docks, submerged tree stumps, or culvert pipes. Find the cover and you can find the bass. Water clarity varies greatly in Florida bass waters, so different lures and techniques are required depending upon conditions.

1. Plastic Worms, Craws and Lizards

It seems like more bass are caught in Florida on soft plastic baits than any other lure type. What makes these lures so effective is their versatility. They can be fished weightless, Texas style with a bullet weight, Carolina style behind a sinker, or flipped in heavy cover like a jig. You can swim them, bump them on the bottom or drop them next to inactive fish. They mimic actual bass forage and will fool bass of all sizes. The first plastic worms were imitations of large nightcrawlers. Later, ribbon tail worms were introduced and quickly gained popularity. Paddle tail worms add vibration and flash. Multi-colored and scent-infused worms further enhanced their effectiveness. Today's plastic baits can be amazingly natural or look nothing like anything found in nature. The most popular soft plastic bass bait colors in Florida are black and dark purple. Adding colored flakes to a plastic bait can enhance its fish-catching appeal. Soft plastic bass lures come in a wide variety of sizes, shapes and colors.
The most common worm and lizard size in Florida is 6 inches. Plastic lizards mimic small aquatic amphibians and are especially effective when bass are on the beds. If you only have one bass lure to fish in Florida, make it a soft plastic bait.

2. Flukes

Flukes are small, minnow-shaped soft plastic swim baits. They are available in several sizes, from the tiny 3" fluke to the 7" magnum super fluke. Flukes are deadly bass baits, especially in the spring when bass are shallow chasing fry. Most anglers rig them weedless with the point of a wide gap hook laying on top of the bait. Being somewhat heavier than worms, they are easy to cast with little or no weight. Like other plastic baits, they come in a wide assortment of colors, with watermelon being very popular. Flukes put out little vibration, so they work best in clearer water thrown on a light line.

3. Spinnerbaits

Some lures catch larger Florida bass than others. The spinnerbait is one of those lures. Spinnerbaits mimic the Florida bass' most favorite foods, golden shiners and large shad. They come in all sizes, from 1/8 ounce finesse baits to large 3/4 ounce thumpers. The average Florida spinnerbait is the 3/8 ounce size. It pays to buy quality in these baits. High quality blades give more flash and high quality hardware gives off more vibration. You will need two basic spinnerbait colors in Florida: chartreuse with gold blades and white with silver blades. The key to catching Florida bass on a spinnerbait is fishing them close to cover. A few inches can make a big difference. Spinnerbaits work exceptionally well in and around pads. Pads are a major fish-holding structure in Florida. Cast back in or beyond pads and bring your lure under the pads as close as possible. Fish slow and deliberate, targeting isolated cover from a number of directions. Larger Florida bass often respond better to a slow moving spinnerbait than one bulging the water on the surface. Most spinning reels are too fast for spinnerbait fishing.
A 5:1 baitcasting reel is perfect for throwing these baits. Learn more about spinning reels and baitcasting reels by checking out the buying guides from Baitstick.

4. Lipless, Rattling Crankbaits

Lipless, rattling crankbaits have been around since the late 1950s and are still among the most effective bass baits for fishing in Florida. The baits come in many different sizes and colors. Most Florida bass are caught on the 1/2 ounce size, with chrome with a green or blue back, or gold with a black back, being popular colors. These baits are what are generally called "search baits." They cover water quickly and are excellent bass locators. Lipless, rattling crankbait bites are reaction strikes. It is best to retrieve these lures fast with a 7:1 or faster reel. There is something about fast-moving, erratic lipless, rattling crankbaits that causes bass to hunt them down. Even in the dead of winter, few bass can pass up one of these lures fished in the strike zone. It takes stamina to fish these lures all day, but the results can be worth the effort.

5. Top Water Lures

Most bass fishermen find top water fishing to be the most exciting of all bass fishing techniques. Nothing equals the fun of watching a large bass inhale a top water lure. The key to using this bait is to throw it where others won't. Accurate casting is a must, as you should cast it far back into cover. Fish it with a short jerk-and-pause routine. Pay attention to the surface behind the lure, as bass will often track this lure for some time before hitting it. Strikes range from vicious explosions to subtle or silent strikes. In cold front conditions, top water lures with rear spinners can also be effective. Fished on a light mono line, this lure can often help catch fish when nothing else seems to work. Do not use heavy line with this lure or you will diminish its effectiveness. Be prepared to catch a lot of smaller bass. If you need to get a bite in tough conditions, this is the bait to use.
Color is not as important with this lure as with others, though a black back is often a popular choice.

6. Swim Jigs

Florida is famous for its unique fisheries. Most of the top Florida lakes are quite shallow compared to what anglers may find elsewhere in the country. Lake Okeechobee is one of the most popular lakes for recreational anglers in Florida and a frequent stop on pro fishing tours. Though the lake's surface area is nearly 730 square miles, its maximum depth is only twelve feet. While it does not take much to go deep for bass in Florida, when an angler needs to get well below the surface it helps to have the right gear. One of the most effective lures for reaching fish at deeper levels of lakes and rivers in Florida is the swim jig. Pro anglers use them and so do weekend anglers of nearly all levels, and all for good reason. The weighted head of the swim jig helps the lure fall quickly to the bottom and enables anglers to probe for bass lying low in the water. The weed guard on a swim jig also helps anglers get into the grasses and pad stems that are common throughout Florida and to get bait into weedy areas where a spinnerbait would not be able to go. The skirts of the swim jig help attract fish on their own but can be paired with a plastic worm, craw, or chunk to make the swim jig even more appetizing for hungry bass.

7. Floating Minnows

Another great surface lure for Florida bass fishing is the floating minnow. Fished on a light spinning outfit, it's hard to match its ability to produce great numbers of bass. Floating minnow lures are generally fished on the surface with a twitch-and-pause cadence. They can also be fished under the surface as a jerk bait. These lures are great for beginners or anglers looking to get back into fishing. It's nearly impossible to throw a 4" minnow for any extended period of time in Florida without catching a bass. Gold and silver are the popular colors for this lure in Florida.
Avoid using snaps, leaders or swivels with this lure. Tie the line directly to the lure using a loop knot.

8. Plastic Frogs and Toads

Big bass love eating frogs. Plastic frog fishing in Florida has always been effective. And plastic frogs and toads can be fished in places where no other lure can be worked. Rigged almost totally weedless, they can be fished in the heaviest cover. Because of this, plastic frogs are often best fished on heavy braided line. They can be tricky to use, as bass often miss the hooks when the angler sets the hook too quickly. It's best to wait until the bass has the lure fully in its mouth. This can be easier said than done for the new and even the experienced angler from time to time. Be observant when fishing these lures, as a bass will often track behind them for some distance before the strike. If this is observed, drop the bait as soon as open water is reached and the fish will often take the lure. In Florida, pads are the home of larger bass looking for a meal. Plastic toads are often fished as buzz baits, with kicking legs replacing spinners.

9. Diving Crank Baits

Typically made of wood or plastic with a front-mounted diving bill, diving crank baits are meant to mimic injured shiners, shad and bait fish. While it is rare to fish for bass in Florida deeper than 15 feet, some of these lures can dive to as much as 30 feet when trolled or on a long cast. Smaller lures will work in shallow water, with many anglers preferring square bills for fishing around downed trees and wood. Tuning these lures so they run straight is critical. This is accomplished by bending the line tie slightly left or right to obtain the desired track. Casting a deep diving crankbait all day is hard work due to the resistance these lures create when retrieved. Using thin diameter line and slowing down the retrieve will make the lure run deeper. These lures can be used as open water search baits or to target specific bottom structure.
The most popular crankbait colors in Florida are green and blue-black shad patterns. Chartreuse baits, like fire tiger, work well in stained water.

10. Buzzbaits

Buzzbaits are some of the first bass lures ever developed. Worked fast on top of the surface, they cover a lot of water quickly and draw aggressive strikes. Buzzbaits are still some of the best bass search baits available. Like plastic frogs, they should be thrown with casting tackle and heavy line. The best buzzbait conditions are on top of surface weeds with small open pockets in the grass. Hungry bass stake out in the holes and will attack a buzzbait as it runs past. High speed reels are required to keep these baits on the surface. Strikes are aggressive, with the hook-up ratio often being somewhat better than with plastic frogs. Buzzbaits can attract larger fish in Florida.

All Fishing Is Local

Check in with your favorite independent bait and tackle shop or fishing store about what lures work best for where you want to fish next. Or try a Baitstick Box, designed to conveniently bring the needed bait and tackle together for you and to offer it at an affordable price. Fish many of Florida's top fishing destinations right out of the box. Seasoned anglers can resupply with known essentials and new anglers can fish regional rivers and lakes right out of the box. Baitstick partners with expert area suppliers and experienced anglers with local knowledge to bring together bait and tackle for Florida fishing. Baitstick Boxes are filled with handpicked bait and tackle with Florida fishing in mind. No subscription required! Try a Baitstick Box today.
Source: https://baitstick.com/10-best-florida-lures/
The United Nations World Tourism Organisation (UNWTO) General Assembly ended yesterday. Zambia and Zimbabwe were co-hosting the five-day event in the border towns of Livingstone (in Zambia) and Victoria Falls (in Zimbabwe) respectively. Hundreds of delegates descended on the two towns, adding to the thousands of tourists that visit each year.

The world-famous Victoria Falls are the main attraction of these two towns and this part of Southern Africa. The falls were named after British Queen Victoria by the Scottish medical missionary David Livingstone. Coincidentally, this year marks the 200th anniversary of his birth. He is believed to have been the first European to have seen the falls. The falls have another name – Mosi oa Tunya, which means "the smoke that thunders" – but are known chiefly by the name Victoria.

Just out of interest, why do you think both countries have kept use of the name Victoria over Mosi oa Tunya for their most famous tourist attraction? Could it be because Victoria is perceived to be a much easier name to pronounce for tourists and for marketing purposes? Is it more of a money spinner for that reason? Should ease of pronunciation matter at all!? Maybe policymakers on either side of the Zambezi just don't consider a change of name one of their priorities? And maybe they are right. I'd be interested in reading your thoughts. What do you think?
Source: https://masukuonmymind.com/2013/08/29/victoria-vs-mosi-oa-tunya-whats-in-a-name/
- Juniperus communis L.
- Juniperus oxycedrus L.
- Cypress family

Parts Usually Used
Berries and new twigs

Description of Plant(s) and Culture
Juniper is an evergreen shrub that usually grows from 2 to 6 feet high in the United States, but may reach a height of 25 feet in Europe. It is usually a low-spreading or prostrate conifer. The bark is chocolate-brown tinged with red, shredding off in papery peels. The needle-shaped leaves have white stripes on top and are a shiny yellow-green beneath. They occur on the branches in whorled groups of three and have two white bands on the upper side that are mostly broader than the green margins. Pale yellow or white male flowers, appearing the second year, occur in whorls on one plant; green female flowers, consisting of three contiguous, upright seed buds, occur on another plant. Flowering time is April to June. The fruit is a small, fleshy, berry-like cone which is green the first year and ripens to a bluish-black or dark purple color in the second year. The bluish-black, rounded to broadly oval fruits (August to October), usually with 3 seeds, are used in medicine and as a flavoring in gin and other alcoholic beverages. Prickly juniper (Juniperus oxycedrus) is used the same way. Grow in full sun in all climate zones and most soils.

Juniper berries (Juniperus utahensis) were known to the Shoshone Indians as "Sammapo," to the Washo Indians as "Paal," and to the Paiute Indians as "Wapi." For rheumatism, the Native Americans put green juniper boughs on the patient as he reclined, then steamed the boughs, and the patient drank tea made from the leaves. They also used a tea from juniper berries, taken on 3 successive days, a cupful at a time, for birth control.

Where Found
Found in dry, infertile, rocky soil in North America from the Arctic Circle to Mexico, as well as in Europe, northern provinces of China, and Asia. Canada to Alaska, south to mountains in Georgia, eastern Tennessee, north to Illinois, Minnesota; west to New Mexico, California.
Found over a large part of the northern hemisphere.

Medicinal Properties
Analgesic, antibacterial, antiseptic, carminative, diuretic, diaphoretic, disinfectant, rubefacient (causes redness of the skin), stomachic, tonic, uterine stimulant, anti-rheumatic

Biochemical Information
Alcohols, cadinene, camphene, flavone, flavonoids, glycosides, podophyllotoxin (an anti-tumor agent), vitamin C, volatile oils, resin, sabinal, sugar, sulfur, tannins, and terpinene

Legends, Myths and Stories
According to legend, when the Virgin Mary and the infant Jesus were fleeing from Herod into Egypt, they took refuge under a juniper bush. Juniper has long been associated with ritual cleansing. It was burned in temples as a part of the regular purification rites. Several medicinal recipes containing juniper have survived in Egyptian papyri dating to 1550 BC. Folk medicine in central Europe used the oil extracted from the berries to treat typhoid fever, cholera, dysentery, tapeworms, and other ills associated with poverty.

In the 1500s, a Dutch pharmacist created a "new," inexpensive diuretic using the juniper berry. He called the new product gin. The drink caught on, for other reasons, and today the juniper berry is just one of several ingredients. Juniper gives the flavor to gin and other alcoholic beverages. Gin is a corruption of the French word for juniper, genievre.

Juniper makes a green dye the Native American weavers used to make Sally bags and cornhusk bags. Juniper knots, used as torches, were used to light the dance floor in front of the Native American camps. Juniper berries (Juniperus monosperma), the bark, and needles were used for a brown-tan dye. They used only the green juniper needles, burned them, saved the ashes, and added the ashes to the dye as a fixative. The juniper berries were pierced by the Native Americans and used as beads.
They placed the ripe berries, scattered about, over ant hills; the ants ate out the sweet streak near the seed, leaving the desired perforation by which to string the beads.

In Sweden, the berries are made into a conserve. In Germany, a few berries are used in the flavoring of sauerkraut. Laplanders drink a tea of the berries. Germans love the berries in Hasenbraten, Rehbraten and Schwabisches Sauerkraut. Wacholder Branntwein is a popular juniper-berry-flavored spirit sold in Germany, Austria and Switzerland. Hunters, trappers and Native Americans used the berries to flavor wild duck, goose, quail, rabbit, venison, etc. A French source says the berries are used to flavor marinades, thrushes, blackbirds, etc. In France, Vin de Genievre and Juniper Hippocras are made with the berries. The Laplanders have a kind of beer flavored with juniper berries, and also add juniper to flavor spruce beer. The infusion of juniper berries is a popular domestic diuretic in Czechoslovakia. It contains considerable tannin and theine, a stimulant of bodily and nervous activity.

Uses
Juniper is normally taken internally by eating the berries or making a tea from them. It is useful for digestive problems resulting from an underproduction of hydrochloric acid, and is also helpful for gastrointestinal infections, inflammations, gout, palsy, epilepsy, typhoid fever, cholera, cystitis, weak immune system, sciatica, to stimulate appetite, to help eliminate excess water, and for cramps. Relieves inflammation and sinusitis. Helps in treatment of the pancreas, prostate, kidney, and gallstones, leukorrhea, dropsy, lumbago, hypoglycemia, hemorrhoids, scurvy; kills worms; treats snakebites, cancer, and ulcers. Regulates sugar levels. Lye made of the ashes was said to cure the itch, scabs, and leprosy. Used as a diuretic. Juniper berries (Fructus juniperi) are most effective when used in combination with other herbs such as broom, uva ursi, cleavers, and buchu.
Dried berries are excellent as a preventative of disease and should be chewed, or used as a strong tea to gargle the throat, when exposed to contagious diseases. When juniper oil is used in a hot vapor bath, it is useful to inhale the steam for respiratory infections, colds, etc. The pure oil should not be rubbed on the skin, as it can be very irritating and cause blisters.

The first day, take 4 berries, all of them at once or over the course of the day (at the beginning of the treatment, either way is possible). From the second day on, take one more berry each day than you did the previous day, until the daily dose totals 15 berries. The more berries you take each day, the more important it is to distribute them over the course of the day. It is advisable to divide the berries into 3 or 4 daily doses, drinking at least 1 full glass of water with each dose. Once you have reached a daily total of 15 berries, reduce the amount by one berry per day until you finally reach the initial dose of 4 berries again. This will stimulate appetite and glands. It should be performed twice a year, each time for a period of 24 days.

As a spice, juniper is often used to enhance flavor and counteract flatulence. Juniper oil, derived from the berries, penetrates the skin readily and is good for bone-joint problems; but the pure oil is irritating and, in large quantities, can cause inflammation and blisters. Breathed in a vapor bath, it is useful for bronchitis, consumption, and infection in the lungs. Juniper tar, or oil of cade, is produced by destructive distillation of the wood of another species (Juniperus oxycedrus) and is used for skin problems and for loss of hair.

Formulas or Dosages
Infusion: steep 1 tsp. crushed berries in 1/2 cup water for 5-10 minutes in a covered pot and strain. Take 1/2 to 1 cup per day, a mouthful at a time. If desired, sweeten with 1 tsp. honey (or raw sugar) unless used for gastrointestinal problems.
Tea: use 1 tbsp. crushed berries in 4 cups water; cover the saucepan with a lid. Boil down slowly to 2 cups. Strain and drink 1 cup during the day and a second cup at bedtime.
Jam or Syrup: adults take 1 tbsp., 2 times per day, in water, tea, or milk. Children take 1 tsp., 3 times per day, in water. Take an hour before meals as an appetizer.
Dried berries: chew a few a day.
Nutrients: sugars and vitamin C.
Extract: use 10 to 20 drops in liquid, up to 3 times daily.
Tea: drink 1 cup, up to 3 times daily.

Warnings
May interfere with the absorption of iron and other minerals. The pure oil should not be rubbed on the skin, as it can be very irritating and cause blisters. In large doses, or with prolonged use, it can irritate the kidneys and urinary passages; therefore it is not recommended for those with bladder and kidney problems. Large and/or frequent doses may cause kidney failure, convulsions, and digestive irritation. If acute cystitis or acute kidney problems are present, avoid until consulting a doctor. Not recommended during pregnancy nor for nursing mothers, as it is a uterine stimulant. May be taken during labor and delivery.
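The graduated berry regimen above (start at 4 berries a day, add one per day up to 15, then drop one per day back to 4) is simple arithmetic; as a quick sketch, this tabulates one course, counting the 15-berry peak day once. The function name and defaults are illustrative, not from the source:

```python
# Tabulate the graduated juniper-berry regimen described above:
# start at 4 berries/day, add one berry each day up to 15,
# then drop one berry each day back down to 4.
def berry_schedule(start=4, peak=15):
    ramp_up = list(range(start, peak + 1))            # 4, 5, ..., 15
    ramp_down = list(range(peak - 1, start - 1, -1))  # 14, 13, ..., 4
    return ramp_up + ramp_down

schedule = berry_schedule()
print(len(schedule))   # days in one course: 23
print(sum(schedule))   # total berries eaten: 213
print(schedule[:5])    # first five daily doses: [4, 5, 6, 7, 8]
```

Counted this way the course runs 23 days, in line with the roughly 24-day period described above.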
Source: http://www.emedicinal.com/herbs/hapusha.php
Photo courtesy of James Manhart, Texas A&M University

UPDATE OF THIS SPECIES REVIEW:
In March 2007, an extensive search was done to locate information on drooping juniper; see the list of source literature for the sources consulted. Little information was found, but that information has been added to the original review, and the references list has been updated. In substance, the review has changed very little from that originally published in 1993.
A. Scott Hauser, April 24, 2007

Juniperus flaccida var. flaccida Schlecht. [14,15]
Juniperus flaccida var. martinezii (Pérez de la Rosa) Silba
Juniperus flaccida var. poblana Martinez

GENERAL BOTANICAL CHARACTERISTICS:
This description provides characteristics that may be relevant to fire ecology, and is not meant for identification. An identification key is available. Drooping juniper is a native small tree or large shrub that is slow growing and long-lived. Height at maturity usually ranges from 25 to 30 feet (7.6-9.1 m) [37,44]. The national champion tree occurs in Juniper Canyon and is 55 feet (17 m) tall, with a crown spread of 35 feet (11 m) and a circumference of 8.5 feet (2.5 m). Juniperus flaccida var. flaccida reaches a maximum height of 39 feet (12 m). The most conspicuous character of drooping juniper is its pendant branchlets [33,44]. Young drooping juniper trees usually have a narrow, rounded crown. The bark is deeply furrowed and shreds into long strips. The globose, berrylike cone is from 0.25 to 0.5 inch (0.63-1.3 cm) in diameter. Each drooping juniper cone contains from 4 to 12 seeds (usually 6-8) that are 0.12 to 0.25 inch long [33,38,44]. The cones of J. f. var. flaccida contain from 4 to 13 (usually 6-10) seeds. Drooping juniper cones collected by Adams in the Chisos Mountains averaged 8.35 seeds/cone.

Toxicity: The leaves of J. f. var. flaccida and J. f. var. poblana contain volatile oils [1,3].
The composition of volatile leaf oils in both varieties is available .

RAUNKIAER LIFE FORM:

Pollination: Drooping juniper is pollinated by the wind.

Breeding system: Drooping juniper is dioecious [42,43,44].

Seed dispersal: Drooping juniper seeds are dispersed by birds and animals.

At the time of this review (2007) there is no information relating to drooping juniper seed banking, production, or germination; seedling establishment or growth; or vegetative regeneration. Research on drooping juniper reproduction is sorely needed.

SITE CHARACTERISTICS:

Climate: Where drooping juniper grows in the Chisos Mountains, precipitation ranges from 8.7 to 27 inches (220-680 mm), with most falling from May to October [26,45]. It rarely freezes, and summer temperatures routinely exceed 100 °F (40 °C).

Elevation: In the Chisos Mountains, drooping juniper generally is found above 5,000 feet (2,000 m). In Mexico, it occurs from 4,000 to 8,000 feet (1,000-2,000 m) [3,44]. The elevational range of J. f. var. flaccida in Texas and Mexico is 3,000 to 9,500 feet (900-2,900 m).

SUCCESSIONAL STATUS:

Fire regimes: Fire is a common occurrence where drooping juniper occurs in the Chisos Mountains. Dick-Peddie and Alberico reported that lightning fires are probably highly localized, and are often confined to single trees. Downed woody fuels are usually scarce, and continuous fine fuels consist of herbs. Using fire scar data, Moir assessed that fire frequency in the Chisos Mountains ranged from 0.9 to 2.0 fires/century. The research conducted by Moir suggests a mean fire interval for the Chisos Mountains of approximately 70 years [29,30]. Research conducted by Leopold and Krausman in the Chisos Mountains showed a mean fire interval of 60 years. The following table provides fire-return intervals for plant communities and ecosystems where drooping juniper is important.
Find fire regime information for the plant communities in which this species may occur by entering the species name in the FEIS home page under "Find Fire Regimes".

| Community or Ecosystem | Dominant Species | Fire Return Interval Range (years) |
| desert grasslands | Bouteloua eriopoda and/or Pleuraphis mutica | <35 to <100 |
| pinyon-juniper | Pinus-Juniperus spp. | <35 |
| Mexican pinyon | Pinus cembroides | 20-70 [30,40] |
| oak-juniper woodland (Southwest) | Quercus-Juniperus spp. | <35 to <200 |

Palatability/nutritional value: No information is available on this topic.

Cover value: No information is available on this topic.

VALUE FOR REHABILITATION OF DISTURBED SITES:

Wood Products: Drooping juniper wood is durable and is used locally for fenceposts [42,44].

OTHER MANAGEMENT CONSIDERATIONS:

REFERENCES:
1. Adams, R. P. 1972. Chemosystematic and numerical studies of natural populations of Juniperus pinchotii Sudw. Taxon. 21(4): 407-427. 2. Adams, Robert P. 1973. Reevaluation of the biological status of Juniperus deppeana var. sperryi Correll. Brittonia. 25(3): 284-289. 3. Adams, Robert P.; Zanoni, Thomas A.; Hogge, Lawrence. 1984. The volatile leaf oils of Juniperus flaccida var. flaccida and var. poblana. Journal of Natural Products. 47(6): 1064-1065. 4. American Forests. 2007. Drooping juniper: Juniperus flaccida. In: National register of big trees, [Online]. Available: http://www.americanforests.org/resources/bigtrees/ [2007, April 24]. 5. Arno, Stephen F. 2000. Fire in western forest ecosystems. In: Brown, James K.; Smith, Jane Kapler, eds. Wildland fire in ecosystems: Effects of fire on flora. Gen. Tech. Rep. RMRS-GTR-42-vol. 2. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 97-120. 6. Arno, Stephen F.; Gruell, George E. 1983. Fire history at the forest-grassland ecotone in southwestern Montana. Journal of Range Management. 36(3): 332-336. 7. Arno, Stephen F.; Scott, Joe H.; Hartwell, Michael G. 1995.
Age-class structure of old growth ponderosa pine/Douglas-fir stands and its relationship to fire history. Res. Pap. INT-RP-481. Ogden, UT: U.S. Department of Agriculture, Forest Service, Intermountain Research Station. 25 p. 8. Baisan, Christopher H.; Swetnam, Thomas W. 1990. Fire history on a desert mountain range: Rincon Mountain Wilderness, Arizona, U.S.A. Canadian Journal of Forest Research. 20: 1559-1569. 9. Bernard, Stephen R.; Brown, Kenneth F. 1977. Distribution of mammals, reptiles, and amphibians by BLM physiographic regions and A.W. Kuchler's associations for the eleven western states. Tech. Note 301. Denver, CO: U.S. Department of the Interior, Bureau of Land Management. 169 p. 10. Dick-Peddie, William A.; Alberico, Michael S. 1977. Fire ecology study of the Chisos Mountains, Big Bend National Park, Texas: Phase I. CDRI Contribution No. 35. Alpine, TX: The Chihuahuan Desert Research Institute. 47 p. 11. Duncan, Wilbur H.; Duncan, Marion B. 1988. Trees of the southeastern United States. Athens, GA: The University of Georgia Press. 322 p. 12. Elias, Thomas S. 1980. The complete trees of North America: field guide and natural history. New York: Times Mirror Magazines, Inc. 948 p. 13. Eyre, F. H., ed. 1980. Forest cover types of the United States and Canada. Washington, DC: Society of American Foresters. 148 p. 14. Farjon, Alijos. 1998. World checklist and bibliography of conifers. 2nd ed. Kew, England: The Royal Botanic Gardens. 309 p. 15. Flora of North America Association. 2007. Flora of North America: The flora, [Online]. Flora of North America Association (Producer). Available: http://www.fna.org/FNA. 16. Floyd, M. Lisa; Romme, William H.; Hanna, David D. 2000. Fire history and vegetation pattern in Mesa Verde National Park, Colorado, USA. Ecological Applications. 10(6): 1666-1680. 17. Garrison, George A.; Bjugstad, Ardell J.; Duncan, Don A.; Lewis, Mont E.; Smith, Dixie R. 1977. Vegetation and environmental features of forest and range ecosystems. 
Agric. Handb. 475. Washington, DC: U.S. Department of Agriculture, Forest Service. 68 p. 18. Geils, B. W.; Wiens, D.; Hawksworth, F. G. 2002. Phoradendron in Mexico and the United States. In: Geils, Brian W.; Cibrian Tovar, Jose; Moody, Benjamin, tech. coords. Mistletoes of North American conifers. Gen. Tech. Rep. RMRS-GTR-98. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 19-28. 19. Gottfried, Gerald J.; Swetnam, Thomas W.; Allen, Craig D.; Betancourt, Julio L.; Chung-MacCoubrey, Alice L. 1995. Pinyon-juniper woodlands. In: Finch, Deborah M.; Tainter, Joseph A., eds. Ecology, diversity, and sustainability of the Middle Rio Grande Basin. Gen. Tech. Rep. RM-GTR-268. Fort Collins, CO: U.S. Department of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station: 95-132. 20. Jones, Stanley D.; Wipff, Joseph K.; Montgomery, Paul M. 1997. Vascular plants of Texas. Austin, TX: University of Texas Press. 404 p. 21. Kartesz, John T.; Meacham, Christopher A. 1999. Synthesis of the North American flora (Windows Version 1.0), [CD-ROM]. In: North Carolina Botanical Garden (Producer). In cooperation with: The Nature Conservancy, Natural Resources Conservation Service, and U.S. Fish and Wildlife Service. 22. Keeley, Jon E. 1981. Reproductive cycles and fire regimes. In: Mooney, H. A.; Bonnicksen, T. M.; Christensen, N. L.; Lotan, J. E.; Reiners, W. A., tech. coords. Fire regimes and ecosystem properties: Proceedings of the conference; 1978 December 11-15; Honolulu, HI. Gen. Tech. Rep. WO-26. Washington, DC: U.S. Department of Agriculture, Forest Service: 231-277. 23. Knobloch, Irving W. 1942. Notes on a collection of mammals from the Sierra Madres of Chihuahua, Mexico. Journal of Mammology. 23(3): 297-298. 24. Kuchler, A. W. 1964. Manual to accompany the map of potential vegetation of the conterminous United States. Special Publication No. 36. New York: American Geographical Society. 77 p. 25. Laven, R. 
D.; Omi, P. N.; Wyant, J. G.; Pinkerton, A. S. 1980. Interpretation of fire scar data from a ponderosa pine ecosystem in the central Rocky Mountains, Colorado. In: Stokes, Marvin A.; Dieterich, John H., tech. coords. Proceedings of the fire history workshop; 1980 October 20-24; Tucson, AZ. Gen. Tech. Rep. RM-81. Fort Collins, CO: U.S. Department of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station: 46-49. 26. Leopold, Bruce D.; Krausman, Paul R. 2002. Plant recovery and deer use in the Chisos Mountains, Texas, following wildfire. Proceedings, Annual Conference of Southeastern Association of Fish and Wildlife Agencies. 56: 352-364. 27. Little, Elbert L., Jr. 1979. Checklist of United States trees (native and naturalized). Agric. Handb. 541. Washington, DC: U.S. Department of Agriculture, Forest Service. 375 p. 28. Mitchell, Alan F. 1972. Conifers in the British Isles: A descriptive handbook. Forestry Commission Booklet No. 33. London: Her Majesty's Stationery Office. 322 p. 29. Moir, William H. 1980. Forest and woodland vegetation monitoring, Chisos Mountains, Big Bend National Park, Texas: Baseline 1978. Contribution No. 83. [Fort Davis, TX]: Chihuahuan Desert Research Institute. 63 p. 30. Moir, William H. 1982. A fire history of the High Chisos, Big Bend National Park, Texas. The Southwestern Naturalist. 27(1): 87-98. 31. Paysen, Timothy E.; Ansley, R. James; Brown, James K.; Gottfried, Gerald J.; Haase, Sally M.; Harrington, Michael G.; Narog, Marcia G.; Sackett, Stephen S.; Wilson, Ruth C. 2000. Fire in western shrubland, woodland, and grassland ecosystems. In: Brown, James K.; Smith, Jane Kapler, eds. Wildland fire in ecosystems: Effects of fire on flora. Gen. Tech. Rep. RMRS-GTR-42-vol. 2. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 121-159. 32. Plumb, Gregory A. 1992. Vegetation classification of Big Bend National Park, Texas. Texas Journal of Science. 44(4): 375-387. 33. Powell, A. 
Michael. 1988. Trees and shrubs of Trans-Pecos Texas: Including Big Bend and Guadalupe Mountains National Parks. Big Bend National Park, TX: Big Bend Natural History Association. 536 p. 34. Raunkiaer, C. 1934. The life forms of plants and statistical plant geography. Oxford: Clarendon Press. 632 p. 35. Schoenhals, Louise C. 1988. A Spanish-English glossary of Mexican flora and fauna. Tucson, AZ: Summer Institute of Linguistics, Mexico. 637 p. 36. Shiflet, Thomas N., ed. 1994. Rangeland cover types of the United States. Denver, CO: Society for Range Management. 152 p. 37. Simpson, Benny J. 1988. A field guide to Texas trees. Austin, TX: Texas Monthly Press. 372 p. 38. Standley, P. C. 1924. Trees and shrubs of Mexico. Contrib. U.S. Nat. Herb. Washington, DC: Smithsonian Press. 23: 849-1312. 39. Stickney, Peter F. 1989. FEIS postfire regeneration workshop--April 12: Seral origin of species comprising secondary plant succession in Northern Rocky Mountain forests. 10 p. Unpublished draft on file at: U.S. Department of Agriculture, Forest Service, Intermountain Research Station, Fire Sciences Laboratory, Missoula, MT. 40. Swetnam, Thomas W.; Baisan, Christopher H.; Brown, Peter M.; Caprio, Anthony C. 1989. Fire history of Rhyolite Canyon, Chiricahua National Monument. Tech. Rep. No. 32. Tucson, AZ: University of Arizona, School of Renewable Natural Resources; Cooperative National Park Resources Studies Unit. 47 p. 41. U.S. Department of Agriculture, Natural Resources Conservation Service. 2007. PLANTS Database, [Online]. Available: http://plants.usda.gov/. 42. Van Dersal, William R. 1938. Native woody plants of the United States, their erosion-control and wildlife values. Misc. Publ. No. 303. Washington, DC: U.S. Department of Agriculture. 362 p. 43. van Melle, P. J. 1952. Juniperus texensis sp. nov. -- West-Texas juniper in relation to J. monosperma, J. ashei et al. Phytologia. 4: 26-35. 44. Vines, Robert A. 1960. Trees, shrubs, and woody vines of the Southwest. 
Austin, TX: University of Texas Press. 1104 p. 45. Wauer, Roland H. 1971. Ecological distribution of birds of the Chisos Mountains, Texas. The Southwestern Naturalist. 16(1): 1-29. 46. Wright, Henry A.; Bailey, Arthur W. 1982. Fire ecology: United States and southern Canada. New York: John Wiley & Sons. 501 p. 47. Zanoni, Thomas A.; Adams, Robert P. 1975. The genus Juniperus (Cupressaceae) in Mexico and Guatemala: numerical and morphological analysis. Boletin de la Sociedad Botanica de Mexico. 35: 69-91. 48. Zanoni, Thomas A.; Adams, Robert P. 1976. The genus Juniperus in Mexico and Guatemala: numerical and chemosystematic analysis. Biochemical Systematics and Ecology. 4: 147-158.
Sea ice is an integral component of the Arctic and Subarctic coastal environments. It is dynamic, and its extent, thickness and distribution vary between seasons and years. It also varies in type and form, which are determined by the different physical and environmental conditions that cause it to form.

Sea ice types include newly formed ice (weakly frozen ice crystals), nilas ice (less than 10 cm thick), young ice (10-30 cm thick), first-year ice (more than 30 cm thick) and old ice (ice that survived a full season, including melt). Sea ice forms can vary from small pancake ice (less than 3 m) to much larger ice floes (more than 20 m). Sea ice is usually classified as either pack (drift) ice or fast ice. Pack ice is generally made up of floes greater than 20 metres. It is dynamic and moves with currents and winds. Fast ice is attached to the shoreline, shoals or grounded icebergs and is relatively stable. Within sea ice there are areas of open water, called leads or polynyas, which are kept open by currents, tides and winds. These areas of open water are critical to the sea ice and ocean ecosystems.

Sea ice is more than just frozen water. During formation, pockets of brine form within the ice, providing habitat for bacteria and algae. These ecological communities remain relatively dormant during the darkness of winter, but when spring arrives with more available light and warmer temperatures, the ice ecosystem becomes much more active. Bacteria populations increase and the ice algae begin to grow, especially at the bottom of the ice where there is a steady supply of important nutrients. As the ice continues to warm and begins to melt, the brine pockets within the sea ice begin to connect, forming brine channels.
This results in structural changes in the ice and in an export of brine, bacteria and algae to the sea water below, providing food and nutrients to the ocean ecosystem. For Inuit, sea ice is critical infrastructure and is a central part of culture, community and livelihood. Ice is an extension of the Land — its existence is imperative for Inuit to travel and access crucial areas, as well as being a platform to the ocean and its resources. Sea ice connects Inuit, allowing for travel between communities and the four Inuit regions that make up Inuit Nunangat. The ice also allows Inuit to access harvesting areas (both on land and water) at different times of the year, depending on the seasonal patterns of the species and the condition of the sea ice. Furthermore, sea ice connects Inuit to historical and culturally significant areas, including cabins, seasonal camps, traplines and harvesting areas. The connection between Inuit and sea ice is inherent, healthy and strong. Inuit have extensive knowledge about the different types and forms of ice, how ice changes in relation to environmental factors and what changes happen during the different seasons of the year. This knowledge has been passed on for generations and is imperative to Inuit use and occupancy of sea ice. Relying on this knowledge allows Inuit to safely access areas of importance, to travel to other communities and to harvest food and resources as needed, even during the times of sea ice formation and breakup. In Inuktut, the importance of ice is highlighted by the dozens of different words that exist to define things such as sea ice form, type, location and age. The cycle of the sea ice formation, breakup and melt greatly affects the weather patterns and climate in the North. Reduced sea ice means more open ocean, more moisture in the air and warmer temperatures. It also means that once predictable seasonal weather patterns are now shifting and becoming unpredictable. 
Recent changes in the northern climate have led to increasingly dangerous sea ice and snow conditions, causing hunting areas and traditional travel routes to become inaccessible. Inuit, environment and health are interconnected, and any change in the environment directly impacts Inuit health, including mental health, and well-being. Furthermore, the changes to sea ice do not just affect access to crucial areas, but they also affect the ecosystem dynamics of many different species of importance for Inuit, including those animals in the air, on the land and in the water.

Inuit have implemented a variety of community-based monitoring programs to better understand the changing sea ice across the North. These programs are initiatives that provide opportunities for Inuit to measure and observe changes in sea ice and to understand these changes in the context of their communities and region. To better monitor the changes in sea ice in the North over long periods of time, Inuit are collecting baseline data on their surroundings. This ensures local residents are involved in research and monitoring, and that specific areas of concern are being addressed by the people who live there. Sea ice conditions are rapidly changing throughout the North, and Inuit are using their knowledge, in conjunction with modern technologies, to adapt and continue to use the sea ice as they have for generations.
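The thickness stages described earlier (nilas, young ice, first-year ice, old ice) amount to a simple binning rule that a monitoring script could apply to baseline thickness measurements. The sketch below is illustrative only: the stage names and cut-offs come from this article, while the function and variable names are invented for the example; newly formed ice is omitted because the article gives it no thickness cut-off.

```python
def sea_ice_stage(thickness_cm: float, survived_melt_season: bool = False) -> str:
    """Label a sea-ice thickness measurement with the stage names used above."""
    if survived_melt_season:
        return "old ice"            # ice that survived a full season, including melt
    if thickness_cm < 10:
        return "nilas"              # less than 10 cm thick
    if thickness_cm <= 30:
        return "young ice"          # 10-30 cm thick
    return "first-year ice"         # more than 30 cm thick

# Example: a season's worth of hypothetical readings binned by stage
readings_cm = [4, 12, 28, 45, 80]
stages = [sea_ice_stage(t) for t in readings_cm]
```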
As time moved on the lowly toothpick began to move up in the world. Bronze toothpicks (ouch) have been found as burial objects in some prehistoric graves in Italy and Switzerland. The Romans produced fancy examples in silver and mastic wood. The fabulously decadent Roman Emperor Nero once entered a banquet hall with a sporty silver toothpick lodged in his mouth, causing quite a stir. By the time the 17th century rolled around, the toothpick had reached its zenith as a luxury item. Made from precious metals set with gemstones, toothpicks were artfully stylized and enameled for the stylish set. The less fortunate made do with porcupine quills or twigs, as they had for centuries. The toothpick we know today came about as the result of the industrial revolution and the invention of the automatic toothpick-making machine by Charles Forster. The Forster-style machines are still in use today; one log will produce a million toothpicks! Birch logs are stripped into thin veneers which are cut into strips and finally stamped into the little objects we all know, made from solid birch and rounded on both ends. The cinnamon toothpick was born in 1949, made by drugstore owner August T. Baden as treats for the neighborhood children. They caught on in the '50s and by the '60s they were all the rage. Mr. Baden made millions of toothpicks until he retired in the early '90s.
Used to replace a missing tooth, dental bridges involve fixing a false tooth in place by attaching it to one or two teeth on either side. Usually these supporting teeth are crowned to give extra strength, but there are various different types of bridges. The dentist will discuss these with you fully during consultation and treatment planning. Bridges are made of metal and porcelain or sometimes just porcelain and ceramics.

What to expect: Depending on the amount of tooth preparation required, the dentist may first give you an injection to numb the tooth or teeth. The dentist uses a soft, moldable material to take impressions of your mouth. A dental technician will then make exact plaster models of your upper and lower teeth and gums which show how your teeth bite together. The teeth that will support the bridge are prepared to take the fixings and to make sure that the bridge is not too bulky. Another impression is taken of the teeth, and the dental technician uses this to make the bridge. An acrylic temporary bridge or temporary crown may be fitted in the meantime. At your final visit, the dentist will check that the bridge fits and make any minor adjustments. Then, after checking that you are happy, it is fixed permanently in place. Your dentist or hygienist will show you the best way of keeping your new bridge clean. If you do not want a bridge, you can have a removable partial denture. The dentist will explain how successful a bridge will be. If the supporting teeth are not strong enough, a denture might be better. If you have just had some teeth taken out, a denture might be made first, with a bridge fitted later when the gum has healed.
Pre-School (Nursery and Preparatory Grade)
Junior School (Grades 1-6)

The Pre-School curriculum teaches the children to think and use their talents effectively. It also teaches children to recognize their self-worth. The Pre-School curriculum lays emphasis on learning by observation, interaction and direct guidance. It incorporates a variety of activities associated with the learning processes dealing with various concepts such as colours, shapes, numbers and alphabets. The pupils of this age are consciously provided with experiences geared towards enhancing academic skills. The thrust in the curriculum is "learning is fun!"

The Junior School curriculum for grades 1-6 is designed to give a broad-based education for the development of the individual child. It offers the core academic subjects our children need in this age of global information: Mathematics, Language, Literature, History, Geography, Fine Arts, Music, French, and Computer Studies. We also give consideration to the curriculum designed by the Nigerian Ministry of Education to ensure that a child is well prepared for the Common Entrance Examination in his/her final year.
1902: Thomas Nast, one of the most influential people in media, died. The great Harper's Weekly cartoonist of the late 19th century invented or defined images including 'Uncle Sam,' the Democrat Jack-ass (Donkey), the Republican Elephant, and Santa Claus as a fat jolly old elf giving presents to children* or scurrying down chimneys. Nast's pen, imagination, originality, wit, boldness, skill and directness were feared by politicians and the corrupt. NYC's Boss Tweed detested Nast's nasty depictions of him. When Tweed was on the lam in Spain, he was recognized and captured because of Nast's cartoons. Some big wigs set up a phony foundation which offered Nast $100,000 to study art in Europe. Nast, ever the wag, bid the bribers up to $500,000 and then he declined the fellowship. Probably a good idea. Bumping him off in Europe would be one way of extending his study abroad.

*Thomas Nast is featured in my Santa Claus the NYC Tour. Book a private tour of Santa Claus' true NYC history. I have some Nast Santa Claus drawings in the Destinations page and on my Santa the NYC Tour facebook page.
1) Which general formula applies to hydrocarbons with one double covalent bond between adjacent carbon atoms?
2) The compound, 2-methyl-2-propanol, is an isomer of
3) Which organic compound is an electrolyte?
4) Alkanes are represented by the general formula
5) Which of the following compounds would have the highest boiling point?
E. cannot be determined
6) Compounds that contain a hydroxyl functional group are known as
7) ________ are organic compounds that possess a carbon-carbon triple bond.
8) Which of the following classes of organic compounds have a carbonyl functional group?
D. a and b
E. none of the above
9) How many isomers exist for the molecule C3H5Br?
10) Which of the following compounds cannot exist?
E. All may exist
<urn:uuid:37c1b96c-fd0f-4196-842b-3c5ea23e3b3a>
CC-MAIN-2019-47
https://brainmass.com/chemistry/bonding/10-questions-about-alkanes-and-structural-diagrams-159518
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00509.warc.gz
en
0.750525
205
3.046875
3
Gerald Harvey Thompson OBE. MA. MSc. (Oxon). Hon. FRPS. (1917-2002)

Life in the Colonial Services

The hot and humid environment is one of the worst climates in the world and in those days the incidence of malaria and other diseases was high. He was scarcely comforted by the couplet: 'Beware, take care of the bight of Benin; one comes out, though forty go in.'

The French did not invade, and after doing an assortment of tasks to help the war effort (one of Gerald's first duties on arrival in Ghana was to organise the collection of wild rubber to be sent to England for the manufacture of tyres, since other sources of rubber from, e.g., Malaysia, had been cut off due to fighting all over Burma), he settled down for the next eight years to life as a District Forest Officer. During the early years he spent three weeks in every month camping in the jungle, returning to base to write monthly reports and to arrange the payment of staff. (He estimated he spent a total of three years camping alone in the jungle.) His work included surveying a new forest reserve by chain and compass (excluding any village where possible) in order to protect timber supplies and water catchment areas, and drawing up working plans for their management. He had the help of twelve labourers who cut the boundary lines. Working from 7am to 3pm each day, he spent his leisure time collecting beetles associated with dead and dying trees; this was his hobby, and at night, after a bath by the light of a tilley oil lamp, he would mount his day's collection of beetles and write up notes. Eventually he brought home to England 3000 mounted specimens with notes, two hundred and fifty tubes of spirit specimens (larvae and pupae) and various wood specimens showing damage. The adult specimens were all left at the South Kensington Natural History Museum, Department of Entomology, where the staff were most helpful in providing identification. Many specimens were new to science, and some were named after Gerald, e.g. in the Cerambycidae:
Leptostylus thompsoni sp.n.
Elaphidion thompsoni sp.n.
Colydiidae: Sosylus thompsoni sp.n.
Scolytidae: Rhopalopselion thompsoni sp.n.
Tiarophorus intermedius sp.n.
Carabidae: Hyperecterus minor Britton sp.n.
(sp.n. = new species)

Types and paratypes are now in the new Darwin Centre.

In addition to running a forest district he was given two special jobs contributing to the war effort. The first was the collection of wild rubber – Ficus sp. – by farmers, since the war extended to the plantations of Hevea sp. in the Far East. Payment was made in gunpowder, for which the farmers were desperate in order to shoot 'bush' meat (antelopes). The solid wood boxes that the gunpowder came in were perfect for holding logs and therefore made excellent breeding cages for Gerald's beetles. The second war effort contribution concerned the construction of furniture for the West African Army. This project was based at Koforidua in the Eastern Province. Gerald recruited three hundred carpenters working in about 40 workshops scattered throughout the town. A Togoland overseer assisted him, and every day Gerald visited each workshop on foot to keep an eye on quality control. The main products were uniform cases made from mahogany (Khaya sp.) or cedar (Entandrophragma sp.), of which hundreds were made. He had to give his talks through an interpreter who spoke Twi, the language of the Ashanti people. One problem was that he could not check what his interpreter was saying, so Gerald learnt Twi – a difficult tonal language in which one word can have several meanings. After three years he became fairly proficient and could converse with the locals. On admission to hospital in Oxfordshire in his eighties he astounded the doctor who was treating him by talking to him in Twi – the doctor's native language. He learnt many stories about Anancy (the spider) to amuse the children, and life in the 'bush' became much more interesting.
One day he walked into a village to give a forestry talk and was somewhat shocked when all the children ran away screaming into the forest! On asking an elder of the tribe what was wrong he was informed that no white man had visited the village for fifteen years, so none of the children had seen a white man and they thought he was an alien. The fact that Gerald was exceptionally tall at 6 ft 5 ins was another factor! Everywhere he went he collected wood-boring beetles, which were studied in his spare time until eventually he was able to submit a thesis for a research degree. This hobby, and his collection of classical records, helped to dispel the loneliness. Apart from a near escape from being bitten by a gaboon viper (10 mins to death) when he sat near it in the jungle one day, Gerald had only one serious incident during his years in the West African jungle. This occurred when he pitched his tent in a forest of high trees which, unknown to him, were growing in shallow soil. There was no village nearby, so the labourers made a shelter for themselves to sleep in that night. Well after dark a violent storm arose, with lashing rain and high winds, and soon – after much creaking – a large tree smashed to the ground, whereupon he called everyone to his tent. Luckily the fly sheet had been erected as an extension to the tent and his staff of fifteen sheltered under that. In pitch darkness, illuminated by the occasional lightning flash, they awaited their fate as trees were torn up all around the tent. Eventually the storm passed and dawn revealed how near they had all come to death. Surrounded by huge uprooted trees, the tent remained unscathed. It seemed miraculous. They broke camp and left the forest as quickly as possible, but the exit took many hours because of the fallen trees that had to be negotiated by the labourers with their loads. In one campsite, on a small level area on a hillside, there was a hollow log inhabited by a 6 ft black cobra.
As night fell it emerged from the log, went through Gerald's tent on its way to hunt for food, and later, when Gerald was asleep, would return home via the tent again! Years later when Gerald camped at the same site the snake was still there – only by now it was much larger! Driver (army) ants were also a problem. One day Gerald found in the forest a bird with a broken wing; he carried it back to camp, put a splint on the wing and placed the bird in a cardboard box with food and water under the tent's fly sheet. The next morning, on looking in the box to see how the bird was faring, he saw only a collection of bones and feathers – army ants had passed through in the night. Gerald had slept soundly under his mosquito net because the four legs of his camp bed were each immersed in a tin containing kerosene, thus keeping the ants at bay. In 1946 Gerald spent his leave in Oxford preparing his thesis on Gold Coast Coleoptera. An MSc was awarded before he returned to duty in 1948 on what was to be his last twenty-one-month tour of the Gold Coast – this time he was accompanied by his wife Joyce and three-year-old son David. He was posted to Bekwai – sixteen miles from Kumasi – where the exploitation of mahogany, cedar and Iroko (Chlorophora excelsa) was the chief native industry. Trees to be felled required a licence issued by the Forestry Department, and no tree below 9 ft girth above the buttresses was licensed. Felling was by axe, a hazardous job. Crosscutting into logs was done by pitsaw (8 feet long with a handle at each end, operated by two men). Hauling logs to the roadside was achieved by laying down a track of 'skids' and sliding the logs over these, hauled by 40-50 men. Each railway station on the Takoradi/Accra railway line had a special area where logs for sale could be left for up to two years. Inspectors from the sawmills on the coast travelled the line selecting what they wished to buy. Some logs were converted into planks in the forest.
This was done by the pitsawyers using a pitsaw. The log was suspended over a large pit; one sawyer stood atop the log, the other in the pit. Their sawing could be surprisingly accurate. Gerald was glad to have seen the manual extraction of logs in the Gold Coast; it really was the end of an era. Five years later he saw the extraction of huge Douglas Fir and Hemlock from the forests of British Columbia using aerial ropeways and huge lorries each carrying 40 tons of logs. Marriage and the birth of his son David raised Gerald’s thoughts of transferring to a climate where family life could be enjoyed; the downside was that forest entomology was his main interest and there were few jobs in this specialised subject. Gerald’s chance came in 1950 when Dr R N Crystal, his former tutor at Oxford, decided to retire early and Gerald was asked to succeed him as University Lecturer in Forest Entomology at the Commonwealth Forestry Institute. Returning to England would entail setting up house, and for this Gerald had no money. He was not highly paid and had no savings; for his first tour he was paid £375 a year, rising to £700 a year by 1948. He decided to stop collecting beetles in his spare time and start making furniture instead to take back to England. Carpentry had always been an interest. Unfortunately Bekwai had no electricity, so no power tools could be used. Unlimited elbow grease was needed! Gerald managed to purchase some basic tools – saws, planes, chisels. The basis of construction would be tongue and groove. Screws were only used in the refectory table, for rapid disassembly. The garage became a workshop and for fourteen months he worked by the light of a Tilley lamp from 6pm till bedtime. The timber used came from a mahogany log which had lain in Bekwai log yard for more than two years and had to be removed.
No inspector had shown any interest in it as it was so heavily infested with ‘pinholes’ – tunnels made by species of Scolytidae and Platypodidae – but investigation of the depth of penetration showed that no tunnels entered the heartwood which, moreover, showed a fiddleback figure throughout. It was, in fact, a very valuable log, which was sawn into boards. From this log Gerald made a refectory table with eight chairs, a carving table, sideboard, tea trolley, three easy chairs and coffee tables. The furniture was assembled and glued in England – taking up 34 crates. Some of the furniture is still in use today – over sixty years later.
When we can’t predict all the results of our actions, we are dealing with uncertainty. Ever since the first humans found out that there are unpleasant surprises hidden in the future, uncertainty has been a challenge for mankind. Since those days, we have developed many mechanisms to handle uncertainty. Humans always try to resolve uncertainty by controlling the future. Yet uncertainty surprises us every day.

Uncertainty is one byproduct of complexity. When there are more autonomous and diverse agents establishing more interlinks between them, the odds of predicting the actions of all actors drop, and uncertainty grows. Many times in the past we thought we had gained control of the future, but reality taught us again and again that uncertainty is inevitable. Controlling the future is, regrettably, an illusion. We have our own limitations that prevent us from reducing uncertainty; we need to learn how to live with it.

In this post, I want to cover the main ways humans have developed over the years to deal with uncertainty. As a leader, you need to be familiar with these methods and verify that you are using the right tools for the current situation. Part of the expectation of you as a leader is also to try new methods that might end up with better results.

Forecasting. We believe the past will return in the future, so we use heuristics and algorithms to forecast the future based on past events. This effort includes many well-known activities such as risk management, probability theory, and project management. Although forecasting can reduce uncertainty, it is far from eliminating it. Regrettably, uncertainty is stronger and leaves many wrong forecasts behind it. Yet this is a viable tool.

There are several methods that humans use to reduce uncertainty by making parts of a social system behave more predictably. One method is the use of norms, rules, laws, mental models and ethics (a belief about what is right and wrong).
Those tools create more predictable behavior of elements in a social system and therefore reduce uncertainty. Another way is to create institutions that increase predictable behavior; governments and language are two examples from daily life. In social organizations, leadership and management are key institutions that try to increase predictability, or at least that is one of their goals.

Innovation and creativity are key for many reasons; one of them is dealing with uncertainty. Innovation and creativity enable us to think about unlikely events that might unfold and to create tools or methods to deal with them. Sometimes those unlikely scenarios (or variations of them) do unfold in reality and create uncertainty. Thanks to the effort we invested in tools and methods ahead of time, we can minimize or remove the impact of those uncertainties. Sometimes this effort will be a waste of time.

Monitoring and alerting. If we develop capabilities to monitor for indications of uncertainty and to alert when they are found, we gain more time to prepare for it. This activity won’t minimize any uncertainty; it buys us more time to deal with it and therefore reduces its impact. Such activities also increase people’s sense of control, which helps in remediating the impact of uncertainty.

Creating redundancies and looseness in a system is another way to create buffers that give people more time to deal with uncertainties. These features create resiliency that enables the system to keep working while people deal with the impacts of uncertainty. By the way, having a single decision-maker can be very problematic in a case of uncertainty.

The best way to deal with uncertainty is to create the capacity to adapt quickly to new changes. It’s a long process of evolution, of trial and error. This long process creates attributes, properties, and behaviors that deliver adaptability when uncertainty hits the fence.
In man-made systems, someone needs to make sure that people see uncertainty as a threat.
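The forecasting idea above — predicting from past events while admitting the prediction can be wrong — can be sketched in a few lines. The data and window size here are invented for the example:

```python
# Minimal sketch of forecasting from past events: a moving-average
# forecast with a crude +/- 2-sigma uncertainty band.
from statistics import mean, stdev

def forecast(history, window=3):
    """Predict the next value as the mean of the last `window` observations,
    and report a band as a rough measure of the remaining uncertainty."""
    recent = history[-window:]
    point = mean(recent)
    spread = stdev(recent) if len(recent) > 1 else 0.0
    return point, (point - 2 * spread, point + 2 * spread)

demand = [100, 104, 98, 103, 101, 99]   # hypothetical past observations
point, band = forecast(demand)
print(point, band)
```

The band is deliberately crude: the point of the exercise is that a forecast should always travel with an estimate of how wrong it may be.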
Do rechargeable lithium-ion batteries exist in standard sizes like AA, AAA, C or D? They do, but they don’t operate at 1.2-1.5 Volts. According to Isidor Buchmann, author of Batteries in a Portable World: A Handbook on Rechargeable Batteries for Non-Engineers, this is due to safety concerns — people might try to charge them in chargers made for other AA batteries, where they might explode. Also, because Li-ion batteries operate at 3.7V per cell, rather than the 1.2 to 1.5V of most standard cells, designing a 1.5V lithium-ion cell would be expensive. A single 18650 battery can replace two CR123A batteries, although at a lower voltage (but much higher amperage). However, the 18650 is a wider cell and will not fit into a flashlight that is designed strictly for the narrower CR123As. Most modern tactical LED lights are designed to use a single 18650 or two CR123As, but it’s best to check before buying. See our article on The Best 18650 Batteries for more information. There are also numerous other types of lithium-ion batteries made for specific laptops and other electronics gadgets. There is currently no standard size for these lithium-ion cells.
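To make the 18650-vs-CR123A trade-off concrete, here is a back-of-the-envelope energy comparison. The capacity figures are assumed typical catalogue values, not numbers from the article:

```python
# Back-of-the-envelope energy comparison (hypothetical typical capacities):
# two CR123A primaries in series vs. one rechargeable 18650.
def watt_hours(voltage_v, capacity_mah):
    """Energy stored, in watt-hours."""
    return voltage_v * capacity_mah / 1000.0

two_cr123a = 2 * watt_hours(3.0, 1500)   # ~3.0 V, ~1500 mAh each
one_18650 = watt_hours(3.7, 3000)        # ~3.7 V, ~3000 mAh

print(f"2x CR123A: {two_cr123a:.1f} Wh")  # 9.0 Wh
print(f"1x 18650:  {one_18650:.1f} Wh")   # 11.1 Wh
```

Under these assumed capacities the single 18650 stores somewhat more energy despite running at 3.7V rather than the 6V of two series CR123As, which is consistent with the "lower voltage, higher amperage" point above.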
Report on Turrialba (Costa Rica) — 3 December-9 December 2014

Smithsonian / US Geological Survey Weekly Volcanic Activity Report, 3 December-9 December 2014
Managing Editor: Sally Kuhn Sennert

Please cite this report as: Global Volcanism Program, 2014. Report on Turrialba (Costa Rica). In: Sennert, S K (ed.), Weekly Volcanic Activity Report, 3 December-9 December 2014. Smithsonian Institution and US Geological Survey.

10.025°N, 83.767°W; summit elev. 3340 m
All times are local (unless otherwise noted)

OVSICORI-UNA reported that at 2128 on 8 December a strong Strombolian explosion at Turrialba lasted about 10 minutes and had no precursory activity. Ashfall (1 cm thick) and ballistics were deposited as far as 300 m W. Trace amounts of ashfall were reported in the Central Valley and in towns to the W and SW.

Geological Summary. Turrialba, the easternmost of Costa Rica's Holocene volcanoes, is a large vegetated basaltic-to-dacitic stratovolcano located across a broad saddle NE of Irazú volcano overlooking the city of Cartago. The massive edifice covers an area of 500 km2. Three well-defined craters occur at the upper SW end of a broad 800 x 2200 m summit depression that is breached to the NE. Most activity originated from the summit vent complex, but two pyroclastic cones are located on the SW flank. Five major explosive eruptions have occurred during the past 3500 years. A series of explosive eruptions during the 19th century were sometimes accompanied by pyroclastic flows. Fumarolic activity continues at the central and SW summit craters.
The ability to use an electric or magnetic field to manipulate the orientation of electric dipoles or magnetic moments associated with atoms, ions or molecules in a material provides a vast array of functions. In rare materials called magnetoelectric multiferroics, the dipoles are intimately coupled to the moments, and a single field can control both1. After the field is applied, however, the dipoles and moments typically all have the same orientation, and the original pattern that they formed is lost. In a paper in Nature, Leo et al.2 show that, in two particular materials, a magnetic field can flip each of the dipoles or moments while preserving the structure of the original pattern. The work illustrates how the complex coupling in these materials could be used to uncover other, previously unobserved electric and magnetic effects.

When most materials are placed in an electric field, their positive and negative charges shift by a tiny amount (less than 0.1 nanometres, which is about the radius of an atom). This microscopic movement leads to a macroscopic, measurable response: an electric polarization. In ferroelectric materials, however, clusters of ions assemble in a way that results in electric dipoles and a macroscopic polarization, even in the absence of an electric field. Ferroelectrics are typically composed of domains — mesoscopic regions, often 100 nm to several micrometres in size, in which dipoles are aligned. Applying a strong electric field to a ferroelectric material causes all of the dipoles to point in a single direction, erasing both the original domain pattern and any engineered functions of the domain structure or of the boundaries between domains3.

There is a magnetic analogue to this phenomenon. A ferromagnetic material contains concerted arrangements of electron magnetic moments, which are located on specific sites of the material’s atomic lattice. These moments generate a macroscopic magnetization that can be controlled using a magnetic field.
Most ferromagnetic materials are also composed of mesoscopic domains. Despite the apparent macroscopic similarities between ferroelectricity and ferromagnetism, materials that exhibit both phenomena, known as multiferroics, are exceedingly rare4. Magnetoelectric materials — those in which electric and magnetic properties are coupled, but that do not necessarily possess ferroelectric or ferromagnetic order — are also uncommon. Most exotic are magnetoelectric multiferroics, in which ferroelectricity and ferromagnetism are intrinsically coupled. This coupling holds great potential for next-generation devices, such as data-storage units that run on ultra-low power, highly sensitive magnetic-field detectors5 and energy-efficient nanoscale motors6. Much of the research focus on magnetoelectric multiferroics so far has centred on the control of magnetism using electric fields of ever-decreasing strength7. Identifying multiferroics is a great challenge. In the current work, however, Leo and colleagues recognize that once such a material is identified, the complex parameters that give rise to this state of matter can be combined or manipulated in completely distinct ways. They illustrate this new way of thinking about multifunctional materials by considering the intertwined electric and magnetic properties of two such materials, imaging the domain structure while applying an external magnetic field. The authors observed domains in the materials using a technique called optical second-harmonic generation. In this approach, two photons interact with a material to produce a single photon that has twice the frequency of the incident photons. The technique is sensitive to the spatial and magnetic (point-group) symmetry of the material’s lattice, making it a powerful probe of structural, electronic and magnetic order. 
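For readers who want the conventional notation, the frequency-doubling process corresponds to a second-order nonlinear polarization. This is a standard textbook relation, not a formula quoted from the paper itself:

```latex
P_i(2\omega) \;\propto\; \chi^{(2)}_{ijk}\, E_j(\omega)\, E_k(\omega)
```

Because the second-order susceptibility tensor vanishes in centrosymmetric media, the signal emitted at twice the incident frequency directly reflects the point-group symmetry of the lattice — which is why the technique probes structural, electronic and magnetic order.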
Of particular relevance to the authors’ work is that second-harmonic generation is sensitive to magnetism even when the magnitude of the magnetic moments in the material is 1,000 times smaller than that of the moments in a typical ferromagnet1,8 — a sensitivity that can be matched by few complementary techniques. Leo et al. studied ferromagnetic domains in one of the materials as a perpendicular magnetic field was swept across the material, and ferroelectric domains in the other material during application of a parallel magnetic field. They found that when the field was gradually changed from one direction to the opposite direction, the boundaries between the domains moved. But, remarkably, when this process was complete, the polarization or magnetization of each domain was reversed and the original domain pattern was recovered (Fig. 1). Such an effect is similar to switching the black and white squares of a chessboard, without changing the boundaries between the squares. It is in sharp contrast to what is usually observed when a uniform field is applied to a material: an alignment of all the electric dipoles or magnetic moments, or in the chess analogy, a conversion of all the squares to a single colour. The authors explain the inversion effect as being due to the coupling of three order parameters — variables that describe the alignment of dipoles or moments in a material. The first parameter represents the observed domain distribution. The second parameter, which is unaffected by the applied magnetic field, imprints the original domain pattern onto the first parameter. Finally, the third parameter, which is directly controlled by the field, causes the observed domain distribution to be inverted. Leo and colleagues’ results suggest that the coupling of multiple order parameters is generic, but it remains to be seen how frequently it manifests in other materials. 
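The chessboard analogy can be caricatured in a few lines of code. This toy model is my own illustration of the three-parameter argument, not the authors' formalism: eta imprints the fixed pattern, chi is the field-switchable parameter, and the observed distribution is their product.

```python
# Toy illustration of domain-pattern inversion (not the authors' model):
# eta imprints the pattern, chi flips with the field, sigma = eta * chi
# is observed. Flipping chi inverts sigma everywhere, yet the domain
# boundaries (sign changes of eta) are unchanged.
eta = [+1, +1, -1, +1, -1, -1]           # fixed "chessboard" pattern

def observed(chi):
    return [e * chi for e in eta]

before = observed(+1)
after = observed(-1)                     # every domain reversed

def boundaries(s):
    """Indices where neighbouring domains differ in sign."""
    return [i for i in range(len(s) - 1) if s[i] != s[i + 1]]

print(after)                                    # [-1, -1, 1, -1, 1, 1]
print(boundaries(before) == boundaries(after))  # True: same boundaries
```

This is exactly the "switching the black and white squares without moving the board lines" picture described in the text.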
Perhaps more importantly, however, the study shows how multiple order parameters in certain materials can be exploited. Although magnetoelectric multiferroics have garnered much interest because of their strongly coupled magnetization and ferroelectric polarization, future work might find ways to combine the many order parameters in these materials to derive new functions. Precisely what other relationships might be lurking between these parameters is uncertain. Nevertheless, the authors’ demonstration of domain-pattern inversion resulting from the coupling of three order parameters is a big step forward in our understanding of complex coupling in multiferroic materials.

Nature 560, 435-436 (2018)
The possession of a soul, mind and body is considered humanity’s greatest gift. In the quest to attain good health, humans have been impelled to search for cures for many ailments. The act of physical healing constitutes one of the most important of human endeavors. Through the test of time, ethno-medicine has evolved into an intricate art that has been perpetuated by the passage of knowledge down through the generations within each ethnic group.

From existing evidence on drug usage by various ethnic tribes, it can be surmised that the inhabitants of the Indochinese peninsula developed their own systems of traditional medicine long before Sukhothai became a thriving capital of the region. Although no evidence has yet been found to substantiate such claims, a stone inscription dating back to the ancient Khmer Empire states that King Chaiworamon VII made merit in accordance with Buddhist beliefs by ordering the establishment of 102 Arokayasala (hospitals) in an area that is now north-eastern Thailand.

Policy support for the development of traditional and herbal medicine was launched in 1977 through the 4th National Economic and Social Development Plan. Knowledge of the therapeutic usage of these herbal medicinal products has become a valuable “heritage of local wisdom” which has been transferred from generation to generation within the family, village, society or even throughout the whole country. Many Thai people still believe in the efficacy of herbal remedies as well as traditional medicine practices. Usage of herbal medicinal products has increased remarkably, in line with the global trend of people returning to a belief in the benefits of natural therapy. As a country with a large natural resource of medicinal plants, Thailand is alert to this global trend. Many research institutes are now returning to the study of herbal medicinal products and the development of traditional medicine. Local knowledge is recognized as a valuable inheritance.
Study and research on potential medicinal plants, including extracting and purifying the active or principal components from plants for use as medicinal products, are conducted in many academic and governmental research institutions. New manufacturing technology has been applied to produce herbal medicinal products of higher efficacy and in more appropriate dosage forms. The government has emphasized the promotion of traditional and herbal medicine by integrating it into primary health care activities.

Formerly, all herbal traditional recipes were regulated as herbal or traditional medicines. Since then, there has been research and development utilizing modern technology to innovate patterns of consumption; these herbal products are now classified as modern herbal medicinal products. The period of 1994 to 2000 was designated the “Decade of Thai Traditional Medicine Development”, focusing on the promotion of studies, research and development of health-related products and health technologies, and increased capacity in producing traditional medicines and training in Thai traditional massage.

It could be said that, for almost a century, Thai traditional medicine had been a non-formal medicine system without substantial support and development from government. Only in the last decade did the Ministry of Public Health make substantial endeavors to develop the whole system of indigenous medicine. In 1993, the National Institute of Thai Traditional Medicine was established, and in 2002 it was reorganized under the Department of Thai Traditional and Alternative Medicine.
Theme VI: Extraction methods

Percolation is the most common procedure for the preparation of tinctures and fluid extracts. The percolator is a conical vessel with a top opening into which a perforated circular lid is placed, allowing the passage of liquid while subjecting the material beneath it to slight pressure. The bottom has an adjustable closure to allow passage of the fluid at a convenient rate. Before being placed in the percolator, the plant material is moistened with a proper amount of menstruum, placed in a sealed container and left to stand for approximately four hours. After that time the plant material is packed into the percolator so as to allow the even passage of fluid and complete contact with the plant material. The percolator is then filled with liquid and covered. The bottom outlet is opened until a regular dripping is obtained and then closed. More menstruum is added to cover all the material, and the closed percolator is left to stand for 24 hours. After this time it is allowed to drip slowly, and enough menstruum is added to collect a volume equal to three-quarters of the total volume required for the final product. The wet mass is pressed to extract the maximum residual fluid retained, the liquid is supplemented with sufficient menstruum to reach the proper proportion, and it is filtered or clarified by decantation.
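As a small worked example of the volume bookkeeping described above (the batch size and pressed volume are hypothetical):

```python
# Hypothetical worked example of percolation volume bookkeeping:
# collect 3/4 of the target volume as percolate, press the marc,
# then make up the remainder with fresh menstruum.
def percolation_volumes(target_ml, pressed_ml):
    """Return (percolate to collect, menstruum to add after pressing)."""
    percolate = 0.75 * target_ml          # three-quarters of final product
    make_up = target_ml - percolate - pressed_ml
    return percolate, make_up

percolate, make_up = percolation_volumes(target_ml=1000, pressed_ml=120)
print(percolate)  # 750.0 mL collected from the percolator
print(make_up)    # 130.0 mL fresh menstruum to reach 1000 mL
```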
Reducing the effects of carbon emissions on the environment is a concern that has grown during the past few decades from a niche problem into a top priority that is heavily debated among politicians. The idea of combatting climate change by reducing pollution was introduced in legislation by former President George H.W. Bush, according to Drew Landry, assistant professor of government at South Plains College. “H.W. Bush came up with this idea called ‘Cap and Trade,’” Landry told the Plainsman Press in a recent interview. “Businesses that pollute would be given a cap as to how much they could pollute in a year. If they were coming out under that cap, they would trade what they had left to businesses that needed to pollute more, hence the name. It wasn’t the best idea in the world. But it was a step forward.” According to Landry, the Republican party took issue with the Cap and Trade legislation. However, in the ’90s, the Democratic party picked up the idea of capping pollution and it became part of the 2000 campaign. Landry says that the political polarization of climate change comes from which organizations back political parties. “That’s where you get a lot of the polarizing aspects of it,” he said. “Big business has conducted its own tests on it and said, ‘No. Climate change is not human-caused. It’s not affected by oil.’ Whereas the Democrats would be funded by left groups like the Sierra Club and other environmental agencies.” According to Landry, at the time, Republicans were in favor of having clean air and clean water, but they had concerns that the job market would be affected by the change. However, Landry says that because the concerns of Republicans differ from those of the Republican party, they must be approached with the issue from a different perspective. “You talk to them about energy dependence and energy jobs,” Landry said.
“Then you have a different conversation and you can start to reach across aisles that you didn’t think you could.” In March, National Public Radio reported that Texas led the nation in wind energy as the fastest growing state in the industry, with wind turbine technician being a leading job title in the United States. Just south of Lubbock, in Scurry County, lies one of the state’s largest producers of wind turbines, according to Landry. These advancements toward clean energy in Texas have been possible through the Electric Reliability Council of Texas, known as ERCOT, a non-profit, membership-based council. ERCOT provides about 90 percent of the electric power in the state. ERCOT is also known as the Texas electric power grid. ERCOT’s wind energy produces more than 25 percent of the organization’s energy. According to Landry, Lubbock’s main power source, Lubbock Power and Light, or LP&L, hopes to work with ERCOT. He says that LP&L’s switch from the Southwest Power Pool to the Texas grid would make it easier for the city of Lubbock to begin installing wind turbines and solar panels. “There’s an interesting relationship between LP&L and the Lubbock City Council,” Landry said. “Their board has to have an approval from the City Council and the City Council has to approve their move to ERCOT. Then we will disconnect from the Southwest Power Pool. The Texas Public Utility Commission will have their vote in 2018.” In the summer, the United States pulled out of the Paris Climate Agreement under the administration of President Donald Trump. The agreement was a global coalition that was meant to curb carbon emissions that contribute to our changing climate. “He [President Trump] said it [removing the United States from the Paris Climate Agreement] would help out with jobs,” Landry said. “His idea was that the more regulation you have over businesses, the less that they would be able to function.
He would rather have a United States agreement.” According to Landry, President Trump and those in his party argue that the coal and oil industries will suffer from the switch to clean energy. He says that the research that goes into measuring the effects of climate change is questioned out of skepticism. “They criticize the findings and the methodology,” Landry said. “They criticize the whole basis of the science. With all of the stuff that is going on, I think that the Texas Public Utility Commission would have to allow for LP&L to join ERCOT.”
By Christina Larson

High on the Tibetan plateau stands a rustic observation station. Composed of two low sheds with corrugated steel roofs and one 90-foot tower, it is located in the no man's land known as Hohxil -- one of the world's last remaining wilderness areas. The tower's spindly silhouette is the only man-made structure in this swath of mountains. To reach its observation deck, you must climb up a side ladder, grasping tightly its cold metal rungs. (At night, the temperature sinks as low as minus 40 degrees F.) The view from the observation deck is one of uncanny stillness: for miles and miles, nothing seems to move, save for an occasional antelope darting across dry amber grasslands. This little outpost on the "roof of the world," as the Tibetan plateau is sometimes called, is usually unmanned. But for the last dozen years, for a few weeks each spring and autumn when weather permits, a small band of devoted volunteers and rugged Chinese scientists have trekked to the station to record the annual migration of the endangered Tibetan antelope. In recent years, they've taken on an additional mission: inviting experts to travel with them to collect data on the melting of the adjacent Himalayan glaciers, from which all the great rivers of Asia spring. As such research holds global significance, it's hardly an exaggeration to label this scrappy outpost a watchtower for the planet. The reason the wildlife station exists at all is somewhat of a miracle. It was not built by the government, but by an independent network of local Chinese and Tibetan environmentalists. Erected in 1997, in the midst of a citizen-led campaign to stop rampant poaching of the Tibetan antelope, it is an unusual monument to bottom-up initiative in China. Throughout the 1990s, a self-organized band of vigilante environmental watchdogs, calling themselves the "Wild Yak Brigade," had patrolled the Hohxil area against poachers.
The wildlife station bears the name, in memorial, of one Tibetan environmentalist shot by poachers in 1994. "If government cannot act first," a friend of his told me, "we believe someone must." Perhaps surprisingly, the station -- once a veritable rest stop in the wilderness for activists -- still stands. In many cases, the Chinese government snuffs out citizen activism, but in a few instances, it takes its cue from their efforts. Enhancing law enforcement (i.e., cracking down on illegal poaching) was a cause that Beijing found it could embrace, after the activists successfully thrust the animal's plight into the national conversation. The numbers of Tibetan antelope have rebounded slightly in recent years. And reflecting its status as a cause célèbre, the government selected it as one of five animal mascots of the Beijing 2008 Olympics. This under-appreciated interplay between Chinese environmentalists -- most of whom remain unknown in the West -- and the government will be one factor shaping the future course of the country's vast, looming environmental battles. And perhaps, the world's. Today, the observation station has a secondary use: as a temporary crash pad for glacier researchers. The vast ice sheets stretching across the greater Himalayas (which include the Tibetan plateau) sustain every major Asian river, from the Yangtze and Yellow Rivers in China; to the Ganges, Indus, and Brahmaputra in India; to the Mekong and Salween in Southeast Asia. Alas, any aura of timelessness is misleading; change is coming quickly. As the climate warms, the Tibetan plateau is heating up about three times faster than the global average. This is due in part to its high elevation and in part to the compounding effect of lost snow cover that would otherwise reflect back some sunlight.
The prominent Chinese glaciologist Yao Tandong estimates that the Himalayan glaciers could be two-thirds gone by 2060 -- jeopardizing the fresh water available downstream for more than a billion people. When I tried to visit the Tibetan Zhuanlong Temple, in northwestern Gansu province, no one answered my knock at the door. The lama had gone. I was told he had left to spend two weeks praying at the source of each of the nearby streams -- glacier-fed trickles down the mountains -- that in recent years had receded. For the first time in the living memory of any local villager, regular religious services were on hiatus. Melting ice had rendered their temple silent.

Christina Larson is a contributing editor at Foreign Policy magazine and a Schwartz fellow at the New America Foundation. Follow or message her on Twitter at @larsonchristina.
Canada’s native peoples protest chronic poverty and government attacks

31 December 2012

The last weeks of 2012 have seen protests by Canada’s impoverished native population snowball. As Attawapiskat First Nations Chief Theresa Spence enters her fourth week of a hunger strike aimed at drawing attention to the federal government’s failure to honor native treaty rights, aboriginal peoples across the country continue a series of road and rail blockades, demonstrations, and sympathy hunger strikes in what has become known as the “Idle No More” movement. This mobilization, although initially unassociated with Spence’s protest, has dovetailed with the Chief’s actions and served to highlight recent Conservative government actions aimed at opening up vast swathes of native land for capitalist resource-extraction projects.

Bill C-45, the Conservatives’ second omnibus 2012 budget bill, makes changes to the Indian Act and Navigable Waters Act that open the way for the de facto privatization of native lands and significantly reduce environmental regulation. The discovery of huge, new mineral deposits in the Canadian North and the drive to push through pipelines to transport Alberta tar sands oil to the U.S. and Asian markets have spurred the Conservative government of Stephen Harper to seek to level all perceived “roadblocks” to mega-project development. Bill C-45 removes most of Canada’s rivers and lakes from the list of federally protected waterways, thereby significantly weakening environmental oversight. It also reduces funding to native band councils, which often use such monies to commission costly environmental and health studies.
These measures are in line with Harper’s earlier insistence that environmental hearings on the Northern Gateway pipeline to the British Columbia coast be “fast-tracked.” Amendments to the Indian Act will also make it far easier for corporations to lease lands on native reserves, eliminating such requirements as the need to secure the support of a majority of a reserve’s residents, not just a few band councilors. The government’s corporate-friendly changes will generate a resource and land grab free-for-all that will enrich a few at the expense of the many. Spence has made the most moderate of demands—a meeting between treaty chiefs, Harper and (perhaps) Governor-General David Johnston to discuss issues of chronic poverty on native reserves and the federal government’s longstanding failure to fulfill its obligations under Treaty 9, including the provision of proper health care and education. Negotiated at the beginning of the Twentieth Century when Canada was experiencing an earlier resource boom, Treaty 9 covers Attawapiskat and other Northern Ontario “First Nations” (Indian) bands. Harper and Johnston have curtly refused Spence’s meeting request. Undaunted, the Attawapiskat Chief has vowed to carry through her hunger strike to the death, if necessary. “I am willing to do what I have to do,” she told reporters. “If I have to take my last breath, I will. But it’s not going to stop there. There is a message out there from the youth who say, if the chief doesn’t make it, they will still make more noise… My journey is going to continue.” However, over the past several days enormous pressure has been brought to bear on Spence to abandon her protest, by the corporate media, the political establishment and many of Spence’s fellow Chiefs. Spence’s hunger strike and the Idle No More movement that sprang up virtually simultaneously were initiated over the heads of the official native leadership organized in the Assembly of First Nations (AFN). 
That body, led by Shawn Atleo, has been increasingly criticized for its close relationship to the Harper government and its attempts to “partner” with big business. This past weekend, Percy Bellegarde, leader of the Federation of Saskatchewan Nations, echoed the sentiments of many in the AFN in calling for Spence to end her action. Canada’s mainstream media has also weighed in against any movement outside of the control of the AFN. An editorial in The Globe and Mail demanded an end to Spence’s hunger strike, stating that it smacked of “coercion” and went on to claim that Harper is a “friend” of Canada’s aboriginal peoples because in 2008 he issued a meaningless “apology” for the Canadian capitalist state’s abuse and neglect of the native peoples. A look at the recent history of Spence’s reserve sheds much light on Canadian capitalism’s treatment of the aboriginal peoples and the renewed corporate drive to exploit their lands. The Attawapiskat chief represents 2,800 Cree who live along the James and Hudson’s Bay coasts in Northern Ontario. The isolated reserve suffers from abysmal housing, including severe overcrowding and mold-contaminated buildings that lack running water and electricity; substandard, portable school structures, because a federally promised school was never built; sky-high prices and chronic unemployment; and massive sewage spills. United Nations health and education inspectors have twice compared conditions on the reserve to those in Third World countries. When in 2011, Spence declared a “state of emergency” on the reserve due to the lack of proper housing, the Harper government first denied there were any problems, then erroneously accused the native band of mismanagement and illegally stripped the elected chief and band council of management rights on their own reserve. This bureaucratic maneuver was eventually overturned by the courts. 
Just 90 kilometers from the main settlement lies a one-billion-dollar diamond mine operated by transnational mining conglomerate De Beers. Despite being on Attawapiskat land, all royalties are remitted to the Ontario government in Toronto. Fully 80 percent of the labour force is recruited from outside the region. In 2009, the Attawapiskat launched a temporary blockade of the winter road to the mine to protest their impoverishment in the shadow of De Beers’ riches. De Beers, which is being acquired by mining giant Anglo-American, saw its latest reported annual profits increase by 48 percent to $1.24 billion.

There are hundreds of Attawapiskats across the country. Life spans for native people fall far below the national average. Diseases such as tuberculosis are rampant in some communities. Education opportunities are deplorable—fewer than 50 percent of students on reserves graduate from high school. Almost half of all residences require urgent, major repairs. Boil water advisories are, on average, in effect at any given time on over a hundred of the 631 native reserves. Suicide rates are astronomical. In one reserve that was evacuated because of a contaminated water supply, 21 youth between the ages of 9 and 23 killed themselves in one month alone. Incarceration rates for aboriginals are nine times the national average. A native youth is more likely to go to prison than to get a high school diploma.

Poverty conditions are not restricted to those living on reserves. Natives in urban centres, which comprise about half of the one million overall population, have the country’s highest unemployment rates, second only to the rates for native reserves. Nationwide, 48 percent of natives are unemployed.

The burgeoning Idle No More movement, which is largely composed of native youth—reserve and urban—has launched a number of actions to protest Harper’s Bill C-45. Highways have been blocked in Northern Ontario and Alberta.
Demonstrations have been held in Vancouver, Yellowknife, Whitehorse, Edmonton, Saskatoon, Regina, Winnipeg, Toronto, Hamilton, Ottawa and Halifax. In Sarnia, Ontario, natives have blockaded a key railway line that transports propane shipments to and from the “Chemical Valley” which lies to the city’s south. An injunction requested by CN Rail against the blockade has been issued, but police have yet to remove the protestors. Last week, Canadian Propane Association CEO Jim Facette wrote to Sarnia Mayor Mike Bradley advising him to “take the necessary steps” to remove the blockade. “Sarnia is a key point of departure for the transport of propane to Ontario, Quebec and Atlantic Canada,” he wrote. Should the situation not be remedied, he continued, significant business revenue would be lost. The local Aamjiwnaang native band has insisted that no permits have been issued to allow transport of dangerous substances across native land.

As the Idle No More movement gains momentum, a series of opposition political leaders have presented themselves in front of the television cameras to ostensibly support Chief Spence’s demand for a high-level native summit with Harper and the Governor-General. New Democratic Party leader Tom Mulcair has appealed to Harper to meet with Spence so as to defuse the wider protests. But Mulcair’s transparent posturing belies his position on the key underlying factors motivating the protests: corporate rapaciousness, environmental degradation, widening poverty and the abrogation of treaty rights. He advocates a “pro-business, common sense solution” to energy development, including the tar sands, has dropped even the call the NDP made at the last election for a slight rollback of corporate tax cuts, and is touting Canada’s social democrats as the champions of fiscal responsibility. Prior to Christmas, Mulcair made it known that his party would be even more miserly with the budgetary purse strings than the Harper government.
“What’s a paradox,” he told the Canadian press, “is that these are essentially conservative themes that I’m evoking in the sense that it would be very conservative to say ‘Don’t look for a handout, be self-reliant, pull yourself up by your bootstraps’, that sort of stuff.”
http://www.wsws.org/en/articles/2012/12/31/cana-d31.html
from The American Heritage® Dictionary of the English Language, 4th Edition
- n. The language family containing the Eskimoan and Aleut languages.

from Wiktionary, Creative Commons Attribution/Share-Alike License
- proper n. A language family native to Greenland, the Canadian Arctic, Alaska, and parts of Siberia.

from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. the family of languages that includes Eskimo and Aleut

Regardless of politics though, the term Eskimo-Aleut remains the term of choice for linguists to refer to the language family with which Inuktitut, Yupik and Aleut are affiliated.

Speakers of Cherokee say no-qui-si. And in West Greenlandic, an Eskimo-Aleut tongue, the word is ulluriaq.

With the health of Eskimo-Aleut terrain now at stake, is anyone entitled to discourage its inhabitants (many already bilingual in Danish) from acquiring competence in English? Inuit, a language of the Eskimo-Aleut family that, via Danish, gave Global English the word kayak, now has fewer than fifty thousand speakers.

Plus, it helps that the underlying stop in the Proto-Steppe plural marker *-it is confidently word-final as both Uralic and Eskimo-Aleut show.

There's a proposed connection between Indo-European and Eskimo-Aleut other than Nostratic?

The Uralic dual in *k shared also with Eskimo-Aleut is not just a Nostraticist's romantic tale.

With the exception of the Eskimo-Aleut family that straddles the Bering Strait and Aleutian Islands, this is "the first successful demonstration of any connection between a New World language and an Old World language," Nichols said.

To add to the complication however, not all Eskimo-Aleut speaking peoples find the term Eskimo insulting at all.

We see in Uralic-Yukaghir, Chukchi-Kamchatkan and Eskimo-Aleut languages a shared theme of subjective-objective conjugation and again there are two different sets of endings that seem to be quite ancient e.g.
https://www.wordnik.com/words/Eskimo-Aleut
The South China Sea maritime dispute has evolved considerably over the past five decades. It was once a regional dispute over maritime claims left over from the past that did not trouble the main business of governments at the time. Accordingly, China has resorted to pressure tactics against the ASEAN claimants, Vietnam and the Philippines in particular, to recognise its claim. The need to define maritime borders was an accompaniment to the task of state formation in the postcolonial era. This was China's southern maritime frontier, an indeterminate area that was distant from the mainland and not part of the empire proper. In April 1935, the committee drafted a map of the South China Sea. The U-shaped line was supposed to be the median line between China and the coastal states, but the baselines used were unclear. The oil reserves of the South China Sea attracted attention in the late 1960s and early 1970s.

Title of host publication: The South China Sea Maritime Dispute: Political, legal and regional perspectives
Editors: Leszek Buszynski, Christopher B. Roberts
Place of Publication: Abingdon and New York
Publisher: Routledge Taylor & Francis Group
Publication status: Published - 2015
https://researchprofiles.anu.edu.au/en/publications/the-origins-and-development-of-the-south-china-sea-maritime-dispu
SpaceX is a company born out of an audacious dream: human colonization of Mars. Elon Musk, the PayPal co-founder and Tesla Motors CEO, believes that the permanence of the human race in this universe depends entirely on becoming multi-planetary as soon as possible. Analyzing the situation, Musk decided to take matters into his own hands. The cost of launching mass into space had not kept pace with the technological advancements seen in nearly every other endeavor. He set out to change that by creating a company laser-focused on cost reductions with respect to space launch services.

The industry leader in launch capability, ULA, a joint venture between Boeing and Lockheed Martin, has long profited from cost-plus contracts from the government. This arrangement provides little incentive to lower costs. The payload is often government reconnaissance satellites costing billions of dollars. This meant that whether the launch cost $100 million or $300 million, the total cost to deliver the intelligence capability didn’t change drastically.

Musk made the observation, “Obviously the lowest cost you can make anything for is the spot value of the material constituents. And that’s if you had a magic wand and could rearrange the atoms. So there’s just a question of how efficient you can be about getting the atoms from raw material state to rocket shape.” The raw material costs of a rocket were by his own estimate only 2% of the cost of rocket manufacturing. To this end, the engineering design and manufacturing process was determined to align with this strategic business objective: lowest cost of manufacturing. At SpaceX careful thought was given to how to make each design decision as cost efficient as possible. Instead of opting to use proven heritage 1960s designs and technologies, SpaceX decided to design a rocket from a completely blank slate with the objective of lowering the cost of manufacturing. Another big piece of lowering the cost came from vertical integration.
Over 70% of each Falcon 9 is manufactured or assembled on site in Hawthorne, California. This gives the company a huge advantage over competitors that are at the mercy of single-sourced rocket assemblies from 3rd parties. It also gives the company an advantage when it comes to quality, cost, and schedule control.

Musk and SpaceX operate with the philosophy that simplicity drives down cost and improves reliability. To drive home this point Musk asks whether a Ferrari is more reliable than a Camry or Corolla. SpaceX pays more than lip service to this idea. Simplicity is king. First, every Falcon 9 rocket is designed exactly the same no matter the payload or final destination. The mission could be delivery of a telecom satellite to LEO (low earth orbit) or a military satellite to GEO (geosynchronous orbit); regardless, the manufacturing is the same. Second, the fuel tanks for both the first and second stages are identical, varying only in length. The first and second stage rockets are nearly identical, with only minor modifications to optimize for the vacuum conditions the second stage will experience. Giving up perfectly tailored specifications for general-purpose and uniform manufacturing has enabled SpaceX to go from raw materials to rocket in a very cost-efficient manner.

The key to low cost manufacturing for SpaceX is standard components and modular design. The modular design is what allows the Falcon 9 rocket to also serve as the workhorse for the Falcon Heavy rocket. The Falcon Heavy rocket is built from 3 Falcon 9 rockets, which means that there are 27 engines lifting a payload as heavy as a fully loaded passenger jet (117,000 lbs) into orbit. This will deliver two times the payload for 1/3 the cost of the current launch offerings.

Imagine how expensive air travel would be if after one flight of a Boeing 777 the plane could never be used again and was only worth its scrap value. That is the current state of affairs in the rocket launch industry.
After delivering their payloads, the rockets fall to the earth. Even if the rockets or their boosters are recovered from the ocean, the effect of salt water means that they are worth a minuscule fraction of their pre-launch value. SpaceX is vigorously pioneering technology that will enable rockets to come back to earth and land themselves. This means that they can be refueled, loaded with new cargo, and sent on their way back to space. The variable costs would then include inspection, light maintenance, and refueling. This could bring the cost of launching satellites down by a factor of 10 or more. The company has made multiple attempts at landing their rockets, and they have come very close on two occasions. The current strategy is to land the rockets on floating barges built exclusively for this purpose. The latest news is that the company is seeking FAA approval to try to land their next rocket on land near Cape Canaveral.

SpaceX was able to reduce the cost of manufacturing a new rocket by 75% relative to the competition. By aligning the design, engineering, and manufacturing processes, SpaceX is able to undercut all other launch providers. The company is generating massive value for commercial customers. The company is poised to really take off when rocket re-usability becomes a routine operation.

Sources:
- Air&Space – http://www.airspacemag.com/space/is-spacex-changing-the-rocket-equation-132285884/?no-ist
- SpaceX – http://www.spacex.com/news/2013/09/24/production-spacex
- KahnAcademy Interview – https://www.khanacademy.org/talks-and-interviews/khan-academy-living-room-chats/v/elon-musk
- Parabolic Arc – http://www.parabolicarc.com/2015/03/26/usaf-phase-subsidy-ula/
- Space.com – http://www.space.com/21386-spacex-reusable-rockets-cost.html
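The reusability economics described above can be sketched with a toy cost model. All numbers below (build cost, refurbishment cost, flight count) are illustrative assumptions for the sake of the example, not SpaceX figures:

```python
def cost_per_launch(build_cost, refurb_cost, flights):
    """Average cost of one launch when the same booster flies `flights` times.

    build_cost: cost to manufacture the rocket once
    refurb_cost: inspection, light maintenance, and refueling per reflight
    flights: total flights the booster makes before retirement
    """
    if flights < 1:
        raise ValueError("a booster must fly at least once")
    # One build plus (flights - 1) refurbishments, amortized over all flights.
    return (build_cost + refurb_cost * (flights - 1)) / flights

# Hypothetical numbers: a $60M rocket, $3M per refurbishment.
expendable = cost_per_launch(60e6, 3e6, 1)
reused_10 = cost_per_launch(60e6, 3e6, 10)

print(f"Expendable: ${expendable / 1e6:.1f}M per launch")
print(f"Flown 10x:  ${reused_10 / 1e6:.1f}M per launch")
print(f"Cost reduction: {expendable / reused_10:.1f}x")
```

Under these assumptions a booster flown ten times cuts the average launch cost by roughly a factor of seven; reaching a factor of 10 or more would require cheaper refurbishment or more flights per booster.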
https://digital.hbs.edu/platform-rctom/submission/spacex-bringing-the-costs-of-space-back-down-to-earth/
As of October 2006, there were approximately 535 citations to the seminal 1977 paper of Misra and Sudarshan which pointed out the quantum Zeno paradox (more often called the quantum Zeno effect). In simple terms, the quantum Zeno effect refers to a slowing down of the evolution of a quantum state in the limit that the state is observed continuously. There has been much disagreement as to how the quantum Zeno effect should be defined and as to whether it is really a paradox, requiring new physics, or merely a consequence of "ordinary" quantum mechanics. The experiment of Itano, Heinzen, Bollinger, and Wineland, published in 1990, has been cited around 347 times and seems to be the one most often called a demonstration of the quantum Zeno effect. Given that there is disagreement as to what the quantum Zeno effect is, there naturally is disagreement as to whether that experiment demonstrated the quantum Zeno effect. Some differing perspectives regarding the quantum Zeno effect and what would constitute an experimental demonstration are discussed.

Proc. of the Sudarshan Symposium, Journal of Physics: Conference Series

Perspectives on the quantum Zeno paradox, Proc. of the Sudarshan Symposium, Journal of Physics: Conference Series, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=50483 (Accessed December 7, 2023)
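The "slowing down" of evolution under continuous observation can be made quantitative. The following is a standard textbook sketch of the short-time argument, not the formulation of the papers discussed above:

```latex
% Short-time survival probability of a state |psi> evolving under H
% (requires amsmath for \xrightarrow):
%   P(t) = |\langle\psi| e^{-iHt/\hbar} |\psi\rangle|^2
%        \approx 1 - (t/\tau_Z)^2,
% where the "Zeno time" \tau_Z is set by the energy variance of H in |psi>.
% After n equally spaced projective measurements within a total time t:
\[
P_n(t) \approx \left[ 1 - \left( \frac{t}{n\,\tau_Z} \right)^{2} \right]^{n}
\xrightarrow[n \to \infty]{} 1 .
\]
% In the limit of continuous observation (n -> infinity) the survival
% probability tends to one: the evolution is frozen.
```

The quadratic (rather than linear) short-time decay is what makes frequent measurement suppress the evolution; much of the debate noted in the abstract concerns how this idealized limit relates to real measurements.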
https://www.nist.gov/publications/perspectives-quantum-zeno-paradox
Losing weight and maintaining your shape is a common concern for both men and women alike in this day and age. A lot of people have become health conscious because of the situation around them. They see that not eating healthily could make their body weight spiral out of control. But fear not! Here are a few simple tips to maintain your body weight and to stay healthy.

Eat less as the day goes on: Dieting is extremely important, but that’s not the only thing that matters. This applies to everyone, even those who are not dieting but are conscious about their weight. Eating less as the day goes on is important. You should not miss breakfast as it is the most important meal of the day. You should eat healthily for lunch, but know your limits; it should be less than the amount you ate for breakfast, and don’t forget to eat a light dinner.

Sleep well: That’s right. Sleeping is extremely important. Your body rests and recovers from the events of the day. A lack of sleep affects you physically and can slow your performance during the day. Getting 6-8 hours of sleep a night should be enough to refresh you.

Yoga: Although it might seem that yoga is only for spiritual practice, it is more than just that. It is a very healthy way to stay fit as it stretches muscles and ligaments and tones your body. What’s more, it is relaxing and clears your mind. So do try it out!

Drink more water: Water is necessary for good health. It removes all the unwanted toxins in the body, keeps you hydrated, it reduces hunger (yes it does), and it also keeps the skin smooth and makes you look younger. One should drink a minimum of 64 ounces a day and more if possible.

Sugar: Watch the sugar. People often believe that diet soda, fruit juices and low-fat foods contain less sugar, but they frequently do not. It is a good idea to check the sugar content of everything before you consume it.

Whole wheat: Whole wheat is good for you.
Whole wheat has more fiber and it helps protect against many different ailments, such as cancer, heart disease, strokes, and diabetes. So try to swap your white bread for whole wheat bread; it tastes almost the same, you won’t even know the difference.

Ride a bike: Bicycling is an excellent and fun way to exercise. Not only does it strengthen your cardiovascular system, but it is also enjoyable, as you get to enjoy the goodness of nature and fresh air.

Do a few squats every now and then: Not only are squats excellent for your hamstrings and calves, they also maintain the shape of your legs. Although they might be a little difficult to start off with, you will definitely enjoy the benefits.

Dance away: Dancing is another fun way to stay in shape. It does not matter whether the dance style is slow or fast, what matters is that you are moving your body.
https://breathinghappy.com/6-ways-to-tune-up/
Mathematics is a creative, unique and highly inter-connected discipline that has been developed over centuries, providing the solution to some of history’s most intriguing problems. It is essential to everyday life, critical to science, technology and engineering, and necessary for financial literacy and most forms of employment. A high-quality Mathematics education therefore provides a foundation for understanding the world, the ability to reason mathematically, an appreciation of the beauty and power of Mathematics and a sense of enjoyment and curiosity about the subject.

Using the Programmes of Study from the New National Curriculum 2014, it is our aim to develop:
- a positive and resilient attitude towards mathematics and an awareness of the fascination of mathematics
- competence and confidence in mathematical knowledge, concepts and skills
- an ability to solve problems, to reason, to think logically and to work systematically and accurately
- initiative and an ability to work both independently and in cooperation with others
- an ability to communicate Mathematics
- an ability to use and apply Mathematics across the curriculum and in real life
- an understanding of Mathematics through a process of enquiry and experiment

Decisions about when a child will progress should always be based on the mastery of pupils’ understanding and their readiness to progress through small steps. Pupils who grasp concepts rapidly should be challenged through being offered rich and sophisticated problems. Those who are not sufficiently fluent with earlier material should consolidate their understanding, including through additional practice, before moving on. The national curriculum for Mathematics reflects the importance of spoken language in pupils’ development across the whole curriculum – cognitively, socially and linguistically.
The quality and variety of language that pupils hear and speak are key factors in developing their mathematical vocabulary and presenting a mathematical justification, argument or proof. They must be assisted in making their thinking clear to themselves as well as others, and teachers should ensure that pupils build secure foundations by using discussion to probe and remedy their misconceptions.

Knowledge, skills and understanding

Through unique planning and preparation we aim to ensure that throughout the school children are given opportunities for:
- practical activities and mathematical games
- problem solving
- individual, group and whole class discussions and activities
- open and closed tasks
- a range of manipulatives to engage all learners
- progression through concrete, pictorial and abstract representations
- working with computers as a mathematical tool

We are currently using the White Rose Maths scheme of work, which is composed of reasoning and varied fluency. This develops the children’s understanding of number from Reception and builds on different concepts through school. It is based around the National Curriculum 2014 and set into blocks throughout the year. Children build on prior knowledge in each block, developing a mastery understanding of Mathematics. Staff have undertaken weeks of training to support their planning and ensure children have access to different Mathematical strategies. Each class teacher is responsible for the Mathematics in their class in consultation with, and with guidance from, the Mathematics subject leader. This has also been supported by the White Rose Maths team through weekly training. The subject leader regularly attends half-day updates from the Lancashire Maths team, the information from which is passed on to staff during staff meetings.
The approach to the teaching of Mathematics within the school is based on four principles:
- a Mathematics lesson every day
- a clear focus on direct, instructional teaching and interactive oral work with the whole class and group
- an emphasis on reasoning and varied fluency
- Y1 – Y6 organise a daily lesson of between 45 and 60 minutes for Mathematics

Lessons are planned using a common planning format and monitored by the Mathematics subject leader. Teachers in the Foundation Stage (Reception) base their teaching on “Development Matters” to ensure that the children are working towards the “Early Learning Goals for Mathematical Development”. Additional documentation provided by the Maths and Early Years Co-ordinators is also used. The Foundation Stage teacher delivers whole class teaching and adult-led focus Maths activities together with the teaching assistant each day. The children also access a range of Maths activities within continuous provision. Towards the end of the Reception Year, teachers aim to draw together the elements of a daily mathematics lesson. The readiness of the cohort is taken into consideration when preparing for transition into Year 1.

Special Educational Needs

Children with SEN are taught within the daily Mathematics lesson and are encouraged to take part when and where possible (please see the section on differentiation). Where applicable children’s IPPs incorporate suitable objectives from the New Curriculum and teachers keep these objectives in mind when planning work. Within the daily Mathematics lesson teachers not only provide activities to support children who find Mathematics difficult but also activities that provide appropriate challenges for children who are high achievers in Mathematics. We incorporate Mathematics into a wide range of cross-curricular subjects and seek to take advantage of multi-cultural aspects of Mathematics.
In the daily Mathematics lesson we support children with English as an additional language in a variety of ways, e.g. repeating instructions, speaking clearly, emphasising key words, using picture cues, playing Mathematical games, and encouraging children to join in counting, chanting, finger games and rhymes.

It is recognised by the school that high quality next steps marking of maths is an essential tool to enhance children’s learning. Marking should be both diagnostic and summative, and school policy holds that it is best done through conversation with the child but acknowledges that constraints of time do not always allow this. All teachers employ a policy of next steps marking regularly in each child’s book at an appropriate level for the child’s understanding. For younger children this will more often be in the form of verbal feedback. In the older year groups children are expected to respond to the marking themselves. Also see the School marking policy.

From Year 1 to Year 6 the children are assessed on a half-termly basis and at the end of each block in the scheme, and are then given an age-related expectation from teacher assessment. The results of these are recorded on a spreadsheet for all staff to view and for the Senior Leadership Team to monitor. Regular arithmetic and times table tests are carried out and results recorded by individual teachers.

Reporting to parents

Parents are given the opportunity to discuss their child’s progress on two separate occasions throughout the year. Written reports are distributed at the end of the summer term. Teachers use the information gathered from their half-termly assessments to help them comment on the progress of individual children. Sessions are held occasionally to inform parents about how to enhance their child’s learning in Maths and to inform them of some of the alternative methods of calculation.
During the course of the year, Foundation Stage parents are invited to attend a short, weekly "Working Together" session which offers the opportunity to play and work informally alongside their child in school and talk to staff. Parents also have the opportunity to attend Numeracy Workshops. Year 2 and Year 6 parents have access to SATs workshops in the Autumn Term as well as a SATs meeting with class teachers in the Spring Term.

Children are placed in groups of similar ability for mathematics lessons. There is flexibility within these groups, so that a child may be moved to another group if their performance suggests that it would be beneficial for them. The majority of mathematics lessons in KS1 and KS2 will be differentiated at three levels. Usually there will be a common theme, with tasks being set at an appropriate level for each ability group. Some groups will be supported by the teacher or a teaching assistant while others work independently. Practical resources are provided for all abilities to enhance the learning and deepen discussion. Some lessons provide open-ended tasks where differentiation will be by outcome. Some lessons may be planned to allow children to work in mixed-ability groups, allowing higher-ability children to consolidate their learning by discussing with and teaching children of lower abilities.

Monitoring and Evaluation

The Maths subject leader follows an annual action plan which has been prepared in line with the whole-school development plan. The Mathematics subject leader is released regularly from the classroom in order to monitor standards of planning and teaching and to carry out scrutinies of children's work. Support is given, if necessary, to ensure all staff are adhering to the agreed written calculations policy and planning format. Findings from any monitoring are discussed initially with the Senior Leadership Team and are also shared with teaching staff as appropriate.
Practical resources to support learning are stored in individual classrooms, where they are easily accessible to all children. These are used on a regular basis to ensure a solid understanding of the fundamentals of Mathematics. Additional resources are stored centrally in the Mathematics resources cupboards. Each classroom has a maths 'working wall' showing examples of the topic currently being covered, and has an interchangeable display of mathematical symbols, numbers, times tables and vocabulary appropriate to the age range. This is used to support children's understanding of concepts and shows children correct layout formats.

Homework, either written or on-line, is given out on a regular basis in Y1–Y6 by each class teacher, and parents are encouraged to be involved in their child's learning. In the Foundation Stage, a "Newsletter" sheet is sent home fortnightly informing parents of the learning that is taking place.

The Governing Body

A governor responsible for Mathematics is identified from the governing body, currently Mrs A.M. Allonby, who regularly observes maths lessons throughout the school and has regular meetings with the Subject Leader. Governors are invited to attend any Maths workshops or training days. The Subject Leader regularly reports to meetings of the curriculum committee of the governing body.

Updated: February 2019
Subject Lead: Mrs V Holland
For the sake of the present discussion, perhaps it is sufficient to define ideas as stopping places in the stream of consciousness, definite forms in the flux of human experience. The flux can be frozen into ideas that take on different shapes, which can then be exchanged, processed, consumed, played with, applied, imported and exported. We can create formulas and constructions and form these into arguments, which can be channeled back into the current of life. We can never really experience another's experience, but we can think another's thoughts. But what does the circulation of ideas signify? And what role do journals play in the process?

The circulation of ideas

The relationship between ideas and practice, and between ideas and interests, is always under- or overdetermined; there is no one-to-one correspondence. The quality and kind of results an idea might engender can never be predicted. Opinion is divided: on the one hand, there is a position that ideas are "just ideas", mere toys for thought, abstractions, fictions which are good for playing with and for combining with each other, but the real causes of important events exist in the material and willed world, in the sphere of economic and power relations. This implies an aesthetic attitude that deems the excitement and imaginative appeal of ideas to be more important than their veracity or practical application. On the other hand, there is the view once shared, among others, by Marxists: ideas are merely vested interests or after-the-fact rationalizations in disguise – and, as such, used for justifying actions.

Then there is an outlook that recognizes the originative priority of ideas. It is summarized in John Maynard Keynes' famous passage: "The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else.
Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas."

(This is a reworked version of Märt Väljataga's contribution to a debate with Marc-Olivier Padis on "New ideas in Europe".)

Looking back into the history of ideas, we can see how the most general concepts (pragmatism, idealism, positivism, essentialism, relativism and so on) can, in different times and different environments, inspire entirely opposing political attitudes and movements. Ideas do not alter in time alone – so that the thoughts of long defunct thinkers blossom later in the activities of men of action – but also in space. And in both cases, interesting metamorphoses may occur. Usually, ideas start out as particular responses to particular challenges, but they may later take on a life of their own, travelling across borders and mutating into something else. An idea in one culture may be "just an idea", a toy for thought, but, elsewhere, becomes the most literal guide to action.

It happened in nineteenth-century Russia: ideas about transforming society, which in the West were often considered theoretical, playful or frivolous thought experiments, were transformed in a new environment into practical programmes and, on new soil, yielded a fruit quite contrary to initial expectations. A recent example: a couple of years ago, British journalist Peter Pomerantsev quoted a Russian blogger who had noted that "the number of references to Derrida in political discourse is growing beyond all reasonable bounds. At a recent conference the Duma deputy Ivanov quoted Derrida three times and Lacan twice."
Pomerantsev remarked: "In an echo of socialism's fate in the early twentieth century, Russia has adopted a fashionable, supposedly liberational Western intellectual movement and transformed it into an instrument of oppression."

But there are reverse examples of how the most pressing political events and concerns may attract interest abroad, but merely as interesting and abstract ideas. German intellectuals of Kant's era looked at the French Revolution with a kind of aesthetic detachment, taking far less notice of the spilt blood than the sublimity of the ideas expressed in slogans. And today, there is an extreme divergence in the reception of the eurocrisis. A painful reality for many countries, for the Germans it is still just a distant noise from beyond the borders, just an idea.

Finally, there are constant and justified complaints about how the connection between ideas and intentions on the one hand and the results on the other has broken down in today's complex world, where unintended consequences threaten to become the norm.

Just as the strength and the character of the impact of ideas remains unclear and unpredictable, the way in which they are circulated – both transnationally and within national borders – remains mysterious. The exchange of ideas still takes place to a significant extent within national cultural spaces. And just as the world economy is a system of inequalities, the same is true of the market for ideas, in which exporting and importing nations have a role to play too. The current account of peripheral nations is in permanent deficit: we import more ideas than we export. Often the conditions of this exchange are also unjust.
An example from the area in which I have been involved as an academic: the most important ideas of the twentieth century in literary theory, including formalism, structuralism, dialogism and the phenomenology of reading, were developed in eastern and central Europe (including the German-speaking countries); upon migrating to the English-speaking world, they were often transformed into travesties, which are now re-imported into the East as if they were luxury goods.

The relationship between supply and demand is complex. Ideas are imported to meet the local demand, but the demand may also be created by the supply. The latter is especially the case in the academic sphere, where abstract imported ideas desperately seek local applications.

The communication of ideas

International currents of communication have only just begun to be studied. These studies were referred to by the late Bernhard Peters at the European Meeting of Cultural Journals in 2004 in Tallinn. He stated: "Cultural exchange, flows of ideas and arguments, flows of books, magazine articles, newspaper pieces as well as newspaper reports, references in articles and so on are markedly more dense between many European countries and North America, or more specifically with the United States, than are flows between many European countries."

And these currents are asymmetrical too, as Perry Anderson has noted: "Despite much European investment in the United States, there is scarcely any evidence of reciprocal influence at all." This may also explain the famous quip by Timothy Garton Ash: "If I want to reach the widest European intellectual audience, the best way is to write an essay in The New York Review of Books." The observations of Ash, Anderson, and Peters belong to the past decade, but it is hard to believe that the transatlantic currents of communication have changed much.
True, there seems to be a certain European isolationism in dealing with the financial crisis, and the United States is globally not as powerful as it was a decade ago. This relates to one of the points that Marc-Olivier mentioned with reference to Europe: the shift in cultural foci. Globalization means, among other things, the decline of the relative importance of the West (Europe and the USA) and the rise of other regions.

How is this trend felt in Estonia? Just one personal memory: in the early 1980s, university students attracted to China and India mostly had philological interests confined to the ancient societies of China and India – to Buddhist texts, Taoism and ancient poetry. Today both places have immediate material and economic relevance on a global scale. But still, as immigration from these parts of the world to Estonia and Estonian economic relations with countries beyond EU borders are both rather limited, the rise of Asia is manifested first and foremost in certain lifestyle phenomena (from cuisine to New Age trends).

How do the other topics discussed by Marc-Olivier – liquid modernity, the change of spatial orientations, the dilemmas of open society – manifest themselves in Estonia? Perhaps also in a rather indirect manner: probably as a general sense of insecurity and vulnerability, as a worry about emigration, demographic trends and national inheritance – concerns that politicians have few scruples about manipulating. So some of these topics surface as ideas (the rise of new regions, the emergence of megacities) and some remain emotional undercurrents that we have yet to articulate (liquid modernity, new identities).

But is there any role for cultural journals in the international and national exchange of ideas? It is a painful question.
In his important essay "Kidnapped West" (1983), Milan Kundera described how he tried to communicate the loss felt in Prague after the Soviet invasion: "I arrived in France and tried to explain to French friends the massacre of culture that had taken place after the invasion: 'Try to imagine! All of the literary and cultural reviews were liquidated! Every one without exception!' Then my friends would look at me indulgently with an embarrassment that I understood only later. [...] If all the reviews in France or England disappeared, no one would notice it, not even their editors."

It is a poetic exaggeration, of course. At least the editors would notice. I can recall several issues of Vikerkaar devoted to the problems mentioned by Padis (emerging nations: 4-5/2010; urban studies: 4-5/2004; secularization: 1-2/2008; intellectual property and new technologies: inter alia, 10-11/2011). Vikerkaar's membership in the Eurozine network has delivered some of the thoughts expressed in our pages to an international readership: Tõnis Saarts on the causes of the 2007 riots in Tallinn; Tiit Hennoste on the transformations of the Estonian media landscape; and Rein Müllerson on the spread of democracy.

But still, the impact of cultural reviews with limited circulation on public opinion and the political agenda remains doubtful, to say the least, although they may have a long-term influence on the development of culture in the widest sense. But as Bernhard Peters emphasized in his Tallinn speech, this "trickle-down" effect is hard to measure empirically. In the last quarter of a century, attitudes to gender and family issues, to our bodies and descendants, to nature and to minorities have changed considerably in both the East and the West. It cannot be ruled out that the roots of some of these developments reach back to debates that took place in the distant past and were reported upon in obscure cultural reviews.
We know from history how post-communist Polish foreign policy was successfully devised during the 1970s in the pages of the émigré journal Kultura in Paris. We know also about the role that the Estonian literary magazine Looming played in the late 1980s in formulating the current official doctrine of legal continuity and citizenship.

I would like to point out at least three ideas or notions that originated in the Estonian public sphere and, specifically, in the pages of Vikerkaar. These concern respectively the past, the present and the future of Estonian society. In 2003, my colleague Marek Tamm (re-)introduced the Nietzschean idea of monumental history into Estonian historiographical debates, which enabled the recognition of the extent to which choices made in history-writing were determined by current concerns. Second, in 2008, Tõnis Saarts diagnosed the mainstream of Estonian politics of the previous decades as "ethnic defense democracy". This formula summarizes the way in which Estonian political choices were constrained by petty ethnic concerns and fear-mongering, resulting in the "securitization" of political discourse and a lack of bold visions and magnanimity – the consequences of which could be self-defeating. The remedy here could be the Habermasian idea of "constitutional patriotism" proposed in the Estonian context by the legal scholar Lauri Mälksoo, which would switch the focus of national identification from ethnic culture to political institutions and the rule of law.

A heightened historical self-reflexivity (Tamm), an accurate diagnosis of present ills (Saarts), and a proposal for the future made in good faith (Mälksoo) – all of which is not so insignificant. At least nobody can say that the cultural reviews have not tried.

- Vincent Descombes, Modern French Philosophy, Cambridge University Press, 1980, 7.
- Peter Pomerantsev, "Putin's Rasputin", London Review of Books 33, no.
20 (20 October 2011): 3-6, www.lrb.co.uk/v33/n20/peter-pomerantsev/putins-rasputin.
- Bernhard Peters, "Ach Europa", Eurozine, 21 June 2004, www.eurozine.com/articles/2004-06-21-peters-en.html.
- Perry Anderson, "Force and consent", New Left Review 17 (September-October 2002), newleftreview.org/II/17/perry-anderson-force-and-consent.
- Timothy Garton Ash, "The European Orchestra", The New York Review of Books, 17 May 2001, www.nybooks.com/articles/archives/2001/may/17/the-european-orchestra/.
- The article first appeared in English as "The Tragedy of Central Europe", The New York Review of Books, 26 April 1984.
- Tõnis Saarts, "The Bronze Nights", Eurozine, 10 October 2008, www.eurozine.com/articles/2008-10-10-saarts-en.html.
- Tiit Hennoste, "From spring to autumn", Eurozine, 13 November 2009, www.eurozine.com/articles/2009-11-13-hennoste-en.html.
- Rein Müllerson, "From democratic peace theory to forcible regime change", Eurozine, 22 August 2012, www.eurozine.com/articles/2012-08-22-mullerson-en.html; ibid., "Liberté, égalité and fraternité in a post-communist and globalised world", Eurozine, 29 September 2010, www.eurozine.com/articles/2010-09-29-mullerson-en.html; and ibid., "Crouching tiger hidden dragon: Which will it be?", Eurozine, 29 April 2010, www.eurozine.com/articles/2010-04-29-mullerson-en.html.

Original in Estonian
First published in Vikerkaar 3/2013 (Estonian version); Eurozine (English version)
Contributed by Vikerkaar
© Märt Väljataga / Vikerkaar
Proton beam therapy – a more precise form of radiotherapy – appears to be as safe as conventional radiotherapy for treating the childhood brain cancer medulloblastoma, with similar survival rates, according to new research published in The Lancet Oncology journal today. Importantly, the findings suggest that proton radiotherapy may not be as toxic to the rest of a child's body as conventional radiotherapy. The study was led by Dr Torunn Yock, Massachusetts General Hospital, Proton Center, Boston, MA, USA, and colleagues.

Medulloblastoma is the most common malignant brain cancer in children, and develops at the rear and base of the brain, near the bottom of the skull. Medulloblastomas are rapidly growing tumours that, unlike most brain tumours, spread through the cerebrospinal fluid to different locations along the surface of the brain and spinal cord. Conventional treatment usually involves surgery to remove the tumour, photon radiotherapy and chemotherapy. However, patients are often left with significant side effects, including hearing loss (which can severely impact a young child's learning and language development), effects on cognition and hormone function, as well as toxic effects on the heart, lungs, thyroid, vertebrae and reproductive organs as a result of healthy bodily tissues being exposed to radiation. Typically, the younger the patient is at the time of treatment, the worse the long-term effects are.

Compared with traditional radiotherapy, proton beam therapy is highly targeted and is used to treat hard-to-reach cancers, with a lower risk of damaging the surrounding tissue and causing side effects. Proton beam therapy entered the news headlines in 2014, especially in the UK and Europe, when UK parents Brett and Naghmeh King took their son Ashya from Southampton General Hospital, UK, without doctors' permission so that he could be treated with proton beam therapy in Prague in the Czech Republic.
At the time, proton beam therapy was not available on the UK National Health Service (NHS), although the NHS later agreed to fund his treatment. Two UK centres for proton beam therapy are currently being planned (Manchester and London), which are due to open in 2018.

In this new study, a total of 59 patients aged 3 to 21 were enrolled between 2003 and 2009. Most patients (55) had the tumour partially or completely removed through surgery. All 59 patients received chemotherapy as well as proton beam therapy. On average, patients were followed up for 7 years.

At 3 years after treatment, 12% of patients had serious hearing loss. This rose to 16% at 5 years. Patients also displayed problems with processing speed and verbal comprehension, but perceptual reasoning and working memory were not significantly affected. At 5 years, over half (55%) had problems with the neuroendocrine system, which regulates hormones – with growth hormone being the most commonly affected. However, the study reported no cardiac, pulmonary, or gastrointestinal toxic effects, which are common in patients treated with photon radiotherapy. At 3 years after treatment, progression-free survival was 83%; at 5 years, it was 80%.

The authors say: "Our findings suggest that proton radiotherapy seems to result in an acceptable degree of toxicity and had similar survival outcomes to those achieved with photon-based radiotherapy. Although there remain some effects of treatment on hearing, endocrine, and neurocognitive outcomes – particularly in younger patients – other late effects common in photon-treated patients, such as cardiac, pulmonary, and gastrointestinal toxic effects, were absent."

They conclude: "Proton radiotherapy resulted in acceptable toxicity and had similar survival outcomes to those noted with conventional radiotherapy, suggesting that the use of the treatment may be an alternative to photon-based treatments."
Writing in a linked Comment, Dr David R Grosshans, Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA, says: "I believe that radiation oncologists have always understood that our treatments are associated with the potential for severe adverse effects. I also believe that many in radiation oncology embrace new technology, not simply to have the latest and greatest innovations, but rather to reduce the effect of radiation therapy on patients' quality of life. Nowhere in oncology is this more important than for paediatric cancers."

He concludes: "This study sets a new benchmark for the treatment of paediatric medulloblastoma and alludes to the clinical benefits of advanced radiation therapies."

NOTES TO EDITORS: Study funded by the US National Cancer Institute and Massachusetts General Hospital.
Hemochromatosis (HH) is a disease that results from excessive amounts of iron in the body (iron overload). Hereditary (genetic) hemochromatosis (HHC) is an inherited disorder of abnormal iron metabolism. Individuals with hereditary hemochromatosis absorb too much dietary iron, and once iron is absorbed, the body has no efficient way of excreting the excess. Over time, these excesses build to a condition of iron overload, which is toxic to cells. Glands and organs burdened with excess iron, including the liver, heart, pituitary, thyroid, pancreas, synovium (joints) and bone marrow, cannot function properly. Symptoms develop and the disease progresses.

There are several types of genetic hemochromatosis. These include: Type I or Classic (HHC); Type II a, b or Juvenile (JHC); Type III or Transferrin Receptor Mutation; and Type IV or Ferroportin Mutation.
Pool Service Glossary

DIATOMACEOUS EARTH: The filtering medium of the DE filter, this dry powder is the fossilized remains of ancient plankton called diatoms.

DIATOMACEOUS EARTH FILTER: A filter tank containing fabric-covered grids which hold the DE powder up against the flow of the water.

DIVERTER VALVE: Used in a twin-port skimmer, a diverter allows the operator to manipulate the amount of flow from the main drain and skimmer to the pump.

DRAIN: Also called the main drain, this plumbing fitting is the start of one suction line to the pump and is usually situated at or near the center bottom of the pool.

FILTER: A device used to remove particles suspended in the water by pumping water through a porous substance or material.

FILTER ELEMENT: A device inside a filter tank designed to entrap solids and direct water through a manifold system to exit the filter. Cartridge filter elements and DE filter grids are two examples.

FILTER MEDIUM: A finely graded material, such as sand, diatomaceous earth, polyester fabric or anthracite coal, that removes suspended particles from water passing through it.

FILTER PUMP: The device that pulls water from the pool and pushes it through the filter on its way back to the pool.

FILTRATION RATE: The rate of water pumped through a filter, in gallons per minute (gpm).

GATE VALVE: The type that spins "lefty-loosey; righty-tighty".

GAS VALVE: An electronic valve in the pool heater that directs gas flow from the meter to the pilot and the burner tray.

GROUND-FAULT CIRCUIT-INTERRUPTER: A GFCI protects a circuit by de-energizing the path of electricity very quickly when it senses current loss. An important safety device around water.

GUNITE: A dry mixture of cement and sand mixed with water at the "gun"; hence the name. A gunite operator "shoots" the pool's rough shape, while finishers trowel after.

HEATER: A device used to heat the water. It may be electric, fuel-operated or solar-powered.
HEAT PUMP: The antithesis of the air conditioner, the heat pump's cooling coil removes heat from the air while the condenser coil transfers it to water cycling through it.

INFLUENT: The water coming into and up to the impeller from the suction lines. These pipes are under vacuum pressure.

LATERALS: Elongated, capped plastic nipples at the bottom of a sand filter which are slotted to allow for water passage while keeping the sand in the filter tank.

MECHANICAL SEAL: A seal behind the impeller which prevents water from running out along the shaft of a motor. Also known as a pump seal.

MOTOR: A machine for converting electrical energy into mechanical energy. Your motor is known as the dry end of the filter pump. It drives the impeller, which moves the water.

MULTIPORT VALVE: A 4- or 6-position valve combining the functionality of several valves into one unit, revolutionizing pool plumbing. The six common functions are described below:

- Filter: This is normal water flow through the filter, say, top to bottom. This is where the valve sits 99% of the time.
- Backwash: When the pressure gauge indicates, you will need to backwash the filter. When the handle is turned to backwash, the flow through the filter is reversed, say, bottom to top. The effluent water (out of the filter) is directed to the waste line.
- Rinse: After backwashing, it's a good idea to rinse for 15-20 seconds to remove any residual dirt that may "poof!" back into your pool after backwashing. Rinse water flows through the filter in filter fashion, say, top to bottom, but the effluent is sent out the waste line.
- Recirculate: This setting bypasses the filter: water coming into the multiport does a U-turn and heads back towards the pool. Used only when the filter is broken (at least it's circulating), or when adding specialty chemicals which specify using this setting.
- Drain / Vacuum to waste: This useful setting allows you to vacuum up large volumes of debris that would either clog the filter or pass through it because of its small size. Dirt that is vacuumed passes right out the waste line. It is also the setting of choice when draining the pool or lowering the water level (if you didn't need to backwash, which also lowers the water level).

PLASTER: A common type of interior finish applied over the concrete shell of an in-ground swimming pool.

PRESSURE GAUGE: A device indicating pressure in a filter system. Provides a determination of how the system is operating, and informs us when service is required.

PRESSURE SIDE: The return side of the plumbing; the section from the pump impeller towards the pool.

PRESSURE SWITCH: A switch used in pool heaters which opens when the flow rate is insufficient for safe heater operation. This disrupts the circuit in the heater, preventing it from firing.

PLUNGER: The sliding disc assembly that changes valve position in a push-pull valve. For example: up for backwash, down for filtration.

PUSH-PULL VALVE: A two-position valve used for backwashing sand or DE filters.

PUMP: A mechanical wet end, powered by an electric motor, which causes hydraulic flow and pressure for the circulation of the pool water.

PVC: Polyvinyl chloride, which is used to make flexible and rigid PVC pipe used for pool plumbing.

RATE OF FLOW: Quantity of water flowing past a designated point within a specified time period, measured in gallons per minute (gpm).

RESTRICTED FLOW: The term used to describe a condition preventing full flow of water. Restriction can occur with full skimmer or strainer baskets, obstructions in the plumbing, a dirty filter, undersized plumbing or equipment, or placing devices like heaters, cleaners or fountains in the circulation system. Restriction on the suction side creates higher vacuum (or suction), while on the pressure side it creates higher pressure.
RE-BAR: Reinforcement bar, used to add strength to concrete. After excavation of an in-ground pool, a steel cage is formed out of re-bar, and the gunite shell is shot over and surrounding it.

SAND FILTER: A filter tank, usually fiberglass or ABS plastic, filled with sand and gravel. The pump diffuses water over the top of the sand bed, and forces it through the sand and into the laterals on the bottom.

SKIMMER: A surface skimmer is a plumbing fitting set at water level, containing a weir mechanism and a debris basket. The skimmer is part of the suction-side circulation system.

SKIMMER BASKET: Beneath the lid, the basket strains debris, as the first line of defense in filtering the water.

SKIMMER NET: Attached to a telescopic pole, a leaf rake is a very useful tool in keeping the pool clean. Also called skimmer nets are the flat, "dip and flip" nets, which aren't so useful.

STRAINER BASKET: The second line of defense is a basket at the pump. The holes in this are smaller than those in a skimmer basket, and prevent the pump impeller from clogging up.

SUCTION SIDE: The plumbing prior to and carrying water to the pump. This side is under vacuum pressure.

SPA: A filtered, hot-water vessel with hydrotherapy jets and air induction. Can be portable or installed permanently. Jacuzzi is a brand name.

TEST KIT: What you should be using more frequently to determine the water balance in your pool.

TIME CLOCK: A mechanical device that controls the timed operation of your electrical equipment, primarily your filter and booster pumps.

TURNOVER: The amount of time it takes your pump to move all the water in your pool through the filter and back again. Usually, pools are designed for an eight-hour turnover.

VACUUM: Refers to the low-pressure condition created in the suction line. Also refers to the cleaning process of sucking leaves, algae and debris from the pool floor.
VALVES: A device placed in the plumbing line which restricts or obstructs water flow to create desired hydraulics, or may permit flow in one direction only (as in a check valve). WEIR: The device in a skimmer that controls the amount of water coming into the skimmer, and keeps debris inside. That “flapper-gate thing”.
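The RATE OF FLOW and TURNOVER entries above are related by simple arithmetic: turnover time is pool volume divided by flow rate. A rough sketch in Python; the eight-hour design figure comes from the glossary, while the function names and the 24,000-gallon example pool are hypothetical:

```python
# Turnover arithmetic: volume (gallons) / flow rate (gpm) gives minutes;
# divide by 60 for hours. Pools are typically designed for an
# eight-hour turnover, per the TURNOVER entry above.

def turnover_hours(pool_gallons: float, flow_gpm: float) -> float:
    """Hours for the pump to move the full pool volume once."""
    return pool_gallons / flow_gpm / 60.0

def required_gpm(pool_gallons: float, target_hours: float = 8.0) -> float:
    """Flow rate in gallons per minute needed to hit a target turnover."""
    return pool_gallons / (target_hours * 60.0)

# Example: a 24,000-gallon pool with a pump moving 50 gpm
print(round(turnover_hours(24000, 50), 1))   # 8.0 hours
print(round(required_gpm(24000), 1))         # 50.0 gpm
```

In practice the achieved flow rate depends on restrictions in the system (see RESTRICTED FLOW), so the rated pump capacity is only an upper bound.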
Hawaii lawmakers passed a bill Tuesday that would prohibit the sale of over-the-counter sunscreens containing chemicals they say are contributing to the destruction of the state's coral reefs and other ocean life. If signed by Gov. David Ige, it will make Hawaii the first state in the country to pass such a law, and it will take effect on Jan. 1, 2021.

"Amazingly, this is a first-in-the-world law," state Sen. Mike Gabbard, who introduced the bill, told the Honolulu Star-Advertiser. "So, Hawaii is definitely on the cutting edge by banning these dangerous chemicals in sunscreens."

The chemicals oxybenzone and octinoxate, which are used in more than 3,500 of the world's most popular sunscreen products, including Hawaiian Tropic, Coppertone and Banana Boat, would be prohibited. Prescription sunscreens containing those chemicals would still be permitted.

As NPR reported, a 2015 study of coral reefs in Hawaii, the U.S. Virgin Islands and Israel determined oxybenzone "leaches the coral of its nutrients and bleaches it white. It can also disrupt the development of fish and other wildlife." Even a small drop is enough to damage delicate corals. At the time, researchers estimated about 14,000 tons of sunscreen lotions end up in coral reefs around the world each year.

Opposition to the ban came from sunscreen manufacturers, including Bayer, the maker of Coppertone, and the state's major doctors group said the ban goes too far. The Star-Advertiser wrote:

Bayer said there are limited active ingredients available within the U.S. with the same proven effectiveness as oxybenzone for sunscreens over SPF 50. The Hawaii Medical Association said it wanted the issue to be studied more deeply because there was a lack of peer-reviewed evidence suggesting sunscreen is a cause of coral bleaching, and overwhelming evidence that not wearing sunscreen increases cancer rates.
Meanwhile, awareness campaigns about the damage caused by commercial sunscreen have spurred the growth of Hawaiian-made natural products, reported Outside. Many Hawaiian businesses are not waiting for the governor to sign the law. They have begun implementing their own bans. "Nonprofits, athletes and hotels in Hawaii are starting to create their own regulations for what can and can't be used," said Caroline Duell, founder of the Safe Sunscreen Council and owner of a natural-sunscreen company.
(Cross-posted at BlogHer)

The Fordham Institute today released a report on two fascinating studies about the state of high-achieving students under the Bush Administration's No Child Left Behind (NCLB) initiative. The studies indicate that while the lowest-achieving (10th percentile or below) students have indeed made gains as measured by standardized tests since (but not necessarily because of) the institution of NCLB, the highest-achieving students are languishing, making almost no improvement and in many cases not receiving the same amount of attention and opportunity as the lowest-achieving students. Among the report's findings:

- Teachers are much more likely to indicate that struggling students, not advanced students, are their top priority.
- Low-achieving students receive dramatically more attention from teachers.

What's wrong with that, you ask? Shouldn't we be putting our resources where they're most needed? Teachers don't think so. Even though their schools are devoting the majority of their resources to struggling students, 86 percent of teachers in the study indicated that schools should focus equally on all students, and not focus so heavily on those who are in the lowest percentile.

Another interesting tidbit from the study: low-income, black, and Hispanic high achievers on the eighth-grade standardized math test were more likely than their struggling peers to be taught by experienced teachers. These students also were as likely as their higher-income peers to have teachers who majored or minored in math.

The Fordham Institute explains the implications of its studies:

Neither of these studies sought a causal link between the No Child Left Behind Act and the performance of high-achieving students. We cannot say that NCLB "caused" the performance of the nation's top students to stagnate any more than it "caused" the achievement of our lowest-performing pupils to rise dramatically.
All we know is that the acceleration in achievement gains by low-performing students is associated with the introduction of NCLB (and, earlier, with state accountability systems). Neither can we be sure from these data that teacher quality explains why some low-income, African-American, and Hispanic students were able to score in the top 10 percent on the 2005 eighth-grade math NAEP, though there does appear to be a relationship between the experience and education of math teachers and their students' performance. The national survey findings show that most teachers, at this point in our nation's history, feel pressure to focus on their lowest-achieving students. Whether that's because of NCLB we do not know (though teachers are certainly willing to blame the federal law). What's perhaps most interesting about the teachers' responses, however, is how committed they are to the principle that all students (regardless of performance level) deserve their fair share of attention and challenges. Were Congress to accept teachers' views about what it means to create a "just" education system–i.e., one that challenges all students to fulfill their potential, rather than just focus on raising the performance of students who have been "left behind"–then the next version of NCLB would be dramatically different from today's.

The authors of the report write that this unintended consequence–the lack of progress of high-performing students–is "worrisome for America's future competitiveness."

What says the blogosphere? Plenty, even though the report was just released today. On the Fordham Institute's blog, Flypaper, Mike Petrilli explains how schools might better measure their accountability to students under a revised version of NCLB:

Everyone's right that policymakers can tweak No Child Left Behind to create incentives for schools to pay attention to the top students and the bottom students (and everyone in between).
A new version of the law could, for example, expect schools to help all of their students make progress over the course of the year (not just the ones below "proficiency"). It could give schools credit for helping more students reach the "advanced" level on state tests (though these still may not be high enough). And it could allow out-of-year testing so that assessments could accurately measure how far above grade level bright students are—and could then determine whether or not they are staying well above grade level over time.

Eduwonkette writes about the liability of models of accountability that, like NCLB, are based on proficiency rather than growth. Systems that focus on a proficiency goal ask lower-achieving students (and their schools) to make larger gains than higher-achieving students, who likely have already met or exceeded the proficiency target. Accordingly, the high-achieving child grows less than does her struggling peer. This model doesn't take into account students' initial levels of achievement.

At the Core Knowledge blog, Robert Pondiscio writes about his frustration with schools' relative lack of attention to the highest-achieving students:

These are the students I refer to as "Not Your Problem" kids. As a teacher, when I raised concerns that my brighter student[s] were bored and neglected, and expressed frustration at my inability to sufficiently differentiate instruction to challenge them, I was dismissed by an assistant principal who pointedly said "those kids are not your problem." She meant I was to focus on getting my low-achieving students to proficiency; the high achievers were already there and could be left to their own devices. I'm positively giddy to see this issue getting attention. It was my No. 1 concern as a classroom teacher.

Melissa Westbrook provides a nice roundup of this study and another one that came out this week about the efficacy of the SAT at predicting college success.
She's glad to see low achievers making progress–after all, she points out, "If it had been high achievers moving forward 16 points and lower achievers moving ahead 3 points, there would have been howls." Her conclusion? "We need to work for all students across the board."

Corey Bunje Bower doesn't see a problem with the study's findings, especially when one looks at the problem from an international perspective: Given that the low-performing students in the U.S. lag behind low-performers in other countries while high-performing students hold their own against other high-performers (previous post), it's hard for me to see this as anything but a good thing.

Brigitte D. Knudson has a long, thoughtful post on the commodification of education. An excerpt:

Beyond the implications for college, No Child Left Behind and the standardized testing movement have fueled an entire industry. In Michigan, where I teach, the Michigan Department of Education is required under No Child Left Behind legislation to provide Supplemental Education Service Providers — tutors — to students whose schools or school districts fail under the act. Interestingly, this has allowed tutoring centers, like Sylvan, to prosper (Sylvan Learning Centers represent the largest block of SES Providers on MDE's 7-page list, with 25 of the 112 available choices for parents in the state, no doubt partly because of their corporate identity and slick marketing provided to franchise owners). Education is now big business — for-profit in many cases. There are even seminars aimed at prospective small-business owners to suck them into the "Education Industry," because education, like everything else in the United States, is not about our children, it's about making a dollar. Yes, folks, the business model applied to education — the commodification of education. They even have a website: EducationIndustry.org.
Don't let the sweet little pictures of parents and children fool you — it's not about parents and children, it's about entrepreneurs making money. Even if your kid isn't at a failing school and you need a tutor, these places can work to get you LOANS! One Sylvan Learning Center touts that it will work with "SLM Financial, a subsidiary of Sallie Mae, to ensure children get the tutoring help they need." Testing to improve our students? How about testing to exploit a new economy? SAT. ACT. Tutoring centers. Remedial tuition dollars. Banking and loans. Anyone smell a skunk? How about No Child Left Unscammed?

Personally, I don't have a problem with entrepreneurs entering the educational field–after all, there are some truly excellent educational consulting and publishing companies out there. However, like Knudson, I do have a problem with businesses taking advantage of parents–particularly low- and middle-income parents–because of a government mandate that students perform at a particular level on a standardized test.

Over at Eduwonk, Andrew Rotherham elaborates on the tough choices highlighted by the Fordham Institute's report:

Choices do have to be made. It doesn't mean that we throw different groups of students under the bus, but any accountability system that holds people accountable for everything holds them accountable for nothing. So choices have to be made about emphasis. And considering the yawning achievement gaps, graduation rate gaps, and outcome gaps that separate poor and minority students from other students, that's where I'd argue the emphasis should be placed. And, within those groups of students on the wrong end of the achievement gap are plenty who with better schools would also be recognized as gifted. There are certainly steps that policymakers can take to help lessen the zero-sum nature of these choices.
They can, for instance, also reward schools that do a great job with high achieving students as well as closing gaps (something they can do under No Child Left Behind now but few do in a meaningful way). Or, we can think about various non-regulatory accountability strategies, for instance giving parents more choices within the public system, to create some countervailing forces. And of course, states and localities should invest in programs for gifted kids and ways to stretch them. What are your thoughts?
Jamaica Bay is situated within the boroughs of Brooklyn and Queens, New York City. Approximately 8 miles long by 4 miles wide, it covers 26 square miles and opens into the Atlantic Ocean via the Rockaway Inlet. Jamaica Bay is recognized by the U.S. Fish and Wildlife Service as a coastal habitat deserving preservation and restoration of habitats which contribute to sustaining and expanding the region's native living resources. Jamaica Bay is a highly productive habitat for a variety of fish and wildlife species. These species breed and use the area as a nursery, both for juvenile birds that reside in the area during winter and for migratory birds that stop over during the fall and spring seasons. The Jamaica Bay Marsh Islands are at the heart of the complex urban ecosystem of Jamaica Bay, which is part of the U.S. Department of the Interior, National Park Service's Gateway National Recreation Area, the first urban national park, established in 1972, and a key component of the President's America's Great Outdoors initiative. The marsh islands complex is an integral part of the Jamaica Bay ecosystem that has been targeted for restoration. It is estimated that approximately 1,400 acres of tidal salt marsh have been lost from the marsh islands since 1924, with the system-wide rate of loss rapidly increasing in recent years. Between 1994 and 1999, an estimated 220 acres of salt marsh were lost, at a rate of 47 acres per year. Left alone, the marshes were projected to vanish by the year 2025, destroying wildlife habitat and threatening the bay's shorelines. To date, there is no consensus among ecological experts on the cause of the erosion of the marsh islands; proposed causes range from rising sea levels and warmer temperatures to nitrogen input from storm water run-off. Representatives from federal, state and local agencies have helped to jumpstart the restoration process, acknowledging that the daunting challenges of restoring an urban estuary need to be overcome.
In response to these losses, under the U.S. Army Corps of Engineers' Continuing Authorities Program (CAP), the New York City Department of Environmental Protection and the New York State Department of Environmental Conservation requested assistance in implementing one or more marsh island restoration projects. A 2006 report titled "Jamaica Bay Marsh Islands, Jamaica Bay, NY, Integrated Ecosystem Restoration Report" recommended restoration of three marsh islands: Elders Point East, Elders Point West and Yellow Bar Hassock. As of 2005, Elders Point comprised two islands, Elders East and Elders West, totaling only 21 vegetated acres. Originally a single 132-acre island, the loss of marsh in its center portion severed the two ends, resulting in two separate islands connected by mudflat. U.S. Army Corps of Engineers activities at Elders Point East Marsh Island in 2006-2007 involved restoring 40 acres of marsh constructed for mitigation purposes to offset environmental impacts of the New York & New Jersey Harbor Deepening Project (HDP). In 2010, the U.S. Army Corps of Engineers, in partnership with the Port Authority of New York and New Jersey, the New York State Department of Environmental Conservation (NYS DEC), the New York City Department of Environmental Protection (NYC DEP) and the National Park Service (NPS), restored approximately 40 additional acres at Elders Point West through the beneficial use of dredged material from the HDP. The restoration plan for Elders East and West included restoring the existing vegetated areas and the sheltered and exposed mudflats by placing dredged sand up to an elevation suitable for low marsh growth. This included hand planting more than 700,000 plants (grown from local seed stock by the Natural Resources Conservation Service (NRCS)) on East and replanting more than 200,000 plants on West. On Elders East, smooth cord grass or salt marsh cord grass (Spartina alterniflora) was planted throughout the low marsh zone.
A mixture of salt marsh cord grass, salt meadow cord grass or salt hay (Spartina patens), and spike grass (Distichlis spicata) was planted in the zones between low marsh and upland. As part of the New York/New Jersey Harbor-Jamaica Bay Multi-Project Initiative, sand from the Ambrose Channel was beneficially reused from the Harbor Deepening Project to create an additional 87 acres of marsh island habitat within Jamaica Bay. During February and March 2012, 375,000 cubic yards of sand was placed at Yellow Bar Hassock Marsh Island, resulting in 67 acres of new marsh island and approximately 45.4 acres of wetlands, including ~13.3 acres of hummock relocation. The planting effort included 28 acres of low marsh seeding, 17,175 high marsh plants, and 21,859 high marsh transition plants. Marsh construction was completed on 2 August 2012. Replanting of vegetation damaged or lost during Hurricane Sandy will take place in the spring of 2014. In September and October 2012, Ambrose Channel sand was also beneficially used to restore an additional 30 acres of marsh islands at Black Wall (155,000 cubic yards of sand, 20.5 acres) and Rulers Bar (95,000 cubic yards of sand, 9.8 acres) as part of the U.S. Army Corps of Engineers' Beneficial Use Program, with local partners the New York State Department of Environmental Conservation, the New York City Department of Environmental Protection, and The Port Authority of New York and New Jersey. The New York State Department of Environmental Conservation and the New York City Department of Environmental Protection, with the local non-profit organizations EcoWatchers, Jamaica Bay Guardian and the American Littoral Society, completed a community-based planting effort in June 2013 to vegetate the 30 new acres created at Black Wall and Rulers Bar with the above referenced plants. The marsh island restoration efforts are being monitored by a project team that is providing valuable data on the causes of marsh loss and assisting in identifying effective future restoration options.
This program also has significant implications for the future success of restoration activities that beneficially use sand from the Operations and Maintenance (O&M) Program.

SUMMARY OF MARSH ISLAND WETLAND ACRES RESTORED

Elders East …………….. Approximately 40 acres
Elders West …………… Approximately 40 acres
Yellow Bar Hassock ...... Approximately 45 acres
Black Wall …………….. Approximately 20 acres
Rulers Bar ……………… Approximately 10 acres

For information, contact: Lisa Baron, Project Manager
Taken from a booklet titled Historical Sketches of Berrien County, by Robert C. Myers, Curator, 1839 Courthouse Museum

Eau Claire Excitement

April 4, 1922 was certainly one of the most exciting days in Eau Claire's history, for it was on that day that bandits held up the Eau Claire State Bank. The robbery touched off a wild chase and gun battle between the criminals and a furious posse of citizens. The four robbers were all fellow employees in a Gary, Indiana factory. Their leader conceived the idea of robbing the Eau Claire bank while he was visiting a relative there, and he enlisted the aid of his friends. The four men stole a six-cylinder Pratt touring car in Elkhart and headed north to Eau Claire, where they spent a day or so familiarizing themselves with the town and local roads. At about 9:30 Tuesday morning, the four bandits parked the Pratt in front of the bank. Two of the men got out while the other two, in best bank robber tradition, kept the motor running. Their leader walked into the bank carrying a large leather satchel, approached bank president Homer E. Hess, who was busy making ledger entries, and demanded that he stick up his hands. When he hesitated, the robber stuck two revolvers in his face and Hess complied with the order. His partner, meanwhile, held a gun on two other bank employees. Hess was ordered to stand with the other employees, and as he walked across the room he made a misstep. The robber fired instantly, grazing the bank president's stomach and also attracting the attention of Eau Claire citizens outside the bank. In the tense confusion the employees somehow managed to step on floor alarm buttons, sounding an alarm in two locations in town. The bandits scooped up $1,185 in cash from the bank till, in their excitement and haste missing $3,000 more which lay in plain sight, and ran to their getaway car. As they sped off, a deputy sheriff fired several shots, missing the bandits but punching holes in their touring car.
Unfortunately for the robbers, they had not counted on two things: the quick thinking of telephone operator Mrs. Jack Claxton and the fury of Eau Claire's citizens. Mrs. Claxton, not knowing which way the bandits were going, simply opened all the telephone lines and cried out, "The Eau Claire bank has been robbed and the bandits are headed your way!" Her message, of course, effectively alerted everyone in the area, and the entire countryside was instantly up in arms. Several men pushed a heavy lumber wagon across the road and almost immediately saw the bandits speeding toward it. The big touring car whirled around and retreated the other way, followed by the Eau Claire men in a Ford pickup truck, exchanging revolver fire for shotgun blasts. The getaway car finally ran off the road and mired itself in a mud hole. The robbers fled through a field to a tamarack swamp, pursued by dozens of Eau Claire men armed with an assortment of rifles, pistols and shotguns. One robber was shot in the arm as he dashed across the field and was captured, but the others held out for a short time in the swamp until a few close rifle shots induced them to surrender. The four bandits pleaded guilty to bank robbery at their court arraignment two weeks later. On April 24 they were each sentenced to thirty to thirty-five years in prison.

Note from RC Ferguson: The central office operator could do this. All phones in the entire district would ring one loooong ring for emergencies.
Humidity refers to the amount of water vapor present in the air. Factors such as air temperature and controls over evaporation affect the humidity of a parcel of air. Several ways exist to express the level of air humidity, each providing a different piece of information.

Absolute humidity measures the weight of water vapor per unit volume of air and is expressed in grams of water vapor per cubic meter of air (g/m3). Air temperature and atmospheric pressure affect absolute humidity, so the resulting figure changes with conditions and is of limited use on its own. For that reason, absolute humidity is not often used as a unit of measurement.

Specific humidity measures the weight of water vapor per unit weight of air. This unit of measurement, expressed as grams of water vapor per kilogram of air (g/kg), does not change with temperature or atmospheric pressure. Specific humidity is therefore a more useful measure than absolute humidity.

Relative humidity refers to the ratio of the amount of moisture in the air at a certain temperature to the maximum amount of moisture the air can retain at that same temperature. In other words, relative humidity measures how much of the moisture capacity of the air is in use. Relative humidity is expressed as a percentage and is highest during rain, usually reaching 100 percent.

Vapor pressure measures the partial pressure that water vapor creates. It is expressed in millibars, much like atmospheric pressure. Volumetric expansion and temperature do not affect vapor pressure. This measure is closely related to saturation vapor pressure, which refers to the pressure that water vapor creates in saturated air.
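The link between vapor pressure, saturation vapor pressure, and relative humidity can be sketched in code. The Python example below uses the Magnus approximation for saturation vapor pressure, which is a standard empirical formula assumed here rather than taken from this article; the function names are likewise illustrative:

```python
import math

# Magnus approximation for saturation vapor pressure over water,
# with coefficients commonly attributed to Alduchov & Eskridge.
# Result is in millibars (hPa), the same unit the article uses
# for vapor pressure.

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Saturation vapor pressure (hPa) at air temperature temp_c (deg C)."""
    return 6.1094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def relative_humidity(vapor_pressure_hpa: float, temp_c: float) -> float:
    """Relative humidity (%): actual vapor pressure over saturation pressure."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(temp_c)

# Example: 15 hPa of actual vapor pressure in air at 25 deg C
print(round(relative_humidity(15.0, 25.0), 1))  # roughly 47%
```

When the actual vapor pressure equals the saturation vapor pressure, the function returns 100%, matching the article's description of saturated air during rain.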
Spicy chips, like hot Cheetos, will not do any harm to the developing fetus. At present, eating spicy foods during pregnancy is considered generally safe, except for some issues regarding the worsening of particular pregnancy symptoms.

Spicy Foods During Pregnancy

There are a lot of concerns surrounding consumption of spicy foods during pregnancy. There is no evidence for a number of traditional views about spicy foods being harmful to pregnant women and their unborn offspring.

A Lot of Pregnant Women Crave Spicy Food

It is actually common for pregnant women to crave spicy foods (Bayley et al., 2002). Although the direct reasons for this are unknown, the most likely explanation is the hormonal changes that happen during pregnancy. These changes affect a pregnant woman's senses of taste and smell, and in turn her food preferences and aversions as well. Food cravings may be linked to the same changes in hormones that make pregnant women feel nauseous early in their pregnancy (Bayley et al., 2002).

In 2014, Orloff & Hormes investigated other theories regarding pregnant women's cravings for spicy foods. One theory they found is that pregnant women often feel hot, and spicy foods stimulate them to sweat and cool down. Another theory suggested that food cravings may be a result of nutritional deficits. The same study also revealed that cravings during pregnancy are common across different cultures, but specific food types may be dictated primarily by cultural influences. However, none of these conclusions can be proven with certainty (Orloff & Hormes, 2014).

Spicy Foods Can Worsen Morning Sickness

During the first trimester of pregnancy, spicy foods can worsen morning sickness. If a pregnant woman already suffers from all-day nausea, consuming spicy foods can trigger the release of stomach acid and make nausea and vomiting worse.

Spicy Foods Can Trigger Heartburn

Spicy foods are more likely to cause heartburn in the second and third trimesters of pregnancy.
The lower pressure on the esophageal sphincter, caused by the increasing pressure on the abdomen from the growing baby inside the uterus, makes gastroesophageal acid reflux more common during this time (Ramu et al., 2010). It is safe to eat spicy foods anytime during pregnancy, but they can worsen symptoms of heartburn during the latter stages and can cause increased discomfort for pregnant women.

Spicy Foods Can Cause Diarrhea

Spicy foods can cause diarrhea, bloating, and increased gas in any person, not just in pregnant women. Capsaicin is considered the active component in chili that is responsible for the gastrointestinal side effects of spicy foods (Hammer & Vogelsang, 2007). Capsaicin irritates the stomach and digestive system, leading to diarrhea in people who are not used to it. During pregnancy, the main concern with diarrhea is the possibility of dehydration. Therefore, pregnant women who eat spicy foods must increase their intake of fluids and make sure that they remain well hydrated. Moreover, if a pregnant woman is not used to eating spicy foods but suddenly craves them, it is best to start slowly with milder types and smaller amounts before working her way up, to build tolerance.

Myths Surrounding Spicy Food During Pregnancy

Spicy Foods Are Harmful for the Baby and Can Burn the Baby's Eyes, Leading to Blindness

There is no scientific evidence for this belief. Consumption of spicy foods during pregnancy is safe and cannot cause harm to the baby. Pregnant women develop food aversions, including avoidance of spicy foods, associated with their symptoms of nausea and vomiting during the early trimester of pregnancy. Placek et al. in 2017 reported that food aversions during pregnancy, including avoidance of spicy foods, support a protective role against plant teratogens. However, their results also showed aversions to staple foods like rice, which does not support the same theory.
Therefore, culturally transmitted food aversions may play a larger role.

Spicy Foods Can Cause a Miscarriage and Help in the Induction of Labor

Spicy foods do not cause premature birth. Neither can they help induce labor, despite consumption of spicy foods being one of the common nonprescribed methods of trying to hasten labor. In 2011, Chaudhry et al. revealed that as many as 50.7% of pregnant women turn to traditional customs in order to induce labor and end the discomforts they feel during pregnancy. Over 10% of the women they surveyed said that they had tried eating spicy foods. However, their results showed that these attempts had no effect on labor.

Craving Spicy Foods During Pregnancy Predicts the Baby's Gender

This myth circulated in 2008 when Daily Mail Online announced that, according to a survey done by gurgle.com, pregnant women who craved spicy foods were more likely to be carrying baby boys. In spite of this claim, the reality is that no clear conclusion can be drawn from that survey. Cravings, unfortunately, are not a reliable predictor of an unborn baby's gender.

Eating Spicy Foods When Pregnant Can Make the Baby Move Inside the Womb

There are anecdotal reports of pregnant mothers sensing their babies kick or move more often after they have eaten spicy foods. However, these stories have not been proven scientifically. What is currently known is that babies in utero respond when their mothers have eaten any type of food (with no preference for any particular kind). In 2014, Bradford & Maude surveyed pregnant women about their babies' activity and found that 74% reported increased activity during meal times. The most likely reason is the increase in blood sugar after food consumption.

Spicy Foods and Their Safety During Pregnancy

Chili peppers are safe whether green, red, or any other type. Just make sure to wear gloves and avoid touching your eyes and skin, especially when handling a hot variety.
Wash your peppers properly, and wash your hands before and after food preparation.

All spicy chips are safe in pregnancy. Popular brands include Hot Cheetos, Doritos Flamin' Hot, Seabrook Scorchin' Hot Fire Eaters, and Takis Fuego. However, these snacks are also high in sodium and must be consumed in moderation.

Bottled, shelf-stable varieties of hot sauce from supermarkets are safe to consume. Just be cautious around "fresh" hot sauce or salsa: pregnant women should check whether the ingredients used were cooked or pasteurized.

The safety of consuming mayonnaise depends on the ingredients used. Check the label to see whether pasteurized eggs were used instead of raw eggs, to avoid the risk of contamination with Salmonella. Commercial mayonnaise is usually safe, but homemade versions should be checked for any undercooked eggs.

Spicy Tuna Roll Sushi

Tuna, whether spicy or not, should be thoroughly cooked for it to be safe for consumption in pregnancy. Raw and undercooked fish may harbor bacteria and parasites that can cause foodborne illnesses, so raw sushi should be avoided by pregnant women. In addition, tuna can contain high levels of mercury, which can be passed down to the fetus and cause neurologic damage. Hence, pregnant women are also advised to limit their intake of tuna dishes.

Indian, Thai, and other types of curry are safe for pregnant women to eat. Nevertheless, keep in mind your tolerance for spicy foods and your susceptibility to heartburn and indigestion.

Eating spicy foods during pregnancy will not hurt your growing baby. However, if a pregnant mother is not used to these foods, she may develop uncomfortable symptoms and even worsen the pregnancy discomforts she may already have. Therefore, although pregnant women can safely give in to their cravings for hot Cheetos, it is best to take it slow and limit intake according to their tolerance for spicy foods.
<urn:uuid:4559a743-9e9c-4d8a-a340-ba15b40fc540>
CC-MAIN-2022-05
https://birthingforlife.com/can-you-eat-hot-cheetos-while-pregnant/
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300253.51/warc/CC-MAIN-20220117000754-20220117030754-00434.warc.gz
en
0.952852
1,665
2.609375
3
Smart Clothes for Medical Uses:

- What is the best strategy for people to avoid getting cancer?
- Is the desktop nanofabrication tool a viable option for low-cost, easy nanotechnology?

Nanotechnology Sources to Help You Research

Nanomedicine Journal is an open access journal that includes abstracts of current research as well as many free articles.

- Are microbes that create chemicals and antibiotics going to help us prevent infections?
- What are biomacromolecules and why are they important?
- Do the benefits of nanotechnology for medical uses outweigh the risks?
- Why do birds have such beautifully colored feathers?
- What is gene therapy?
- Will nanotechnologies make it possible for people to live in outer space?
- What is nanotechnology for medical use?
- Why is nuclear fusion always just out of reach?
- How important is climate change legislation?
- How can chemists help prevent allergies?
- What is a chimera and how could it help stem cell research?
- Will delivery drones be bringing us our pizza and mail?
- How do cells protect the body from disease?
- Is it possible to predict the next pandemic?
- Can memory loss and dementia be prevented?
- What has been the impact of colonoscopy testing on colon cancer rates?
- Can nanomaterials be used to reduce CO2 emissions?
- How are robots going to improve medicine?
- What is the best chemical process of microbrewing beer?
- What is the effect of nanotechnology on research and development of medical technologies?
- Is using drones for warfare a good or bad idea?
- What is the future of surgical robots?
- How helpful is it to the environment and is it worth the extra cost?
- How can microelectronics be used to help people with chronic ailments?
- How can nanotechnology be used to work with DNA?
- What is the West Nile virus?
- How likely is it that a pandemic will arise that will kill large numbers of people in the world?

Fifth Grade Science Fair Project Ideas
bsaconcordia.com's 5th grade science projects enable kids to apply everything they've learned over the course of their elementary school careers in order to discover some pretty cool and.

Tenth Grade Project Ideas

Science Buddies' tenth grade science projects are the perfect way for tenth grade students to have fun exploring science, technology, engineering, and math (STEM).

These are 16 of the most impressive teenager-led science projects we could find. And they all began with a simple question and a love of science. These aren't foam volcanoes. These are cancer.

Get ready to take first place with these challenging and interesting science fair project ideas for kids of all ages. Browse now.

At a loss on how to help your kid win the day at her science fair? We love these easy experiments found on Pinterest. 11 Cool Science Fair Projects from Pinterest | Parenting.

Find hundreds of FREE popular science projects. Ideas include building a simple motor with a magnet, dissecting an owl pellet, and making a solar oven.
<urn:uuid:64d3c4e5-4cf2-43c0-9328-7ea6803b2441>
CC-MAIN-2019-04
http://futitocydopurubaj.bsaconcordia.com/interesting-science-projects-8479484794.html
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658981.19/warc/CC-MAIN-20190117123059-20190117145059-00636.warc.gz
en
0.916438
621
2.53125
3
All books talk but good books listen as well!

As we launch the 'BOOK HOOKS' reading club and make some exciting changes to the choice of reading material in the LitPro home reading program, I would love your support and help as we assist our children to select "good fit" books and reading material that aligns with their level of maturity and interests.

An important part of becoming a successful independent reader is being able to choose "good fit" books for yourself. That is, a book that is not too easy or too difficult, but "just right". To help students make a "good fit" book choice they can use the IPICK guidelines:

- I look at the book
- Purpose (Why do I want to read this book?)
- Interest (Am I interested in the topic?)
- Comprehend (Do I understand what I am reading?)
- Know (Do I know most of the words?)

They can also use the '5 finger method', or do this exercise in their heads. Students choose a book and start reading a page. Each time they come to a tricky word, they raise a finger. If five fingers are raised, students can then decide if the book is really a good fit for their current reading ability or too hard.

This year, students in Years 3 to 6 who participate in the Lexile home reading program will be able to choose books from across the collection using the coloured star guidelines to help them make 'good fit' reading choices. Children will also have the opportunity to read material from outside the Lexile collection, and they will be able to complete an electronic book summary and review which will be posted to the 'BOOK HOOKS' blog. In this way they will have the opportunity to gain teacher-added points.

There will be all sorts of fun and exciting things for our readers when they come along to 'BOOK HOOKS' reading club, some of which include Readers' picnics, Lunch and Literature, Millionaire meeting and marshmallows, Winter reading Wonderland and Wordless Book read-alouds.
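The "5 finger method" described above is essentially a counting rule with a threshold. As a toy illustration only (the function name and the `known_words` stand-in for a child's sight vocabulary are this sketch's own inventions, not part of the LitPro or Lexile programs), it could be expressed as:

```python
# Toy sketch of the "5 finger method": raise a finger for each tricky
# word on a sample page, and flag the book once five fingers are up.
# `known_words` is an illustrative stand-in for the words a reader knows.

def five_finger_check(page_words, known_words):
    """Return 'good fit' when fewer than five words on the page are tricky."""
    fingers = sum(1 for word in page_words if word.lower() not in known_words)
    return "good fit" if fingers < 5 else "may be too hard"
```

As in the classroom version, the verdict for a borderline page is still the reader's call; the count is just a prompt to stop and decide.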
Reading will be the main homework activity this year, so please help by providing time for reading widely and wildly, with plenty of choice. Comic books, nonfiction, fiction, magazines, cookbooks! The same book over and over! And if you have time, a little bedtime reading is always loved and appreciated.

'Reading aloud and talking about what we're reading sharpens children's brains. It helps develop their ability to concentrate at length, to solve problems logically, and to express themselves more easily and clearly.' Mem Fox

Gail Mitchell, Head of Primary
<urn:uuid:cf22e8d7-7d28-461e-8b43-43c4139f3102>
CC-MAIN-2020-40
https://glasshouse.qld.edu.au/reading-is-a-conversation/
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00482.warc.gz
en
0.948377
561
2.84375
3
What you need to know about Amenorrhea

Amenorrhea refers to the absence or ceasing of menstrual flow due to several factors such as health conditions, birth control use, and, most commonly, pregnancy. There are two types of amenorrhea.

1. Primary amenorrhea: the absence of a menstrual period in a girl who has gone through, or is going through, puberty but hasn't gotten a period by the age of 15. Hormones and problems with the reproductive organs are usually the main causes of primary amenorrhea.

2. Secondary amenorrhea: occurs in women who have had menstrual flows previously that later stop for three to four consecutive months. The most common cause of secondary amenorrhea is pregnancy, though other factors can cause a woman's period to cease.

The causes of secondary amenorrhea are more extensive. The most common reason is pregnancy, but other causes include:

- Physical and mental stress
- Gynaecological or reproductive (organ) disorders
- Extreme weight changes

The treatment plan for amenorrhea largely depends on what type it is (primary or secondary) and also on the cause. For primary amenorrhea, surgery may be needed to repair reproductive organs and/or address genetic conditions. In the case of secondary amenorrhea, the cause is first ascertained before a possible solution is prescribed. If amenorrhea is caused by birth control or by drastic lifestyle changes like extensive exercise, physical stress or weight loss, doctors may prescribe an alternative contraceptive and suggest lifestyle changes.

For both primary and secondary amenorrhea, the first port of call should be a visit to the doctor or gynaecologist to determine what could be wrong. Self-medication is not advised.
<urn:uuid:a31ce85b-cc61-4dd4-827c-d623f6ffacf4>
CC-MAIN-2022-21
https://www.flamevibes.com/what-you-need-to-know-about-amenorrhea/
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00426.warc.gz
en
0.925881
381
3.296875
3
Folie à deux (literally, "a madness shared by two") is a rare psychiatric syndrome in which a symptom of psychosis (particularly a paranoid or delusional belief) is transmitted from one individual to another. The same syndrome shared by more than two people may be called folie à trois, folie à quatre, folie en famille or even folie à plusieurs (madness of many). Recent psychiatric classifications refer to the syndrome as shared psychotic disorder (DSM-IV) (297.3) and induced delusional disorder (folie à deux) (F.24) in the ICD-10, although the research literature largely uses the original name.

This case study is taken from Enoch and Ball's 'Uncommon Psychiatric Syndromes' (2001, p181):

- Margaret and her husband Michael, both aged 34 years, were discovered to be suffering from folie à deux when they were both found to be sharing similar persecutory delusions. They believed that certain persons were entering their house, spreading dust and fluff and "wearing down their shoes". Both had, in addition, other symptoms supporting a diagnosis of paranoid psychosis, which could be made independently in either case.

This syndrome is most commonly diagnosed when the two or more individuals concerned live in proximity and may be socially or physically isolated and have little interaction with other people. Various sub-classifications of folie à deux have been proposed to describe how the delusional belief comes to be held by more than one person.

- Folie imposée is where a dominant person (known as the 'primary', 'inducer' or 'principal') initially forms a delusional belief during a psychotic episode and imposes it on another person or persons (known as the 'secondary', 'acceptor' or 'associate') with the assumption that the secondary person might not have become deluded if left to their own devices.
If the parties are admitted to hospital separately, then the delusions in the person with the induced beliefs usually resolve without the need for medication.

- Folie simultanée describes the situation where two people, considered to independently suffer from psychosis, influence the content of each other's delusions so they become identical or strikingly similar.

Folie à deux and its more populous cousins are in many ways a psychiatric curiosity. The current Diagnostic and Statistical Manual of Mental Disorders states that a person cannot be diagnosed as being delusional if the belief in question is one "ordinarily accepted by other members of the person's culture or subculture" (see entry for delusion). It is not clear at what point a belief considered to be delusional escapes from the folie à... diagnostic category and becomes legitimate because of the number of people holding it. When a large number of people come to believe obviously false and potentially distressing things based purely on hearsay, these beliefs are not considered to be clinical delusions by the psychiatric profession and are labelled instead as mass hysteria.

Being defined as a rare pathological manifestation, folie à deux is rarely found in general psychology or social psychology textbooks, and is relatively unknown outside abnormal psychology, psychiatry and psychopathology. There have been reports that a phenomenon similar to folie à deux was induced by the military incapacitating agent BZ in the late 1960s, and most recently again by anthropologists in the South American rainforest consuming the hallucinogen ayahuasca (Metzner, 1999).

In the media

- (1994) Heavenly Creatures is a film directed by Peter Jackson and starring Kate Winslet and Melanie Lynskey. It was set in New Zealand and inspired by a true story in which two teenage girls develop a relationship so strong and peculiar that they believe the only way to stay together is to kill the mother of one of the girls.
These girls were thought to have folie à deux.

- (1998) Folie à deux was the title of an episode from season 5 of The X-Files, aired in 1998, where Agent Mulder shares the belief with a telemarketer that employees of the telemarketing firm are monsters.
- (2006) 'Folie à Deux' is the title of a short film written and directed by Devin Anderson. The film was shot in 2006 and is currently in post-production.
- (2006) Folie à deux was referenced and defined in an episode from season 2 of Criminal Minds entitled "The Perfect Storm", which aired October 4, 2006 on CBS, in which a pair of serial killers kidnapped, tortured, and murdered several young women; in this episode, the primary, or dominant, perpetrator was a woman.
- (2007) The film Bug portrays a folie à deux involving a man and woman who believe they are infested with government-implanted, nano-technological insects.

References

- Halgin, R. & Whitbourne, S. (2002). Abnormal Psychology: Clinical Perspectives on Psychological Disorders. McGraw-Hill. ISBN 0072817216
- Enoch, D. & Ball, H. (2001). Folie à deux (et folie à plusieurs). In Enoch, D. & Ball, H., Uncommon psychiatric syndromes (Fourth edition). London: Arnold. ISBN 0340763884
- Wehmeier, P.M., Barth, N. & Remschmidt, H. (2003). Induced delusional disorder: a review of the concept and an unusual case of folie à famille. Psychopathology 36 (1): 37-45.
- Hatfield, E., Caccioppo, J.T. & Rapson, R.L. (1994). Emotional contagion (Studies in Emotional and Social Interaction). Cambridge, UK: Cambridge University Press.
- Metzner, R. (Ed.) (1999). Ayahuasca: Human Consciousness and the Spirits of Nature. New York, NY: Thunder's Mouth Press.

External links

- Folie à deux episode synopsis from IMDb, the Internet Movie Database.
- Folie à deux musical project from the purple forest.

This page uses Creative Commons Licensed content from Wikipedia (view authors).
<urn:uuid:574650aa-e1dc-4b70-9119-07eeda18cdff>
CC-MAIN-2016-26
http://psychology.wikia.com/wiki/Shared_psychotic_disorder
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00171-ip-10-164-35-72.ec2.internal.warc.gz
en
0.900738
1,285
3.359375
3
Helpful teaching tool for enhancing the teaching of history!

Community Review for Reporting the Revolutionary War

I do wish that it were more entertaining or colorful, but still, it takes on aspects of the Revolutionary War that are not covered in textbooks. I believe students will gain a better understanding of what caused the war and why Colonial Americans stood up for themselves and were willing to become independent no matter the cost. We gain some insight into the Stamp Act and concepts of taxation through these lessons as well.

How I Use It

This website has documents that can be used to teach history in an enhanced way. There are actual letters of correspondence to deepen student understanding of what transpired that caused us to pull away and seek our own form of government. One lesson is designed to show various viewpoints of Colonial Americans and the effects of taxation on them.
<urn:uuid:34288c7e-57d4-40ee-b124-744f6b90df9b>
CC-MAIN-2023-23
https://www.commonsense.org/education/reviews/reporting-the-revolutionary-war/teacher-reviews/4098416
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650409.64/warc/CC-MAIN-20230604225057-20230605015057-00151.warc.gz
en
0.960791
169
3.203125
3
This page is part of Section Six: the More Information section of barkingdogs.net

Why Dogs Bark

When it comes to vocalizing dogs, the general rule is: the warmer the weather, the more the dogs will bark. Of course, you expect to hear more barking as the weather gets warmer, because the dogs are more likely to be outside and you are more likely to have your windows open. But I don't think it's just that you hear them more, I'm convinced they actually bark more as things heat up, especially at night. Dogs seem to nap more in the heat of the day, which means, when the sun goes down and things cool off, they are refreshed, wide awake and ready to burn off some energy. So it's not surprising that, in addition to more barking in general, you get an awful lot more nocturnal racket in warm weather.

I've lived in neighborhoods that had a high quality of life in the winter, only to deteriorate into a sucking pit of misery in the dog days of summer. It's so frustrating that after waiting all day for the cool of the evening, you end up having to sleep in a sweltering room with the windows closed because of the noise of a dog who is standing outside barking as he enjoys the cool night air.

Nature Versus Nurture

Beyond the weather though, what accounts for the differences in barking patterns between dogs? Why does one dog bark while another dog, in the exact same situation, remains silent, or, for that matter, why does a given dog bark at one thing and not another?

Tens of thousands of years ago, some of our ancestors came into the possession of baby wolves. The pups fell in with people so early in life that they bonded with them and found a niche in human society. As successive generations of these animals were born and raised among people, our ancestors noticed differences between the individuals. Some were bigger, some were smarter, some were faster, some were easier to train, some were better swimmers and some were better hunters.
At some point people realized that, if they mated two of their domesticated wolves who were strong in the same trait, they were likely to produce offspring who were even stronger in that characteristic. They found that by mating two great hunting wolves, they could produce a litter of superior hunters, or by mating two obedient wolves, they could beget a more obedient litter. There began to develop then, a science of selectively breeding wolves as a way of customizing that population to better meet the needs of their human companions. In some places the major problem was one of security. In those locations, humans bred guard wolves characterized by their predisposition to sound the alarm and defend the group. In another place they raised large, powerful animals because they required beasts of burden. In other places the humans bred swift runners to hunt on land, while people elsewhere worked to produce strong swimmers that could help with the harvest of waterfowl. Humans around the world began to selectively breed wolves or the descendants of wolves. They sought to make each successive generation more in the likeness of what they conceptualized as the ideal canid companion. Over time, through the process of selective breeding, the descendants of those first cubs ceased to be wolves and evolved into dogs. Some of the humans who genetically engineered the evolution of the dog had little use for canines that barked much, so they selected breeding stock that was more vocally restrained. Those dogs are the ancestors of today's quieter breeds. On the other hand, some humans intentionally bred dogs who showed a marked predisposition to vocalize. Out of those dogs came, among others, today's terriers who lapse into intense, frantic barking with little or no provocation. So, we say that some dogs just "naturally" tend to bark a lot, which really means that, by virtue of their genetics, they are predisposed to bark. 
It's important to note that having a genetic predisposition to bark doesn't mean that the dog has to bark. It just means that he is inclined to do so. The extent to which a dog will tend to bark is determined by his genetics. However, whether or not the dog actually barks is ultimately determined by the consequences of vocalizing. If barking works out well for the dog, he will bark some more. If barking consistently brings about an undesirable consequence, the dog will soon stop barking. So, like most other behavior, barking is the product of its consequences.

There is then an interplay between the dog's natural tendency to bark and the consequence that follows barking. For a dog strongly predisposed to be silent, just a bit of punishment is enough to discourage barking. In contrast, for a dog strongly predisposed to bark, it takes a conscientious owner administering a well thought-out program to keep a serious barking problem from developing. Therefore, in answer to the question of why a particular dog is barking in a disruptive manner, it is fair to say he is doing so because his owner failed to arrange the consequences with enough care to ensure the proper behavior of the animal. In other words, he is barking because of the way you have arranged, or failed to arrange, his environment. For most dogs, there is some natural inclination to bark at the mail carrier, the neighbor's cat or other such stimuli. But whether a dog acts on his inclination to bark at a particular thing at a particular time is a matter of conditioning/training, and training your dog is your responsibility.

Natural Instinct + Conditioning + Alternatives = Frequency of Barking

A dog's barking, then, is a function of his natural inclination, in combination with his behavioral conditioning. There is one other important variable that influences barking behavior, which is the alternatives available to the dog.
If a dog has plenty of other interesting things to do, he can be easily dissuaded from barking, even if he has a strong natural predisposition to sound off. On the other hand, if the dog's only alternative to barking is sitting alone in silence, then it will take a more focused effort to keep him in line.

Underestimating the Needs of Dogs

Dogs are pretty damn bright. Most people underestimate the potential of their dog because they mistake their inability to teach the animal for the animal's inability to learn, but even a stupid dog is a far sight smarter than most people imagine. Underestimation of the canine species is a common shortcoming among humans. We also underestimate the canine capacity to experience emotional distress and, worst of all, we underestimate their needs. Dogs are extremely social animals that need to be included as valued members of a family group. They need the mental stimulation of new places, new people and novel situations. They need to walk and explore and interact in intensive games with both humans and dogs, and they need the opportunity to learn and face challenging situations. The rule is, the smarter the dog, the more he needs these things.

The Canine Need For Exercise

Dogs are a lot like children in a way. You can only expect them to sit still for so long. Some breeds have a capacity to exercise that is twenty times greater than that of humans, and they actually need to get out and push themselves physically; the terriers, sporting and Nordic breeds are chief among them. It is extremely difficult for a dog to behave in a civilized manner when he is surging with physical energy he needs to burn off. If you deny him the opportunity to romp, you should not be surprised to find that misbehavior, very likely in the form of recreational barking, soon follows. Most dogs really need to get 45 minutes a day of active exercise. That means running, chasing, romping, fast walking, swimming, or the like.
Running your dog next to a bicycle can also be a good way to go if your situation allows. If you are going to be leaving your dog alone all day, you should take special care to exercise him in the morning before you leave for work. Then he can sleep and rest up when you're gone, as opposed to looking for ways to express his vast reserve of untapped energy.

It's definitely true that some breeds of dogs need an amazing amount of exercise; however, some others don't need, nor can they tolerate, tremendous physical exertion. The amount of running necessary to warm up your Husky is more than enough to run your Basset hound to death. So, read up on your breed, and know his capacity for exercise before you sign him up to run that marathon with you. Also, keep in mind that dogs need to be given time to get in shape. Start by giving your dog a little exercise and build on that slowly as the dog's physical condition gradually improves.

Barking As A Function of A Lack of Need Fulfillment

When you look closely at the situation of a chronically barking dog, you will usually find the animal's need for exercise and stimulation is not being well met. Of course, if the owner would take responsibility for training his dog, that would put an end to the noise. But if you go beyond that to address the question of why the dog wants to bark, you'll see that boredom, loneliness, and unexpressed energy are at the heart of the problem. When you hear a barking dog, you are usually listening to the sad tale of a neglected animal that desperately needs to have his life restructured.

Written by Craig

Spanish translation - Traducción al español

This website and all its content, except where otherwise noted, are © (copyright) Craig Mixon, Ed.D., 2003-2017.
<urn:uuid:d7be790c-157e-47f8-bbb8-06b3712108fc>
CC-MAIN-2017-22
http://barkingdogs.net/why.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609061.61/warc/CC-MAIN-20170527210417-20170527230417-00488.warc.gz
en
0.971992
1,991
3.09375
3
"Technics and Civilization" first presented its compelling history of the machine and critical study of its effects on civilization in 1934 - before television, the personal computer, and the Internet even appeared on our periphery. Drawing upon art, science, philosophy, and the history of culture, Lewis Mumford explained the origin of the machine age and traced its social results, asserting that the development of modern technology had its roots in the Middle Ages rather than the Industrial Revolution. Mumford sagely argued that it was the moral, economic, and political choices we made, not the machines that we used, that determined our industrially driven economy. Equal parts powerful history and polemic, "Technics and Civilization" was the first comprehensive attempt in English to portray the development of the machine age over the last thousand years - and to predict the pull technology still holds over us today.

Publisher: The University of Chicago Press
Number of pages: 528
Weight: 794 g
Dimensions: 23 x 16 x 3 cm

"The questions posed in the first paragraph of Technics and Civilization still deserve our attention, nearly three-quarters of a century after they were written." - Technology and Culture

"A brilliant historical and critical account of the effect of the artificial environment on man and of man on the environment, a necessary account, one for which we have waited too long in English." - New York Times
<urn:uuid:9b8cbb0a-e32a-4191-9954-1f951b510a04>
CC-MAIN-2022-27
https://www.waterstones.com/book/technics-and-civilization/lewis-mumford/9780226550275
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104364750.74/warc/CC-MAIN-20220704080332-20220704110332-00776.warc.gz
en
0.933699
299
2.828125
3
Knowing what level of intensity you are working out at is important because most people benefit from exercising at both moderate and vigorous intensity. Also, varying the intensity of your workouts, vigorous intensity some days and low intensity other days, helps you recover.

Aside from fitness trackers, there is another tool that you can use to measure your intensity. What's really great about it is that it's free, simple to use, and doesn't require batteries or software. It is called the Rate of Perceived Exertion (RPE) scale, sometimes called the Borg scale (after its inventor, Gunnar Borg).

How It Works

The tool has two scales you can use: one runs from 6 to 20 and the other from 0 to 10. Both are accurate and scientifically sound. Using an RPE scale to gauge your exercise intensity (low, moderate, or vigorous) has many advantages, one being that medication, heat, or how you feel that day won't influence the measurement.

6 to 20 Scale

The reason the scale begins at 6 and ends at 20 is that these values represent the resting heart rate (about 60 beats per minute) and maximal heart rate (about 200 beats per minute) of a healthy young adult. That means 6 would equal sitting at rest, while 20 would be all-out maximal exercise you can only sustain for seconds. On this scale:

- 8-10 = light intensity, like walking at a leisurely pace or slowly riding a bike on a flat road.
- 12-14 = moderate intensity, like jogging or walking briskly.
- 16-18 = vigorous intensity, like playing basketball or interval training.

This scale works very well in an exercise setting.

0 to 10 Scale

The principle of the 0 to 10 scale is similar: 0 represents rest and 10 represents all-out maximal exercise. The rest of the scale is as follows:

- 2-3 = light intensity
- 4-5 = moderate intensity
- 7-8 = vigorous intensity

In my experience many people like this scale for its simplicity.
It is also used in other ways, such as measuring difficulty breathing during exercise in those who have lung problems.

What I truly love about these RPE scales is that they force you to be in tune with your body. Here's an example: You are training for a 5 km running event and have your heart rate zones established. You strap on your heart rate monitor and head to the gym to run on the treadmill because it's too icy outside. Ten minutes into your run you notice that your heart rate is super high and not in the moderate zone you were aiming for. Odds are that, because you are indoors, it is hotter and your body is working harder to cool itself, which can raise your heart rate. What can you do? Fall back on the RPE scale and focus on running at a pace that feels like 13 out of 20: right where you want to be.

Harvard School of Public Health: The Borg Scale of Perceived Exertion
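The 6-20 bands above are simple enough to capture in a few lines of code. The following sketch uses the article's own cut-offs (8-10 light, 12-14 moderate, 16-18 vigorous; how this sketch rounds the in-between values into the nearest band is its own judgment call) and the article's anchor points of roughly 60 bpm at a rating of 6 and 200 bpm at 20, i.e. about rating × 10 beats per minute for a healthy young adult:

```python
# Minimal sketch of the Borg 6-20 RPE scale described above.
# The heart-rate estimate is a rough rule of thumb, not a medical tool.

def borg_intensity(rpe: int) -> str:
    """Classify a Borg 6-20 rating into an intensity band."""
    if not 6 <= rpe <= 20:
        raise ValueError("Borg ratings run from 6 (rest) to 20 (maximal)")
    if rpe <= 10:
        return "light"       # includes sitting at rest (6-7)
    if rpe <= 14:
        return "moderate"
    return "vigorous"

def estimated_heart_rate(rpe: int) -> int:
    """Heart-rate estimate for a healthy young adult: rating x 10 bpm."""
    return rpe * 10
```

So a treadmill run that feels like 13 out of 20 lands in the moderate band, at roughly 130 beats per minute.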
The great hall of the Hagia Sophia in Istanbul is a soaring vaulted space, vast and imposing and seemingly symmetrical. The massive central dome and the smaller domes surrounding it are richly decorated with tiles and frescoes, and at the corners hang enormous round wooden panels imprinted in flowing gold-lettered Arabic script with verses from the Qur’an. Sunlight streams through high windows, spotlighting the extraordinary ornamentation, illuminating dust particles suspended in midair and casting unpredictable angular shadows. Originally constructed in 537, the Hagia Sophia is a case study in adaptive reuse. Over the course of fifteen centuries this imposing Byzantine building has served as a Greek Orthodox church, a Roman Catholic basilica, an imperial mosque, and finally a secular museum. Each successive occupant, save for the museum, targeted certain aspects of the interior for preservation while removing or covering over others. (The museum has retained everything, selectively peeling away bits of Islamic plaster to unveil glimpses of Christian imagery hidden underneath: ornately detailed mosaics of Jesus, Joseph, and Mary, and assorted saints.) In 1453, after the fall of Constantinople, the conquering Ottoman Turks converted the then Eastern Orthodox cathedral into a mosque and set about making it symmetrical, in accordance with the principles of their faith. Even when an arch or buttress wasn’t structurally necessary they carefully painted one in, trompe l’oeil style, to create the impression of symmetry. Seen from below, the intersecting geometry of domes and arches looks absolutely perfect. The flowing curvilinear lines of decorative mosaics and frescoes are crisp and clean and it’s difficult to tell the difference between the real architectural features and the faux. But if you climb the worn and hollowed stone steps to the balcony level, you discover something quite different. 
At close range much of the perfect geometry is revealed to be a bit lopsided, the apparently precise linework actually somewhat rough and inexact, the painted edges as wobbly as if they’d been executed by an amateur working with a worn brush. This is baffling at first – how could something so perfect turn out to be so coarse? – until you realize that worshippers wouldn’t have ventured upstairs. They never saw the ceiling at close range. They came to the mosque to pray, prostrated on their prayer mats, facing east toward Mecca; to venerate Allah, sheltered by the great dome overhead that was a potent expression of his power. What we have here is not the illusion of perfection; it is the perfection of illusion.
Semantic distribution of the terms in the Gersum database The terms included in the Gersum database reflect the very wide range of lexico-semantic fields where we can find Norse loans during the Middle English period1. This, in turn, is a testament to two factors: (1) the typological proximity between Old English (particularly the Anglian dialects) and Old Norse2; and (2) the sociolinguistic situation in many parts of Anglo-Scandinavian England, where the speakers of the two languages would have had close social contacts and where, generally speaking, the two languages are likely to have enjoyed similar status (as opposed to the uneven distribution of prestige — a.k.a. diglossia — that characterised the contact between Old/Middle English and Latin, for instance).3 The remarks below provide an overview of the richness of the data, although examples have been restricted to those terms which are certainly or very likely from Old Norse (i.e. terms categorised as A, B and C, with no doubling of the consonant). The typological proximity of the languages is particularly suggested by the significant number of function words (i.e. words which help establish grammatical relations but do not have much lexical meaning) that made their way into English. These terms include personal pronouns (þay ‘they’, þayr(es) ‘their(s)’), other types of pronouns (e.g. boþe ‘both’, same ‘same’), prepositions (e.g. (a)gayn(ȝ) ‘against’, fraward ‘away from’ and ouerþwert ‘across’), conjunctions (e.g. þoȝ ‘although’, or ‘than’, as well as fro ‘after’ and til ‘till, until’, which could act as a preposition or a conjunction), modal verbs (e.g. mon ‘must’), numerals (e.g. aȝtand ‘eighth’, hundreth ‘hundred’) and even some forms of the common verb to be: ware ‘were’. The members of the sere word family (see below, under relative properties) are particularly interesting in this respect because they are formed on the basis of the dative form of the Old Norse 3rd person reflexive pronoun (viz. 
sér), which was borrowed into Old English as an adjective (cp. ME sere ‘different’) rather than as a pronominal form; the use in English reflects the distributive sense (‘one by one, separately’) of the Norse pronoun. The preposition outtaken ‘except for’ exemplifies the opposite process, where a lexical term (cp. the verb take ‘take’; see below under possession) has taken part in a process of word-formation (compounding) and the new compound has become a function word (this process is often referred to as grammaticalisation). This is an indication that the lexical term has become fully integrated into the language and can, therefore, take part in the same processes of language change as native words. As one would expect, the overwhelming majority of the terms recorded in the database are lexical words, as these are the terms that are most commonly transferred in situations of language contact. When discussing these terms, it is very helpful to follow the semantic classification presented in the Historical Thesaurus of English (HTE) and that is the approach adopted below. However, the discussion here does not go into as much detail in terms of the taxonomical classification within each lexico-semantic subfield as the HTE. Thus, those readers interested in the make-up of the subfield and the semantic relations between the Norse loans and their near-synonyms should consult the full version of the HTE. Please note as well that, while clear quantitative information can be given in relation to the representation of the various Gersum categories in the database, the same is not possible for the semantic classification of the terms, because, on the one hand, many of them are polysemous (e.g. scaþe could refer to both physical harm and morally unacceptable behaviour) and, on the other, even the same sense could be associated with different semantic categories. Accordingly, the data is described from a qualitative rather than a quantitative perspective. 
For a more detailed classification of the data, see Pons-Sanz (forthcoming).
Most of the terms included in the database are associated with the subfields belonging to the domain that the HTE calls the world:
the universe: e.g. sterneȝ 'stars'.
the earth: we find here terms referring both to land and the landscape (e.g. bonk 'hillside, slope, bank', clynt 'rocky cliffs', fell 'fell, precipitous rock', flat 'plain', grofe 'cave', howis 'mounds', ker 'thicket or marshy ground', myre 'mire, swamp', nabb 'rock', sckerres 'rocks', scowtes 'jutting rocks', skayued 'wild, desolate', wylsum 'desolate'), terms referring to water (e.g. gill-stremes 'streams from a gorge', terne 'lake, pool') and terms referring to the weather (e.g. blaste 'blast (of wind)', ryng 'storm', skwe 'cloud', snart 'bitterly').
life, death and health: while some terms refer to death (e.g. deʒe 'die', drowne 'immersed, drowned', slaȝtir 'slaughter' and the related compound manslatir 'manslaughter') and (ill-)health (e.g. skayned 'grazed'), there are also various terms referring to the body, mainly its parts and their features (e.g. ande 'breath', campe 'rough (said of hair)', herneʒ 'brains', hores 'hairs, (eye)lashes', legg 'leg', loue 'palm, hand', mun 'mouth', neue 'fist', skinnes 'skins', swange 'waist', wykez 'corners (of the mouth)').
animals: there are here terms referring to animals (e.g. egg 'egg', galt 'boar', kid 'kid, young goat', nowte 'bull, cattle') or their bodily parts (dok 'tail; trimmed hair (of tail, etc.)', giles 'gills', wynge 'wing') as well as terms referring to the noises that animals make (e.g. ȝarmand 'howl').
plants: besides terms referring to particular plants (e.g. bracken 'bracken, fern', risonis 'stalks of corn'), we also find terms referring to parts of a plant (e.g. blom 'flower, bloom', rote 'root') and terms referring to plants in relation to their growth and development (e.g. scoghe 'wood').
food and drink: while the terms for food refer mainly to the products themselves (e.g. kakeȝ 'cakes'), those for drink refer to their containers (bekyr 'beaker' and scole 'cup, scale').
textiles and clothing: Norse-derived terms in this subfield refer mainly to the clothes that characters wear (e.g. gere 'clothes, gear' and the related verb gere 'to clothe, attire', skyrtez 'skirts, lower part of flowing garment or covering; flaps of a saddle, saddle-skirts').
physical sensation: here we find terms referring to the senses, particularly hearing (e.g. lote 'sound, noise; word', lyþen 'to hear', rowste 'voice, roar', schout 'shout', skrike 'shouting, cry', ȝarm 'clamour'), as well as terms referring to sleeping (e.g. dreme 'dream') and terms connected with dirtiness (e.g. mokke 'filth, muck').
matter: terms in this subfield refer to different types of matter (fire: e.g. bale 'blaze, fire' and the related bele 'burn', forbrent 'burned up', kynd 'burnt' and the related kindill 'kindle, set fire to', swyþes 'burns up'; liquid: e.g. hellid 'poured'; light: e.g. skyre 'bright'; colour: e.g. blayke 'yellow', blo 'dark, dusky, grey', littis 'colours') and the (bad) condition of matter (e.g. mourkne 'rot', roten 'decayed, decomposed', moulynge 'mould, mouldiness').
existence and causation: we find here a number of terms referring, not to creation, but to damage and destruction (e.g. brestes 'breaks, bursts' and the related tobrest 'break in two' and vnbrosten 'unbroken', ryue 'to rip, cut open, cleave', snayped 'nipped cruelly', toriuen 'tear asunder, shatter, break up', tyne 'lose, destroy, ruin'). Particularly interesting here is also the adverb algate 'at any rate', where we see that the noun gate, a term originally referring to a path or a way, has undergone semantic change to refer to something more abstract (i.e. the way in which things happen).
space: we find here terms referring to a particular place (e.g. stad 'placed', sete 'seat'), terms referring to a relative position (e.g. hilen 'cover' and the opposite vnhyles 'uncovers', kay 'left', lyfte 'lift', vplyften 'uplift' and the related loft 'high place', which we also find in the adv. / prep. alofte 'above, at the top; on', loȝe 'low' and the related adverb bilooghe 'below', melle 'middle', ouerþwert 'postponed', rayse 'raise'), terms referring to direction (e.g. heþen 'hence', þethen 'from there' and wheþen 'whence') and terms referring to shape (e.g. gerrethis 'hoops', vmbeþour 'round about').
time: e.g. ay 'always', litid 'delayed', nyȝter-tale 'night time', tite 'soon, quickly' and the related as-tit 'at once, in a moment' and titely 'quickly'.
movement: a significant number of terms refer to different types of movement (e.g. balteres 'rolls around, hobbles', flitt 'move', kest 'throw, offer', which we also find in vmbekesten 'throw about' and kest 'throwing, stroke', dumpe 'plunge', dungen 'struck', hitte 'hit, smite', renne 'run', swayf 'swinging blow', rayke 'wander, depart', stakirs 'staggers', wayue 'waive, swing').
action or operation: a significant number of terms in this category refer to the general act of doing something (e.g. gareȝ 'makes, causes'), a particular action (e.g. lait 'to seek, search'), the preparation to carry out an action (e.g. busk 'get ready, array', bounet 'prepared' and the related boun 'ready, arranged', grayþe 'get ready' and the homonymous adjective meaning 'ready') or the manner in which an action is carried out, particularly quickly and violently (brathþe 'violence, impetuosity' and the related broþely 'suddenly; fiercely, violently', race 'headlong course; hurry; stroke', snart 'sharp', wyȝt 'quick; strong, fierce' and the homonymous adverb meaning 'swiftly; ardently'). Also important here are terms referring to prosperity or adversity (e.g. gaynly 'suitably, conveniently, readily', various members of the hap word family such as hap 'good fortune, happiness', vnhap 'misfortune' and vnhappy 'unfortunate', haille 'success', þriue 'thrive'), the harm that an action can cause (e.g. woþe 'danger, peril', scaþe 'harm, injury; wrong, sin' and other members of its word family such as scaþel 'dangerous', skatheles 'without injury' and skathely 'with injury'), one's behaviour (cost 'manners, disposition', gaynly 'gracious', haylse 'greet', hendelayk 'courtesy', menskefully 'gracefully' and menskly 'courteously') and one's ability to carry out an action (e.g. haȝer 'skilful, well-wrought' and the related hagherlych 'fittingly, properly', sleȝt 'cunning, skill; device, stratagem; act of practised skill' and the related sleȝe 'skilfully made'). Many of the terms in this category overlap with those referring to movement.
relative properties: in this category we find terms referring to agreement and harmony (e.g. (bi)seme 'beseem, suit, be fitting' and various members of its word family such as seme 'seemly, fair, excellent; becomingly, fairly', semly 'seemly, fitting; comely, fair' and the homonymous adverb meaning 'becomingly, excellently; pleasantly, sweetly', semlyly 'becomingly' and vnsemely 'improper(ly)', naytly 'well, properly', þryftyly 'with propriety'), similarity or lack thereof (e.g. odde 'odd' and the related oddely 'exceptionally', same 'same', slik 'such'), quantity (e.g. helder 'rather, more', minne 'less', score 'sets of twenty', wont 'lack' and the homonymous verb meaning 'to want, to lack', þryuande 'abundant' and the related þryuandely 'abundantly; excellently'), strength (wayke 'weak' and the related waykis 'grows weak' and waykned 'was enfeebled') and a whole-part relationship (e.g. sere 'separate, individual' and the related serlepes 'individual, single, in turn', serelepy 'various, separate, different').
the supernatural: the database includes a couple of words referring to magic and the occult: wandez 'wand, magic wand' and demerlayke 'magic arts'.
The second HTE general category in terms of the number of Norse-derived terms in the database is the mind, where terms referring to emotion are particularly numerous:
mental capacity: we find here mainly terms referring to memory (e.g. forgat 'forgot'), knowledge or lack thereof (e.g. fele 'hide', layne 'conceal; remain silent', lugged 'lurched'), belief (e.g. gesse 'conceive, form an idea', trayst 'sure' and the related traistis 'trust, have confidence') and expectation (e.g. dased 'to be bewildered, dazed').
attention and judgement: terms in this subfield refer to attention (e.g. vndertake 'take in, perceive'), judgement (e.g. skyl 'reason, judgement' and the related adjective skylful 'reasonable, righteous'), enquiry and the provision of an answer (e.g. frayst 'ask, seek', sware 'answer'), esteem (e.g. mensk 'honoured' and other members of the word family such as menske 'honour, fame; courtesy', mensked 'honoured' and menskful 'of worth, noble', rose 'praise', þryuen 'fair, grown, honourable, worthy') and contempt (e.g. broþely 'wild, vile', bruxleȝ 'reproves', heþyng 'contempt, scorn', vnþryuande 'unworthy, ignoble' and the related vnþryvandely 'poorly, improperly').
goodness and badness: e.g. wale 'choice, excellent, fair, noble'. The terms in this category overlap with many associated above with harm (under action or operation), esteem and contempt (under attention and judgement) and morality.
emotion: Norse terms made a significant impact on this subfield, with many of them having a prominent position for the expression of zeal and enthusiasm (e.g. þro 'intense, steadfast, bold; angry, fierce' and the related adverbs þro 'earnestly, heartily, eagerly' and þroly 'heartily; urgently; violently'), passion (e.g. luf-lowe 'fire of love' and forbrent 'burnt up'),4 anger (anger 'anger; sorrow' — the latter was its original meaning — and the related angirs 'grows angry', broþe 'angry, fierce, grim', gnaistes 'gnashes', waymot 'bad-tempered'), mental pain or suffering (e.g. syt 'grief, sorrow'), hatred (e.g. laith 'loathsome, hateful'), calmness (e.g. spakid 'became calm'), humility (e.g. lowe 'humbly, lowly' and the related loȝly 'humbly, with deference', meke 'humble, submissive; gentle, compassionate' and the related mekyn 'humble', mekely 'humbly, compassionately' and mekeness 'meekness, humility'), fear (e.g. aghe 'awe, reverence, terror' and other members of its word family such as aȝed 'frightened', aȝefullest 'most formidable' and aghlich 'terrible', rad 'afraid', skere 'fear' and the related scarreȝ 'take alarm, startle', stiggis 'starts in alarm', vgly 'gruesome, threatening') and courage (e.g. aȝlez 'without fear', derf 'bold, audacious, doughty, stout' and the related adverb deruely 'boldly').
will: terms here refer mainly to one's free will or intention to do something (e.g. attle 'intend, prepare' and the related noun atlyng 'intention', wale 'choice, range to choose from' and the homonymous verb meaning 'choose'), one's willingness to do it (e.g. bayn 'willing, obedient', grayþely 'readily, promptly') or one's motivation to do it (e.g. eggyng 'urging').
possession: besides the possession of something or lack thereof (e.g. mysse 'lose, lack'), the terms in this category refer mainly to the processes of acquisition (e.g. adill 'acquire, earn', gete 'get, seize, fetch' and the related noun get 'something one has got', take 'take, accept, receive, capture' and the related ouertake 'overtake, regain?', taking 'capture') or giving out (e.g. bitan 'given, assigned', gif 'give, grant' and the related noun gifte 'gift, giving', ȝette 'grant').
language: the terms in this category refer mainly to the process of speaking and what is uttered (e.g. call 'call, name, summon', carp 'speak, say, converse' and the related nouns carp 'talk, conversation, discourse' and carping 'speech, words', kest 'utter' and the related words kest 'speech, utterances', vpcaste 'proclaimed, uttered', neuen 'name, call, mention' and the related adjectival form vnneuened 'unsaid', tyþing 'word; message, information'), the expression of a request (e.g. bayþe 'ask; agree, consent', bone 'request, boon') and of a refusal or negation (e.g. nay 'no', nite 'refused'), as well as the manner in which speech is produced (e.g. aloȝ 'quietly').
society, the third of the HTE's domains, is the least well represented in the database, although there are also some important terms in this category:
society and the community: here we find terms referring to social and kinship relations (e.g. sister-sunes 'sister's sons, nephews') and the presence or absence of dissent (e.g. saȝte 'peace' and the related terms saȝte 'at peace', saȝtle 'make peace, reconcile', saȝtlyng 'reconciliation', vnsaȝt 'unreconciled, unappeased').
inhabiting and dwelling: we find here mainly references to any dwelling and the concept of settling down somewhere (e.g. bigge 'settle, found, build, make' and other members of its word family such as bygyng 'dwelling, home' and bygly 'inhabitable, pleasant', and won 'dwelling, abode'), different types of dwellings (e.g. boþe 'booth, arbour'), groups of dwellings (e.g. þorpes 'villages') and parts of a building (e.g. wyndow 'window').
armed hostility: here we have terms referring to troops of warriors (e.g. sopp 'company, troop') as well as their equipment (e.g. bruny 'mail-shirt', grayn 'blade of axe, spike?', gunnes 'war engines', klubbe 'club', sparþe 'battle-axe', stel-gere 'armour', wapen 'weapon' and the related wapened 'armed').
authority: while some terms refer to power itself (e.g. ouirlaike 'superiority, conquest') and what authority allows one to do, such as summon someone (e.g. cal 'summons' and the related bycalle 'call upon, summon') or restrain them (e.g. rekanthes 'chains' and lausen 'release, free, loosen'), others refer to those with power (e.g. cayser 'emperor, ruler') and those who are under someone else's power (e.g. bonde 'bondmen, serfs', carle 'churl', swaynes 'servants', þral 'serf, slave').
law: here we have a couple of members of the lawe word family, which has a long standing in English records: lawe 'law; faith; style' and louyly 'lawful'.
morality: the terms connected to this subfield are associated with both positive (e.g. forgif 'forgive', saklez 'innocent', skete 'pure', vnsakathely 'spiritually unharmed, the pure') and negative (e.g. lastes 'sins, vices', vnþryfte 'wickedness, folly' and the related adverb vnþryftyly 'wantonly', wrange 'wrongdoing, injustice; harm, evil, hurt, sorrow' and the homonyms wrang 'evil, perverted; wrong' and wrang 'unjustly, wrongly') moral values.
faith: perhaps unsurprisingly, we do not have many Norse-derived terms referring to religion, but we do have some, such as kirk 'church' and hap 'one of eight beatitudes'.
travel: besides some of the terms associated with movement above, which could also be included here, we find terms referring to the concept of travelling in general (e.g. kayre 'to go, ride', trone 'go, march'), means of travel (e.g. gate 'way, road, path', gayn 'direct, straight', wro 'secluded place, passage') and specific references to travel by water and parts of a ship (e.g. bulk 'hold (of a ship)').
farming: e.g. snape 'poor pasture'.
occupation and work: the most important terms here are those referring to the equipment that one might need to carry out work (e.g. caraldes 'casks', kyste 'chest', sekke 'piece of sack-cloth', wyndas 'windlass').
leisure: we find here references to leisure in general (e.g. layke 'to play, amuse oneself' and the related nouns layk 'sport, entertainment' and laykyng 'playing', tayt 'pleasure, sport, play', tom 'leisure; time'), specific activities that one might engage in for leisure, particularly hunting (e.g. bayted 'baited (by dogs), fed', wayth '(meat gained in) hunting') and those enjoying such leisurely activities (e.g. gest 'guest').
1 The situation during the Middle English period differs very significantly from what we find in Old English texts, where a significant proportion of Norse-derived terms can be said to be part of the technical vocabulary referring to social classes, measurements, as well as legal and nautical terminology; see further Pons-Sanz (2013).
2 See Nielsen (1985) for a study of the morphological and phonological similarities between the two languages, and Townend (2002) for an argument in favour of the existence of mutual intelligibility between their speakers.
3 This statement is, of course, a broad generalisation that does not take into account the situation in the various areas in terms of the differing power structures and concomitant numbers of Scandinavian speakers (see Pons-Sanz 2004 for a study of the sociolinguistic situation in Northumbria), nor the impact that Cnut's reign is likely to have had on the status of Old Norse (see Townend 2001 in this respect). See, however, Lutz (2012 and 2013) for a somewhat different view about the sociolinguistic relations between Old English and Old Norse.
4 We see in these terms interesting examples of the conceptual metaphor passion is fire.
Historical Thesaurus of English, available at <ht.ac.uk>, last accessed on 29th September 2019.
Lutz, Angelika. 2012. 'Norse Influence on English in the Light of Contact Linguistics', in English Historical Linguistics 2010: Selected Papers from the Sixteenth International Conference on English Historical Linguistics, Pécs, 23-27 August 2010, ed. by Irén Hegedüs and Alexandra Fodor, Current Issues in Linguistic Theory, 325 (Amsterdam: John Benjamins), pp. 15-42.
--- 2013. 'Language Contact and Prestige', Anglia, 131: 564-90.
Nielsen, Hans Frede. 1985. Old English and the Continental Germanic Languages: A Survey of Morphological and Phonological Interrelations, Innsbrucker Beiträge zur Sprachwissenschaft, 33, 2nd ed. (Innsbruck: Institut für Sprachwissenschaft der Universität Innsbruck).
Pons-Sanz, Sara M. 2004. 'A Sociolinguistic Approach to the Norse-Derived Words in the Glosses to the Lindisfarne and Rushworth Gospels', in New Perspectives on English Historical Linguistics: Selected Papers from 12 ICEHL, Glasgow, 21-26 August 2002, Vol. 2: Lexis and Transmission, ed. by Christian Kay et al., Current Issues in Linguistic Theory, 252 (Amsterdam: John Benjamins), pp. 177-92.
--- 2013. The Lexical Effects of Anglo-Scandinavian Linguistic Contact on Old English, Studies in the Early Middle Ages, 1 (Turnhout: Brepols).
--- Forthcoming. 'The Lexico-Semantic Distribution of Norse-Derived Terms in Late Middle English Alliterative Poems: Analysing the Gersum Database', in New Perspectives on the Scandinavian Legacy in Medieval Britain, ed. by Richard Dance, Sara M. Pons-Sanz and Brittany Schorn, Studies in the Early Middle Ages (Turnhout: Brepols).
Townend, Matthew. 2001. 'Contextualizing the Knútsdrápur: Skaldic Praise-Poetry at the Court of Cnut', Anglo-Saxon England, 30: 145-79.
--- 2002. Language and History in Viking Age England: Linguistic Relations between Speakers of Old Norse and Old English, Studies in the Early Middle Ages, 6 (Turnhout: Brepols).
“I jumped into the swimming-pool today.” “Fortunately, the pool was heated.” “Unfortunately, I cannot swim.” “Fortunately, it was not deep.” Which beginning do you find more entertaining? I was very surprised to find that many, many girls at the Writers’ Club find the second story more promising. I would choose the first, any day. How does this activity work? It’s a hugely entertaining one, which I learned from the book Creating Stories with Children by Andrew Wright. Someone begins the story, and then each of the other participants must contribute one sentence, alternating between beginning with ‘Fortunately’ and ‘Unfortunately’. It helps to introduce the idea of plotting and the wonder of surprising the reader.
Lamphey Bishops Palace
Has been described as a Certain Palace (Bishop). There are masonry ruins/remnants.
Name: Lamphey Bishops Palace
Alternative Names: Lamphey Court; Lanfey
The site of the Bishops' Palace at Lamphey was an estate of St. David's from before the Norman invasion until the Reformation. The date of the original timber construction remains unknown; the earliest surviving elements, being the limestone rubble western Old Hall and undercroft, date to the early thirteenth century. The remainder of the buildings are largely the work of the late thirteenth to earlier fourteenth century, with later alterations. There are the remains of great halls and chapels raised over basements, two gatehouses and a large barn or granary. The distinctive arcading is similar to that found at St Davids Bishop's Palace and Swansea Castle. There are also extensive remains of a medieval ornamental landscape. The Palace changed hands at the Reformation and continued as a noble house into the seventeenth century, declining thereafter. In the nineteenth century the site was laid out as a garden associated with the gleaming classical mansion erected to the north-west. (Coflein – ref. Turner, 2000)
The palace of the bishops of St Davids consisted of an irregular array of splendid apartments clustered on the south side of one of a series of large walled courts. It is best recorded in the fourteenth century and continued as a noble residence into the seventeenth century. The main surviving features are the park to the north-west, the courts themselves and a remarkable series of fishponds and other water features. These can be interpreted with reference to surviving late thirteenth- to early fourteenth-century records.
The main approach was from the village to the south and passed over a bridge and dam that ponded back the valley bottom stream into a lake, providing an appropriate setting to the palace buildings. The great courts are thought to have been planted in part with orchards and gardens. Here were grown apples, cabbages and leeks. The park lies above the palace to the north-east. It was a roughly rectangular area of about 70ha enclosed by an earthen bank and with a lodge, now Lamphey Lodge, at its centre. There was grazing within for '60 great beasts, as well as the wild animals'. The park had been walled and substantially reduced in area by the early nineteenth century. In the woods on the western edge of the park are the earthworks of four fishponds, probably breeding ponds, and between this and the walled eastern court are the substantial remains of a series of fish holding ponds and the ruins of a fish larder house. The palace buildings and enclosures were reused for the grounds and gardens of Lamphey Court (NPRN 265874), a gleaming classical early nineteenth century mansion to the north-west of the palace (NPRN 22219).
Source: CADW Register of Parks & Gardens in Wales: Carmarthen, Ceredigion & Pembroke (2002), 234-9 (Coflein)
The palace belonged to the Bishops of St David's from the C13, and probably much earlier, until the mid C16. It has important surviving works which have been associated with Bishops Richard Carew, Henry de Gower and Edward Vaughan. The palace was surrendered to the Crown by Bishop William Barlow in 1546, whence it was granted to Richard Devereux (and the line of the Earls of Essex). In 1683, probably after damage in the Civil War, the palace was sold to the Owens of Orielton, and in 1821 to Charles Mathias. During the Owens' tenure the buildings were neglected or converted to farm use, but preservation commenced under the Mathias family, followed by H. M. Office of Works and Cadw.
Early C13: Fragments remain of the Old Hall and its undercroft.
It is not clear with which bishop this first surviving work is associated. In the hall, two lancets at N, one blocked. Hearth at S with a round chimney above. In the undercroft: slit windows with wide embrasures. Local limestone rubble. Alterations in C16.
Late C13 (associated with Bishop Carew): the Western Hall (replacing the old hall, which became a kitchen) and its undercroft. The hall has a fireplace at the centre of the N wall with the stub of a round chimney. The external corbels of this fireplace are carved as little pendants. Windows with Early English stiff-leaf caps to scoinson colonettes. Painted plaster in imitation of stone courses, with a flower motif stencilled onto some of the 'stones'. Parapet with crenellations and loopholes. An attached latrine block at the SE corner. Undercroft: windows with stepped high sills above what appear to be seats. In the walls are the sockets of the floor joists carrying the original timber floor laid above a longitudinal bridging joist. Local limestone with dressings in a coarse freestone. In later centuries the Western Hall continued as the main hall of the Palace. The undercroft was vaulted over. Windows converted to Tudor form. An attic storey and a new latrine block at S were added.
Early C14 (associated with Bishop Gower): A long narrow hall (or suite of rooms?) and undercroft added at the E of the Palace. The main stairs are against the N wall, above the undercroft porch. There are corbels for a pentice roof sheltering the stairs. The hall was roofed with six trusses, for the wall-posts of which there are corbels about 1.5 m above floor level. Pairs of trefoil-headed lancet windows with window seats. The E end of the hall is served by a fireplace with a conical chimney. A latrine wing is attached at SW. At the top of the walls is an arcaded parapet, of less developed type than that of Bishop Gower at St David's. Local limestone rubble with sandstone dressings.
This building has a fine undercroft which now appears as a single vault, slightly pointed at the apex. The springings of several of the eleven cross-ribs survive, but the ribs have almost completely disappeared and the straight construction-joints in the stonework above rib positions are visible. A building at the E of the inner ward containing additional accommodation (the 'red chamber') may be contemporary. Early C16 (associated with Bishop Vaughan): Fragments of a chapel, with a modern gateway at the E. Sacristy at N. Fragments of Tudor windows. A fine Perpendicular E window survives. Wards: The inner ward gatehouse, now standing in isolation: two storeys, with gatekeeper's room above. Altered stairs at N, incorporating a mounting block. Pitched floor in the gateway. Shallow vaulted floor above. In the NE corner of the upper room there is a fireplace. Parapet arcading after the Gower style. There remain fragments of an extensive outer ward, to the N and W of the main buildings. Here the most important structure was Bishop Vaughan's great corn barn, the lower part of the N wall of which survives. Also fragments of the outer gatehouse. A later outer precinct wall to the S facing the stream and fishponds. (Listed Building Report)

This site is a scheduled monument protected by law.
This is a Grade 1 listed building protected by law.
Historic Wales CADW listed database record number:
The National Monument Record (Coflein) number(s):
County Historic Environment Record:
OS Map Grid Reference: SN018008
<urn:uuid:7415955f-81e4-4bf8-bada-37170dfd02f7>
CC-MAIN-2022-27
http://castlefacts.info/castleDetails/castleDetails3?uin=20945&fromGateHouse=Y
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103269583.13/warc/CC-MAIN-20220626131545-20220626161545-00122.warc.gz
en
0.952838
1,717
3.046875
3
The idea for today's post comes from the wonderful website Red Ted Art. This site is a treasure trove of art and craft projects with videos for kids of all ages.

PRE-K activity: SHAVING CREAM VALENTINES: https://www.redtedart.com/valentines-cards-for-kids/

Materials: trays, shaving foam, red and pink non-toxic acrylic paint, craft sticks, hearts cut from white copy paper.

1. Squirt shaving foam into a deep tray and drizzle red and pink paint over the top of it.
2. Take Popsicle sticks and gently swirl the mixture around until the paint leaves a trail behind it through the shaving foam.
3. Press large paper hearts down into the mixture, then leave them to dry completely.

Benefits of Messy Play for your Child

Whether you love to get messy or not, there is no doubt that the benefits of letting your kids enjoy messy play at home definitely outweigh the post-session clean-up operation. With careful planning and preparation, there is no reason to stress over the mess of setting up a messy play activity for your child. Instead, sit back and enjoy watching them learning as they play.
- Exploring different materials and textures with their hands provides an excellent workout for their developing fine motor skills.
- Allowing your child to take charge of how they use the activity is great for their confidence and self-esteem.
- It can be used to stimulate all of their senses or just a few at a time.
- Listening to and talking with your child about what they are doing not only builds their language skills but also helps expand their growing creativity and imagination. It may look like a pile of mush to you, but listen to how your child's imagination can bring it to life.

Credit and thanks for the benefits of messy play go to: http://www.craftykidsathome.com/horsie-horsie-messy-play/

No matter how many times we grumble and grouse about new year's resolutions, many of us still look forward to a fresh start and a chance to do things differently in the new year.
As a writer, I usually make goals that reflect my aspirations and hopes in the writing field. Completing a draft of a new novel. Sending a finished project to my agent, hoping for a sale. Reading as many MG, YA, and PBs as I can throughout the year. These have been recurring goals as I browse past journal entries. This year I want to add something new: for every PB, MG, and YA book I read, I hope to post a review on Amazon and Goodreads to boost the visibility of my author friends and help spread the word about books I really enjoy. Popular authors like Kate DiCamillo, Neil Gaiman, Katherine Applegate, Jacqueline Woodson, etc., don't have to worry about getting reviews. Everything they write is featured everywhere we look. Rightly so and deserved. But for the many of us who have written quality books for children, and received praise and accolades from friends and acquaintances as well as the children we write for, I want to take it one step further. It takes only a few moments to write a sentence or two stating what you enjoyed about a book. It goes a long way toward helping an unknown author receive recognition and maybe sell a few more books. (Caveat: If you don't like a book, spare the author any venomous reviews. There is enough negativity in this world.) I hope some of you will join me in spreading the word and "paying it forward" for a favorite author. Here's a link to a printable calendar for keeping track of all those reviews…or for setting your own goals. HAPPY NEW YEAR and Happy Reading! It's hard not to get caught up in the excitement of a new Star Wars movie. Especially one featuring the original characters. For the Star Wars fans in your house, why not check out the amazing crafts, food items and activities found on the Red Ted Art site. In addition to the YODA craft below, it features Princess Leia Cupcakes, Chewie Cookies, Wookies, Death Star Watermelon, Milk Carton Storm Troopers, and many more fun-to-make crafts.
You can start off with the easy YODA craft that even the youngest children can make. Just follow the pattern shown. Here’s the link to the site for more than 30 STAR WARS activities, crafts and recipes. http://www.redtedart.com/2014/12/26/30-star-wars-crafts-activities/ Have fun, and MAY THE FORCE BE WITH YOU!
<urn:uuid:400198f3-da95-4e02-a9ee-4465a32aacd6>
CC-MAIN-2019-26
https://darlenebeckjacobson.wordpress.com/tag/httpwww-redtedart-com/
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000545.97/warc/CC-MAIN-20190626194744-20190626220744-00219.warc.gz
en
0.936903
1,012
2.609375
3
Expectant mothers do everything in their capacity to prevent themselves and their fetus from catching infections. Vaccinations, proper hygiene while cooking and avoiding outside food are some simple ways to prevent catching these infections. Sometimes, despite taking all the possible precautions and steps, certain infections can still cause pregnancy problems. Here are 5 common infections that can cause pregnancy problems when left untreated:
- Chicken Pox
Although chicken pox is not dangerous by itself, it can be harmful for a pregnant woman. If the mother has never been exposed to it and has never been vaccinated, chicken pox can possibly affect the fetus. This itchy and contagious disease can put the baby at risk if it occurs during the first or second trimester and if the mother herself is not immune to it. It may cause Congenital Varicella Syndrome in the baby and lead to birth defects such as skin scarring, malformed limbs, microcephaly and other neurological problems.
- Mosquito-borne Infection
Dengue, malaria, chikungunya and other mosquito-borne infections have symptoms such as fever, headache, chills, nausea and joint pain. In rare cases, these infections can cause miscarriage. Babies can catch these infections if the mother has fever or any other symptoms up to four days before delivery, or one day afterwards. Babies exposed to infections can suffer from fever, difficulty in feeding, skin problems, and seizures. In such cases, the baby should be monitored by doctors for about a week.
- Sexually Transmitted Infections (STIs)
Sexually Transmitted Infections (STIs) such as chlamydia are often easily treated with a round of antibiotics. Many pregnant women feel ashamed going to a doctor for a suspected STI, but it is important to get help during pregnancy. Any STI could adversely impact the baby during pregnancy or even during birth. There is an increased risk of premature birth, an underweight baby, or even a miscarriage.
Therefore, every pregnant woman should get herself checked out if she suspects an STI.
- Bacterial Vaginosis
Bacterial vaginosis is caused by an imbalance in the normal bacteria levels in a woman's genital area. About 10-30% of all women will experience this during pregnancy. Although not particularly risky, there is some evidence to indicate that bacterial vaginosis during pregnancy can increase the risk of premature birth or having an underweight baby. It may also lead to miscarriage or stillbirth. It is important for pregnant women to get themselves treated immediately in order to avoid any possible complications.
- Food Borne Illnesses
Since pregnant women have a compromised immune system, they are more susceptible to food borne illnesses such as listeriosis. This can lead to blood infection, meningitis, and other potentially life-threatening illnesses. Although listeriosis is rare, most of the detected cases are among pregnant women. Listeria can infect the placenta, amniotic fluid, baby, or all three, and lead to stillbirth or miscarriage. In such a case, the baby is more likely to be born prematurely. There are many other infections that can cause problems during pregnancy. Apart from the above, pregnant women should be watchful against infections such as herpes, gonorrhea, hepatitis B, Group B strep, HIV, and toxoplasmosis. Taking small steps such as getting recommended prenatal tests, washing hands or practicing safe sex can help prevent infections. If you need to know more about infections, our doctors at KIMS Cuddles are always there to answer your queries. **Information shared here is for general purposes. Please take a doctor's advice before making any decision.
<urn:uuid:72968297-0a6f-41fb-a6ed-dc4b75c7ee8c>
CC-MAIN-2023-06
https://www.kimscuddles.com/5-infections-that-can-cause-pregnancy-problems/
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00026.warc.gz
en
0.94307
757
2.875
3
Neuro Linguistic Programming (NLP) If you are wondering what NLP is, do not fret, as you are not alone. In a nutshell, NLP, or Neuro Linguistic Programming, is defined by many professionals and practitioners as a set of skills and patterns that you can apply purposefully to achieve outcomes and goals you never thought possible. Some would define it as a body of knowledge or discipline that aims to understand the how and why of human thinking, behaviour patterns and social interaction. It is a powerful tool that empowers you and helps you do whatever you do better. In the year 1970, John Grinder and Richard Bandler developed Neuro Linguistic Programming. However, there is difficulty in defining Neuro Linguistic Programming per se, because both Grinder and Bandler used such vague and ambiguous language that it can be interpreted differently from what they really originally meant. That is why it has never enjoyed much support within the field of psychology, because some contend that its merit and authenticity as a psychotherapeutic approach are disputable. Now, the question is what NLP is for and what is in it that piques the curiosity of many. Based on what I have read, it can be applied in many areas of our lives. Some claim that it can teach you to become more independent. It can help you create a strong future and a personal pathway. There are several experts who claim that it can develop a strong positive mental attitude and can teach you how to slow down and appreciate the small things that make up the whole of your life. So basically, NLP has something for everyone; it is about 'modeling excellence' so that you can become better at almost everything you do in life.
Whether you are sick or healthy, an individual or a company, there is something in it for you. As I have previously mentioned, many critics say that NLP as a psychotherapeutic technique is disputable in terms of its merit and authenticity. Some critics also contend that the modern interpretation of Neuro Linguistic Programming seems to have changed from the original concept into something else. There are also other people who believe that NLP therapists are simply engaged in fraud. Others are apprehensive about it because it only gives a short-term boost rather than a long and continuing one. Finally, others believe that Neuro Linguistic Programming is mainly about tricking people into believing they are special and that they can become better. Although these criticisms of NLP exist, some people nonetheless swear by it: that it can help a person live a better life, and that it can help one change his or her attitude. It may also help a person by modifying behaviour in such a way that he or she becomes more dynamic. The person can also be expected to become much more independent or more creative. In addition, we must always bear in mind that NLP is not a science; thus there is no certainty of its efficacy, as it largely depends on the individual's application of the techniques and the level of competency that the person has attained.
<urn:uuid:0d8ca1ac-309c-4f3b-b841-81be30aa424a>
CC-MAIN-2023-40
https://ttkrfu.com/neuro-linguistic-programming-nlp/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.19/warc/CC-MAIN-20230923162848-20230923192848-00817.warc.gz
en
0.964744
689
2.609375
3
This temple is linked to the story of how Mangalore got its name. It is about 3 km from the city center. Goddess Mangala Devi, the divine mother, was immensely pleased with the devotion of Bhargava and told him that she would dwell in his place as "Mangala Devi" to be worshipped by devotees. The place came to be known as Mangalapura, which later became Mangalore. The word Mangalore is derived from Goddess Mangaladevi, the main deity of the temple. The temple was built in the 10th century in memory of Mangale, the princess of Malabar. The Goddess Mangala is worshipped as Shakti. The festival is celebrated during the nine days of Navaratri.
<urn:uuid:f28d17f7-da5b-4133-904f-481c86c22aea>
CC-MAIN-2017-09
http://www.tourtravelworld.com/india/mangalore/mangaladevi-temple.htm
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00309-ip-10-171-10-108.ec2.internal.warc.gz
en
0.977399
198
2.546875
3
1. Read your teacher's writings, blogs, articles, syllabus, etc. If you don't understand something or disagree, ask and discuss...you will learn something. You and your family are paying a lot of money for this knowledge. Be curious.
2. As a student, you are at the bottom of the food chain. Learn from those above you. As a student you don't know enough to know what you don't know. As you learn more you understand what more there is to learn.
3. If you are not consistently placing in the top 2 or 3 at your school auditions, why would you expect to win a professional audition? Your study is not necessarily to prepare you to win an audition; it is to make you a musician of such quality that employers will want to hire you for your expertise.
4. The path to excellence is well worn. Stay on the path. You are not different. You have not found a new way. There are no "million dollar ideas," only "million dollar executions." Hard work outdistances talent. There are no shortcuts.
5. Turn your phone and other distractions off during your practice. What you are trying to accomplish requires all of your focus. Don't get sucked into mediocrity.
6. Invest your time. Don't spend your time. Quality of practice is more important than quantity of practice. The ideal is a large quantity of quality practice.
7. Live aggressively and use each day to its fullest. You will never get it back.
8. Be curious, be interested, seek knowledge and progress. Apathy may seem "cool" but it only leads to mediocrity. Be proud to be good. Don't pretend it's not important.
9. Be the best you can be. You are cheating yourself by doing anything less than your absolute best. Don't be upset with the results you don't get from the work you don't do. Remember…the world needs ditch diggers too.
10. Be a doer, not a talker. Don't pretend to be something you are not. Become great so that you do not have to pretend.
<urn:uuid:212fe407-5126-4e7e-a651-6dbf92ac8c52>
CC-MAIN-2018-17
http://peterellefson.blogspot.com/2014/06/elucidations-2014.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00101.warc.gz
en
0.959633
459
2.734375
3
Trees and plants have long been held sacred to communities the world over. In India, we have a whole variety of flora that feature in our myths, our epics, our rituals, our worship and our daily life. There is the pipal, under which the Buddha meditated on the path to enlightenment; the banyan, in whose branches hide spirits; the ashoka, in a grove of which Sita sheltered when she was Ravana’s prisoner; the tulsi, without which no Hindu house is considered complete; the bilva, with whose leaves it is possible to inadvertently worship Shiva. Before temples were constructed, trees were open-air shrines sheltering the deity and many were symbolic of the Buddha himself. Sacred Plants of India systematically lays out the sociocultural roots of the various plants found in the Indian subcontinent, while also asserting their ecological importance to our survival. Informative, thought-provoking and meticulously researched, this book draws on mythology and botany and the ancient religious traditions of India to assemble a detailed and fascinating account of India’s flora.
<urn:uuid:af4c2409-269b-41a1-bd76-201e730be7d3>
CC-MAIN-2018-34
http://earthcarebooks.com/product/sacred-plants-of-india/
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210105.8/warc/CC-MAIN-20180815122304-20180815142304-00494.warc.gz
en
0.962696
228
2.875
3
Our brains use the image formed on the retina of the eye to give us a picture of the outside world. Sometimes the brain is misled and interprets the information wrongly. Then we see an optical illusion. Straight lines can be made to appear curved. Two lines of the same length can seem to be of different lengths. The eye can be misled by what else is surrounding these lines. We often have a false impression of movement. Suppose two trains are sitting in a station, and then one starts to move. The passengers of the other train often think, incorrectly, that their train is leaving the station. They were expecting to move, and when they see movement they assume their train is departing. THE LAW OF PERSPECTIVE: As an object moves into the distance it appears to grow smaller. In the same way, parallel lines, like the edges of a road or a railway line, give the illusion of drawing together as they become further away. These things appear to us in perspective. The picture can be seen in either of two ways. If you stare at it for long enough you will see first a candlestick. Then suddenly it will look like two faces. You will not see both candlestick and faces at the same time. Optical illusions: In figures 1 and 2 the two red lines appear to be of different lengths because of the angle of the green lines. They are actually exactly the same length. When we watch a film the eye sees the slight changes in detail in each picture as a continuous change, which it interprets as movement. The square holes on the edges of the film guide it through the projector. Each of your eyes is shaped like a ball. Most of the ball is safely shielded inside your head. At the front of the eye is a transparent outer layer called the cornea. The coloured part of the eye is called the iris, the middle of which appears black. This is actually a hole, called the pupil. It automatically grows bigger or smaller to let in more or less light. From the pupil the light passes through a lens.
This can alter its shape to focus images from near or far onto the retina. The retina is the back wall on the inside of the eye. It contains millions of light-sensitive cells called rods and cones. These cells convert the light into electrical messages which are carried to the brain by the optic nerve. Every image that is focused on the retina is upside-down. The brain automatically turns the image the right way up in our minds. Each eye sees a slightly different view of an object. Because of this we see a rounded or three-dimensional view of things. If we had one eye in the centre of our faces, objects would seem much flatter. A camera uses one lens to produce an image. This is why a photograph looks two-dimensional.

Some More Interesting Pictures on Optical Illusion:
1) Keep in mind that this is a static image. It is not animated in any way, but as your vision moves back and forth the center area seems to be moving toward the center (contracting) and the outer edges seem to be moving away (expanding) from the center. Also worth noting is that if you fixate on a point in the center and don't move your eyes this anomalous motion will stop.
2) A scintillating grid illusion. Shape, position, colour, and 3D contrast converge to produce the illusion of black dots at the intersections.
4) Motion Clock: Here is an anomalous motion illusion that was created for me by Herman. If you don't see the motion, slowly move your eyes around the clock face.
<urn:uuid:3907c4e6-10a8-4422-9efb-4fb39456aba8>
CC-MAIN-2017-43
http://manashsubhaditya.blogspot.com/2012/03/optical-illusions-science-of-optical.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825057.91/warc/CC-MAIN-20171022022540-20171022042540-00073.warc.gz
en
0.944025
747
3.75
4
Scientists find key to stubbing out nicotine addiction
During a study of two interconnected regions of the brain known to play a role in addiction – the medial habenula and the interpeduncular nucleus (IPN) – scientists found that changes to a particular set of neurons in nicotine-addicted mice could reduce the rodents' dependency on the drug. When exposed to nicotine, the medial habenula is supposed to send a signal to the IPN telling it to limit the drug's effects by preventing maximum intake. In its research, the team from New York's Rockefeller University, Mount Sinai Medical School and the National Institute of Biological Sciences in China found that prolonged exposure causes changes in a group of neurons known as Amigo1, a change that disrupts the communication between the habenula and the IPN. This means the message to "stop smoking" is never delivered, causing higher levels of addictiveness. The study, published in the journal PNAS, showed how mice that had been served nicotine-water in a chamber for six weeks displayed their addictive behaviour by choosing to remain in that chamber when given the choice of leaving to spend time in another. When the team later conducted the same experiment on mice genetically modified to remove the Amigo1 neurons, they found that the mice did not display a preference for the drug, choosing instead to stay in another chamber rather than the one in which they were served nicotine. "If you are exposed to nicotine over a long period you produce more of the signal-disrupting chemicals and this desensitizes you," scientist Ines Ibanez-Tallon of Rockefeller University told Medical Express. "That's why smokers keep smoking." Cigarette smoking is responsible for more than 480,000 deaths per year in the US, according to the CDC, with the total economic cost of smoking thought to be around $300 billion per year. Worldwide, tobacco consumption causes nearly six million deaths per year. If current trends continue, that figure will rise to eight million by 2030.
Earlier this month, research from Bristol University in the UK found that when 500 participants were asked to pick out the smoker and the non-smoker from 23 sets of twins, both men and women were able to identify the smoker. While the research found that the non-smoking twins were judged more attractive by the opposite sex, it noted a similar trend among people of the same sex.
<urn:uuid:4e0a424d-5e4c-44e6-8713-e010597afce0>
CC-MAIN-2021-10
https://www.rt.com/news/414499-nicotine-addiction-neurons-study/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00100.warc.gz
en
0.957565
487
3.125
3
Cancel culture or call-out culture is a modern form of ostracism in which someone is thrust out of social or professional circles – whether it be online, on social media, or in person. Notably, many people claiming to have been cancelled often remain in power and continue their careers as before.

How is cancel culture used in a sentence? How to use cancel culture in a sentence. In Cuba its culture commenced in 1580, and from this and the other islands large quantities were shipped to Europe. Yet a child coming under the humanising influences of culture soon gets far away from the level of the savage.

What does woke stand for? Woke (/woʊk/ wohk) is a term, originating in the United States, that originally referred to awareness about racial prejudice and discrimination. It subsequently came to encompass an awareness of other issues of social inequality, for instance, regarding gender and sexual orientation.

Is it canceling or cancelling? The forms of cancel in American English are typically canceled and canceling; in British English they are cancelled and cancelling. Cancellation is the usual spelling everywhere, though cancelation is also sometimes used.

How do you use woke in a sentence? Woke sentence examples: The sound of voices woke her. Cassie woke the next morning in the cool of dawn. When she woke again, the room was dark. I just woke up, Moira. Later she woke to find Connie asleep in a chair beside her gurney. Remember, he woke you.

What are the 2 types of cancellation? The forms of cancel in American English are typically canceled and canceling; in British English they are cancelled and cancelling.

What is cancel culture in simple words?
: the practice or tendency of engaging in mass canceling (see cancel entry 1 sense 1e) as a way of expressing disapproval and exerting social pressure. For those of you who aren't aware, cancel culture refers to the mass withdrawal of support from public figures or celebrities who have done things that aren't socially

What is TikTok cancel culture? Cancel culture is basically when an influencer is called out and publicly shamed for a mistake that they might have made in the past. 2020 was the prime year for cancel culture because many people had a lot of time on their hands while being in quarantine. TikTok was the main app that people were getting cancelled on.
<urn:uuid:f343f47f-d15c-48d7-88f8-72b487f764e7>
CC-MAIN-2021-43
http://birthday-press.com/qax84983.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00417.warc.gz
en
0.970493
497
3.15625
3
Watergate: A general term used to describe a complex web of political scandals between 1972 and 1974. The word specifically refers to the Watergate Hotel in Washington D.C. At this complex, the office of the Democratic National Committee was burgled on June 17th, 1972. The burglary and subsequent cover-up by the Nixon administration eventually led to moves to impeach President Richard Nixon, who resigned his presidency on August 8, 1974.

Guzman in Guatemala: The CIA carried out a covert operation in 1954, known as the Guatemalan coup d'état, to depose the democratically elected Guatemalan President Jacobo Arbenz (aka Guzman) and end the Guatemalan Revolution. Code-named Operation PBSUCCESS, it installed the military dictatorship of Carlos Castillo Armas, who turned out to be the first in a series of US-backed dictators to rule over Guatemala.

Mossadegh in Iran: Mossadegh became Prime Minister of Iran in 1951, and attempted to nationalize Iranian oil production, which had previously been run and exploited by the British. The British stopped oil production in hopes of making Mossadegh more agreeable, but this failed, and the British quickly abandoned the idea of declaring war on Iran due to personnel issues. As such, they decided to install a new government in Iran, so MI6 partnered with the CIA to stage communist upheaval, and therefore justify sending in troops. The Shah, whose dictatorship was marked by torture and repression, was installed by the US and Britain in a 1953 coup.
<urn:uuid:2c229455-f837-4398-8448-8b3decb7369e>
CC-MAIN-2021-39
https://www.coursehero.com/file/p1rriol/NSC-68-National-Security-Council-Paper-NSC-68-entitled-US-Objectives-and/
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00646.warc.gz
en
0.965788
309
2.984375
3
VINCI Gives Kids Their Very Own Tablet – And a Jumpstart to Education [GIVEAWAY!]
October 20, 2011
© VINCI Tablet
Ever since the iPad made its debut onto the marketplace last year, the tablet has become the 'it' device to covet. Not only did other manufacturers create tablets to compete with the iPad, but a slew of tablets for kids emerged on the market as well. The VINCI Tablet is one of them; it is geared towards kids aged 0-4 and was created by a mom who realized that her electronic devices were not meeting the needs of her young child. Dr. Dan Yang, a telecom entrepreneur, created the VINCI Tablet to provide a kids-focused device based on developmental science, innovative technology and early childhood education, with safety as the first priority. The whole premise of the device is to provide kids with relevant and age-appropriate games, activities and learning apps that encourage young kids to learn at their own pace. The design of the tablet itself is very unique and reflects the intended age group. The rubberized wide-grip handles that surround the device are intended to be chewed on, drooled on and held with sticky hands. The VINCI is a 7-inch Android-based touchscreen device that is created from the safest, non-toxic materials possible and comes bundled with Early Learning Apps, Games, Story Books and Music Videos, which are all part of an 'Early Learning Curriculum' that the device is based upon. There is also a 3-megapixel rear-facing camera that is used in the tablet's game designs to help children through the learning process, or for simply taking photos of your little ones. VINCI creator Dr. Yang feels that the biggest differentiator between the VINCI and other kids-focused tablets is its learning curriculum. It was created from intensive research efforts on the ways in which children learn and promotes "constructive" play, which engages the various cognitive areas that are developing through any given age range.
The learning system is classified into three different learning levels: One caveat: If you want to be able to grab the VINCI and check your email or Google something, you’ll be out of luck because there is no Wi-Fi or Internet access available on the device – which is a true indication of the age group it targets. Dr. Yang did that purposely so that young kids aren't able to venture beyond their apps into the dark reaches of the Internet. Other caveat: The device isn’t cheap. The basic version (VL-1001), which offers 4GB storage and comes with Level 1 learning activities sells for $389. The enhanced version (VL-2001) that comes with 8GB of storage and Levels 1 and 2 is $489. Both systems can be upgraded to higher learning levels with an additional membership fee. When asked about the high price of the device and if parents are willing to pay this much for a learning device, Dr. Yang said, “Absolutely. Parents see it as an early investment into their children’s education.” The VINCI tablet can be purchased from the company’s site or through online retailers. Click here for more information. *And one lucky Screen Play reader will be the winner of a VINCI tablet loaded with Level 1 and 2 software! Simply click here to enter. The giveaway runs from Thursday, Oct. 20 – Thursday, Oct. 27. (Read the official rules here.) Good luck! If you have a question for Screen Play or would like to submit a product for consideration, please contact LetsPlugIn@gmail.com.
<urn:uuid:4bbec8bf-93c9-4e9e-b12c-e2337a04fd36>
CC-MAIN-2014-10
http://www.parenting.com/blogs/children-and-technology-blog/jeana-lee-tahnk/vinci-gives-kids-their-very-own-tablet-and-jumpst?con=blog&loc=bottomprev
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704396/warc/CC-MAIN-20140313024504-00033-ip-10-183-142-35.ec2.internal.warc.gz
en
0.960989
757
2.625
3
Minestrone with buffalo worms and mealworms (grasshopper garnish optional); termite porridge; ‘land shrimp snack’ made of grasshoppers or locusts with hot pepper oil, lime and salt; and protein bars made with cricket flour. These are just a handful of recipes from “The Insect Cookbook: Food For a Sustainable Planet” published by Dutch entomologists Arnold van Huis and Marcel Dicke, along with cooking instructor Henk van Garp. The book’s main aim is to open our eyes to the fact that our aversion to insects as a food source is senseless and outdated. Unlike livestock and other forms of animal protein, insects are plentiful and nearly everywhere. Whilst we culturally tend to overlook the possibility of sustaining ourselves on insects, they are a nutrient-rich and sustainable food source that deserves consideration. Jon Foley, head of the Institute for the Environment at the University of Minnesota, recently referred to the global food crisis as ‘the other inconvenient truth’, stating that he believes we are at a ‘critical crossroads’. Currently, the population of the world increases by about 75 million people each year. According to the United Nations Panel on Global Sustainability, the world will need at least 50% more food and 30% more water by 2030. As developing countries adapt to modern needs and their economies grow, their demand for meat will increase, and to meet this we will need to triple our food production. Unfortunately, with current agricultural practices this is an impossible goal. A recent report issued by the U.N.’s Food and Agriculture Organization (FAO) promoted human consumption of insects as an environmentally sustainable means of feeding the planet. Although it was met with disgust and tossed aside by many, others such as food expert Ruth Reichl, former editor-in-chief of Gourmet and award-winning author, recently told the New York Times that “We should all be eating insects, and we all will be eating insects.
They are a perfectly reasonable source of protein.” Like it or not, eating insects (or entomophagy) provides a far more sustainable source of protein than our existing consumption of meat and animal products. Also, most edible insects are very protein-rich while being comparatively very lean. For example, a cricket has all the essential amino acids that beef contains but is far higher in iron and calcium. Other insects can provide other micronutrients such as B-vitamins, beta-carotene, and vitamin E. Not only are insects often more nutritious, but they are also a potential solution to the current inefficient food system because of their marginal environmental impact. In general, insects are extremely inexpensive and relatively safe and only require a fraction of the feed, space, water and maintenance of conventional livestock. The current livestock industry is estimated to be responsible for 17-18% of greenhouse emissions and accounts for 70% of all land cleared for agriculture. Almost half of global water is used to produce animal-based foods. Insects, on the other hand, can live off agricultural byproducts such as food waste (e.g. fruit peels) and only a tiny portion of them produce methane, with those that do only producing very small amounts. Also, as insects are poikilothermic (i.e. their body temperature remains the same as their surroundings), they are much more efficient at converting nutrition into protein. For instance, crickets need 12 times less food than cattle to produce the same amount of protein, and unlike the cruel practice of factory farming, crickets and other bugs actually thrive when they are packed on top of each other. Convinced yet? With an estimated 1,417 species of insects being regularly eaten by over 2 billion people across 3,000 ethnic groups in 80% of countries around the world, it seems it is the Westerners who will suffer most in the long run.
But why are we so squeamish when it comes to the idea of chomping down on a nice slice of crittle (a cricket and peanut brittle hybrid)? Most of the Western world readily eats prawns and shrimp, which are arthropods, just like insects, spiders and millipedes! Therefore, we need to get over this idea that insects are disgusting and stop trying to live in an insect-free world where everything is sterile and clean (after all, we wouldn’t be here without the pollinating insects!). But before you get too excited and run down to the local park with a homemade pooter, take heed of the following advice. Like plants, some insects are good for you and some are toxic, and you can never be sure that wild insects haven’t been exposed to pesticides; therefore, only farmed insects should be consumed. There are also several ‘pestaurants’ opening up around the world with a variety of insects to suit every palate. Experts also caution that we must be careful to develop sustainable cultivation and harvesting methods, as there are examples of human overconsumption that has led to the collapse of some insect species. Jakub Dzamba, a man who’s researching radical approaches to urban agriculture, is working to build insect farms that can go right into the walls of an apartment building. The idea is that families could feed their food scraps and leftovers to the crickets, and then eat those same crickets, thus solving the dual agricultural problems of production and distribution. It would therefore seem that these six-legged critters might just find a spot at our table in the not-so-distant future. If we can just start to accept and overcome our fear of munching on food that creeps and crawls, the future looks a little brighter for us humans. Unfortunately, I’m not so sure the same can be said for the future of our possible new food source, who have more of a reason now than ever to stay away from the ‘light’!
<urn:uuid:95e94a98-3804-42bb-bd2c-e53a4fd36306>
CC-MAIN-2017-51
http://ournewclimate.blogspot.com/2014/05/eating-insects-to-save-world-dont-let.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948595342.71/warc/CC-MAIN-20171217093816-20171217115816-00386.warc.gz
en
0.9497
1,225
2.84375
3
More than a toy, Roominate is a movement designed by three women who majored in engineering, math and science at prestigious institutions MIT, Caltech, Stanford and Penn. Alice Brooks, Jennifer Kessler, and Bettina Chen developed Roominate, a stackable, customizable dollhouse-type toy that girls build themselves and outfit with working circuits, to inspire girls to pursue degrees and careers in STEM (science, technology, engineering and mathematics) because, “Only 15% of female first-year college students intend to major in STEM, and less than 11% of engineers are women.” The innovative trio turned toymakers also want to change the fact that most toys designed for young girls are dolls and princesses. The Roominate Kit comes with wooden building pieces and circuit components that girls may use to construct, design and expand upon their own unique rooms. Watch Roominate come to life in the video after the jump. Alice, Jennifer and Bettina offer, “We want to inspire your daughters to be the great artists, engineers, architects, and visionaries of their generation. We intend to give them every tool to reach that potential.” The women have been developing their Roominate prototype with over 200 girls ages 5-12, and they note, “Most young girls have never made a circuit, but they love the intuitive experience our color-coded circuits provide. Connecting the circuits brings the rooms to life; a fan or a light can instantly make a room interactive.” Roominate successfully funded their Kickstarter campaign by raising almost $86,000, surpassing their goal to raise $25,000.
<urn:uuid:a456ff09-87c4-46d1-bf19-c42e3274f1ce>
CC-MAIN-2017-04
http://www.inhabitots.com/roominate-designed-to-spark-girls-interest-in-technology/
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00234-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954672
334
2.984375
3
Whenever I ask someone what programming languages they know, I get one of two answers: 1.) Some form of object-oriented programming, or 2.) HTML. HTML, technically, is not a programming language. Because of this, it usually gets a bad rap, and not too many people want to learn it once they find out that they won’t be able to program their dream game in HTML. Although the field is changing with HTML5 coming onto the scene, many people still do not see the importance of learning the easy-to-understand language of HTML, no matter what their occupation. HTML stands for Hypertext Markup Language, and it operates much like a word processor, or more accurately, word processors operate a lot like it. HTML is used to set up the web page for the rest of the languages to be put in it. A basic HTML document is nothing pretty, usually consisting of many left-aligned headings and paragraphs, with a plain white background. Boring, right? This is what usually turns people off from HTML, because it gives the impression that HTML cannot do anything “cool.” This is simply not true. Although HTML has no computing power, as in it cannot do any math, it still offers a lot of insight into how the website can operate. For example, if you are making a web page using a CMS and have no prior experience in web design, then you can create a fairly nice website in a few hours, complete with text and a pretty theme. However, if you know some HTML, you can have a site up in minutes, with all the knowledge of how the website functions and is structured. If something goes amiss with your CMS, you may be helpless, unless you realize that the problem is a misplaced image, or a wrongly placed <h1> tag.
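To make the "nothing pretty" point concrete, here is a minimal, hypothetical HTML document of the sort described above; the title and text are invented for illustration, but the tags are standard HTML:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- The title shown in the browser tab, not on the page itself -->
    <title>My First Page</title>
  </head>
  <body>
    <!-- A large, left-aligned heading on a plain white background -->
    <h1>Welcome</h1>
    <!-- An ordinary paragraph, much as a word processor would display it -->
    <p>This page is plain HTML: no styling, no scripts, no math.</p>
  </body>
</html>
```

Saved as a .html file and opened in any browser, this renders exactly as the article describes: a left-aligned heading and paragraph on a white page.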
<urn:uuid:372f173a-0c34-4b33-8e65-36801a952964>
CC-MAIN-2020-29
http://maxmarksdesigns.com/?p=146
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00217.warc.gz
en
0.949329
370
3.296875
3
New Living Translation
Then Boaz called ten leaders from the town and asked them to sit as witnesses.
King James Bible
And he took ten men of the elders of the city, and said, Sit ye down here. And they sat down.
Darby Bible Translation
And he took ten men of the elders of the city, and said, Sit down here. And they sat down.
World English Bible
He took ten men of the elders of the city, and said, "Sit down here." They sat down.
Young's Literal Translation
And he taketh ten men of the elders of the city, and saith, 'Sit down here;' and they sit down.
Ruth 4:2 Parallel Commentary
Wesley's Notes on the Bible
4:2 Ten men - To be witnesses: for though two or three witnesses were sufficient, yet in weightier matters they used more. And ten was the usual number among the Jews, in causes of matrimony and divorce, and translation of inheritances; who were both judges of the causes, and witnesses of the fact.
1 Kings 21:8 So she wrote letters in Ahab's name, sealed them with his seal, and sent them to the elders and other leaders of the town where Naboth lived.
Her husband is well known at the city gates, where he sits with the other civic leaders.
<urn:uuid:8c04daef-518f-4a63-8971-548d53c59f86>
CC-MAIN-2019-47
https://www.biblehub.com/nlt/ruth/4-2.htm
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00049.warc.gz
en
0.942218
418
2.8125
3
Amanda Porterfield, ed. American Religious History. Malden, Mass. and Oxford, UK: Blackwell Publishers, 2002. xiv + 338 pp. $46.95 (paper), ISBN 978-0-631-22322-1; $119.95 (cloth), ISBN 978-0-631-22321-4. Reviewed by David Thomas (Department of History and Political Science, Union University) Published on H-AmRel (July, 2002) Religious Diversity, Religious Freedom Religious Diversity, Religious Freedom American Religious History is a collection of secondary and primary source documents designed to be used in a course of the same title, or perhaps in a more general survey of United States history. Porterfield has pulled together a diverse, intelligent selection which will allow professors and students plenty of latitude for exploration, discussion, and learning. She has chosen breadth of coverage over depth in any particular faith tradition and so her document selections range widely across the religious spectrum. The book opens with an essay by the editor highlighting the foundational elements of American religious history. Porterfield unpacks four: religious freedom, individual experience, family life, and social reform. These intertwining aspects of our religious heritage build on one another and account for nearly all of the innovation, conservativism, and argument present in the various religious traditions of the United States. A persuasive, well-written essay, this provides the overarching themes which guided the choice of documents and can serve as a foil against which to compare other synthetic arguments. Part 1 includes nine historical essays, each roughly twenty pages in length. Perry Miller's "Errand into the Wilderness" is followed by essays dealing with shouting Methodists, establishment Protestantism, gender ideals in fundamentalism, Catholic survival strategies, the development of conservative Judaism, the introduction of Buddhism, and gender conflicts for black Muslim women. 
The most general is Albanese's essay on diversity and syncretism, "Exchanging Selves, Exchanging Souls: Contact, Combination, and American Religious History." Otherwise, each piece is a highly specific essay devoted to a topic of importance to an individual tradition. Jay Dolan's essay on Catholicism, for example, addresses Catholic efforts to adjust to the progressive individualism of American culture, a crucial issue for American Catholics. Porterfield appears to have made the assumption that other sources will provide the links to tie these detailed pieces together, for only Albanese's essay addresses broad themes that cover two or three hundred years and several religious traditions. This is not a complaint, just an observation on what the book does and does not do. The essays provide a series of opportunities to dive headlong into the depth and complexity of specific historical developments. The reader will need to turn elsewhere for synthesis, narrative, and thematic development. This is an excellent group of historical essays. I dissent from only one selection, Simmons's essay "Striving for Muslim Women's Human Rights." Most of the essay is written in the first person and reflects upon the author's personal journey of faith and political activism. While this makes for a great read and is sure to confront many religious and social assumptions for students, I might have considered it as a primary source, perhaps in place of the selection by Malcolm X. The thirty-four primary source documents complement the historical essay, but I would like to see more of them. Porterfield's selections lay out the Puritan foundation well, with five indisputably important--if predictable--authors: Winthrop, Hutchinson, Williams, Bradstreet, and Edwards. As I read this, I wondered about other religious traditions and about the nature of religious freedom. About half the colonies had some element of utopian thought directing their founding; should this be included? 
Would it be good to include an early Maryland law on religious toleration? That would certainly be an eye-opener for students with fixed, modern assumptions about the meaning of "religious freedom." The Revolutionary period is thinly represented by a selection from Jefferson and one from an Iroquois leader, Handsome Lake, despite the fact that these were foundational years for two of the most powerful religious influences in American history, Methodism (Francis Asbury) and Catholicism (John Carroll). The tumult of the nineteenth century gets better press, with selections from Finney, Jarena Lee, Emerson, and Dickinson. Porterfield could not include everything, but I did miss the immigrant stories and some representation from the slave quarters, both of which could add diversity and depth. Abolitionists and women's rights advocates were often intensely religious people, as well, and since reform was one of Porterfield's major themes some mention of these seems appropriate. Selections from the later nineteenth century include Brownson (Catholic), Pratt (Latter-Day Saints), Wise (Jewish), and Eddy (Christian Science). Twentieth-century documents address the liberal/conservative split, Catholic and Jewish developments, American Indian influences, spiritual arguments in the Civil Rights Movement, and the multiple challenges of feminism. The book closes with a selection by Ralph Reed. Porterfield covers more of this century than the others, but this was also a busy century filled with important changes, many of which did not make the cut. Pentecostalism was left out, as was the spirituality of the environmental movement; Vatican II was just touched on. Nonetheless, Porterfield captures the energy and fragmentation of this century better than the previous two. With such a competitive, energetic history, more documents could always be included. 
The goal in such a book, though, is not to be thorough; the goal is to provide avenues for students to explore those four foundational elements the editor mentioned in her introduction: religious freedom, individual experience, family life, and social reform. Of these four, religious freedom and individual experience are represented the best, while social reform and family life both take a back seat. I enjoyed reading this book; I found the selections interesting and challenging and I believe students would as well. Porterfield has written her introductions to each selection fairly and with integrity. With this topic bigotry is so easy, yet she models a very professional entrance into the world of the believer. This is a fine reader, with the potential to raise important questions about both religious history and personal belief. If there is additional discussion of this review, you may access it through the list discussion logs at: http://h-net.msu.edu/cgi-bin/logbrowse.pl. David Thomas. Review of Porterfield, Amanda, ed., American Religious History. H-AmRel, H-Net Reviews. Copyright © 2002 by H-Net, all rights reserved. H-Net permits the redistribution and reprinting of this work for nonprofit, educational purposes, with full and accurate attribution to the author, web location, date of publication, originating list, and H-Net: Humanities & Social Sciences Online. For any other proposed use, contact the Reviews editorial staff at firstname.lastname@example.org.
<urn:uuid:9f3dd214-719a-4ee0-9353-9cc3c98eb074>
CC-MAIN-2014-49
http://www.h-net.org/reviews/showrev.php?id=6479
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010590.31/warc/CC-MAIN-20141125155650-00238-ip-10-235-23-156.ec2.internal.warc.gz
en
0.946063
1,435
2.5625
3
Let's play a quick game of "Would You Rather" survival-style. If you were out in the Alaskan wilderness, by choice or circumstance, would you rather have a knife or a gun with you? As you make your choice, consider the environment. Since the Alaskan wilderness covers an area of more than 90,000 square miles (233,098 square kilometers), you could encounter any number of survival situations. During the winter, you might wander over an expanse of snow and ice, and in the summertime, you're witness to greener foliage and ambling wildlife. You could be on the side of a mountain, on top of a glacier or in a forest. Because of its role with many basic survival necessities, you're better off in the Alaskan wilderness, or any wilderness for that matter, with a knife. But what about protecting yourself from wild animals? What good would a knife do if you're not alive to use it? Although a gun could certainly help you kill Alaskan predators, such as the black bears, Alaska's Department of Natural Resources warns that people with guns often hurt themselves more frequently than they do bears [source: Alaska Department of Natural Resources]. Also, grizzly bears, in particular, will usually shy away from attacking if you stand still, raise your arms and speak to the animal in a commanding voice. How about shooting a gun as a distress signal? You can use reflective surfaces, like a large knife blade, to create a bright sunspot that you can flash three times as the international signal for rescue needed. If used correctly, people can spot this type of signal from more than 10 miles away (16 kilometers) [source: Tawrell]. And unless you want to lug around a cache of ammunition, a knife will likely prove more lasting for the long haul. If you aren't convinced yet that knives are a cut above firearms, read the next page to learn more ways they can help you withstand the unique challenges of the Alaskan wilderness.
<urn:uuid:4a82a764-00d7-44cd-b67e-755ac1a676cd>
CC-MAIN-2022-05
https://adventure.howstuffworks.com/survival/wilderness/alaska-knife-or-gun.htm#pt1
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00089.warc.gz
en
0.941253
419
2.8125
3
What is an accessibility feature, and how can it help me?
An accessibility feature (also known as Assistive Technology or AT) generally refers to built-in software or a third-party app that can help people with impairments and disabilities use popular consumer devices. Most popular devices such as computers, smartphones and tablets already have useful accessibility features out-of-the-box as part of what you get with best-selling operating systems like Google Android and Microsoft Windows. Here are some examples of common accessibility features in popular products:
- Screen reader: A text-to-speech application that reads out computer and Internet-related information to assist people who are blind or vision impaired.
- Screen magnifier: A magnification tool for enlarging screen content.
- Closed captions: Enable dialogue and audio effects in a video to be displayed as text on a screen to support people who are Deaf or hearing impaired.
- Themes: High-contrast themes allow people with visual impairments to change the colours to a more comfortable setting (such as white-on-black), and increase the size of mouse pointers and text.
- On-screen keyboard: Enables people with mobility impairments to ‘type’ by using a pointing device to select letters and words on the screen.
- On-screen alerts: Visual messages can appear in place of audible sounds to help people who are Deaf or hearing impaired.
While the quality and type of accessibility features vary between operating systems and devices, most modern consumer computers, smartphones, tablets and media players contain them in some form.
<urn:uuid:faecd4fe-a616-4af1-ae3f-f567297e046f>
CC-MAIN-2019-22
https://affordableaccess.com.au/whats-accessible/what-is-an-accessibility-feature/
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258439.57/warc/CC-MAIN-20190525184948-20190525210948-00513.warc.gz
en
0.920793
326
3.703125
4
Can I Touch Your Hair? Poems of Race, Mistakes and Friendship By Irene Latham and Charles Waters, Illustrated by: Sean Qualls and Selina Alko Carolrhoda Books, 2018 Ages 8-12, Grades 3-6 Other formats: e-book Nothing is ever truly black and white. That’s what two classmates in the book Can I Touch Your Hair? learn in fifth grade when they reluctantly wind up paired together for a writing project and believe they have nothing in common. Irene and Charles’s differences in gender, style, and friends are already stark. Throw into the mix the fact that Irene is white and Charles is black, and both students fear they’re in for an uncomfortable and unmanageable few weeks. Yet, once each begins to write on the same subject as the other about his or her life experiences and perspectives, Irene and Charles discover that while the differences between them are indeed tangible—in shoe shopping, dinner conversations, church services, hairstyles, and favorite sports—their differences are unique preferences or circumstances that can be appreciated. They also learn that color is only skin deep. Even with varying experiences, opportunities, and challenges, at the end of the day, their matters of the heart aren’t so unalike at all, and thus, a friendship unfolds. Readers will experience Charles’s perspective on why it’s annoying to have someone touch his hair, and Irene helps readers understand how one can make awkward fumbles in expressing herself even with the best of intentions. This book could serve as a great conversation starter for adolescents from middle school age to older youths. Adults may even find it helpful to read these poems with a child and share their own experiences navigating race, identity and friendships. The vibrant illustrations by artists Sean Qualls and Selina Alko are an excellent companion to these compelling poems and will help young readers make sense of what it means to stretch beyond one’s comfort zone to try and understand others. – SHA
<urn:uuid:0b729b94-5387-4b74-acb2-64bcfd4af855>
CC-MAIN-2021-31
https://girlsofsummerlist.com/2018/06/18/can-i-touch-your-hair-poems-of-race-mistakes-and-friendship/?shared=email&msg=fail
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.52/warc/CC-MAIN-20210730091645-20210730121645-00019.warc.gz
en
0.941518
425
2.984375
3
Team Deseret celebrated mission completion on April 26, 2012. Deseret Chemical Depot (DCD) began storing chemical weapons in 1942 and once stored 13,676 tons of chemical agents, which totaled more than 44 percent of the nation’s original stockpile. DCD’s original stockpile consisted of various munitions and ton containers, containing GB, GA and VX nerve agents or H, HD, HT and Lewisite blister agents. The depot also served as the location for the Tooele Chemical Agent Disposal Facility (TOCDF) and the Chemical Agent Munitions Disposal System (CAMDS). CAMDS once served as the primary research, test and development facility for the nation’s chemical weapons elimination program; closure of this facility was completed in April 2013. Destruction of chemical weapons by the TOCDF, the first full-scale disposal facility in the continental United States, began in August 1996. The last chemical agent munition in the DCD stockpile was destroyed on January 21, 2012. The Army worked in partnership with Utah state and local government agencies, as well as federal agencies like the U.S. Environmental Protection Agency and the Centers for Disease Control and Prevention, to safeguard the local community and protect the environment as we stored and disposed of these chemical weapons.
Safety and Security
The safety of workers, the public and the environment is paramount to the success of the chemical weapons disposal mission. The U.S. Army Chemical Materials Activity (CMA) oversaw the secure storage of chemical munitions at DCD to ensure that they were safe. Once munitions were slated for disposal, they were transported, treated and disposed of following strict internal processes and regulatory requirements.
The CMA remains committed to creating a safer tomorrow by safely storing the remaining two stockpiles in Colorado and Kentucky and safely assessing and treating recovered chemical warfare materiel through its Non-Stockpile Chemical Materiel Project—permanently eliminating the threat of aging chemical weapons to our communities and our Nation.
Public Participation and Community Relations
The Utah Citizens' Advisory Commission, whose members included area residents appointed by the governor, was a focal point for public participation in the Army's weapons storage and disposal program in Tooele until the chemical weapons stockpile was eliminated in 2012. The Commission was disbanded in 2012 as well. The Chemical Stockpile Emergency Preparedness Program works closely with your community and state emergency professionals to develop emergency plans and provide chemical accident response equipment and warning systems. To learn more about the Army’s chemical weapons disposal mission visit the Tooele Chemical Stockpile Outreach Office.
TOCDF Closure Update [1,420KB pdf] 6/16/2014 Tooele, UT - Tooele Chemical Agent Disposal Facility Closure Update for June 2014
TOCDF Closure Update [1,310KB pdf] 4/24/2014 Tooele, UT - Tooele Chemical Agent Disposal Facility Closure Update for April 2014
TOCDF Closure Update [2,103KB pdf] 3/20/2014 Tooele, UT - Tooele Chemical Agent Disposal Facility Closure Update for March 2014
TOCDF Closure Update [616KB pdf] 1/23/2014 Tooele, UT - Tooele Chemical Agent Disposal Facility Closure Update for January 2014
<urn:uuid:c4693015-1c31-40f5-82ed-5c16f13f82d9>
CC-MAIN-2014-49
http://www.cma.army.mil/tooele.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006593.41/warc/CC-MAIN-20141125155646-00213-ip-10-235-23-156.ec2.internal.warc.gz
en
0.936304
729
2.875
3
The story of the Exodus is one of salvation, God’s people being led into freedom out of Egyptian slavery. The stories which follow, however, are not as upbeat. Once Israel leaves Egypt, they are faced with the questions of not only “where” are they now but also “who” are they now. The people who had been slaves in Egypt were now freemen in a wilderness wasteland. Before they could form a nation, they had to learn what it meant to be God's people. I believe there is a reason God leads Israel about in the wilderness for so many years. A people who try to form a nation and establish new norms for behavior without understanding their own identity will quickly regret it. While no one enjoys wandering in the wilderness, sometimes a little uncomfortable chaos is necessary to help us understand what is supposed to come next. The same is true for every congregation. In the wilderness, we learn to tell our story. Immediately on the other side of the Red Sea, Moses and Miriam began teaching the people a song which retold their exodus out of Egypt (Exodus 15). It was essential for them to understand what had just happened and be prepared to retell that story for future generations. Likewise, a church needs to appreciate its history, its story. Where have we come from? How did we get here? Who made this possible? When we learn to tell the story of God at work in our lives, we will be prepared for what God will do next. In the wilderness, we learn to depend on God. In a desert, you lose all illusions of being able to provide for yourself. When water is given to you from rocks, you realize that you are totally dependent on God (Exodus 17). This is the same moral Paul gave to the Corinthians. “So neither he who plants nor he who waters is anything, but only God who gives the growth” (1 Corinthians 3:7). Our church must also learn that the God who provided for our past will provide for our future. In the wilderness, we learn to fail before we succeed.
Most of the wilderness stories are sad: Miriam’s leprosy (Numbers 12), Nadab and Abihu’s death (Leviticus 10), Korah’s rebellion (Numbers 16). It turns out we humans are stubborn folk, and it takes us a few attempts to really get on board with God’s will. The wilderness is a time for failure, but also a time to learn from those failures. “Know, therefore, that the Lord your God is not giving you this good land to possess because of your righteousness, for you are a stubborn people. Remember and do not forget how you provoked the Lord your God to wrath in the wilderness (Deuteronomy 9:6-7a).” But the learning precedes the success God has in store, and God gives us opportunities to learn from our mistakes. What awaits beyond the wilderness? If we take our time through difficult seasons, if we learn our story and tell it often, if we learn to depend on God, and if we learn from our mistakes, the promised future is prepared and worth all the wilderness waiting. “Hear, O Israel: you are to cross over the Jordan today, to go in to dispossess nations greater and mightier than you … Know therefore today that he who goes over before you as a consuming fire is the Lord your God” (Deuteronomy 9:1-3).
http://benpreachin.com/wilderness-wandering/
This Physics Central webpage contains an article that provides basic information on ionizing radiation and its effects on human beings. The website uses simple terminology, symbols, and diagrams to display the information. The article also aims at dispelling popular misconceptions among the public. Natural sources of radiation and their amounts are discussed as well. Key terms and related subjects contain links to external websites with more information on them, and the article closes with a list of references the reader may consult for further information on this subject.
American Physical Society. Ionizing Radiation and Humans – The Basics. College Park: American Physical Society, June 12, 2011. http://www.physicscentral.com/explore/action/radiationandhumans.cfm (accessed 24 January 2017).
http://www.compadre.org/Informal/items/detail.cfm?ID=12072
Radiotherapy to the brain can make you feel very tired during and after treatment. Tiredness due to brain radiotherapy You might have radiotherapy for: - a tumour that started in the brain (a primary brain tumour) - cancer cells that have spread into the brain from another part of the body (secondary brain tumour) You might not feel tired at the beginning of your treatment. The tiredness usually comes on gradually as you go through your treatment over a number of weeks. By the end of the course of treatment you may feel very tired. The tiredness is a direct effect of the treatment. It is due to the body using up your energy reserves to repair healthy cells damaged by the radiotherapy. If you are taking steroids, you might also find that you feel extremely tired when you stop taking them. Travelling to the hospital for treatment can also make you tired. Unfortunately, the tiredness doesn't go away immediately when the treatment ends. It usually carries on for at least 6 weeks. In a few people, the tiredness can become very severe a few weeks after treatment has finished. You may also feel drowsy and irritable. This is a rare side effect and is called somnolence syndrome. It is an extreme tiredness that can make you feel very drowsy and sleep for up to 20 hours a day. You might also have headaches, a high temperature, loss of appetite, nausea, vomiting, and irritability. Symptoms usually occur 3 to 12 weeks after the end of radiotherapy treatment and can last a few days or several weeks. It doesn't need treatment and gets better on its own over a few weeks. Coping with tiredness You might feel weak and lack energy as well as being tired. It can sometimes help to sleep for a short time during the day. Rest when you need to. Various things can help you to reduce tiredness and cope with it, for example exercise. Some research has shown that taking gentle exercise can give you more energy. It is important to balance exercise with resting.
http://www.cancerresearchuk.org/about-cancer/cancer-in-general/treatment/radiotherapy/side-effects/brain-radiotherapy/tiredness
Popular Science Monthly/Volume 46/March 1895

THE LESSON OF THE FOREST FIRES.

By BELA HUBBARD, LL. D.

VOYAGERS on the upper lakes in August last were involved in clouds of smoke which settled over the waters. These were often so dense as to render navigation dangerous and to occasion frequent collisions. They obscured the sun, which appeared a dull red ball in the sky. This smoke extended as far east as the Atlantic and south to Georgia. The cause was soon apparent: forest fires were raging in the lands about the lakes.

By these fires in lower Michigan property to the extent of thousands of dollars was destroyed; in the Upper Peninsula the burned area is reported at over one thousand square miles. But these devastations were insignificant compared with those in Wisconsin and Minnesota, in each of which States the losses amount to many millions of dollars. In Wisconsin the areas burned over ranged from fifty to one hundred and forty miles in extent. Individual lumbermen lost in standing pine from ten thousand to five hundred thousand dollars. All this was accompanied with the destruction of entire villages and crops as well as great loss of human life. A witness reports, "The bodies which dot the heated and black expanse give the scene the appearance of a battlefield."

From Minnesota the news is even more appalling. Between Pine City and Carleton, a distance of one hundred and thirty miles, whole towns were swept out of existence. In one alone, Hinckley, at least two hundred people perished. Nineteen villages are wholly or partially destroyed, and many million feet of lumber. It is fairly computed that in this State alone five thousand square miles in area have been thus devastated. Minnesota contains about seventy thousand square miles; supposing two thirds of this area to be timbered land, one may count on the fingers of his two hands how many years of such devastation will deprive this State of every vestige of its timber.
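The author's closing arithmetic can be checked directly. Every figure below comes from the passage itself (seventy thousand square miles for Minnesota, two thirds timbered, five thousand square miles burned in 1894); the script is only a back-of-envelope sketch of the claim that the remaining years "may be counted on the fingers of his two hands."

```python
# Back-of-envelope check of the Minnesota figures quoted in the text.
state_area_sq_mi = 70_000                    # "about seventy thousand square miles"
timbered_sq_mi = state_area_sq_mi * 2 / 3    # "two thirds of this area to be timbered land"
burned_1894_sq_mi = 5_000                    # "five thousand square miles ... thus devastated"

years_to_exhaust = timbered_sq_mi / burned_1894_sq_mi
print(f"timbered: {timbered_sq_mi:,.0f} sq mi; years at the 1894 rate: {years_to_exhaust:.1f}")
```

At the 1894 rate the timber lasts a little over nine years, which does indeed fit on two hands.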
Terrible as has been the destruction from forest fires in 1894, the phenomena to which it has borne witness have been by no means unprecedented in our history during the last half century. I will recall those of a single year only. The present generation can not have forgotten the year 1871, made memorable by the great fire in Chicago, preceded by forest fires in Wisconsin and Minnesota and followed by similar fires in Michigan. From July to November, a period of five months, the rainfall in the latter State did not exceed six inches, and the entire precipitation of the year was only two thirds the normal amount. Early in October disastrous fires overspread portions of Wisconsin and Minnesota, burning over three thousand miles of territory. On the 8th of October occurred the great fire which consumed a large part of Chicago. On the same night the cities of Holland and Manistee, in Michigan, were laid in ashes, and during the week succeeding came news of devastating fires in other parts of the State. The new county of Huron was almost entirely swept over, and a large part of Sanilac County. Nearly all the villages on the Lake Huron coast were destroyed, and at least five thousand inhabitants left houseless. Houses, fences, crops, timber, all were burned; and many people perished, being unable to escape the rapid march of the flames and smoke. Not less than two thousand square miles of country, wholly or partially timbered, were completely burned over in Michigan during this disastrous year.

The Lower Peninsula contains forty-four thousand square miles. If we estimate about one half, or twenty thousand square miles, as timbered, it would require but ten such fires as that of 1871 to sweep the State clean. Forest fires nearly as disastrous have occurred in other States and other years, but these will suffice for our purpose.

What is the origin of these forest fires? Are they preventable? Upon whom lies the responsibility?
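The same check works for the Michigan figures in this passage (forty-four thousand square miles in the Lower Peninsula, about half of it timbered, and roughly two thousand square miles burned in 1871). A sketch, using only numbers stated above:

```python
# The author's "ten such fires" claim for Michigan's Lower Peninsula.
lower_peninsula_sq_mi = 44_000   # "forty-four thousand square miles"
timbered_sq_mi = 20_000          # "about one half, or twenty thousand square miles"
burned_1871_sq_mi = 2_000        # "not less than two thousand square miles"

fires_to_clear_state = timbered_sq_mi / burned_1871_sq_mi
print(f"fires of 1871's size needed to sweep the State clean: {fires_to_clear_state:.0f}")
```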
These questions open a large field of inquiry and involve the whole subject of our forest system, or want of system, and management good or bad of our woodlands, from the first settlement of the country. This is too large a subject to be treated as it deserves in a single paper, but even a brief consideration may make clear facts of the greatest scientific importance and serve to inculcate a lesson which can not be too strongly enforced.

The extent and magnificence of the forest growth of the United States at the beginning of our existence as a nation surpassed that of any land of equal extent on the globe. In the number of species and the size of its trees, both deciduous and evergreen, it exceeded by five times that of Europe. Such a forest spread almost unbroken from the Atlantic to the Mississippi. An equally dense forest, mostly conifers, and many of a size before unknown, occupied the Pacific slope; while between stretched an almost treeless region comprising nearly half the territory of the United States. What a treasury of wealth belonged to the new nation in its woodlands if properly husbanded! But to its first possessors these were an incumbrance, to be got rid of as speedily as possible, in order that place might be made for another source of national wealth: agriculture. Since that early period how great has been the change! The forest area, which seemed to its first possessors so vast, and such an obstacle to civilized progress, has in a single century almost disappeared.

Computations have been made, from time to time, by competent persons, including our efficient forestry chief, Prof. Fernow, of the number of cubic feet of wood of all kinds annually used by our people for all purposes. Into these I do not propose to enter. It must suffice to say that the total annual consumption has been variously estimated at from four to eight million acres of woodland.
Forest fires are responsible for ten million acres more, or nearly double all other causes combined. The United States east of the Mississippi contains about five hundred million acres. Assuming one half to be timbered land, and that ten million acres cover the actual annual consumption and destruction, our woodlands will practically last only another quarter of a century.

A peculiar feature about this excessive depletion of our forests is the wasteful and improvident manner in which it has been accomplished. Nowhere else has such waste been witnessed. Lands have been so cheaply obtained, and their resources have appeared so boundless, that it seems hardly to have occurred that there could be any limit. Not only have no means been resorted to for renewal of the woodlands, but all who have had to do with the forests, whether lumber barons or poor settlers, alike have looked to personal gain, with no regard to the future. Especially has this been the case with lumbermen in the pine districts. A noble pine tree is felled; one, two, or three saw logs are cut off, and the remainder left to litter the woods and to decay. Nor have the unsold Government lands escaped. Universally have these been plundered, as if Uncle Sam had no rights in his forest domain which his family were bound to respect. Nor has it been easy, if possible, to exact justice against plunderers, for juries will seldom convict, and are likely themselves to be particeps criminis. Besides, the law, or at least custom, allows settlers to take whatever timber they need for their buildings and fences, and the question is seldom asked where sawmills in a sparse community obtain their supplies.

Forest fires have accompanied the lumbermen, and it will be observed that the most extensive and disastrous ones have occurred in the pine districts.
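The quarter-century estimate at the head of this passage can be reproduced from the numbers the author gives: five hundred million acres east of the Mississippi, half of it timbered, and ten million acres consumed or destroyed each year. A quick sketch:

```python
# Reproducing the "another quarter of a century" estimate.
acres_east_of_mississippi = 500_000_000          # "about five hundred million acres"
timbered_acres = acres_east_of_mississippi // 2  # "assuming one half to be timbered land"
lost_per_year = 10_000_000                       # fires plus all other consumption combined

years_remaining = timbered_acres / lost_per_year
print(f"years until the eastern woodlands are exhausted: {years_remaining:.0f}")
```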
Nature's records show that before the advent of the white settler fires often swept the prairies and oak openings, and doubtless the peculiar character of these is largely due to this fact. The Indians were hunters, and the needs of the chase were met by the annual burning of the grass, which harbored game while it hindered the chase. Usually the damage to timber thus occasioned was but little, though in the course of years many a fine tree succumbed to repeated attacks. But the Indians never ruthlessly destroyed the woodlands. The white hunter, too, who roamed the woods before they were occupied by the tiller of the soil, left behind him no disastrous traces of his presence, or, if a conflagration sometimes followed his camp fires, it occurred but seldom, and was never intentional. Both the aboriginal wood-dweller and his venatic successors looked upon the forest as the gift of the Great Spirit, to be reverenced by man as a sign of the bounty of a beneficent Creator, and not to be wantonly desecrated.

The practice of burning the old and dry grass in unoccupied lands, in order that a younger and more tender growth may give pasture to cattle, is still common in some of our States, and its results, though of benefit to a few, are disastrous to the general welfare. In Florida the cattle men have long been omnipotent. They have sway in the Legislature, which enacts laws to suit their wishes, even to the extent of prohibiting towns and villages from passing ordinances to prohibit the running at large of cattle. A considerable portion of the State is thus annually burned over. Nor is it the grass alone that burns, but fire communicates to the pine trees, thousands of which yearly succumb. Meantime fences must be maintained to keep out cattle commoners, only to be often burned in their turn.
Worse than all, the humus in the sandy soil is burned out, and the future wealth and resources of the State are destroyed, to privilege a few, whose entire interests are not a thousandth part in value of the ruin they accomplish. At this day and everywhere may be encountered tracts of utterly barren and worthless land, in the midst of comparatively fertile, whose fertility has been thus destroyed. In northern California similar aggressions are committed by the sheep-herders, and the Government reserves have to be protected by the army, acting as patrols.

There is another aspect more important even than the value of the pecuniary loss to the country from the extraordinary and rapid consumption of its forests, and which still more strongly concerns the future of the nation. I refer to the effects of deforestation upon the climate and soils. Although there is not entire agreement among scientists as to the effect of the removal of forests upon the climate, and especially the rainfall, the following propositions seem to be well established:

1. That the temperature is hotter in summer and colder in winter than when the country was covered with forests. This is a natural result of exposure of the soil to more active radiation and consequent frost.
2. The winds have a more uninterrupted sweep, and so the country is both dried up and refrigerated.
3. The rainfall is either less in amount, or its advantages are to a great degree lost. Forests retain the moisture that falls and do not allow it to go to waste.
4. The humus in the soil, and the soil itself on the hills and slopes, are washed away by the rains, and carried to the lower lands and to the rivers, a large part being lost altogether.
Abundant examples from the Old World might be adduced to fortify this position, and to show how numerous and great have been the changes from fertility to barrenness by the neglect to heed the warnings of Nature. But these are so well known to even the unscientific traveler and reader that I forbear.

Most of us who have lived in America, even a single generation, will recall many facts that warn us how closely we are following the path that has led older countries to ruin. Streams with which we were familiar in childhood have shrunken or dried up. Springs have failed; the hills are bare and desiccated. How different the aspect of the older settled portions from what they appeared to eyes that beheld them less than a century ago! How real this description by Bryant:

"Before these fields were shorn and tilled,
Full to the brim our rivers flowed;
The melody of waters filled
The fresh and boundless wood;
And torrents dashed, and rivulets played,
And fountains spouted in the shade."

Now these woodlands no longer echo the song of the poet, and the melody of waters is exchanged for the rush and roar of the torrent. Droughts are now the rule rather than the exception. Our pastures dry up and are of little service for several weeks during the year. The more tender fruits can not be successfully grown where abundant crops greeted the days of old. Many of the most hardy trees and shrubs are killed by the depth to which frost penetrates the soil.

So great and so indiscriminate has been and continues to be the destruction of the protecting woods as to create in the statesman and the philanthropist a well-founded alarm lest our country be soon reduced to the condition of those regions of the Old World to which I have alluded.

Let us now inquire, What has been done in this country for the protection and preservation of the forests?
In all the chief governments of Europe elaborate systems of forestry have long been established, to the end that the timber should be safe from all unnecessary destruction; that it shall be allowed to grow in situations where experience has proved its importance in the amelioration of climate and the preservation of the sources of river supply, and to secure the timber supply by replanting. In this country the general and State governments have only slowly awakened to the importance of legislative control and the establishment of a forest policy.

The first important forest movement began with the enactment by Congress of the Timber Culture Act of 1873, having reference to the comparatively treeless region west of the Mississippi River. By this act the planting to timber of forty acres of land conferred the title to one hundred and sixty acres of the public domain. Even this law was in advance of real knowledge on the subject of forestry and of other conditions. It failed to produce the expected result, and after a few years was repealed.

The first act of Congress looking toward a definite forest policy, enacted in 1876, required the Commissioner of Agriculture to appoint "some man of approved attainments, with a view of ascertaining the annual amount of consumption, importation, and exportation of timber and other forest products; the probable supply for future wants; the means best adapted to the preservation and renewal of forests; the influence of forests upon climate; the measures successfully applied in various countries, and to report upon the same." In 1878 Mr. Franklin B. Hough made his first report, a volume of six hundred and fifty pages.
He alludes to acts of Congress, passed as early as 1817 and 1837, under which reserves were made of such lands as had a growth of live oak and cedar for shipbuilding purposes; and that in 1854 the heads of the several land offices were authorized to investigate the repeated spoliations of public timber, to seize any timber found cut without authority, and to bring the offenders to the attention of the proper officers of the law.

Many of the States had before this taken hold of the subject, so far as to offer premiums for the planting, and in some cases exemption from taxes, especially to encourage the planting of trees along the highways, and also laws for the preventing of forest fires. In some of these States, as in Michigan, forestry as a science is taught in the colleges, though as yet no school of forestry has been established, as is done in every country in Europe in which the general or local government are owners of woodlands.

State forestry associations have also been formed, Minnesota claiming the first, in 1878. In 1875 a National Forestry Association was formed, which since 1882 has met yearly, in widely separated localities. All these have been instrumental in arousing public interest, in issuing information on forest subjects, and in procuring legislation, especially regarding public reservations. This movement has resulted in the enactment of a law by Congress permitting the setting aside, by proclamation of the President, of portions of the public lands, in the Western States and Territories, for permanent forest reservations.
Previous to 1892 the General Government had made several extensive reservations, as parks, for preserving and opening to pleasure-seekers some of the natural wonders of our land, besides others for military purposes, viz.:

Yellowstone Park, Wyoming: 2,888,000 acres
Yosemite National Park, California: 960,000 acres
Sequoia National Park, California: 100,000 acres
General Grant National Park, California: 8,000 acres
Hot Springs National Park, Arkansas: 2,529 acres

During the administration of President Harrison several other and large reserves were added to these, so that we now have in all over seventeen million acres. In the memorial presented to the President by the American Forestry Congress it is declared that the object of such reservations is to increase the sum total of the productiveness of our territory, the lands reserved being those that are unfit for agriculture, but capable, under wise management, of producing a greatly increased amount of forest products annually. Neither bona fide settlement of agricultural land, nor the right of prospecting for and opening mines, are to be interfered with. Demands for wood material are to be satisfied in a large and equitable manner, while it is sought to minimize the destruction by forest fires and wasteful and erroneous methods. The association further declared that such reservations would not meet the needs of forest protection unless the number is sufficiently large to embrace practically all the remaining public woodlands.

Several of the States have also recognized the importance of setting apart reserves of woodlands. In the great State of New York this sentiment had become so strong by 1872 that a commission was appointed to inquire into the expediency of legislation for vesting in the State the title to the timbered Adirondack region, and converting it into a public park.
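The acreages listed above can be tallied to see how much of the "over seventeen million acres" of reserves predates the Harrison additions. A small sketch, using only the figures given in the passage; the seventeen-million total is the author's round number, so the implied difference is approximate:

```python
# Tallying the pre-1892 reservations listed in the passage.
reservations = {
    "Yellowstone Park, Wyoming": 2_888_000,
    "Yosemite National Park, California": 960_000,
    "Sequoia National Park, California": 100_000,
    "General Grant National Park, California": 8_000,
    "Hot Springs National Park, Arkansas": 2_529,
}

listed_total = sum(reservations.values())
total_by_1895 = 17_000_000  # "over seventeen million acres" after Harrison's additions
harrison_additions = total_by_1895 - listed_total

print(f"listed parks: {listed_total:,} acres")
print(f"implied later additions: roughly {harrison_additions:,} acres")
```

So roughly thirteen of the seventeen million acres came from the reserves added under President Harrison.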
But public opinion was not sufficiently ripe, and the destruction of timber and absorption by corporations and individuals went on as before. It was not until 1893 that a bill was passed which provides for the acquisition by the State of the control of large districts, in addition to the half million already owned by the State, to be held in forest for the preservation of the sources of the chief rivers; for its future timber supply; for game preservation, and for the free use by the people for health and pleasure. Nearly one million acres have thus far been set aside. How far this legislation if perfected will prove valuable depends upon the wisdom of the management. In its inception there is the highest wisdom.

Notwithstanding the public interest awakened and the laws enacted, both by the General Government and the States, very little has yet been accomplished toward the restriction of waste, the preservation of timber, protection from plunder, or prevention of forest fires. Senator Dawes, in speaking of the invasion of the public lands, declared that "the ingenuity of the lawmaker has not yet equaled that of the spoliator." And even Mr. Fernow has pronounced, as his private opinion, that the United States has not yet reached the stage in the depletion of its forests when it is possible to carry out a really protective forest policy, and that this will not be accomplished until the country is reduced to the same condition of deforestation that the countries of the Old World had attained before remedial means were adopted. If this be true, we can only sit with folded hands and pray that this consummation may be speedily reached. Others, too, have joined the pessimistic strain, and argued that, "so long as the present conditions continue, the destruction of the forests is inevitable, and any policy of forest preservation is impossible."
I, for one, will not believe that our citizens are so blind to experience, or so indifferent or so powerless in this matter.

It is true that no government can prevent wasteful methods of lumbering so long as timbered lands are held as private property, and virgin forests can be bought at a rate so cheap that careless management will still leave a profit. But governments can control the process on land owned by them, by withholding the land from market, awaiting the time not far distant when the timber can be sold under such regulations as will make the most of its resources. If this were done, lumber owners would soon find their interest to lie in more provident methods; and increased values would make them saving of their resources.

As to forest fires, since no plea of the public welfare avails to induce lumbermen to burn their debris, or to get rid of it in any way that is not directly repaid, and appeals to patriotism or a regard for the interests of their neighbors are unheeded, the strong arm of the law must be stretched out to compel. Why this has not been done it is hard to say: common if not statute law gives redress, and holds the owner of land accountable to his neighbor for negligence that endangers him. Is there warrant, either in a court of law or of common sense, that the owner of land may cut his timber and pile up the remnants to dry and become combustible material, with danger to his neighbor's timber or other property in case of fire, without being held accountable to him for the damage? Probably the legal aspect of the case is not well understood, and the results have been so long submitted to (perhaps because the injured are themselves similarly situated toward others, and therefore can not come into court with clean hands) that sufferers have come to believe that such disasters are unavoidable.
It should be the practice of forestry associations to disseminate wholesome instruction on this head, and to present practicable plans for meeting the difficulties of the situation. Whatever the remedy suggested, it should ever be borne in mind that the owners of forest property, and especially corporations, have purchased for the purpose of converting the timber into money in the cheapest and most rapid manner possible, and that they are, as a rule, indifferent to the future of the region. They must also inculcate the principle that no legislation is effective unless well-organized machinery is provided for its enforcement.

The mere holding of a man or a railroad liable in damages for such acts of carelessness and indifference as I have mentioned, and for setting fire to woods, is not sufficient. Infraction of the law should be made a criminal offense, punishable by the severest penalties. It should be made the duty of counties and townships to appoint fire wardens, as is provided in Pennsylvania and Maine: paid officials, who should exercise a vigilant watchfulness, and use extra precautions in exceptionally dry seasons. At such times the town should take upon itself the work of clearing away litter and all combustible material that add to the danger of fire. These should be burned or got rid of under constant inspection, at a time when the fire is not likely to spread. In case of a conflagration started, the wardens should be empowered when necessary to summon assistance.

In France safety belts of trees not readily burned are planted on each side of the railway track where it passes through a pine forest. Roads, trenches, and cleared spaces are also so constructed as to prove a safeguard; the cost is paid partly by the authorities and partly by the landowners. Heavy penalties are imposed for kindling fires within certain prescribed limits.
Among many suggestions for a forest policy in the older States, that for Pennsylvania commends itself, in a bill now before the Legislature of that State. It provides that the Governor shall appoint a commission of two persons, a competent engineer and a practical botanist, who shall examine and report upon the important watersheds of the State, for the purpose of determining how far the presence or absence of the forest covering may affect the water supply; also the amount of standing timber, and a measure for securing timber supply in the future. The Pennsylvania Forestry Association, in recommending the bill, points out the fact that the vast forests once covering all the head waters of the principal streams are nearly gone; the splendid oak and other timber is almost exhausted; fires destroy two million dollars' worth of timber each year; timber thieves escape unpunished; cattle kill the young growing timber, and no effort is being made to protect and renew the forest growth.

There is not a State in the Union that does not need to adopt similar precautionary measures, and these should be accompanied with some practical plan for management. It would be well for each State to have a single forest commissioner appointed by the Governor, whose duty it should be, in addition to the collecting of such statistics as above, to organize in each county and township a system of fire wardens or patrols; to see that special precautions are taken in cases of unusual peril; to ascertain the causes of fires and who is responsible, and to prepare evidence. He should be a man fully instructed and thoroughly competent, should be well paid, and should be held personally responsible. All officials appointed to such service should be removed as far as possible from political affiliations, should be under civil-service rules, and the position should be permanent during good behavior.
The adoption by the General Government of a national forest policy can not be much longer delayed, although Congress is very slow to act. In the sale of the treeless portions of the public domain the Government may require that a certain portion be planted in trees as soon as the proper conditions, means of irrigation, etc., exist, and that a certain proportion of the timbered land be kept in timber, the title to be dependent upon the stipulated conditions. Whether the United States will eventually come to adopt the methods of administration of the timbered lands in vogue in Europe is a question that time must determine. The country has as yet few persons that have been educated to forestry as a profession, and simple rules must suffice for the present. Both the General and State governments possess, in the right of eminent domain, the power to preserve and condemn where necessary such lands as it shall be decided the public benefit requires to be maintained as forest in perpetuity. Private rights must give way to public utility. The owners of premises which have become a menace must be made to contribute their proper share of the expense of protective measures and forest police.

A forest policy is at last taking form. A bill, introduced by Senator Paddock at the close of the Congress of 1892, provides in the first place for a survey to determine the extent and location of all forest lands, after which the President is to withdraw from sale all such lands, except those found to be more favorable for agriculture than for forest, these reserved lands to be transferred to the Department of Agriculture, where a Forestry Bureau exists.

It provides for a Commissioner of Forestry, to be appointed by the President, with consent of the Senate, who shall have control of all the forest reservations and timbered lands, subject to supervision of the Secretary of Agriculture, who shall appoint inspectors as assistants.
Each reservation to have one superintendent, who shall have full charge and control of the reservation for which he is appointed, and be responsible to the central bureau, and have such assistants as may be needed. Rangers to be appointed by the Commissioner of Forestry to act as police, against trespass and fires, and to supervise the timber operations. Full details of forest management are specified, into which I shall not here enter. To create as quickly as possible an efficient protective service, the army may be employed for this purpose, as has already been done in the Yellowstone and California Parks. The system proposes a separate and complete administration, conducted by competent men under expert instruction, and, while the protecting of watersheds is of sufficient importance to warrant expenditure out of Government funds, the service should be made to pay for itself by the sale of surplus forest material. The suggestion that the army be employed for policing the public forests is an admirable one. It has already done good service in this direction, and it will prove to be a constabulary force in which the country has full confidence. Military training has given the army a thorough organization and an esprit de corps, and it is free from political influence. Officers of the army made the best commissioners in Indian affairs which the country has ever had, and gained for themselves a just reputation for faithfulness, honesty, and courage. They will be equally good custodians of our forest domain. Were our army twice as large as it now is, it would be too small for war, but would find too little employment in time of peace, unless its services are used in civil channels. To supply qualities that are wanting for this particular service a chair of Forestry should be established at West Point, to give such instruction in forestry science as the case requires.
If the reforms here outlined, whether embodied in the Paddock bill or the McRae bill, or commended to our situation by foreign experience, shall be persistently urged by forestry and other associations, and the United States Government, heedful of the danger of neglect or delay, shall respond with promptness and energy and a proper regard for the future of the nation, a forestry policy will be inaugurated which will meet present requirements, and which may be extended and improved to serve all future needs. Then the lesson of the forest fires will not have been learned in vain.
<urn:uuid:514b3057-daea-4c10-998a-bd651b0b698a>
CC-MAIN-2015-27
https://en.wikisource.org/wiki/Popular_Science_Monthly/Volume_46/March_1895/The_Lesson_of_the_Forest_Fires
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435376073161.33/warc/CC-MAIN-20150627033433-00239-ip-10-179-60-89.ec2.internal.warc.gz
en
0.967562
6,344
3.40625
3
Bangladesh’s Constitution indicates that health is a basic right, and that the government is responsible for ensuring citizens’ access to healthcare. Yet the current system is failing those living in rural areas, even though they make up the majority of the population. Mohammad Tarikul Islam discusses the structure of health services in rural areas and where the main challenges lie. National economic and social development depends to a degree on the status of a country’s health facilities. The healthcare system reflects the socio-economic and technological development of a country and is also a measure of the responsibilities a community or government assumes for its people’s well-being. The Constitution of the People’s Republic of Bangladesh guarantees that ‘health is the basic right of every citizen of the Republic’. The Ministry of Health and Family Welfare leads on policy formulation, planning and enforcement. Within this, there are five Directorates: Health Services, Family Planning, Nursing Services, Drug Administration and Health Engineering. In recent years, health policy has focused on sustaining provision of basic services to the entire population, particularly to under-served rural communities. The Union Parishad is the lowest tier of local government in Bangladesh and plays an important role in rural development. One of its responsibilities involves providing health security to the rural population, which is a challenging task given that only 30% of Bangladeshis live in cities and there is limited infrastructure and a lack of health professionals in rural areas. A standard upazila (district sub-unit) in Bangladesh will have a Health & Family Welfare Center at Union Parishad level and Community Clinics at village level.
The Welfare Centre offers general health services and basic reproductive, maternal, and child health care services for local people free of charge. Each one has a Medical Assistant trained for three years in disease prevention, health education and basic first aid, and a Family Welfare Visitor who receives 18 months of training in family planning, reproductive health, and pre- and post-natal care. Community Clinics are government-run (having taken the place of local clinics established as part of a donor-driven mega-programme initiated on a pilot basis from 1996-2000). They are mostly used by people living within a half-mile radius, but around 50 percent of rural women are not aware of their existence, and many rural people prefer to consult with a palli chikitshak, a local village doctor without any formal healthcare training. This is perhaps unsurprising given that clinics are ill-managed and understaffed, and therefore associated with poor-quality care and attention to patients. Public healthcare is supplemented by efforts led by local entrepreneurs, NGOs and international organisations. A number of local NGOs like BRAC have special reproductive health care programs and facilities for providing antenatal and safe delivery care. There are also numerous private clinics throughout the country, and many doctors from the public hospitals deliver services part-time in these clinics to supplement their incomes. The clinics operate on a fully commercial basis and are therefore costly, but those who have the resources prefer them because they are seen as offering better quality than public hospitals. However, private clinics lack accountability because they cannot be regulated by the government. Despite the facilities and support provided by both the government and private/NGO providers, rural healthcare in Bangladesh is therefore inadequate.
For every one million people there are just 241 physicians, 136 registered nurses and 10 hospitals (making the availability of hospital beds one for every 4,000 people). The available literature suggests that health security for rural people is undermined by the lack of physicians, employees and nurses; by misdiagnosis, negligence towards patients, irresponsibility and absenteeism; and by a lack of professional ethics. Furthermore, although the bulk of the population of Bangladesh lives in rural areas, most doctors are based in cities and towns. Doctors are deterred from serving in the villages by the absence of proper capacity development, accommodation, quality education and transportation facilities, and by the lack of career prospects. The Union Parishad also struggles to push for improvements due to the scarcity of resources and dynamic leadership. A comprehensive National Health Policy was introduced in 2011 and re-emphasised every citizen’s basic right to adequate health care and the state’s and government’s constitutional obligation to provide the necessary infrastructure. Its stated objectives were to strengthen primary health and emergency care for all, expand the availability of client-centred, equity-focused and high-quality health care services, and encourage people to seek care based on rights for health. However, this has not been effectively enforced over the last five years, and as a result health provisions are unevenly distributed across the country and access to the best care is prohibitively expensive for the majority of rural Bangladeshis. It is obvious from the above discussion that the Union Parishad is one of the most important units of government in overseeing health security for the rural population, but it is confronted with chronic problems which it does not have the resources to address on its own.
Central government support and enforcement of existing legislation, alongside greater cooperation with civil society organisations, the media, academics and donors, is needed to enable local government to ensure a higher standard of basic healthcare in rural areas. This article gives the views of the author, and not the position of the South Asia @ LSE blog, nor of the London School of Economics. Please read our comments policy before posting. About the Author Mohammad Tarikul Islam is a faculty member in the Department of Government and Politics, Jahangirnagar University, Bangladesh. He previously worked at the United Nations for seven years on projects relating to local governance, democracy, disaster management, the environment and climate change.
<urn:uuid:55ed7ef3-0c24-4aba-89d8-6b1a31bb4ecf>
CC-MAIN-2019-51
https://blogs.lse.ac.uk/southasia/2016/08/23/despite-constitutional-guarantees-bangladesh-is-failing-to-deliver-adequate-healthcare-to-rural-citizens/
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541281438.51/warc/CC-MAIN-20191214150439-20191214174439-00201.warc.gz
en
0.960168
1,142
3.015625
3
Some textiles are more difficult to test for color consistency than others. A textile manufacturer can easily test a swatch of solid blue jersey fabric because the material is flat, opaque, and homogenous. For these types of smooth, solid textiles, all the manufacturer has to do is take one reading on a spectrophotometer to determine whether the dyed jersey fabric meets the manufacturer’s color standards. But not all textiles are homogenous in color; fabrics like corduroy, heavy knits, and terry cloth have texture variations that make them much more difficult to measure with a spectrophotometer. If the manufacturer measures the color of such fabrics in just one small area of the sample, there’s no guarantee that this measurement will match other measurements taken of the same fabric—move just one inch to the right of your first measurement and you’ll likely find that the color reading is completely different. When testing for color consistency, how do manufacturers compensate for textured or non-homogenous products like this? One method is to average the color measurements in order to get an overall sense of the product’s color. By averaging your sample measurements, you’ll ensure that your color readings are as accurate as possible, even when working with materials that vary significantly in texture. However, in order to use this method properly, you’ll need to know when it’s appropriate to average your samples and when you should take only a single reading. After all, taking multiple measurements of the same sample can be a time-consuming process, so it’s important to only average measurements for the products that actually require this added level of attention. What is Sample Averaging? Sample averaging is an optional color measurement method that allows you to take numerous readings of the same sample or batch in order to obtain a result that best represents the product as a whole.
When you take multiple measurements of, say, toothpaste that contains colorful microbeads, each new measurement will likely be slightly different from the last. A nearly-clear toothpaste may appear mostly translucent, but if one area of your sample contains slightly more blue exfoliating particles than another area of the sample, the spectrophotometer could provide you with a color reading that isn’t representative of the entire batch of product.1 As a result, the spectrophotometer may flag the sample because it appears too blue in just one small area. By taking multiple measurements of the same sample of toothpaste and averaging the results, you can get a more accurate sense of whether the product actually falls within color tolerance. There are two ways that you can average color samples of your products: optical or statistical. Here are the key differences between each method: An optical average measurement is performed by a spectrophotometer automatically. Using a color sensor, the instrument observes all of the available spatial data from the sample area of view and averages this finding in order to provide you with a single overall reading. The larger your area of view is, the more accurate this reading will be, as the instrument will have more data available to work with. However, an optical reading alone isn’t always appropriate for every sample. If you have textured samples or samples that vary in color from one area to another, then just one reading may not tell you everything that you need to know about the overall color of the product. Instead, you’ll need to take the additional step of a statistical average measurement. A statistical measurement goes beyond what many spectrophotometers perform automatically. When you take a statistical average of your sample, you make multiple optical measurements in succession, then calculate a total average measurement for all of those results. 
Manufacturers have a choice between two different types of statistical measurement methods: - Multiple Readings of One Sample: The first option is to take multiple readings of the same sample in different areas. For instance, if you’re measuring shag carpeting, you may set your sample in the spectrophotometer’s area of view so that only the lower right corner of the carpet swatch is visible. Then, once you get your first reading, you can rotate the swatch 90° and measure a different portion of the sample. Repeat this step as many times as you would like, until you feel as though the measurements you’ve taken fairly represent the entire sample. Once this is done, you can calculate the average of all of these measurements, which should give you an accurate representation of the entire sample. To facilitate this process, some spectrophotometers come equipped with sophisticated color measurement software like EasyMatch QC, which will average all of your separate measurements for you. - Multiple Readings of Multiple Samples: The second option for taking statistical averages of your samples is to take multiple measurements from the same batch or lot of products. This may be useful if you manufacture products that are themselves mostly homogenous, but that may vary in color between each other. A good example of this is in baked goods.2 A loaf of bread may be a solid, even shade of brown, but that loaf may appear darker in color than others baked in the same batch. To test whether your products fall within color tolerance, you can take multiple readings of different product samples, then average those readings to get an overall idea of how that batch compares to other batches. If you’re a bread manufacturer, you may find that one large batch of bread appears consistent in color from loaf to loaf, but if you compare the average readings of the entire batch to that of yesterday’s batch, you may find that yesterday’s batch was much lighter in color, on average. 
It could be a sign that your ovens are too hot, or that there is another issue in your manufacturing line. By averaging color for the entire lot, you can identify problems like this quickly, before they impact future products. When You Should Average Your Samples A wide range of industries average their color measurements in order to ensure that every product falls within color tolerance. Some of the most common examples of products that benefit from averaging include: - Translucent liquids that contain suspended particles (like toothpaste or gel exfoliators) - Thick, clear gels that contain air bubbles (like hand sanitizer) - Products that have scratches or grooves on the surface (like laminate flooring) - Hazy samples (like frosted glass) - Samples that vary in color or texture from one area to another (like yarn) To average measurements for products like this, it’s usually wise to set your spectrophotometer to the largest area of view possible in order to get a more accurate reading. In addition, you may wish to rotate or refill your samples at least two to four times so that the spectrophotometer has a large number of measurements that it can average. Although averaging your samples can offer you a more accurate reading of your products, this method isn’t necessarily the right choice for every industry or manufacturer. One potential downside of averaging is that it can take additional time to perform and in many cases, the spectrophotometer’s first measurement of the sample will closely match the findings of a statistical average measurement. For instance, if you want to measure a sample of paint, which is opaque and typically smooth in texture, then a single color measurement may be perfectly adequate for your needs. However, some manufacturers still choose to take multiple readings of paint products as an extra precaution.3 This is because environmental factors, such as sample preparation, may impact the color measurement results. 
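The statistical averaging described above is, at its core, a mean of each color coordinate taken over several optical readings, followed by a comparison of the averaged color against a standard. A minimal sketch in Python (the CIELAB readings, the standard, and the ΔE tolerance of 1.0 are hypothetical values chosen for illustration; the difference formula shown is the simple CIE76 version, while color measurement software such as EasyMatch QC may apply more sophisticated difference equations):

```python
import math

def average_lab(readings):
    """Statistical average: mean L*, a*, b* across multiple optical readings."""
    n = len(readings)
    return tuple(sum(r[i] for r in readings) / n for i in range(3))

def delta_e76(sample, standard):
    """CIE76 color difference between two L*a*b* values."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, standard)))

# Hypothetical readings of one textured swatch, rotated 90 degrees between measurements
readings = [(62.1, 4.8, -21.3), (60.7, 5.2, -20.9), (61.5, 4.6, -21.8), (61.9, 5.0, -21.1)]
standard = (61.5, 5.0, -21.2)  # hypothetical target color for the batch

avg = average_lab(readings)
print("averaged L*a*b*:", avg)
print("within tolerance:", delta_e76(avg, standard) <= 1.0)  # hypothetical tolerance of 1.0
```

Averaging first and then computing a single color difference is what keeps one unusually dark or light spot on a textured sample from flagging the whole batch.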
In general, you may wish to average your sample measurements if you want to obtain the most accurate color reading possible for your samples, or if your samples vary too much in texture or color to get an accurate reading from one measurement alone. Some of today’s advanced spectrophotometers are capable of taking multiple measurements of a sample automatically, and many will handle all of the essential calculations for you. It’s never been easier to average your measurements, and this method may just be the best option for your purposes. For more than 60 years, HunterLab has worked closely with manufacturers in a wide range of industries that rely on accurate color measurements to produce the best products possible. Our instruments are currently being used in industries ranging from textiles to cosmetics and from solid samples to liquids to powders. Our flexibility of service, coupled with state-of-the-art spectrophotometers and software, makes HunterLab a leader in the field of color measurement. Contact us to find out how you can get started with our advanced color measurement tools and services. - “Microbeads”, https://us.pg.com/our-brands/product-safety/ingredient-safety/microbeads ↩ - “How to Measure the Quality of Baked Goods”, July 16, 2015, http://www.colourmeasure.com/knowledge-base/2015-07-16-how-to-measure-the-quality-of-baked-goods ↩ - “How Does Paint Color Matching Work?”, December 16, 2015, https://allisonsmithdesign.com/how-does-paint-color-matching-work/ ↩
<urn:uuid:9bd5e14c-4b94-49bf-96c9-20cfc89fc8fa>
CC-MAIN-2020-24
https://blog.hunterlab.com/blog/color-measurement-2/when-sample-averaging-appropriate-in-color-measurement/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402885.41/warc/CC-MAIN-20200529085930-20200529115930-00076.warc.gz
en
0.937393
1,925
3.1875
3
Saint Elmo's fire, luminous discharge of electricity extending into the atmosphere from some projecting or elevated object. It is usually observed (often during a snowstorm or a dust storm) as brushlike fiery jets extending from the tips of a ship's mast or spar, a wing, propeller, or other part of an aircraft, a steeple, a mountain top, or even from blades of grass or horns of cattle. Sometimes it plays about the head of a person, causing a tingling sensation. The phenomenon occurs when the atmosphere becomes charged and an electrical potential strong enough to cause a discharge is created between an object and the air around it. The amount of electricity involved is not great enough to be dangerous. The appearance of St. Elmo's fire is regarded as a portent of bad weather. The phenomenon, also known as corposant, was long regarded with superstitious awe.
<urn:uuid:1f892525-1beb-4980-9b3b-21f08e9934cb>
CC-MAIN-2015-48
http://www.factmonster.com/encyclopedia/weather/saint-elmo-fire.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398525032.0/warc/CC-MAIN-20151124205525-00194-ip-10-71-132-137.ec2.internal.warc.gz
en
0.958849
182
3.25
3
Protecting the environment with geosynthetics Dr. R. Kerry Rowe Canada Research Chair in Geotechnical and Geoenvironmental Engineering, Queen’s University, Canada Geosynthetics are now widely used to contain fluids and protect the environment. Applications include most modern landfills, lagoons for contaminated fluid and drinking water, dams, and mining applications where loss of fluid to surface water or groundwater must be minimized. These systems often involve a single liner with welded panels of geomembrane, a geosynthetic clay liner, or a composite liner with a geomembrane over a clay liner. For large landfills or other higher-risk applications, a double liner system with a geocomposite or granular drain between two liners is used. Most frequently, designs have used materials that meet a minimum set of commonly specified index parameters. This lecture draws together field observations, long-term experimental data, and theory to show how, and why, these systems have worked so well while highlighting the importance of design and construction considerations that, if overlooked, can cause problems. It then discusses the means of avoiding pitfalls.
<urn:uuid:e2eed83e-5e09-4a16-b236-daeeeb129350>
CC-MAIN-2019-43
https://australiangeomechanics.org/videos/protecting-the-environment-with-geosynthetics/
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00401.warc.gz
en
0.895066
250
3.203125
3
Poverty must be eradicated for kids to have a chance Researchers say that deep poverty places young kids at high risk for poor health and development. Far too often, people who are personally removed from the true facts about the horrible pain and suffering associated with poverty insist the poor are simply lazy and do not deserve any special consideration to help them get ahead in life. This is simply not true. Being caught in poverty is like sinking in deadly quicksand, and it takes a lot of outside help to be lifted out of it. Among the largest promoters of poverty in the United States and elsewhere are the psychiatrists, whose dark-ages attitudes and practices steal all potential from people instead of nurturing that potential. Things are so horrible with the psychiatric monopoly on what the psychiatrists insist is mental health care that the United Nations has blasted excessive medicalizing and torture by psychiatrists. This is not a joke. Poverty is associated with increased rates of child abuse and is a number-one killer. The National Center for Children in Poverty reports that about 15 million kids in the United States, 21 percent of all children in the country, live in families whose incomes are below the federal poverty threshold. About 43 percent of American children, in a country known as a center of great wealth, live in families with low incomes. The majority of the kids living in poverty have parents who work. However, low wages and unstable employment leave their families struggling on the fringes of society. A child's ability to learn can be hit hard by poverty, which can contribute to serious social, emotional, and behavioral problems. There is an association between poverty and poor health and mental health.
Poverty actually represents the single greatest threat to the well-being of children. It has been reported by the Columbia University Mailman School of Public Health that young kids in deep poverty are at risk for poor health and development. According to a study released by the National Center for Children in Poverty (NCCP) at Columbia University’s Mailman School of Public Health, kids caught in deep poverty, wherein their family income is less than 50 percent of the federal poverty line, do even worse on health and development indicators than other kids in poverty. A higher percentage of young kids in deep poverty suffer from obesity and elevated blood lead levels. It has been determined that the percentage of young kids in deep poverty who have elevated lead levels is three times higher than the proportion seen in poor children, and greater than 17 times higher than in non-poor children. This is associated with learning and behavior problems. The researchers also say they found a higher percentage of young kids in deep poverty, in comparison to kids in poverty, with parents in poor or fair health or mental health. There is increased parenting stress in these families and a perception of a lack of social support and security in the neighborhoods where they live. A lower number of kids living in deep poverty were judged by their parents as flourishing in comparison to other kids in poverty. Flourishing refers to a composite measure which reflects parents’ view of the child’s resilience, affection, curiosity, and positive mood. Sheila Smith, PhD, director, Early Childhood at NCCP, says that deep poverty clearly makes large numbers of American kids vulnerable to health and developmental problems which limit their opportunities in life.
Poverty really is a killer which stops the lives of children from progressing before they ever have a chance to get started. It's a fantastic sociopolitical concept to envision all kids and their families having equal opportunities to get ahead in life with no need for outside assistance. However, this becomes a delusional concept when kids and others are stabbed in the back by poverty. Acts of omission in dealing with this serious problem of poverty are as serious as acts of commission. The United States has the potential to be a great nation because of its inherent potential to allow individuals to generate great wealth. However, unless unjust roadblocks such as poverty, which block the chance to become a part of the American dream of being wealthy, are lifted, the country simply will not be a great country. No country with so many wasted lives from poverty can claim it is actually great.
<urn:uuid:87332957-9c62-4718-be78-f7c390b9c5a2>
CC-MAIN-2018-09
https://www.emaxhealth.com/11402/poverty-must-be-eradicated-kids-have-chance
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812405.3/warc/CC-MAIN-20180219052241-20180219072241-00045.warc.gz
en
0.97162
898
2.515625
3
Enhanced Oil Recovery (EOR) is well known for its potential to unlock up to 80% of the world’s otherwise unrecoverable oil reserves. Current EOR methods include thermal EOR, gas EOR, and combinations of thermal, gas, chemical, hydrodynamic and other EOR methods applied in a regime optimised for the oil reservoir’s characteristics. Main Chemical EOR Techniques Surfactant Flooding boosts oil production by lowering interfacial tension and increasing oil mobility, thus allowing better displacement of the oil by injected water. Surfactant EOR improves the wettability of porous rocks, allowing water to flow through them faster and displace more oil. Polymer Displacement increases the viscosity of water injected into the oil reservoir, enabling it to exert more pressure on the oil without forcing its way past and simply flowing through. Because this method relies on increasing the viscosity of water, it is less effective on low-permeability rock structures. Alkaline Displacement relies on the chemical interaction of alkali, oil and rock. When introduced to an oil field, the alkaline agent reacts with the oil, forming surfactants which reduce interfacial tension. This allows oil to pass through porous rock more effectively. Microbiological Treatment introduces specific micro-organisms to an oil field which metabolise some of the hydrocarbon, in turn producing byproducts which assist in oil recovery. These byproducts include solvents, acids, alcohols, bio-polymers, bio-surfactants and gases. Advantages of Surfactant Flooding in Enhanced Oil Recovery Recent developments in Surfactant Enhanced Oil Recovery have greatly reduced the surfactant concentration required for effective oil recovery. Initial development in the 1970s and ’80s used anywhere between 2-12% surfactant concentration which, when combined with the cost of the surfactants themselves, proved cost prohibitive.
Recent advances in both research and surfactant product technology have lowered chemical concentrations to within the range of 0.1-0.5%, dramatically lowering the amount of chemical required. Surfactant manufacturing is now also delivering more advanced and safer EOR products at a lower cost than ever before. The best news is that these new advances have actually come with an overall improvement for the environment and human health. Some of the newest and most effective EOR surfactants are derived from plant resources such as sunflower oil, soy and corn oil. Envirofluid’s Triple7 EOR, for example, is comprised of amine-reacted free fatty acids, fatty alcohols, esters and wax esters derived from soy, corn and seed oil fractions. It is readily biodegradable, non-toxic and non-hazardous. Effective Where Polymers and Alkali Don’t Work There are many instances where Polymer Displacement and Alkaline Displacement are ineffective. Because polymer injection increases the viscosity of the flood medium, it is generally only effective on highly porous rock. In low-permeability rock, polymer displacement quickly reaches a point at which it no longer produces results. Some oil reservoirs also present environments, such as high salinity, which are not suitable for common alkalis. A Breakthrough Surfactant in Enhanced Oil Recovery Triple7 EOR is an advanced non-ionic surfactant formulation consisting of micelles designed to enhance oil extraction where traditional EOR extraction techniques are no longer efficient. Designed for low-concentration surfactant-assisted flooding, Triple7 EOR increases field permeability and overcomes capillary force, cohesion force and conglutination force barriers to oil recovery.
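The scale of the saving from lower surfactant concentrations is easy to check with back-of-the-envelope arithmetic. A brief sketch (the flood volume is hypothetical, the 3% and 0.3% figures are illustrative values taken from within the historical 2-12% and modern 0.1-0.5% ranges quoted above, and concentration is assumed to be by mass in water of density ~1 t/m³):

```python
def surfactant_tonnes(flood_volume_m3, concentration_pct, water_density_t_per_m3=1.0):
    """Mass of surfactant needed for a given flood volume at a mass concentration."""
    return flood_volume_m3 * water_density_t_per_m3 * concentration_pct / 100.0

flood = 500_000  # m^3 of injected water, hypothetical field
old = surfactant_tonnes(flood, 3.0)   # illustrative 1970s-80s era concentration
new = surfactant_tonnes(flood, 0.3)   # illustrative modern low-concentration flooding
print(f"old: {old:.0f} t, new: {new:.0f} t, reduction: {old / new:.0f}x")
```

At the same flood volume, a tenfold drop in concentration means a tenfold drop in surfactant tonnage, which is why lowering concentrations (together with cheaper surfactants) removed the cost barrier that made early surfactant flooding prohibitive.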
<urn:uuid:a9489ba6-fe65-49ae-a348-e2fc71de13b1>
CC-MAIN-2021-10
https://www.envirofluid.com/articles/enhanced-oil-recovery-techniques-surfactants-in-the-chemical-eor-process/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00613.warc.gz
en
0.906279
760
2.921875
3
A collection of fluid around the liver can be due to a number of causes. The fluid could be ascites, blood, urine, chyle, bile, pancreatic juice or pus. This fluid can be due to trauma, obstruction of the bile duct, infection, abscess, cirrhosis, autoimmune diseases, heart failure, etc. Pneumonia, kidney and liver disease, heart failure and autoimmune diseases can all cause pleural effusion. A clinical correlation is very important for confirming the diagnosis.
<urn:uuid:71cb071d-0604-49da-9c35-db428feb29ba>
CC-MAIN-2013-48
http://healthquestions.medhelp.org/pancreatitis-and-ascites
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163824647/warc/CC-MAIN-20131204133024-00050-ip-10-33-133-15.ec2.internal.warc.gz
en
0.868893
105
2.546875
3
Article ID: DD135 | By: Scott B. Rae Summary The new reproductive technologies give great hope to infertile couples and make many new reproductive arrangements possible. They also raise many difficult moral issues. Artificial insemination by husband is considered moral, but artificial insemination by donor raises questions about a third party entering reproduction. In vitro fertilization is acceptable within limits: the couple should ensure that no embryos are left in storage and that the risk of selective termination is avoided. Commercial surrogate motherhood raises problems because it is the equivalent of selling children, can be exploitative of the surrogate, and violates a mother’s fundamental right to raise her child. Even altruistic surrogacy raises questions about the degree of detachment the mother must have from her unborn child to successfully give it up after birth. On March 27, 1986, Mary Beth Whitehead gave birth to a little girl whom she named Sara. That same day, Elizabeth and Daniel Stern named the same baby Melissa. Both were convinced that the child (called Baby M in the press) belonged to them, and both were prepared to take drastic measures to win custody over what they thought was their child. The Sterns had hired Whitehead to bear their child. She was, and is to this day, the most publicized person to perform the role of a surrogate mother. Their contest over that child was carried on in court for almost two years, and it illustrates the potential problems and complexities involved with many of the new reproductive technologies. Medicine has made some remarkable advances in the field of reproductive technology. The term reproductive technology refers to various medical procedures that are designed to alleviate infertility, or the inability of a couple to produce a child of their own. These include artificial insemination, in vitro fertilization (or “test-tube” babies), and surrogate motherhood. 
When successful, these technologies are the miracle of life for couples who have often spent years trying to have a child, and who have exhausted all other avenues for conceiving a child of their own. But many of these techniques raise major moral questions and can create thorny legal problems that must be resolved in court.

These new technologies make possible all sorts of interesting childbearing arrangements. Here is a sampling of what is now possible for couples contemplating parenthood in unconventional ways:

(1) A man who cannot produce sperm and his wife want to have a child. She is artificially inseminated with sperm from an anonymous donor, conceives, and bears a child.

(2) A woman who cannot produce eggs and her husband want to have a child. They hire a woman to be inseminated with the husband’s sperm, and she bears the child for them.

(3) A woman is able to produce eggs but is unable to carry a child to term. She and her husband “rent the womb” of another woman, and she gestates an embryo that was formed by laboratory fertilization of the husband’s sperm and his wife’s egg.

(4) A lesbian couple wants to have a child. One of the women provides an egg, and after it is fertilized by donor sperm, the embryo is implanted in the uterus of her partner.

(5) A couple desiring to have children cannot produce any of the sperm or eggs necessary for conception. So the woman’s sister donates the egg and the man’s brother donates sperm. Fertilization occurs in vitro, that is, outside the womb, and the embryo is transferred to the wife of the couple, who carries the child.

As mentioned above, these new reproductive technologies raise complicated issues, not only for the law, but also for morality. What is society to say to these technologies that, in many cases, redefine the family and turn traditional notions of reproduction upside down? In addition, since many of these issues are not directly addressed in Scripture, in what way does the Bible speak to these issues?
ARTIFICIAL INSEMINATION

Artificial insemination is a relatively simple procedure in which sperm, either from the woman’s husband or a donor (if the husband is unable to produce sperm), is inserted into the woman’s uterus directly rather than through sexual intercourse. It is normally the first infertility treatment a couple will try because it is simple to accomplish, involves no pain for the woman, and is inexpensive compared to other reproductive technologies. It is most often employed when a woman’s husband has a low sperm count, or his sperm has difficulty in reaching the woman’s egg.

When the woman’s husband’s sperm simply needs help in fertilizing the egg, artificial insemination by husband (AIH) is performed. Most people have no moral difficulty with such a procedure. It is simply viewed as medical technology providing assistance to what could not be accomplished by normal sexual intercourse. The genetic materials that are combined when conception occurs (and frequently it takes more than one insemination for conception to occur) belong to the woman and her husband, and they are the ones who plan to raise the child. Most people agree that there are no morally significant differences between AIH and procreation by intercourse. The exception to this is the Roman Catholic tradition, which views most reproductive interventions — including contraception — as a problem (see below).

There are many cases, however, in which the husband is not able to produce sperm at all. In these cases, instead of artificial insemination being performed with his sperm, a donor provides the sperm. This is called artificial insemination by donor (AID). The donation is almost always made anonymously so that the father cannot be traced by the child, nor can the father elect to make contact with the child, potentially disrupting a harmonious family. In most cases, the sperm of two or three donors is mixed together, thus making it easier to conceal the identity of the father.
AID raises ethical questions that are not raised by AIH. Since AIH takes place between husband and wife, the integrity of the family is maintained, and there is continuity between procreation and parenthood. But AID introduces a third party into the reproductive matrix, and someone who donates sperm to be used for AID is now contributing genetic material without the intent to parent the child that will be produced through the use of his genes.

The assumption of Scripture is that children will be raised by the people to whom they are genetically related. The Bible assumes the concept that only husband and wife will be parents of children. There is a continuity between the genetic and social roles of parenthood. The Bible never clearly defends this notion; it simply assumes it. Perhaps the reason for this is that it is a notion that does not need defending, similar to the doctrine of the existence of God.

Of course, Scripture could not directly address situations in which these reproductive technologies were available. But even though techniques like AID are not the subject of direct biblical teaching, there are biblical principles that can be applied to these different methods of alleviating infertility. Christian tradition on the family, for example, has always assumed that children will be born into a stable family setting of monogamous marriage in which sexual relations between father and mother result in the child’s birth. The principles underlying such an assumption are the integrity of the family and the continuity between procreation and parenthood. Adoption is widely recognized as an exception to the general rule, or an emergency solution to the tragic situation of an unwanted pregnancy. Just because the exceptional case is allowed, however, does not mean it is justified as the norm.
Catholicism and Natural Law

The Catholic tradition of natural law (i.e., basing morality on the natural tendencies or function of a thing) has also emphasized the continuity between procreation and parenthood, even to the point of denying the moral legitimacy of contraception, something that clearly interrupts that process. This is also the basis for Catholic opposition to abortion and most reproductive technologies.

If everything progresses as God designed it, sexual relations result in conception and childbirth. In the same way that God designed an acorn to grow into an oak tree, He likewise designed sexual relations to come to fruition in the birth of a child. Thus there is a God-designed, natural continuity between sex in marriage and parenthood. Every sexual encounter has the potential for conception, and every conception has the potential for childbirth and parenthood. This is why sex is reserved for marriage, and why Catholic tradition makes little room for any reproductive technology that would interfere with a natural process that is the result of creation. It also rules out any third party involvement that would replace one of the partners in the married couple.

The most recent Vatican statement on reproductive technology put it this way: “The procreation of a new person, whereby the man and the woman collaborate with the power of the Creator, must be the fruit and the sign of the mutual self-giving of the spouses, of their love and fidelity….marriage and….its indissoluble unity [provide] the only setting worthy of truly responsible procreation.”1 In other words, only in marriage is it morally legitimate to procreate children.
A further statement clarifies the unity of sex and procreation, thereby ruling out most technological interventions for infertile couples: “But from a moral point of view procreation is deprived of its proper perfection when it is not desired as the fruit of the conjugal act, that is to say, of the specific act of the spouses’ union….the procreation of a human person [is to be] brought about as the fruit of the conjugal act specific to the love between persons.”2 In other words, there is a unity between sexual relations and procreation. Procreation cannot occur apart from marital sexual intercourse, and every conjugal act in marriage must be open to procreation as the natural result of God’s creation design.3

For non-Catholics it may be problematic to assume that what is natural is also what is moral. This is what is known as the “naturalistic fallacy.” One cannot necessarily make the leap from the natural to the moral. As the British intuitionist philosopher G. E. Moore suggested, what is natural is natural; nothing more and nothing less.

A further problem with restrictions on reproductive technologies is that such restrictions may not be consistent with God’s creation mandate given to mankind to exercise dominion over the earth (Gen. 1:26). God gave mankind the ability to discover and apply all kinds of technological innovations. It does not follow, of course, that mankind has the responsibility to use every bit of technology that has been discovered (e.g., certain types of genetic engineering, nuclear weapons technology). But for the most part, technological innovations that clearly improve the lot of mankind are considered a part of God’s common grace, or His general blessings on creation, as opposed to His blessings that are restricted to those who know Christ personally.
It would appear that many of the reproductive technologies in question fit under the heading of common grace, and whether or not they should be used depends on whether such use violates a biblical text or principle.

IN VITRO FERTILIZATION (IVF)

On July 25, 1978, Louise Brown was born. She was the first child ever born through the use of in vitro fertilization; that is, she was the first “test-tube” baby. A British gynecologist, Dr. Patrick Steptoe, and a physiologist, Dr. Robert Edwards, successfully joined egg and sperm outside the body, then implanted the embryo in the mother. Nine months later, Louise Brown was born and was heralded as a miracle baby around the world.

In vitro fertilization simply means fertilization “in glass,” as in the glass container of a test tube or petri dish used in a laboratory. The procedure involves extraction of a number of eggs from the woman. To do this she is usually given a drug that enables her to “superovulate,” or to produce more eggs in one cycle than she normally does. The eggs are then surgically removed and fertilized outside the body in the laboratory, normally using the sperm of the woman’s husband. Since the procedure is so expensive ($10,000 — the extraction of the eggs being the most expensive part of the process), all of the eggs are fertilized in the lab. In this way, if none of the fertilized embryos is successfully implanted, reimplantation can occur without much additional cost or lost time, since extracting more eggs would involve waiting until at least the woman’s next cycle.

Normally, more than one embryo is implanted in the woman’s uterus, since it is uncertain how many, if any at all, will be implanted successfully. The actual number implanted depends on various factors relating to the condition of the eggs and the health of the woman. It is not unusual to have some if not all of the embryos spontaneously miscarry.
If more than one embryo does successfully implant, then the couple may end up with more children than they originally intended. Twins and even triplets are not uncommon for couples who use IVF. Lest one think that IVF is successful more often than not, however: on average, fewer than 10 percent of fertilized embryos actually implant and develop into a child. In order to keep the procedure as cost-effective as possible, embryos are frozen in storage to be used later if the first attempt fails. In some cases, however, more embryos successfully implant than the woman is able to carry without endangering her health and at times even her life.

Concerns about IVF

Both of the above possibilities (embryos in storage and having more children in utero than the woman can safely carry) raise significant legal and moral issues about IVF. For example, what happens if, during the time in which the embryos are in storage, the couple divorces and a “custody” battle ensues over the unused embryos? A case like this was recently resolved in court in Tennessee. A couple who had utilized IVF were later divorced, and the woman wanted to use the embryos to have a child. Her ex-husband refused, claiming that he did not want his progeny running around without his knowledge even of their existence. They went to court to have their dispute arbitrated. The court ruled in favor of the ex-husband, holding that one’s procreative liberty also gives one the freedom not to procreate, and thus the embryos could not be used without the man’s consent.

What to do with frozen embryos if they are not needed raises significant moral issues. The alternatives would appear to be to keep the embryos in storage indefinitely (at a cost of around $150/year), to destroy them, to allow the couple to donate them to another infertile couple, or to use them for experimental purposes.
Since, as most Christians believe, the right to life is acquired at conception, destroying embryos or using them in experiments is problematic. Destroying embryos outside the body is the moral equivalent of abortion, and science cannot experiment on someone with basic human rights without that person’s consent, particularly since experimentation on the embryo would result in its destruction. Storing the embryos indefinitely only postpones dealing with this issue.

That leaves donation of the embryos as the only viable alternative. Yet this is problematic too, since it involves a separation of the biological and social roles of parenthood that is a significant part of the biblical teaching on the family. It might be possible, however, to view embryo donation in a way that is parallel to adoption — as a preimplantation adoption in which the couple who contributed the genetic materials to form the embryo consent to give up parental rights to their child before implantation instead of after the child’s birth. This would require a significant change in the adoption laws of many states, since they frequently do not recognize any consent to adoption as valid and legal until a period of time after the child’s birth. These difficulties should cause Christians to think twice before utilizing IVF.

A second problem arises not from the failures of implantation, but from its successes. As noted above, more embryos are routinely implanted than will survive in the uterus. But occasionally a woman is left with more developing embryos than she can carry to term without risk to her health and life. In these cases, the woman and her husband and her doctor have very difficult decisions to make. When this happens the doctor will normally recommend what is called selective termination of one or more of the developing embryos. This is done not for convenience’ sake, but out of a genuine concern for the life of the mother.
Not only does this involve trading away one or more lives (the developing child[ren]); the doctor is also faced with the decision of which one(s) to terminate and how to make that decision. If the mother’s life is clearly at risk in carrying all the unborn children to term, then it would appear justified to terminate one or more of the fetuses in order to save the life of the mother. This is analogous to cases in which abortion is justifiable when carrying the pregnancy to term would put the mother’s life at grave risk. However, the agony of making such painful decisions must surely be considered prior to utilizing IVF to alleviate infertility.

To avoid these dilemmas, a couple using IVF should request that only the number of eggs be fertilized that the couple will actually have implanted. In addition, they should request that only the number of embryos be implanted that the woman could carry safely should all of them successfully implant.

SURROGATE MOTHERHOOD

Undoubtedly, surrogate motherhood is the most controversial of the new reproductive technologies. In many cases, the surrogate bears the child for the contracting couple, willingly gives up to them the child she has borne, and accepts her role with no difficulty. In those cases, the contracting couple views the surrogate with extreme gratitude for helping their dream of having a child come true. The surrogate also feels a great deal of satisfaction, since she has in effect given a “gift of life” to a previously infertile couple. But in some cases that have been well publicized in the media, the surrogate wants to keep the child she has borne and fights the contracting couple for custody. What began as a harmonious relationship between the couple and the surrogate ends with regrets about using this type of reproductive arrangement.

Surrogacy itself is not new. The Old Testament records two incidents of surrogacy (Gen. 16:1-6; 30:1-13), and it appears that use of a surrogate to circumvent female infertility was an accepted practice in the Ancient Near East.4 What makes today’s surrogacy new is the presence of lawyers and detailed contracts in the previously very private area of procreation. Today, surrogacy does not normally involve any sophisticated medical technology. Normally conception is accomplished by artificial insemination, though in some cases in vitro fertilization is used to impregnate the surrogate. In the latter cases the contracting couple normally provide both sperm and eggs, so that the surrogate mother is not the genetic mother.

Problems With Surrogate Motherhood

Surrogacy Involves the Sale of Children. Certainly the most serious objection to commercial surrogacy is that it reduces children to objects of barter by putting a price on them. Most of the arguments in favor of surrogacy are attempts to avoid this problem. Opponents of surrogacy insist that any attempt to deny or minimize the charge of baby-selling fails, and thus surrogacy involves the sale of children. This violates the Thirteenth Amendment, which outlawed slavery because it constituted the sale of human beings. It also violates commonly and widely held moral principles that safeguard human rights and the dignity of human persons, namely that human beings are made in God’s image and are His unique creations. Persons are not fundamentally things that can be purchased and sold for a price.

The fact that proponents of surrogacy try so hard to get around the charge of baby-selling indicates their acceptance of these moral principles as well. Rather than the debate being over whether human beings should be bought and sold, it is over whether commercial surrogacy constitutes such a sale of children. If it does, most people would agree that the case against surrogacy is quite strong.
As the New Jersey Supreme Court put it in the Baby M case, “There are, in a civilized society, some things that money cannot buy….There are values…that society deems more important than granting to wealth whatever it can buy, be it labor, love or life.”5 The sale of children, which normally results from a surrogacy transaction (the only exception being cases of altruistic surrogacy), is inherently problematic. This is so irrespective of the other good consequences the arrangement produces, in the same way that slavery is inherently troubling, because human beings are not objects for sale.

Surrogacy Involves Potential for Exploitation of the Surrogate. Most agree that commercial surrogacy has the potential to be exploitative. The combination of desperate infertile couples, low income surrogates, and surrogacy brokers with varying degrees of moral scruples raises the prospect that the entire commercial enterprise can be exploitative. But statistics on the approximately six hundred surrogacy arrangements to date indicate that this potential for exploitation has not yet materialized. Most surrogates are women of average means (the average income being around $25,000 per year),6 not destitute but certainly motivated by the money. The fee alone should not be considered exploitation but rather an inducement to do something that the surrogate would not otherwise do. Money functions as an inducement to do many things that people would not normally do, without being exploitative.

This does not mean, however, that the potential for exploitation should be discounted. Should surrogacy become more socially acceptable, and states pass laws making it legal, it is not difficult to imagine the various ways surrogacy brokers might attempt to hold costs down in order to maximize their profit. One of the most attractive ways in which this could be done would be to recruit surrogate mothers more actively from among the poor in this country, and particularly from the third world.
For example, some are suggesting that those with financial need actually make the best candidates for surrogates since they are the least inclined to keep the child produced by the arrangement.7 Others are making plans to actively recruit women from the third world to be brought to the United States to serve as surrogates. The advantage to using these women is that it dramatically reduces the cost of running the surrogacy business. John Stehura, of the Bionetics Foundation, stated that the surrogates from these countries would only receive the basic necessities and travel expenses for their services. Revealing a strong inclination toward exploitation of the surrogates, he stated, “Often they [the potential surrogates] are looking for a survival situation — something to do to pay for the rent and food. They come from underdeveloped countries where food is a serious issue.” But he also added that they make good candidates for surrogacy: “They know how to take care of children…. it’s obviously a perfect match.”8 He further speculates that perhaps one-tenth of the normal fee could be paid to these women, and it would not even matter if they had some other health problems as long as they had an adequate diet and no problems that would affect the developing child.9

Stehura’s comments are representative of the fact that the potential for crass exploitation of poor women in desperate circumstances is already being seriously considered by brokers in the industry. It is not clear to what degree these statements are representative of the entire industry. But with the profit motive being a primary factor, it does not take much imagination to envision the abuses that could easily proliferate.

Surrogacy Involves Detachment from the Child in Utero. One of the most serious objections to surrogacy applies to both commercial and altruistic surrogacy.
In screening women to select the most ideal surrogates, one looks for the woman’s ability to give up easily the child she is carrying. Normally, the less attached the woman is to the child, the easier it is to complete the arrangement. But this is hardly an ideal setting for a pregnancy. Surrogacy sanctions female detachment from the child in the womb, a situation that one would never want in any other pregnancy. This detachment is something that would be strongly discouraged in a normal pregnancy, but is strongly encouraged in surrogacy. Thus surrogacy actually turns a vice — the ability to detach from the child in utero — into a virtue.

Should surrogacy be widely practiced, bioethicist Daniel Callahan of the Hastings Center describes what one of the results would be: “We will be forced to cultivate the services of women with the hardly desirable trait of being willing to gestate and then give up their own children, especially if paid enough to do so…. There would still be the need to find women with the capacity to dissociate and distance themselves from their own child. This is not a psychological trait we should want to foster, even in the name of altruism.”10

Surrogacy Violates the Right of Mothers to Associate with Their Children. Another serious problem with commercial surrogacy might also apply to altruistic surrogacy. In most surrogacy contracts, whether for a fee or not, the surrogate agrees to relinquish any parental rights to the child she is carrying to the couple who contracted her services. In the Baby M case, the police actually had to break into a home to return Baby M to the contracting couple. A surrogacy contract forces a woman to give up the child she has borne to the couple who has paid her to do so. Should she have second thoughts and desire to keep the baby, under the contract she would nevertheless be forced to give up her child. Of course, this assumes the traditional definition of a mother.
A mother is defined as the woman who gives birth to the child. Society never before needed to carefully define motherhood because medicine had previously not been able to separate the genetic and gestational aspects of motherhood. It is a new phenomenon to have one woman be the genetic contributor and a different woman be the one who carries the child. There is debate over whether genetics or gestation should determine motherhood. But in the great majority of surrogacy cases, the surrogate provides both the genetic material and the womb. Thus, by any definition, she is the mother of the child.

To force her to give up her child under the terms of a surrogacy contract violates her fundamental right to associate with and raise her child.11 This does not mean that she has an exclusive right to the child. That must be shared with the natural father, similar to a custody arrangement in a divorce proceeding. But the right of one parent (the natural father) to associate with his child cannot be enforced at the expense of the right of the other (the surrogate).

As a result of this fundamental right, some states that allow a fee to be paid to the surrogate do not allow the contract to be enforced if the surrogate wants to keep the child. In these states, any contract that requires a woman to agree to give up the child she bears prior to birth is not considered a valid contract. This is similar to the way most states deal with adoptions. Any agreement prior to birth to give up one’s child is not binding and can be revoked if the birth mother changes her mind and wants to keep the child. Many states that have passed laws on surrogacy have chosen to use the model of adoption law rather than contract law, which essentially says “a deal’s a deal.”

The problem with allowing the surrogate to keep the child is that it substantially increases the element of risk for the contracting couple.
They might go through the entire process and end up with shared custody of a child that they initially thought was to be all theirs. To many people, that doesn’t seem fair. But to others it is just as unfair to take a child away from his or her mother simply because a contract states that she must.

AN ONGOING DISCUSSION

These new reproductive technologies present some of the most difficult ethical dilemmas facing society today. Unfortunately, ethical reflection lags behind medical technology in this area. Given the strong desire of most couples to have a child to carry on their legacy, it is not surprising to see the lengths to which people will go to have a child that has at least some of their genetic material. People’s desires to have genetically related children will likely ensure a brisk business for practitioners of reproductive medicine and, as a result, there will be an ongoing need for ethical discussion and decision making in this area.

Scott B. Rae, Th.M., Ph.D., is Associate Professor of Biblical Studies and Christian Ethics, Talbot School of Theology, Biola University, La Mirada, California.

1. Congregation for the Doctrine of the Faith, “Instruction on Respect for Human Life in Its Origin and on the Dignity of Procreation,” Origins 16:40 (19 March 1987): 704-5.

2. Ibid., 706.

3. For further information on Catholic teaching in this area, see Edward Collins Vacek, S.J., “Catholic Natural Law and Reproductive Ethics,” Journal of Medicine and Philosophy 17 (1992): 329-46.

4. Both the Code of Hammurabi (1792-1750 B.C.) and the Nuzi tablets (1520 B.C.) authorize surrogacy, and not only for cases of barrenness. Thus surrogacy was not only widely practiced, but it was the subject of detailed legislation to keep the practice within proper limits.

5. In the matter of Baby M, 537 A. 2d, 1249 (1988).
6. The statistics on the annual income of surrogates are a bit misleading, since they record the income of women who were selected as surrogates but do not take into account the women who applied to be surrogates and were not chosen. In a 1983 study, psychiatrist Philip Parker found that more than forty percent of the applicants to provide surrogacy services were receiving some kind of government financial assistance. See “Motivation of Surrogate Mothers: Initial Findings,” American Journal of Psychiatry 140 (1983): 1.

7. Statement of staff psychologist Howard Adelman of Surrogate Mothering Ltd. in Philadelphia; cited in Gena Corea, The Mother Machine (New York: Harper and Row, 1985), 229.

8. Cited in Corea, 245.

9. Cited in Corea, 214-15.

10. Daniel Callahan, “Surrogate Motherhood: A Bad Idea,” New York Times, 20 January 1987, B21.

11. In Stanley v. Illinois, the Supreme Court stated that “the rights to conceive and to raise one’s children have been deemed essential…basic civil rights of man…far more precious than property rights. It is cardinal with us that the custody, care and nurture of the child reside first in the parents.” 405 U.S. 650 (1971), at 651.
The following discussion gives an overview of the structure used for drivers. For a more detailed discussion, see ``DDI: 8 sample driver'', which provides a detailed commentary that can be viewed in parallel with a sample DDI 8 driver.

The source code for a device driver is structured as a series of entry point routines that are documented in the following manual page sections:

Other interfaces have their own D2* section that documents the entry point routines that are used with that interface. Each interface-specific D2* manual page section includes a manual page for every entry point routine that can be used with that interface. This means, for example, that there are config(D2), config(D2mdi), and config(D2sdi) manual pages. The D2 and D2oddi pages provide the basic information that is relevant to all drivers; the interface-specific pages add ``Usage'' information and other material that is relevant only to that interface.

Traditionally, entry point routines are given the name of the routine as documented on the manual pages, with the driver prefix(D1) prepended. For example, if the driver prefix is my, the driver has entry point routines named myopen( ), myclose( ), and so forth. Beginning with DDI 8, SVR5 drivers declare these entry point routines in the drvops(D4) structure; the routines are no longer accessed through switch tables that require strict adherence to this naming scheme. Driver code is easier to maintain, however, if the traditional naming scheme is used. All SCO OpenServer 5 drivers use the named entry-point system and must be prefixed.

Each entry point routine runs in a specific context (see ``Context of a driver'') which is identified on the man page for that routine. The context determines which synchronization primitives can be used and whether the driver can call functions that access the user context, such as copyout(D3).

This overview of driver code structure is divided into five parts:

This discussion is based on DDI 8.
SCO OpenServer 5 and DDI versions prior to version 8 use a different set of entry point routines. See Intro(D2) for a table comparing DDI 8 and DDI 7 entry point routines. See Intro(D2oddi) for a table that lists all SCO OpenServer 5 entry point routines.

After reading this overview, look at the code samples that are provided in the HDK to get a better understanding of driver structure. We recommend that you use a copy of the code samples as a template for your own driver. ``Guidelines for all kernel drivers'' lists guidelines for making your driver robust and maintainable.
Efonidipine is a dihydropyridine calcium channel blocker that was first approved for use in Japan in 1995. It has since become an important treatment option for hypertension and other cardiovascular conditions.

Pharmacology of efonidipine

Efonidipine works by blocking L-type calcium channels (and, unusually for its class, T-type channels) in smooth muscle cells, which decreases the intracellular calcium concentration. This relaxes the smooth muscle cells and dilates the blood vessels, lowering blood pressure. Efonidipine also has a vasodilatory effect on the coronary arteries, improving blood flow to the heart muscle.

Efonidipine has a relatively short half-life of approximately 4–6 hours, so it needs to be taken twice daily to maintain therapeutic blood levels. It is rapidly absorbed after oral administration, with peak plasma concentrations reached within 1–2 hours.

Clinical uses of efonidipine

Efonidipine is primarily used for the treatment of hypertension. It has been shown to reduce both systolic and diastolic blood pressure in patients with mild to moderate hypertension, and it is also effective in combination therapy with other antihypertensive drugs such as angiotensin-converting enzyme inhibitors and beta-blockers.

In addition to its use in hypertension, efonidipine has been studied for potential benefits in other cardiovascular conditions. It has been found to improve endothelial function in patients with coronary artery disease, and may also have a protective effect on the heart muscle in patients with heart failure.

Efonidipine has also been studied for its potential use in preventing stroke. A study published in the Journal of Hypertension found that efonidipine was more effective in preventing stroke than a combination of other antihypertensive drugs.

Side effects of efonidipine

Like all medications, efonidipine can cause side effects.
The most common side effects include dizziness, headache, and flushing. These are generally mild and transient, and usually resolve on their own. Less common side effects include peripheral oedema, which can occur in up to 8% of patients. Efonidipine may also cause hypotension, particularly in patients with low blood pressure or those taking other medications that lower blood pressure.

In summary, efonidipine is a dihydropyridine calcium channel blocker that is effective in the treatment of hypertension and other cardiovascular conditions. Its vasodilatory effects on the blood vessels and coronary arteries make it a valuable treatment option, and its distinctive pharmacological profile and potential benefits in preventing stroke make it a promising one. While efonidipine is generally well tolerated, it can cause side effects in some patients. As with all medications, it is important to discuss the risks and benefits of efonidipine with your healthcare provider before starting treatment.

Ellen Diamond, a psychology graduate from the University of Hertfordshire, has a keen interest in the fields of mental health, wellness, and lifestyle.
It is not only in the financial sector that mathematical models are contested. Climate predictions are also subject to criticism from those who attribute fluctuations in global temperature to misconceived equations.

For the past ten years, and especially recently, the scientific community has been tearing itself apart over an apparently simple question: Is the earth getting warmer, and if so, is it because of human activity? Why such a heated debate? Because behind this "simple" question lies one that is more profound, more political, and more philosophical than it is scientific: Is humankind harming the planet?

On the one hand, there are the proponents of "climate change." They swear that the rise—quick and dangerous—in global temperatures is no doubt of human origin. On the other hand, "climate skeptics" argue that it has yet to be proved that human activity is causing the change in global climate. Each camp, which has alternately enjoyed the media's support, has not been sparing in hurling invectives that are hardly scientific. Climate change alarmists are accused of dogmatism and intellectual terrorism, whereas skeptics are compared to negationists. Today in France, a small minority of skeptics, supported by a part of public opinion, are coming out in the open after years of hiding—considerably later than their American counterparts.

For the mathematician Benoît Rittaud, author of Le mythe climatique (The Climate Myth), the climate war has become the biggest scientific controversy since Trofim Denisovich Lysenko's experimental research on improved crop yields in the 1930s. A staunch Stalinist agronomist, Lysenko declared a relentless war against the findings of the Austrian monk Gregor Mendel, considered then (and since) the founding father of genetics and the theoretician of plant hybridization. The 1930s were ripe for a pseudo-scientific show of force insofar as it served the interests of the dictatorships.
In 2010, we hope that such is not the case. The reality—easier for scientists to accept than for politicians—is that in 2010 there is still much we do not know. This is particularly true when it comes to climate behavior at the planetary scale. Climatologists' work is very complex. At the moment, the concept of average temperature is the only option available for describing climate behavior at the global level with a single parameter. But determining the change in average temperature over a period of time is not an easy task. In France, Pierre Morel, founder of the Laboratory for Dynamic Meteorology (LMD), explains that of the 0.6°C increase over the past 100 years across U.S. territory, 0.4°C corresponds to corrections made to compensate for errors of the measurement equipment. Wary of the potential influence of ideological assumptions on scientific conclusions, Morel cautions: "Beware, climate is like a Rorschach test: we find what we are looking for."

Vincent Cassé, director of the LMD, softens the tone: "The use of satellite data, available for 30 years, offers a global vision of the atmosphere at a given time. But to trace its long-term evolution—say, over a few decades—calls for minute calibration of the sensors used. The slightest change in a sensor on a satellite disturbs the measurements. The real challenge is to obtain a coherent series."

For a long time, the climate debate revolved around the "hockey stick" curve, the MBH98/MBH99 reconstruction of average global temperature over the past 1,000 years. The reconstruction was proposed in 1998 and refined in 1999 by three scientists: Michael Mann, professor at Pennsylvania State University; Raymond Bradley, professor at the University of Massachusetts, Amherst; and Malcolm Hughes, professor of dendrochronology at the University of Arizona.
This chart jolted minds and shattered a consensus that had prevailed until the late 1990s by minimizing two phenomena previously considered defining: the warm period corresponding to the European Middle Ages (the "medieval climate optimum") and the cold period between the Renaissance and the mid-19th century, known as the "Little Ice Age." The hockey stick, and the implicit accusations it carried against economic growth, quickly became the argument of choice for proponents of the theory of catastrophic global warming originating in the Industrial Revolution. The Intergovernmental Panel on Climate Change (IPCC), a group set up by the United Nations to gauge the risks of global warming, has in fact used this argument extensively in numerous reports and in its communications with political authorities and the media.

However, this graph was quickly discredited and discarded after serious errors in its statistical methodology (false data and an incorrect choice of variance) were pointed out and demonstrated in 2004 by Richard Muller, professor of physics at the University of California, Berkeley. But curiously enough, the controversy has not died down. The war of hypotheses was still raging in 2010.

The first question under consideration is whether we can attribute the global warming observed between the 1970s and 1990s to human activity. A large number of scientists concur that humans are indeed responsible for climate change. They rely on two proven facts: carbon dioxide (CO2) is a greenhouse gas, part of whose emissions are of human origin, and its concentration in the atmosphere is increasing. However, it remains to be proved that the observed increase in the concentration of CO2 is behind global warming. In fact, the sensitivity of the climate to this type of stimulus is not known with certainty, and no mathematical model is currently available to dispel the hesitation. The two camps are in opposition, but they are not equal in numbers.
Even though in some countries public opinion sides with the skeptics, within the scientific community they are a small numerical minority. According to a recent study by Stephen Schneider (climatologist, Stanford University) of the publications and citations of the 1,372 most active researchers in the field, 97% to 98% of scientists believe that human activity is responsible for climate change.

However, there are a number of other possible explanations for the global warming observed between 1970 and 1990, and they are not mutually exclusive:

- The solar hypothesis: eruptions on the surface of the sun trigger a radiation flux that can interact with the atmosphere. This "solar wind" contributes to the formation of clouds, a source of cooling. Scientists hope to reveal the correlations by studying variations in the eruption cycles.

How can we enter this scientific debate—given that it is contaminated by ideology—without becoming obscurantist or obsessed with models? The path is narrow, and the IPCC's way of functioning is a disturbing element. Political powers detest the unpredictable nature of scientific work and are dubious about leaving the choice to the men and women of science. For them, the IPCC is first of all a think tank that enables them to make decisions. This is a big mistake, says Sir David King, professor at the University of Oxford and former scientific advisor to the British government, in the Daily Telegraph: "The IPCC was set up as a means of arriving at a consensus, which contradicts the very spirit of scientific research. Scientists are supposed to defy received ideas and consensus, so that only the most sound ideas among them can survive." The late Sir Karl Popper, philosopher and professor at the London School of Economics, put it differently: "Falsifiability is the fundamental property of any scientific proposition."
King is nonetheless resolute: "It is ridiculous to deny the reality that global warming is linked to human activities and absurd to pretend that we do not know the reason why."

André Berger, professor at the Catholic University of Louvain, tries to put his finger on the most deleterious human activities. With regard to CO2 emissions, he says: "Industry is not the main culprit; deforestation and transportation are. Between 1990 and 2005, industries the world over reduced their emissions by 10%, whereas those related to transportation increased by 26%." He also notes that China emits less than the U.S., the European Union, and Russia combined, whereas its population is much greater.

Fabian Leurant, professor at the Institute of Ponts ParisTech and assistant director of the French Laboratory of City, Mobility, and Transportation (Laboratoire Ville Mobilité Transport), also cautions against hasty simplifications. "Concentration and industrial specialization have their virtues in terms of efficiency, but they often imply, paradoxically, more transportation and more emissions," he says. "Similarly, the shift from airplanes to high-speed trains, or to electric vehicles, will not benefit the climate unless the electricity used to run them is produced from renewable resources."

The question of desertification is also under debate. The majority opinion holds that more areas will become deserts. This is reflected, for example, in the northward spread into Europe of plant species traditionally confined to the Mediterranean. According to a study published in December 2009 by Scott Loarie (Stanford University) and David Ackerly (University of California, Berkeley) in the journal Nature, a large number of animal and plant species will migrate northward or to higher altitudes, but a third of them will not be able to make the shift quickly enough and will perish.
The two researchers calculated that the average displacement speed of natural environments over the surface of the earth will be 0.42 km/year during the 21st century. This average was established on the basis of the IPCC's "A1B" scenario, which foresees a significant increase in greenhouse gas emissions until the middle of the century, despite low population growth and the rapid introduction of more efficient new technologies.

On the other side of the debate, a few dissidents believe that the desert will become green again. Among them is Farouk El-Baz, director of the Center for Remote Sensing at Boston University, who uses satellite images to evaluate the origin and evolution of desert landforms. In July 2009, he explained to the BBC: "Global warming of the earth will trigger more evaporation of the oceans and thus more rain [leading to more vegetation]." This theory has found favor with Martin Claussen, researcher at the Max Planck Institute for Meteorology in Hamburg, Germany. Claussen confirms that North Africa is one of the most debated regions in the world with regard to global warming; some climatologists have predicted an explosion of plant growth there. In 2005, Reindert Haarsma's team at the Royal Netherlands Meteorological Institute in De Bilt predicted a major increase in future rainfall in the Sahel. The team's study, published in Geophysical Research Letters, says that rainfall from July to September (the monsoon season) will increase by two millimeters per day by 2080.

So, will we ever be able to describe and predict the climate? Will mathematics and physics offer a reliable instrument for this purpose?
Climatologists have been working on the matter for decades, but the biologist Henri Atlan almost denies the very possibility of such a prediction tool: "In climate modeling, the amount of data available is far smaller than the number of variables taken into account in constructing the models. Thus there are a large number of good models, all capable of accounting for the available observations, even though they are based on different explanatory theories and lead to distinct or even opposing predictions. This is because we are in a situation where theories are underdetermined by the facts, a situation in which the data cannot be multiplied as much as necessary through repeated, reproducible experiments." His conclusion: models of climate change are just theories full of uncertainties with regard to their connection with reality, and the same goes for the predictions inferred from them.

Hervé Le Treut, professor at the Ecole Polytechnique ParisTech and director of the Pierre-Simon Laplace Institute, challenges this analysis: "The current models correctly take into account the climatic structures organized at the global level and their modes of variation. However, it is true that they remain deficient in their regional or local approach to climate change."

Will technology help in making progress? Le Treut is confident that it will: "There are two promising sources of progress: a better physical representation and an increased resolution thanks to faster computers. The models of the 1990s had a resolution of 500 km (i.e., the model broke the atmosphere up into cells 500 km on a side). In 2010, it is 100 km. As we gain in resolution, the empirical theories necessary for representing the various phenomena can be replaced by real observations. Japan possesses computers capable of carrying out climatic simulations with a resolution of 3 km.
This will enable us to reproduce the diversity of the atmospheric scales, mainly those prevailing in a cloud." Accurate knowledge of the climate may then be just a matter of time.

Reference: Benoît Rittaud, Le mythe climatique.
Carnegie Institution for Science

This article is about the scientific institution headquartered in Washington, D.C. It is not to be confused with the Carnegie Institute, the Carnegie Institute of Technology, or the Carnegie Science Center, all of which are located in Pittsburgh, Pennsylvania.

The Carnegie Institution for Science (CIS), also called the Carnegie Institution of Washington (CIW), is an organization in the United States established to support scientific research. Today the CIS directs its efforts in six main areas:

1) Astronomy, at the Department of Terrestrial Magnetism (Washington, DC) and the Observatories of the Carnegie Institution of Washington (Pasadena, CA, and Las Campanas, Chile);
2) Earth and planetary science, also at the Department of Terrestrial Magnetism and at the Geophysical Laboratory (Washington, DC);
3) Global ecology, at the Department of Global Ecology (Stanford, CA);
4) Genetics and developmental biology, at the Department of Embryology (Baltimore, MD);
5) Matter at extreme states, also at the Geophysical Laboratory; and
6) Plant science, at the Department of Plant Biology (Stanford, CA).

As of June 30, 2013, the Institution's endowment was valued at $855 million, and expenses for scientific programs and administration were $99.8 million.

"It is proposed to found in the city of Washington, an institution which...shall in the broadest and most liberal manner encourage investigation, research, and discovery [and] show the application of knowledge to the improvement of mankind..." — Andrew Carnegie, January 28, 1902

Beginning in 1895, Andrew Carnegie contributed his vast fortune toward the establishment of 22 organizations that today bear his name and carry on work in such fields as art, education, international affairs, peace, and scientific research.
In 1901, Andrew Carnegie retired from business to begin his career in philanthropy. Among his new enterprises, he considered establishing a national university in Washington, D.C., similar to the great centers of learning in Europe. Because he was concerned that a new university could weaken existing institutions, he opted for a more exciting, albeit riskier, endeavor—an independent research organization that would increase basic scientific knowledge. Carnegie contacted President Theodore Roosevelt and declared his readiness to endow the new institution with $10 million. He added $2 million more to the endowment in 1907, and another $10 million in 1911. As ex officio members of the first board of trustees, Carnegie chose the President of the United States, the President of the Senate, the Speaker of the House of Representatives, the secretary of the Smithsonian Institution and the president of the National Academy of Sciences. In all, he selected 27 men for the institution’s original board. Their first meeting was held in the office of the Secretary of State on January 29, 1902, and Daniel C. Gilman, who had been president of Johns Hopkins University, was elected president. The institution was incorporated by the U.S. Congress in 1903. Initially, the president and trustees devoted much of the institution’s budget to individual grants in various fields, including astronomy, anthropology -- including Maya studies -- literature, economics, history and mathematics. Under the leadership of Robert Woodward, who became president in 1904, the board changed its course, deciding to provide major support to departments of research rather than to individuals. This approach allowed them to concentrate on fewer fields and support groups of researchers in related areas over many years. Since the beginning, the Carnegie Institution has been like an explorer—discovering new areas, but often leaving the development to others. 
This philosophy has fostered new areas of science and has led to unexpected benefits to society, including the development of hybrid corn, radar, the technology that led to Pyrex® glass, and novel techniques to control genes called RNA interference.

Some of Carnegie's leading researchers from the early and middle years of the 20th century are well known:
- Edwin Hubble, who revolutionized astronomy with his discovery that the universe is expanding and that there are galaxies other than our own Milky Way;
- Charles Richter, who created the earthquake measurement scale;
- Barbara McClintock, who won the Nobel Prize for her early work on patterns of genetic inheritance;
- Alfred Hershey, who won the Nobel Prize for determining that DNA, not protein, harbors the genetic recipe for life;
- Vera Rubin, who was awarded the National Medal of Science for her work confirming the existence of dark matter in the universe; and
- Andrew Fire, who with colleagues elsewhere opened up the world of RNA interference, for which he shared a Nobel Prize in 2006.

Today, Carnegie scientists continue to be at the forefront of scientific discovery. Working in six scientific departments on the East and West Coasts, Carnegie investigators are leaders in the fields of astronomy, Earth and planetary science, global ecology, genetics and developmental biology, matter at extreme states, and plant science. They seek answers to questions about the structure of the universe, the formation of our solar system and other planetary systems, the behavior and transformation of matter when subjected to extreme conditions, the origin of life, the function of genes, and the development of organisms from single-celled egg to adult.

The Carnegie Institution is headquartered in Washington, D.C.; Matthew P. Scott serves as president.
Andrew Carnegie's fortune established 22 organizations around the world that today bear his name and carry on work in fields as diverse as art, education, international affairs, world peace, and scientific research (see Andrew Carnegie's 23 Organizations). The organizations are independent entities and are related by name only. In 2007, the institution adopted the name "Carnegie Institution for Science" to better distinguish it from the other organizations established by and named for Andrew Carnegie. The new name closely associates the words "Carnegie" and "science" and thereby reveals the core identity. The institution remains officially and legally the Carnegie Institution of Washington, but now has a public identity that more clearly describes its work.

Carnegie investigators are leaders in the fields of astronomy, Earth and planetary science, global ecology, genetics and developmental biology, matter at extreme states, and plant science. The institution has six research departments: the Geophysical Laboratory and the Department of Terrestrial Magnetism, both located in Washington, D.C.; The Observatories, in Pasadena, California, and Chile; the Department of Plant Biology and the Department of Global Ecology, in Stanford, California; and the Department of Embryology, in Baltimore, Maryland.

The Carnegie Institution's Six Research Departments

Department of Embryology, Baltimore, Maryland

The Department of Embryology was founded in 1913 in affiliation with the department of anatomy at The Johns Hopkins University. Until the 1960s its focus was human embryo development. Since then the researchers have addressed fundamental questions in animal development and genetics at the cellular and molecular levels.
Some researchers investigate the genetic programming behind cellular processes as cells develop, while others explore the genes that control growth and obesity, stimulate stem cells to become specialized body parts, and perform many other functions.

Geophysical Laboratory, Washington, D.C.

Researchers at the Geophysical Laboratory (GL), founded in 1905, examine the physics and chemistry of Earth's deep interior. The laboratory is a world-renowned center for petrology—the study of rocks. It is also a world leader in high-pressure and high-temperature physics, making significant contributions to both Earth and materials science. The GL, with the Department of Terrestrial Magnetism co-located on the same campus, is additionally a member of NASA's Astrobiology Institute—an interdisciplinary effort to investigate how life evolved on this planet and determine its potential for existing elsewhere. Among their many projects is one dedicated to examining how common rocks found at high-pressure, high-temperature hydrothermal vents on the ocean bottom may have provided the catalyst for life on this planet.

Department of Global Ecology, Stanford, California

Established in 2002, Global Ecology is the newest Carnegie department in over 80 years. Using innovative approaches, these researchers are picking apart the complicated interactions of Earth's land, atmosphere, and oceans to understand how global systems operate. With a wide range of powerful tools—from satellites to the instruments of molecular biology—these scientists explore issues such as the global carbon cycle, the role of land and oceanic ecosystems in regulating climate, the interaction of biological diversity with ecosystem function, and much more.
These ecologists also play an active role in the public arena, from giving congressional testimony to promoting satellite imagery for the discovery of environmental "hotspots."

Department of Plant Biology, Stanford, California

The Department of Plant Biology began as a desert laboratory in 1903 to study plants in their natural habitats. Over time the research evolved toward the study of photosynthesis. Today, using molecular genetics and related methods, these biologists study the genes responsible for plant responses to light and the genetic controls over various growth and developmental processes, including those that enable plants to survive disease and environmental stress. In addition, the department is a world leader in bioinformatics. It developed and now manages an online integrated database that supplies all aspects of biological information on the most widely used model plant, Arabidopsis.

Department of Terrestrial Magnetism, Washington, D.C.

The Department of Terrestrial Magnetism was founded in 1904 to map the geomagnetic field of the Earth. Over the years the research direction shifted, but the historic goal—to understand the Earth and its place in the universe—has remained the same. Today the department is home to an interdisciplinary team of astronomers and astrophysicists, geophysicists and geochemists, cosmochemists and planetary scientists. These Carnegie researchers are discovering planets outside our solar system, determining the age and structure of the universe, and studying the causes of earthquakes and volcanoes. With colleagues from the Geophysical Laboratory, these investigators are also helping to define the new and exciting field of astrobiology.

The Observatories, Pasadena, California, and Las Campanas, Chile

The Observatories were founded in 1904 as the Mount Wilson Observatory.
Mount Wilson transformed our notion of the cosmos with the discoveries by Edwin Hubble that the universe is far larger than had been thought and that it is expanding. Carnegie astronomers today study the cosmos with an unusual twist: unlike most in their field, they design and build their own instruments to capture the secrets of space. They are tracing the evolution of the universe from the spark of the Big Bang through star and galaxy formation, exploring the structure of the universe, and probing the mysteries of dark matter, dark energy, and the ever-accelerating rate at which the universe is expanding.

CASE: Carnegie Academy for Science Education and First Light

In 1989, Maxine Singer, president of Carnegie at that time, founded First Light, a free Saturday science program for middle school students from D.C. public, charter, private, and parochial schools. The program teaches hands-on science, such as constructing and programming robots, investigating pond ecology, and studying the solar system and telescope building. First Light marked the beginning of CASE, the Carnegie Academy for Science Education. Since 1994 CASE has also offered professional development for D.C. teachers in science, mathematics, and technology.

The Carnegie Institution's administrative offices are located at 1530 P St., NW, Washington, D.C., at the corner of 16th and P Streets. The building houses the offices of the president, administration and finance, publications, and advancement.

Andrew Carnegie's 23 Organizations

Beginning in 1895, Andrew Carnegie contributed his vast fortune toward the establishment of 23 organizations that today bear his name and carry on work in such fields as art, education, international affairs, peace, and scientific research.
Support for eugenics

In 1920 the Eugenics Record Office, founded by Charles Davenport in 1910 in Cold Spring Harbor, New York, was merged with the Station for Experimental Evolution to become the Carnegie Institution's Department of Genetics. The Institution funded that laboratory until 1939; it employed such anthropologists as Morris Steggerda, who collaborated closely with Davenport. The Carnegie Institution ceased its support of eugenics research and closed the department in 1944. The department's records were retained in a university library. The Carnegie Institution continues its support for legitimate genetic research. Among its notable staff members in that field are Nobel laureates Barbara McClintock, Alfred Hershey, and Andrew Fire.

Presidents of the CIW

- Daniel Coit Gilman (1902–1904)
- Robert S. Woodward (1904–1920)
- John C. Merriam (1921–1938)
- Vannevar Bush (1939–1955)
- Caryl P. Haskins (1956–1971)
- Philip Abelson (1971–1978)
- James D. Ebert (1978–1987)
- Edward E. David, Jr. (Acting President, 1987–1988)
- Maxine F. Singer (1989–2002)
- Michael E. Gellert (Acting President, Jan.–April 2003)
- Richard A. Meserve (April 2003 – September 2014)
- Matthew P. Scott (September 1, 2014)
Other than the common cold, tooth decay is the most prevalent disease in the world. And while a cavity or two may seem like a minor matter, tooth decay's full destructive potential is anything but trivial. Without proper prevention and treatment, tooth decay can cause pain, tooth loss and, in rare cases, even death.

This common disease begins with bacteria in the mouth. Though these microscopic organisms' presence is completely normal and at times beneficial, certain strains cause problems: they consume leftover carbohydrates in the mouth like sugar and produce acid as a byproduct. The higher the levels of bacteria, the higher the amount of acid, which disrupts the mouth's normal neutral pH. This is a problem because acid is the primary enemy of enamel, the teeth's hard protective outer shell. Acid causes enamel to lose its mineral content (de-mineralization), eventually producing cavities. Saliva neutralizes acid that arises normally after we eat, but if the levels are too high for too long this process can be overwhelmed. The longer the enamel is exposed to acid, the more it softens and dissolves.

While tooth decay is a global epidemic, dental advances of the last century have made it highly preventable. The foundation for prevention is fluoride in toothpaste and effective oral hygiene — daily brushing and flossing to remove plaque, a thin film of food remnants on teeth that's a feeding ground for bacteria — along with regular dental visits for more thorough cleaning and examination. This regular regimen should begin in infancy, when teeth first appear in the mouth. For children especially, further prevention measures in the form of sealants or topical fluoride applications performed in the dentist's office can provide added protection for those at higher risk.

You can also support your preventive measures by limiting sugar or other carbohydrates in your family's diet, and eating more fresh vegetables, fruit and dairy products, especially as snacks.
Doing so reduces food sources for bacteria, which will lower their multiplication and subsequently the amount of acid produced. In this day and age, tooth decay isn’t a given. Keeping it at bay, though, requires a personal commitment to effective hygiene, lifestyle choices and regular dental care. Doing these things will help ensure you and your family’s teeth remain free from this all too common disease. If you would like more information on preventing and treating tooth decay, please contact us or schedule an appointment for a consultation. You can also learn more about this topic by reading the Dear Doctor magazine article “Tooth Decay.” More than likely your great-grandparents, grandparents and even your parents had a common dental experience: when one of their teeth developed a cavity, their dentist removed the decayed portion (and maybe a little more) through drilling and then filled the cavity. In other words, treatment was mainly reactive—fix the problem when it occurred, then fix it again if it reoccurred. You may have had similar experiences—but the chances are good your dentist’s approach is now quite different. Today’s tooth decay treatment is much more proactive: address first the issues that cause tooth decay, and if it does occur treat it with an eye on preventing it in the future. This approach depends on maintaining equilibrium between two sets of competing factors that influence how your teeth may encounter tooth decay. This is known as the caries balance (caries being another name for tooth decay). On one side are factors that increase the risk of decay, known by the acronym BAD: Bad Bacteria that produce acid that dissolves the minerals in tooth enamel; Absence of Saliva, the body’s natural acid neutralizer; and Dietary Habits, especially foods with added sugars that feed bacteria, and acid that further weakens enamel. 
There are also factors that decrease the risk of tooth decay, known by the acronym SAFE: Saliva and Sealants, which focuses on methods to boost low salivary flow and cover chewing surfaces prone to decay with sealant materials; Antimicrobials, rinses or other substances that reduce bad bacteria populations and encourage the growth of beneficial strains; Fluoride, increased intake or topical applications of this known enamel-strengthening chemical; and Effective Diet, reducing the amount and frequency of sugary or acidic foods and replacing them with more dental-friendly choices. In effect, we employ a variety of techniques and materials that inhibit BAD factors and support SAFE ones. The foundation for prevention, though, remains the same as it was for past family generations—practice effective oral hygiene by brushing and flossing daily and regular dental cleanings and checkups to keep bacterial plaque from accumulating and growing. Your own diligent daily care rounds out this more effective way that could change your family history of tooth decay for you and future generations.
Premature and/or excessive lightening of a paint colour, which often occurs on surfaces with high levels of sun exposure. What appears to be a fading/poor colour retention issue can also be a result of chalking (see Chalking).

Common causes include:
- Use of an interior grade of paint for an outdoor application
- Use of a lower quality paint, leading to rapid degradation (chalking) of the paint film
- Use of a paint colour that is particularly vulnerable to UV radiation (most notably, certain bright reds, blues and yellows)
- Tinting a white paint not intended for tinting, or over-tinting a light or medium paint base

When fading/poor colour retention is a result of chalking, it is necessary to remove as much of the chalk as possible (see Chalking). In repainting, be sure to use a quality exterior house paint in colours recommended for exterior use. Dulux recommends the Dulux Weathershield® range. For more information, please consult our detailed Technical Advice note on the topic.
Is it a cold, flu, allergic rhinitis, or sinusitis? Each year, millions of Americans suffer from one or more episodes of the common cold, flu (influenza), allergic rhinitis, and sinusitis. All of these conditions have one thing in common -- they all produce respiratory problems. Since the symptoms of these conditions are all very similar, they often are confused with each other. (See comparison chart.) Symptoms of the common cold usually begin 2 - 3 days after infection and often include nasal discharge, obstruction of nasal breathing, swelling of the sinus membranes, sneezing, sore throat, cough, and headache. Fever is usually slight but can climb to 102° F in infants and young children. Cold symptoms (which are caused by viruses) can last from 2 - 14 days, but two-thirds of people recover in a week. If symptoms occur often or last much longer than 2 weeks, they may be the result of an allergy rather than a cold. Colds occasionally can lead to secondary bacterial infections of the middle ear or sinuses, requiring treatment with antibiotics. High fever, significantly swollen glands, severe facial pain in the sinuses, and a cough that produces mucus, may indicate a complication or more serious illness requiring a doctor's attention. People understandably often confuse an allergy with a cold or flu. Remember colds are short-lived and passed from person to person, whereas allergies are immune system reactions to normally harmless substances. 
Allergy symptoms include:
- Sneezing, watery eyes, or cold symptoms that last more than 10 days without a fever
- Repeated ear and sinus infections
- Loss of smell or taste
- Frequent throat clearing, hoarseness, coughing, or wheezing
- Dark circles under the eyes caused by increased blood flow near the sinuses (allergic shiners)
- A crease just above the tip of the nose from constant upward nose wiping, more commonly seen in children (allergic salute)

The flu usually begins with a fever over 102° F, a flushed face, body aches, and lack of energy. Some people have other symptoms such as dizziness or vomiting. The fever usually lasts for a day or two, but can last 5 days. Somewhere between day 2 and day 4 of the illness, the "whole body" symptoms begin to subside, and respiratory symptoms begin to increase. The virus can settle anywhere in the respiratory tract, producing symptoms of a cold, croup, sore throat, bronchitis, ear infection, and pneumonia. The most prominent of the respiratory symptoms is usually a dry, hacking cough. Most people also develop a sore throat and a headache. Nasal discharge and sneezing are not uncommon. These symptoms (except the cough) usually disappear within 4 - 7 days. Sometimes there is a second wave of fever at this time. The cough and tiredness usually lasts for weeks after the rest of the illness is over. Usually, doctors diagnose flu on the basis of whether flu is epidemic in the community and whether the patient's complaints fit the current pattern of symptoms. Doctors rarely use laboratory testing to identify the virus.

The classic symptoms of acute sinusitis are nasal congestion, greenish nasal discharge, facial or dental pain, eye pain, headache, and a nighttime cough. Some patients also complain of fever, malaise (feeling ill), bad breath, and a sore throat. It is usually preceded by a cold, which does not improve or worsens after 5 - 7 days of symptoms. Chronic sinusitis is subtler, and can be difficult to diagnose.
It manifests the symptoms listed above in a milder form, but usually persists for longer than 8 weeks. It is most common in patients with allergies. Cold and allergy descriptions created by the National Institutes of Health. Illustrations and additional text copyright A.D.A.M., Inc.
A new Israeli study shows that changing meal times may have a significant effect on the levels of triglycerides in the liver, which can lead to all types of metabolic diseases. The results of this Weizmann Institute of Science study, recently published in Cell Metabolism, have important implications for the potential treatment of liver diseases as well as broader concerns for most research areas in the life sciences. Anyone who has worked the night shift knows that a lot of snacking takes place in order to keep awake. That’s because many biological processes follow a set timetable, with levels of activity rising and dipping at certain times of the day. Such fluctuations, known as circadian rhythms, are driven by internal “body clocks” based on an approximately 24-hour period – synchronized to light-dark cycles and other cues in an organism’s environment. Disruption to this optimum timing system in both animal models and in humans can cause imbalances, leading to such diseases as obesity, metabolic syndrome and fatty liver. Postdoctoral fellow Yaarit Adamovich and the team in the lab of Dr. Gad Asher of the Weizmann Institute’s Biological Chemistry Department, together with scientists from Dr. Xianlin Han’s lab in the Sanford-Burnham Medical Research Institute in Orlando, studied the role of circadian rhythm in the accumulation of lipids in the liver in mice. They discovered that a certain group of lipids, namely the triglycerides (TAG), exhibit circadian behavior, with levels peaking about eight hours after sunrise. The scientists were astonished to find, however, that daily fluctuations in this group of lipids persist even in mice lacking a functional biological clock, albeit with levels cresting at a completely different time – 12 hours later than the natural schedule. “These results came as a complete surprise: One would expect that if the inherent clock mechanism is ‘dead,’ TAG could not accumulate in a time-dependent fashion,” says Adamovich. 
So what was making the fluctuating lipid levels “tick” if not the clocks? “One thing that came to mind was that, since food is a major source of lipids – particularly TAG – the eating habits of these mice might play a role,” she says. Usually, mice consume 20 percent of their food during the day and 80% at night. However, in mice lacking a functional clock, the team noted that they ingest food constantly throughout the day. This observation excluded the possibility that food is responsible for the fluctuating patterns seen in TAG levels in these mice. When the scientists proceeded to check the effect of an imposed feeding regimen upon wild type mice, however, they were in for another surprise: After they provided the same amount of food – but restricted 100% of the feeding to nighttime hours – the team observed a dramatic 50% decrease in overall liver TAG levels. “The striking outcome of restricted nighttime feeding – lowering liver TAG levels in the very short time period of 10 days in the mice – is of clinical importance,” says Asher. “Hyperlipidemia and hypertriglyceridemia are common diseases characterized by abnormally elevated levels of lipids in blood and liver cells, which lead to fatty liver and other metabolic diseases. Yet no currently available drugs have been shown to change lipid accumulation as efficiently and drastically as simply adjusting meal time – not to mention the possible side effects that may be associated with such drugs.”
In the image above, there's a picture of a cat on the left. On the right, can you tell whether it's a picture of the same cat, or a picture of a similar-looking dog? The difference between the two pictures is that the one on the right has been tweaked a bit by an algorithm to make it difficult for a type of computer model called a convolutional neural network (CNN) to be able to tell what it really is. In this case, the CNN thinks it's looking at a dog rather than a cat, but what's remarkable is that most people think the same thing. This is an example of what's called an adversarial image: an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Researchers at Google Brain decided to try and figure out whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside of our heads, by developing adversarial images capable of making both computers and humans think that they're looking at something they aren't.

What are Adversarial Images?

Visual classification algorithms powered by convolutional neural networks are commonly used to recognize objects in images. You train these algorithms to recognize something like a panda by showing them lots of different panda pictures, and letting the CNN compare the pictures to figure out what features they share with each other. Once the CNN (commonly called a "classifier") has identified enough panda-like features in its training pictures, it'll be able to reliably recognize pandas in new pictures that you show it. Humans recognize pandas in pictures by looking for abstract features: little black ears, big white heads, black eyes, fur, and so forth. The features that CNNs recognize aren't like this at all, and don't necessarily make any sense to humans, because we interpret the world much differently than a CNN does.
It's possible to leverage this to design "adversarial images," which are images that have been altered with a carefully calculated input of what looks to us like noise, such that the image looks almost the same to a human but totally different to a classifier, and the classifier makes a mistake when it tries to identify them. Here's a panda example: A CNN-based classifier is about 60 percent sure that the picture on the left is a panda. But if you slightly change ("perturb") the image by adding what looks to us like a bunch of random noise (highly exaggerated in the middle image), that same classifier becomes 99.3 percent sure that it's looking at a gibbon instead. The reason this kind of attack can be so successful with a nearly imperceptible change to the image is because it's targeted at one specific computer model, and likely won't fool other models that may have been trained differently. Adversarial images that can cause multiple different classifiers to make the same mistake need to be more robust—tiny changes that work for one model aren't going to cut it. "Robust" adversarial images tend to involve more than just slight tweaks to the structure of an image. In other words, if you want your adversarial images to be effective from different angles and distances, the tweaks need to be more significant, or as a human might say, more obvious. Here are two examples of robust adversarial images that make a little more sense to us humans: The image of the cat on the left, which models classify as a computer, is robust to geometric transformations. If you look closely, or maybe even not that closely, you can see the image has been perturbed by introducing some angles and boxyness that we'd recognize as a characteristic of computers. And the image of the banana on the right, which models classify as a toaster, is robust to different viewpoints. 
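The single-model panda-to-gibbon attack above closely resembles the well-known fast gradient sign method: compute the gradient of the model's score with respect to the input pixels, then nudge every pixel by a tiny epsilon in the direction of that gradient's sign. As a rough sketch of that core step — using a toy linear "classifier" with invented weights so the gradient has a closed form, rather than a real CNN — it might look like this:

```python
# Toy linear "classifier": score > 0 reads as "gibbon", score <= 0 as "panda".
# w stands in for the model's learned weights; x is a flattened "image".
# All numbers here are invented purely for illustration.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    # For a linear score the gradient w.r.t. the input is just w, so the
    # fast-gradient-sign step adds eps * sign(w_i) to each "pixel".
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -1.2, 0.8, -0.3]
x = [0.1, 0.4, -0.2, 0.9]        # score is negative: classified "panda"

adv = fgsm_perturb(w, x, eps=0.5)
# Each pixel moved by at most 0.5, yet the score flips positive: "gibbon".
```

For a real CNN the gradient comes from backpropagation instead of a closed form, but the shape of the attack is the same: a perturbation that is small per pixel, chosen pixel by pixel to push the classifier's score the wrong way.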
We humans recognize the banana immediately, of course, but the weird perturbation next to it definitely has some recognizable toaster-like features. When you generate a very robust adversarial image to be able to fool a whole bunch of different models, the adversarial image often starts to show the "development of human-meaningful features," as in the examples above. In other words, a single adversarial image that can fool one model might not look any different to a human, but by the time you come up with one image that can fool five or 10 models at the same time, your image is likely relying on visual features that a human has the potential to notice. By itself, this doesn't necessarily mean that a human is likely to think that a boxy image of a cat is really a computer, or that a banana with a weird graphic next to it is a toaster. What it suggests, though, is that it might be possible to target the development of an adversarial image at humans by choosing models that match the human visual system as closely as possible.

Fooling the Eye (and the Brain)

There are some similarities between deep convolutional neural networks and the human visual system, but in general, CNNs look at things more like computers than like humans. That is, when a CNN is presented with an image, it's looking at a static grid of rectangular pixels. Because of how our eyes work, humans see lots of detail within about five degrees of where we're looking, but outside of that area, the detail we can perceive drops off linearly. So, unlike with a CNN, it's not very useful to (say) adversarially blur the sides of an image for a human, because that's not something our eyes will detect. The researchers were able to model this by adding a "retinal layer" that modified the image fed into the CNN to simulate the characteristics of the human eye, with the goal of "[limiting] the CNN to information which is also available to the human visual system."
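The paper's retinal layer itself isn't reproduced here, but the idea — full detail near the point of fixation, progressively less in the periphery — can be sketched on a toy one-dimensional "image". The function name and the simple box-blur fall-off below are my own simplifications for illustration, not the authors' implementation:

```python
def retinal_layer(image, fovea_radius):
    """Crude stand-in for eccentricity-dependent acuity: pixels within
    fovea_radius of the center pass through unchanged, while peripheral
    pixels are replaced by a local 3-pixel average (a box blur)."""
    center = len(image) // 2
    out = []
    for i, px in enumerate(image):
        if abs(i - center) <= fovea_radius:
            out.append(px)  # "foveal" region: keep full detail
        else:
            lo, hi = max(0, i - 1), min(len(image), i + 2)
            out.append(sum(image[lo:hi]) / (hi - lo))  # periphery: blurred
    return out

# A high-frequency pattern: alternating bright and dark pixels.
image = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
filtered = retinal_layer(image, fovea_radius=1)
# The center survives intact, while the flicker near the edges is smoothed
# away, so an adversarial tweak hidden out there would never reach the model.
```

Feeding the CNN only this filtered view means any candidate perturbation has to survive roughly the same information loss that a human fixating on the image would impose.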
We should note that humans make up for this kind of thing by moving our eyes around a lot, but the researchers compensated for this in the way they ran their experiment, in order to keep the comparison between humans and CNNs useful. We'll get into that in a minute. Using this retinal layer was the extent of the human-specific tweaking that the researchers did to their machine learning models. To generate the adversarial images for the experiment, they tested candidate images across 10 different machine learning models, each of which would reliably misclassify an image of (say) a cat as actually being an image of (say) a dog. If all 10 of the models were fooled, that candidate moved on to the human experiment.

Does it Work?

The experiment involved three groups of images: pets (cats and dogs), vegetables (cabbages and broccoli), and "hazards" (spiders and snakes, although as a snake owner I take exception to the group name). For each group, a successful adversarial image was able to fool people into choosing the wrong member of the group, by identifying it as a dog when it's actually a cat, or vice versa. Subjects sat in front of a computer screen and were shown an image from a specific group for between 60 and 70 milliseconds, after which they could push one of two buttons to identify which image they thought they were looking at. The short amount of time that the image was shown mitigated the difference between how CNNs perceive the world and how humans do; the image at the top of this article, the researchers say, is unusual in that the effect persists.

The images shown to the subjects during the experiment had the potential to be an unmodified image, an adversarial image, an image where the perturbation layer was flipped upside down before being applied, or an image where the perturbation layer was applied to a different image entirely.
The last two cases made sure to control for the perturbation layer itself (does the structure of the perturbation layer make a difference as opposed to just whether or not it's there?) and to determine whether the perturbation can really fool people into choosing one thing over another, as opposed to just making them less accurate overall. Here's an example showing the percentage of people who could accurately identify an image of a dog, along with the perturbation layer that was used to alter the image. Remember, people only had between 60 ms and 70 ms to look at each image and make a decision: And here are the overall results: This graph shows the accuracy of choosing the correct image. If you chose cat and it's really an image of a cat, your accuracy is good. If you chose cat and it's really an image of a dog perturbed to look like a cat, your accuracy is bad. As you can see, people are significantly more likely to be accurate when identifying images that are unmodified, or images with flipped perturbation layers, than when identifying adversarial images. This suggests that adversarial image attacks can in fact transfer from CNNs to humans. While these attacks are effective, they're also more subtle than one might expect—no boxy cats or toaster graphics or anything of that sort. Since we can see the perturbation layers themselves and examine the images both before and after they've been futzed with, it's tempting to try and figure out exactly what is screwing us up. However, the researchers point out that "our adversarial examples are designed to fool human perception, so we should be careful using subjective human perception to understand how they work." 
They are willing to make some generalizations about a few different categories of modifications, including "disrupting object edges, especially by mid-frequency modulations perpendicular to the edge; enhancing edges both by increasing contrast and creating texture boundaries; modifying texture; and taking advantage of dark regions in the image, where the perceptual magnitude of small perturbations can be larger." You can see examples of these in the images below, with the red boxes highlighting where the effects are most visible.

What it Means

There's much, much more going on here than just a neat trick. The researchers were able to show that their technique is effective, but they're not entirely sure why, on a level that's so abstract, it's almost existential:

Our study raises fundamental questions how adversarial examples work, how CNN models work, and how the brain works. Do adversarial attacks transfer from CNNs to humans because the semantic representation in a CNN is similar to that in the human brain? Do they instead transfer because both the representation in the CNN and the human brain are similar to some inherent semantic representation which naturally corresponds to reality?

And if you really want your noodle baked, the researchers are happy to oblige, by pointing out how with "visual object recognition… it is difficult to define objectively correct answers. Is Figure 1 objectively a dog or is it objectively a cat but fools people into thinking it is a dog?" In other words, at what point does an adversarial image actually become the thing that it's trying to fool you into thinking that it is?

The scary thing here (and I do mean scary) is some of the ways in which it might be possible to leverage the fact that there's overlap between the perceptual manipulation of CNNs and the manipulation of humans.
It means that machine learning techniques could potentially be used to subtly alter things like pictures or videos in a way that could change our perception of (and reaction to) them without us ever realizing what was going on. From the paper: For instance, an ensemble of deep models might be trained on human ratings of face trustworthiness. It might then be possible to generate adversarial perturbations which enhance or reduce human impressions of trustworthiness, and those perturbed images might be used in news reports or political advertising. More speculative risks involve the possibility of crafting sensory stimuli that hack the brain in a more diverse set of ways, and with larger effect. As one example, many animals have been observed to be susceptible to supernormal stimuli. For instance, cuckoo chicks generate begging calls and an associated visual display that causes birds of other species to prefer to feed the cuckoo chick over their own offspring. Adversarial examples can be seen as a form of supernormal stimuli for neural networks. A worrying possibility is that supernormal stimuli designed to influence human behavior or emotions, rather than merely the perceived class label of an image, might also transfer from machines to humans. These techniques could also be used in positive ways, of course, and the researchers do suggest a few, like using image perturbations to "improve saliency, or attentiveness, when performing tasks like air traffic control or examination of radiology images, which are potentially tedious, but where the consequences of inattention are dire." Also, "user interface designers could use image perturbations to create more naturally intuitive designs." Hmm. That's great, but I'm much more worried about the whole hacking of how my brain perceives whether people are trustworthy or not, you know? 
Some of these questions could be addressed in future research—it may be possible to determine what exactly makes certain adversarial examples more likely to be transferable to humans, which might provide clues about how our brains work. And that, in turn, could help us understand and improve the neural networks that are being increasingly used to help computers learn faster and more effectively. But we'll have to be careful, and keep in mind that just like those computers, sometimes we're far too easy to fool. Adversarial Examples that Fool both Human and Computer Vision, by Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein, from Google Brain, is available on arXiv. And if you want to see more adversarial images used in the human experiments, the supplemental material is here.
Knots are intertwined loops of rope, cord, string or other flexible material used to fasten two such ropes to one another, or to another object. Knowing how to tie knots is a useful skill. Different knots are used for different purposes. Everyone should learn how to tie a few basic knots. This section gives an A to Z selection of some of the most common (and not so common) knots and their uses.
Friends are important; they support you, listen to you, laugh at your jokes and have fun with you. Online they share information with you, point you in the right direction to useful material and support you with their feedback. There is little doubt that both online and offline friends are important. Indeed, for many years doctors have known that those of us with plenty of good friends tend to be the healthiest; friendship boosts our positive biochemistry helping our immune systems and protecting us from disease. But your friends do more than this; they influence your thinking. Everyone you meet has some kind of influence upon you, but the people you have the most connection with are the ones who have the greatest power over you. New research on schoolchildren confirms that the attitude of those around us influences our feelings and our behaviour. The study from the University of Chicago showed that female teachers who believe that girls are no good at maths end up with girls in their class who – you guessed it – are not much cop at adding up. In other words they are passing on their anxiety to the children they teach, almost by a process of osmosis. This research confirms many previous studies which show that our thoughts and feelings are often not of our own making. They arise as a result of the thoughts and feelings of the social groups which we inhabit. The whole notion of “group thinking” is an interesting one – how, for instance, do groups of people all think the same thing at the same time? They do. We seem capable of transmitting thoughts between us using all sorts of behaviours. Online you can see this happening in places like Facebook groups. A thought, attitude or feeling takes hold and everyone in the group tends to think the same thing. It happens online with people who collectively support WordPress, for instance, all claiming that Blogger is garbage in comparison – and no amount of arguing can shift them. 
That’s because unless the entire group changes its attitude, individuals are less likely to alter their opinion. It all means that you are influenced heavily by the groups you get involved with online. Even in subtle ways they are affecting your thoughts, feelings, attitudes and online behaviour. This new research on schoolgirls shows us that we are open to influence not only in our thoughts but in the results of the way our thinking affects our behaviour. In other words, if you inhabit a group that suggests using a social network is tough, you will find it practically difficult. If we measured your knowledge and ability with social networks – and you have friends who tell you that social networking is difficult – then your results would be lower than people whose friends love social networking and say it is brilliant. In other words, your actual abilities in online technology are probably affected by the people who surround you. It’s the same with making money. The friends of millionaires tend to be millionaires. The friends of would-be millionaires are also millionaires. If you are poor and all your friends are poor – guess what? Yes, you remain poor. If you inhabit online networks that tell you blogging is a waste of time – guess what? You will find every excuse in the world not to do any blogging. Your online friends influence you in many ways. Make sure you choose the right ones.
<urn:uuid:13e2b790-8c2f-424e-8f48-d8123342fe93>
CC-MAIN-2018-47
https://www.grahamjones.co.uk/2010/blog/internet-psychology/choose-your-online-friends-with-care.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746398.20/warc/CC-MAIN-20181120130743-20181120152743-00327.warc.gz
en
0.964022
662
3.03125
3
The human microbiome is a key factor in the healthy function of the body and changes throughout our lives. It may seem a natural leap, then, that the state of the microbes in and on our body can help researchers predict chronological age. In particular, a recent study published in the journal mSystems found that skin microbes, more than those present in the gut and mouth, could serve as predictors of chronological age. They were accurate to within 3.8 years, compared with 4.5 years for oral microbes and 11.5 years for those in the gut.

While everyone’s skin ages differently, researchers discovered that predictable age-related changes that everyone experiences, such as loss of natural skin oils and increased dryness, can serve as useful predictors of age.

Microbes and the Aging Process

The importance of the study lies in the potential for the correlation between microbes and age to advance researchers’ knowledge of the role that microbes play in the development of age-related diseases. This could help in the development of therapeutic interventions for the microbiome. It could also lead to the development of non-invasive microbiome diagnostic tests that could help doctors determine a person’s risk of developing certain diseases.

Your diet can cause changes in your microbiome and thus change the organisms living on your skin. What researchers need to do now is determine what the microbes on your skin indicate in terms of age-related illness. If a person has a microbiome similar to someone of a drastically different age, then understanding the differences in their microbiome could indicate developing conditions on the horizon.

The great thing about using the microbiome as a way to measure not only age but a person’s overall health in the context of their age is that it can then be manipulated and tracked, to see how changing the microbiome affects a person’s measured age.
To determine the profile of microbes associated with age, the researchers worked with IBM to develop a predictive tool that uses machine learning to examine the data and compute a person’s age.

What Kind of Microbes Indicate Age on Skin?

A variety of factors influence the type of microbes that live on the skin, including sun exposure, temperature, moisture, oxygen levels and skin pH. Skin microbes gather around hair follicles. Microorganisms are categorized according to their relationship to their human carriers:

- Commensals — organisms that benefit from us without helping or harming us
- Symbionts — microorganisms and humans share a mutually beneficial relationship
- Pathogens — the microorganism benefits but causes disease to humans

The majority of microorganisms on our skin are commensals and typically do not cause illness.
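The study's actual machine-learning predictor is not public, but the basic idea, inferring age from how similar a microbial profile is to profiles of known age, can be sketched with a toy k-nearest-neighbours regressor. The taxa abundances and ages below are invented for illustration, not data from the study:

```python
# Hypothetical sketch: predicting age from microbial abundance profiles
# with a k-nearest-neighbours regressor. All training data is invented.
TRAIN = [
    # (abundances of [taxon_a, taxon_b, taxon_c], age in years)
    ([0.60, 0.30, 0.10], 25),
    ([0.55, 0.30, 0.15], 30),
    ([0.40, 0.35, 0.25], 45),
    ([0.30, 0.40, 0.30], 55),
    ([0.20, 0.45, 0.35], 65),
]

def predict_age(profile, k=2):
    """Average the ages of the k most similar training profiles."""
    def dist(a, b):
        # Euclidean distance between two abundance vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(TRAIN, key=lambda rec: dist(rec[0], profile))[:k]
    return sum(age for _, age in nearest) / k

if __name__ == "__main__":
    print(predict_age([0.58, 0.30, 0.12]))  # → 27.5
```

A real pipeline would use thousands of taxa and a regularized model, but the principle is the same: the prediction is only as good as how well the reference cohort covers profiles like yours.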
<urn:uuid:1e48fc5b-7233-4b0e-b6dd-29dd91d8108b>
CC-MAIN-2020-50
https://www.betteraging.com/aging-science/how-skin-microbes-can-predict-your-age/
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176256.21/warc/CC-MAIN-20201124111924-20201124141924-00088.warc.gz
en
0.93017
570
3.484375
3
SURFER (IMAGINARY exhibit)

On display at: IMAGINARY exhibitions and others
Topics: Algebraic geometry, surfaces
License: Apache 2.0 / CC-BY 4.0

SURFER is a software exhibit used and developed by IMAGINARY. The program draws algebraic surfaces in 3D in real time, from a polynomial given by the user as input. Example surfaces, parameters and colouring options allow interactive play with the surfaces.

The main and only window of the program shows the current algebraic surface, which one can drag to rotate and zoom. The lower part of the window has a small keyboard and a textbox with the polynomial defining the surface. With the on-screen keyboard, the program is ready to be used on touchscreens without a physical keyboard. A left menu offers some colouring options and a list of example surfaces. Each surface is accompanied by an explanatory text. The program is available in 15 languages.

Optionally, two buttons can be activated: one for saving the surface to a file, and another for printing the surface for the visitor to take away. These features can be used to organize competitions amongst visitors and schools for the most beautiful surfaces.

At exhibitions, SURFER is usually displayed on a big touch screen, often surrounded by large posters of algebraic surfaces that can be produced with the program. At free-visit exhibitions, visitors can discover the features of the program by exploring on their own, and the on-screen keyboard invites them to write and change the equations. However, this exhibit usually poses a difficulty barrier to casual visitors, so the presence of a mediator is highly advised. A mediator can present the main features of the program and give a general overview of what algebraic geometry is in about 10 minutes, for instance gathering 4-6 people at a time. See the Video Tutorial above for an example of such a presentation.
SURFER can also be used in longer workshops (up to 1 or 2 hours) at museums, schools or public places equipped with the necessary resources, namely a computer for each participant and a screen/projector for the mediator. In these workshops, the participants are guided through a tutorial and then left to explore the equations. The mediator proposes challenges of increasing difficulty, like "draw three spheres", "draw a snowman", etc. In these activities it is useful to use a shared text editor along with SURFER to prepare the equations before drawing them, and to share equations between the mediator and the participants.

Algebraic surfaces are sets of points (x,y,z) in space that satisfy a given polynomial equation p(x,y,z)=0. The extensive texts in the program provide enough information to get a general idea of the subject. Additionally, the Video Tutorial and the didactical guides on the SURFER website give enough resources for the training of mediators. Deeper insights into algebraic geometry will probably need more formal training in higher mathematics.

On the technical side, visualization of algebraic surfaces can be tricky. Most surfaces appearing in computer graphics are (ideally) smooth surfaces that can be approximated by a triangulation mesh with quite good accuracy. This mesh consists of a vast collection of flat triangles in space, and graphics cards are designed to render them rapidly on screen. However, many interesting algebraic surfaces contain mathematical singularities (points where the surface fails to be smooth), and the triangulation technique is not a good approach. A different technique in computer graphics is ray-tracing.
In this technique, one describes a scene abstractly (objects, lighting, textures, and a camera view), and the rendering consists of tracing, for each pixel of the final image, a light ray from the camera that hits the closest surface and reflects according to the laws of optics to reach the source of light; this determines the colour of the pixel. Ray-tracing is much more expensive in terms of computational power and is usually only used for still images, whereas triangulation can be used in real-time video such as videogames. SURFER is a ray-tracer optimized for rendering algebraic surfaces in real time.

History and museology

The first antecedent of SURFER is the program surf, released in 2000 by Stephan Endrass and others. This program is a script-driven ray-tracer designed for algebraic surfaces, written in C++. In 2006, Herwig Hauser designed and formulated a gallery of algebraic surfaces for the ICM 2006 in Madrid. That gallery was created using POV-Ray, a general-purpose ray-tracer which, unlike most other ray-tracers, admitted as a primitive shape an algebraic surface defined by the zero set of a polynomial. At that moment, no program was fast enough to render a surface live.

The IMAGINARY project was created on the occasion of the German Year of Mathematics 2008, and one of its core initiatives was to design a fast renderer of algebraic surfaces with a simple user interface to be used in an exhibition. First, the old surf program was ported to Java libraries, giving rise to jsurf. Then a complete user interface was designed, and the result was SURFER 2008. This was the version used in the German exhibitions during 2008, and it included several advanced features as well as a more simplified view. In 2010 the user interface was redesigned, including several more example surfaces (such as Hauser's designs and record-singularities surfaces).
Some features such as multiple surfaces and animation were removed, in order to have a much simpler interface for exhibitions. This is the current version of SURFER. In 2012, the National Museum of Mathematics and IMAGINARY made an agreement to design a new exhibit based on SURFER, Formula Morph, which is currently on display at MoMath. This exhibit has a user interface based on physical devices (wheels, buttons, levers) mounted on the exhibit structure that holds the screen.
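To give a flavour of what a ray-tracer for implicit surfaces must do, here is a minimal sketch (not SURFER's actual, heavily optimized algorithm): finding where a ray first crosses the zero set of p(x,y,z) by sampling along the ray for a sign change and refining the crossing with bisection.

```python
def p(x, y, z):
    """Example algebraic surface: the unit sphere x^2 + y^2 + z^2 - 1 = 0."""
    return x * x + y * y + z * z - 1.0

def ray_hit(origin, direction, surface, t_max=10.0, steps=1000, tol=1e-9):
    """Return the first t >= 0 where surface(origin + t*direction) = 0,
    found by sampling for a sign change and refining with bisection,
    or None if no sign change is seen. (Tangential roots, where the
    polynomial touches zero without changing sign, can be missed.)"""
    ox, oy, oz = origin
    dx, dy, dz = direction
    f = lambda t: surface(ox + t * dx, oy + t * dy, oz + t * dz)
    prev_t, prev_v = 0.0, f(0.0)
    for i in range(1, steps + 1):
        t = t_max * i / steps
        v = f(t)
        if prev_v * v <= 0.0:        # sign change: a root lies in [prev_t, t]
            lo, hi = prev_t, t
            while hi - lo > tol:      # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev_t, prev_v = t, v
    return None

if __name__ == "__main__":
    # A camera ray from (0, 0, -3) along +z meets the unit sphere at
    # z = -1, i.e. at parameter t = 2.
    print(round(ray_hit((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), p), 6))  # → 2.0
```

A production renderer repeats this for every pixel, which is why evaluating the polynomial quickly (and on the GPU) matters so much for real-time performance.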
<urn:uuid:1e260530-bc83-4822-a6c5-df96c565f8c9>
CC-MAIN-2019-04
https://www.mathcom.wiki/index.php?title=SURFER_(IMAGINARY_exhibit)
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584336901.97/warc/CC-MAIN-20190123172047-20190123194047-00269.warc.gz
en
0.942223
1,260
2.5625
3
We challenged people to create innovative tools, applications, and mash-ups using the data available through the World Bank’s Open Data Initiative. The World Bank launched its Open Data Initiative in April, 2010. This initiative made an array of data sets, including the World Development Indicators, Africa Development Indicators, and Millennium Development Goal Indicators – free for all. The Apps for Development Competition aims to bring together the best ideas from both the software developer and the development practitioner communities to create innovative apps using World Bank data. The Competition challenges participants to develop software applications related to one or more of the Millennium Development Goals (MDGs). Submissions may be any kind of software application, be it for the web, a personal computer, a mobile handheld device, console, SMS, or any software platform broadly available to the public. The only other requirement is that the proposed application use one or more datasets from the World Bank Data Catalog available at data.worldbank.org. Applications submitted to the Competition should address at least one of the following objectives: - Raise awareness of at least one of the Millennium Development Goals (MDGs), or - Contribute to progress toward meeting one of the MDGs by 2015. Applications which best satisfy the competition criteria will receive cash prizes and the opportunity to have their apps featured on the World Bank Open Data website. Competition participants are encouraged to also use other relevant indicators and datasets, and to be creative in exploring approaches for realizing the goals. About the Millennium Development Goals The MDGs represent a collective ambition for our world. Simply put, the MDGs express a vision of a world in which extreme poverty and hunger have been eliminated, and the economic and human welfare of poor people worldwide has been tangibly improved. 
The MDGs articulate specific targets to be reached by 2015 related to poverty and hunger, universal education, child health and other crucial dimensions. How to enter Interested participants must register for the contest on this webpage by creating an account between October 7, 2010, and January 10, 2011. Registrants will receive an email, which they must use to verify their account. Once registered, participants may enter their submissions via the Submit Application tab. In order to be considered, each submission must include: a link to the application, a video of the application, a text-based description of the application, and at least one still photograph of the working application. The Challenge is open to all individuals from member countries of the World Bank (see: worldbank.org) who have attained the age of majority in their individual nations at the time of their entries, as well as companies with fewer than fifty employees. Organizations employing fifty or more employees are eligible for the Large Organization Recognition award. In order to be considered for prizes, submissions must be original software applications solely owned by the entrant(s), which must use at least one of the World Bank datasets found at http://data.worldbank.org. We encourage you to view the Resources Page to find more information on the data and the MDG's. The text descriptions in the contest submission must accurately describe the functionality of the application, and all submitted materials must be in English, or include an English translation. Submissions must eschew indecency, defamation, violence, pornography, and obvious bad taste. All entrants will retain all intellectual property ownership in their submissions. All interested applicants must read the complete version of the Official Rules document. The rules listed here provide only a brief introduction and overview, and do not constitute a complete list of all requirements and restrictions that apply to this competition. 
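As a starting point for entrants, the indicator data at data.worldbank.org is also exposed through an API that can return JSON: a two-element array of paging metadata followed by a list of observations. The sketch below parses a response of that general shape; the sample payload is invented for illustration, and real responses carry more fields:

```python
import json

# A hypothetical sample in the general shape returned by the World Bank
# indicators API in JSON mode: [paging metadata, list of observations].
SAMPLE = json.loads("""
[
  {"page": 1, "pages": 1, "per_page": 50, "total": 2},
  [
    {"indicator": {"id": "SP.POP.TOTL", "value": "Population, total"},
     "country": {"id": "KE", "value": "Kenya"},
     "date": "2009", "value": 40000000},
    {"indicator": {"id": "SP.POP.TOTL", "value": "Population, total"},
     "country": {"id": "KE", "value": "Kenya"},
     "date": "2008", "value": null}
  ]
]
""")

def observations(payload):
    """Yield (country, year, value) triples, skipping missing values."""
    _meta, rows = payload
    for row in rows:
        if row["value"] is not None:
            yield row["country"]["value"], row["date"], row["value"]

if __name__ == "__main__":
    print(list(observations(SAMPLE)))  # → [('Kenya', '2009', 40000000)]
```

Skipping null values matters in practice: many indicator series have gaps, and an app that charts them should handle missing years gracefully.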
Judges

- Director, Development Data Group, World Bank
- Chief Economist, Africa Region, World Bank
- Vice President, Government Relations and Business Environment, Nokia Middle East and Africa
- Senior Fellow and Deputy Director, Global Economy and Development, Brookings Institution
- Chief Economist (a.i.) and General Manager, Research Department, Inter-American Development Bank
- Founder, Craigslist, Inc.
- Director, Engineering, Google
- Chief Executive Officer, Development Gateway

Judging Criteria

- Quality of the Idea: including creativity and originality
- Implementation of the Idea: including user experience, design, and performance
- Potential Impact on the Competition Objectives: which are 1) raising awareness of, or 2) making progress toward achieving at least one MDG
<urn:uuid:ba7a606d-67dd-4f57-90d6-57b54cea632b>
CC-MAIN-2017-30
https://appsfordevelopment.devpost.com/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425381.3/warc/CC-MAIN-20170725202416-20170725222416-00129.warc.gz
en
0.902367
875
2.53125
3
NAME
calloc, malloc, free, realloc - allocate and free dynamic memory

SYNOPSIS
#include <stdlib.h>

void *calloc(size_t nmemb, size_t size);
void *malloc(size_t size);
void free(void *ptr);
void *realloc(void *ptr, size_t size);

DESCRIPTION
calloc() allocates memory for an array of nmemb elements of size bytes each and returns a pointer to the allocated memory. The memory is set to zero.

malloc() allocates size bytes and returns a pointer to the allocated memory. The memory is not cleared.

free() frees the memory space pointed to by ptr, which must have been returned by a previous call to malloc(), calloc() or realloc(). Otherwise, or if free(ptr) has already been called before, undefined behaviour occurs. If ptr is NULL, no operation is performed.

realloc() changes the size of the memory block pointed to by ptr to size bytes. The contents will be unchanged to the minimum of the old and new sizes; newly allocated memory will be uninitialized. If ptr is NULL, the call is equivalent to malloc(size); if size is equal to zero, the call is equivalent to free(ptr). Unless ptr is NULL, it must have been returned by an earlier call to malloc(), calloc() or realloc().

RETURN VALUE
For calloc() and malloc(), the value returned is a pointer to the allocated memory, which is suitably aligned for any kind of variable, or NULL if the request fails.

free() returns no value.

realloc() returns a pointer to the newly allocated memory, which is suitably aligned for any kind of variable and may be different from ptr, or NULL if the request fails. If size was equal to 0, either NULL or a pointer suitable to be passed to free() is returned. If realloc() fails, the original block is left untouched - it is not freed or moved.

SEE ALSO
brk(2), posix_memalign(3)

NOTES
The Unix98 standard requires malloc(), calloc(), and realloc() to set errno to ENOMEM upon failure.
Glibc assumes that this is done (and the glibc versions of these routines do this); if you use a private malloc implementation that does not set errno, then certain library routines may fail without having a reason in errno.

Recent versions of Linux libc (later than 5.4.23) and GNU libc (2.x) include a malloc implementation which is tunable via environment variables. When MALLOC_CHECK_ is set, a special (less efficient) implementation is used which is designed to be tolerant against simple errors, such as double calls of free() with the same argument, or overruns of a single byte (off-by-one bugs). Not all such errors can be protected against, however, and memory leaks can result. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort() is called immediately. This can be useful because otherwise a crash may happen much later, and the true cause of the problem is then very hard to track down.

BUGS
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like
<urn:uuid:e8678b35-8c60-481e-bdfa-eb111c8dd083>
CC-MAIN-2014-52
http://glprogramming.com/funcref.php?func=Free&q=1
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772972.2/warc/CC-MAIN-20141217075252-00165-ip-10-231-17-201.ec2.internal.warc.gz
en
0.869185
825
2.546875
3
The use and implementation of computer-aided tomographic techniques in the late 1970s gave users access to an innovative technology that made significant contributions to medical applications. Soon after, tomographic techniques were adopted for industrial applications, enabling users to identify and locate internal failures without cutting open the industrial part. Computer-aided tomographic techniques, such as industrial computed tomography (CT), have revolutionized the way industry leaders qualify and validate industrial parts.

What is tomography?

Tomography is the process of developing a three-dimensional image of the internal features of a solid object. The term originates from the Greek "tomos", meaning section or slice. According to the Merriam-Webster dictionary, tomography is a method of producing a 3D image of the internal structures of a solid object by the observation and recording of the differences in the effect on the passage of waves of energy impinging on those structures. Commonly, tomography is used in combination with x-ray technology: 2D x-ray projections are reconstructed into tomographic images, which are used to develop a 3D model of the part's external and internal surfaces and structures.

How does tomography work?

Tomography captures images in sections, using some wave of energy, most commonly an x-ray source. For x-ray computed tomography, the x-ray source is positioned on the opposite side of the detector panel. The radiation is directed at the subject being scanned, which is placed between the x-ray source and the detector panel. The radiation passes through the subject toward the detector panel, and 2D cross-sectional slices are captured in pre-determined increments. These tomographic images are then reconstructed, using software, into a 3D model.

Types of tomography

Although computed tomography is one of the most common types of industrial tomography, there are many others.
Some of the most significant types of tomography include: - X-ray – Computed Tomography - Gamma rays – SPECT - Radio-frequency – MRI - Electron-positron annihilation – PET
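The slice-by-slice capture described above can be illustrated with a toy parallel-beam model: each projection sums the attenuation along one direction through a 2D slice, and an unfiltered backprojection smears the projections back across the grid to hint at where dense material sits. The grid values below are invented; real CT uses many angles and filtered backprojection:

```python
# A 2D "slice" of attenuation values, scanned from two angles (0 and 90
# degrees). Higher numbers mean denser material.
slice_ = [
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 5, 9, 0],
    [0, 0, 0, 0],
]

def project_rows(img):
    """Projection at 0 degrees: total attenuation along each row."""
    return [sum(row) for row in img]

def project_cols(img):
    """Projection at 90 degrees: total attenuation along each column."""
    return [sum(col) for col in zip(*img)]

def backproject(rows, cols):
    """Unfiltered backprojection: smear each projection back across the
    grid; the brightest cells hint at where dense material sits."""
    return [[r + c for c in cols] for r in rows]

if __name__ == "__main__":
    rows, cols = project_rows(slice_), project_cols(slice_)
    print(rows)  # → [0, 10, 14, 0]
    print(cols)  # → [0, 10, 14, 0]
    recon = backproject(rows, cols)
    print(recon[2][2])  # → 28, brightest where the densest cell sat
```

Two angles are far too few to recover the slice exactly; industrial CT scanners rotate the part (or the source) through hundreds of increments for the same reason.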
<urn:uuid:584d1ea9-696b-47bd-971e-431f29d68c5f>
CC-MAIN-2020-40
https://jgarantmc.com/tomography/
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209665.4/warc/CC-MAIN-20200923015227-20200923045227-00490.warc.gz
en
0.913192
438
3.765625
4
Cancer kills more people in the Harborough district than any other condition. Newly released data from Public Health England has revealed that, of 783 deaths registered, 239 people died from the disease in 2016, the most recent period for which data has been released.

The number represents 30.5 per cent of the deaths in Harborough, although the proportion is down from 32.1 per cent in 2011. It is also higher than the rate for England, where 28 per cent of deaths were caused by all cancers in 2016.

Helen Rippon, Chief Executive of Worldwide Cancer Research, reckons the lower mortality rate from cancer in the country is a consequence of better tests and treatments, but there is still work to be done. She said: “Some types of cancer have benefitted incredibly from research, with a person’s chance of survival pushing upwards of 90%. Others have not fared as well and survival rates are still as low as they were in 1970. Historically, less funding has been given to some types of cancer, which somewhat explains the discrepancies in survival rates.

“The proportion of deaths caused by cancer in the UK is slightly higher than seen in Europe as a whole, where cancer accounts for 20% of all deaths. To understand why some places may have higher or lower numbers of people dying from cancer you need to be able to take everything into account, including dietary, lifestyle and environmental factors.”

After cancer, circulatory diseases, like hypertension, were the second deadliest illness in Harborough, causing about 25.7 per cent of the deaths. Jacob West, Director of Healthcare Innovation at the British Heart Foundation, said the national trend is down to the advancements in treating conditions.
<urn:uuid:5aceb603-8988-43da-8e57-6f9f7e7a37df>
CC-MAIN-2019-04
https://www.harboroughmail.co.uk/news/cancer-is-biggest-killer-in-harborough-district-figures-reveal-1-8567321
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00077.warc.gz
en
0.97848
359
2.5625
3