Batis maritima, also known as saltwort, turtleweed, and beachwort, is also called pickleweed, barilla, planta de sal, camphire, herbe-à-crâbes, and akulikuli-kai. The many names of this plant come from the many languages of the areas where it is found (Lonard, Judd, & Stalter, 2011: 441). The conservation status of Batis maritima is G4, or apparently secure (USDA, 2013), but the species is invasive in Hawaii (Lonard et al., 2011: 443). In the United States, Batis maritima is found in Alabama, California, Florida, Georgia, Hawaii, Louisiana, Mississippi, North and South Carolina, Texas, Puerto Rico, and the Virgin Islands (USDA, 2013). Batis maritima is a low-lying shrub found growing in salt marshes and mangrove swamps (Lonard et al., 2011: 441; Francis, 2002). B. maritima can reach 1 meter in height and 5 cm in diameter and has succulent (fleshy, thick-tissued) leaves to retain water (Francis, 2002). The flowers are unisexual and are likely wind pollinated (Lonard et al., 2011: 445). Salt tolerance of B. maritima was investigated in Sonora, Mexico (Miyamoto et al., 1996: 142). When B. maritima was irrigated with water of increasing salinity (1-60 g/l), the plant was able to take up the water but not to reduce the amount of salt in the soil (Miyamoto et al., 1996: 157). For example, the mean salinity in the root zone was 2.7 and 3.2 g/l when irrigated with 1-2 g/l, and 68 and 48 g/l when treated with 40 g/l, in summer and spring, respectively (Miyamoto et al., 1996: 146). The presence of B. maritima can promote the colonization success of the black mangrove (Avicennia germinans) in denuded areas along the Gulf of Mexico coast of Florida (Milbrandt & Tinsley, 2006: 369). Seedling mortality of A. germinans was approximately 70% both when growing with B. maritima and where B. maritima root structure was intact but aboveground biomass had been removed.
By comparison, mortality was approximately 90% when seedlings were planted in open areas (Milbrandt & Tinsley, 2006: 374). At 1600 Eastern Daylight Time, the maximum temperature and temperature range were greater in the open mudflats (44 °C and 16 °C) than in B. maritima patches (35 °C and 6.5 °C) (Milbrandt & Tinsley, 2006: 375). Batis maritima was also found to contain methyl chloride transferase, an enzyme that catalyzes the transfer of a methyl group to the chloride ion, forming methyl chloride (Ni & Hager, 1998: 12866). Ni & Hager (1998: 12871) speculate that the enzyme is used to maintain chloride ion concentrations in the cytoplasm by releasing excess chloride as methyl chloride into the atmosphere rather than directly into the soil.

References:
Francis, J. 2002. Batis maritima L. U.S. Department of Agriculture; available at: http://www.fs.fed.us/global/iitf/pdf/shrubs/Batis%20maritima.pdf; accessed on Feb 10, 2013.
Lonard, R.I., Judd, F.W., & Stalter, R. 2011. The biological flora of coastal dunes and wetlands: Batis maritima C. Linnaeus. Journal of Coastal Research 27: 441-449. doi:10.2112/JCOASTRES-D-10-00142.1; accessed on Feb 10, 2013.
Milbrandt, E.C. & Tinsley, M.N. 2006. The role of saltwort (Batis maritima L.) in regeneration of degraded mangrove forests. Hydrobiologia 568: 369-377.
Miyamoto, S., Glenn, E.P., & Olsen, M.W. 1996. Growth, water use and salt uptake of four halophytes irrigated with highly saline water. Journal of Arid Environments 32: 141-159.
Ni, X. & Hager, L.P. 1998. cDNA cloning of Batis maritima methyl chloride transferase and purification of the enzyme. Proceedings of the National Academy of Sciences 95: 12866-12871.
USDA. 2013. United States Department of Agriculture and the Natural Resources Conservation Service: Batis maritima; available at: http://plants.usda.gov/java/ClassificationServlet?source=profile&symbol=BAMA5&display=63; accessed on April 18, 2013.
Source: http://eol.org/pages/594848/overview
<programming> (COCOMO) A method for estimating the cost of a software package, proposed by Dr Barry Boehm. There are several variants:

The Basic COCOMO Model estimates the effort required to develop software in one of three modes of development (Organic Mode, Semidetached Mode, or Embedded Mode) using only DSIs (delivered source instructions) as input. The Basic model is good for quick, early, rough order-of-magnitude estimates.

The Intermediate COCOMO Model is an extension of the Basic COCOMO Model. The Intermediate model uses an Effort Adjustment Factor (EAF) and slightly different coefficients for the effort equation than the Basic model. It produces better results than the Basic model because the user supplies settings for cost drivers that determine the effort and duration of software projects. The Intermediate model also allows a system to be divided and estimated in components: DSI values and cost drivers can be chosen for individual components instead of for the system as a whole.

The Detailed COCOMO Model differs from the Intermediate model in that it uses effort multipliers for each phase of the project. These phase-dependent effort multipliers yield better estimates because the cost driver ratings may differ during each phase. The Detailed model also provides a three-level product hierarchy and some other capabilities, such as a procedure for adjusting the phase distribution of the development schedule.

["Software Engineering Economics", B. Boehm, Prentice-Hall, 1981].
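As an illustration, the Basic model reduces to two equations: Effort = a * KDSI^b (person-months) and Schedule = c * Effort^d (months), with coefficients fixed per development mode. The sketch below uses Boehm's published Basic-model coefficients; the function name and structure are our own illustration, not part of this glossary entry:

```python
# Basic COCOMO estimate (Boehm, "Software Engineering Economics", 1981).
# Coefficients are the standard published Basic-model values per mode.
COEFFS = {
    # mode: (a, b, c, d) for Effort = a * KDSI**b and Schedule = c * Effort**d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in months) for a project
    of `kdsi` thousand delivered source instructions."""
    a, b, c, d = COEFFS[mode]
    effort = a * kdsi ** b        # person-months
    schedule = c * effort ** d    # elapsed calendar months
    return effort, schedule

effort, months = basic_cocomo(32, "organic")  # a 32-KDSI organic-mode project
print(f"{effort:.0f} person-months over {months:.1f} months")
```

For 32 KDSI in organic mode this yields roughly 91 person-months over about 14 months; the Intermediate model would multiply the effort by the product of the cost-driver ratings (the EAF) before computing the schedule.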
Source: http://foldoc.org/COCOMO
Bad breath, or halitosis, can be a major problem, especially when you're about to snuggle with your sweetie or whisper a joke to your friend. The good news is that bad breath can often be prevented with some simple steps. Bad breath is caused by odor-producing bacteria that grow in the mouth. When you don't brush and floss regularly, bacteria accumulate on the bits of food left in your mouth and between your teeth. The sulfur compounds released by these bacteria make your breath smell. Certain foods, especially ones like garlic and onions that contain pungent oils, can contribute to bad breath because the oils are carried to your lungs and out through your mouth. Smoking is also a major cause of bad breath. There are lots of myths about taking care of bad breath. Here are three things you may have heard about bad breath that are not true: Myth #1: Mouthwash will make bad breath go away. Mouthwash only gets rid of bad breath temporarily. If you do use mouthwash, look for an antiseptic (kills the germs that cause bad breath) and plaque-reducing one with a seal from the American Dental Association (ADA). When you're deciding which dental products to toss into your shopping cart, it's always a good idea to look for those that are accepted by the ADA. Also, ask your dentist for recommendations. Myth #2: As long as you brush your teeth, you shouldn't have bad breath. The truth is that most people only brush their teeth for 30 to 45 seconds, which just doesn't cut it. To sufficiently clean all the surfaces of your teeth, you should brush for at least 2 minutes at least twice a day. Remember to brush your tongue, too — bacteria love to hang out there. It's equally important to floss because brushing alone won't remove harmful plaque and food particles that become stuck between your teeth and gums. Myth #3: If you breathe into your hand, you'll know when you have bad breath. Wrong! When you breathe, you don't use your throat the same way you do when you talk. 
When you talk, you tend to bring out the odors from the back of your mouth (where bad breath originates), which simply breathing doesn't do. Also, because we tend to get used to our own smells, it's hard for a person to tell if he or she has bad breath. If you're concerned about bad breath, make sure you're taking care of your teeth and mouth properly. Some sugar-free gums and mints can temporarily mask odors, too. If you brush and floss properly and visit your dentist for regular cleanings, but your bad breath persists, you may have a medical problem like sinusitis or gum disease. Call your doctor or dentist if you suspect a problem. They can figure out if something else is behind your bad breath and help you take care of it.
Source: http://kidshealth.org/PageManager.jsp?dn=BannerHealth&lic=160&cat_id=20217&article_set=21951&ps=204
Posted at 2:39 PM on January 19, 2010 by Sanden Totten

This month scientists announced some new findings on what may be the world's first "plant-imal," an animal that works like a plant. The species in question is Elysia chlorotica, a green sea slug found in the Northeastern U.S. and parts of Canada. It has the uncanny ability to steal genes from the plants it eats and use their chloroplasts to start turning sunlight into energy. This amazing little creature inspired In The Loop's own Sanden Totten to pick up a guitar and sing this song. (This tune was part of ITL's latest podcast.)

Photo via PNAS
Source: http://minnesota.publicradio.org/collections/special/columns/loophole/archive/2010/01/19/
The UNFCCC secretariat's preliminary assessment (15 December 2009) of nations' pledges and voluntary commitments to reduce carbon emissions finds that a significant gap remains, one that would lead to temperature increases of 3 degrees Celsius.

Executive summary: This preliminary assessment by the United Nations Framework Convention on Climate Change, written 15 December 2009 during the Copenhagen Climate Summit, reviews the pledges made by Annex I countries and the voluntary commitments made by Annex II countries to reduce carbon emissions. The draft states that while current commitments would, if implemented successfully, lead to a reduction in atmospheric carbon, a significant gap still remains between those commitments and the level required by science, meaning that emissions would peak after 2020, ensuring an increase in average global temperatures of at least 3 degrees Celsius. This would mean failing to stay within the agreed 2 degree Celsius limit, and would lead to dangerous, runaway climate change.

Number of pages: 8
Source: http://www.greenpeace.org/international/en/publications/reports/unfccc-secretariat-pledges-ass/
Allergies are a short-term inflammation of the mucous membranes that line the nasal passages. "Hay fever," as the condition is commonly called, is caused by airborne pollens from trees, grasses, flowers, and weeds.

Allergy season typically kicks off in the spring and fall when certain trees or grasses pollinate. When pollen season starts and how long it lasts varies throughout the country. In southern states, trees can start pollinating as early as late February and grass can start by the end of April, while in midwestern states allergies may not flare up until May. Another round of allergies may begin in late summer or early fall when ragweed is the culprit. In western states, grass pollinates for a longer period of time, and certain weeds exist that can keep allergies blooming into the fall.

Allergies caused by pollen and other allergens affect 40 million Americans and cost more than $1 billion in annual treatment costs. Although it's usually not a dangerous condition, it can be very uncomfortable and, for some people, can severely disrupt daily activities. The standard reactions include sneezing, itchy throat, headache, swollen sinuses, runny nose, and itchy, watery eyes.

In allergies, airborne pollen from various seasonal plants, or, in some cases, spores from mold, enter the body through the eyes, nose, or throat, and trigger an allergic reaction. Normally, the immune system does not respond to mild substances like pollen and mold. But in sensitive individuals, the body's defense mechanism views these allergens as it would an infectious agent and mounts an attack. Once the immune system has detected the "invader," it unleashes a cascade of chemicals such as histamine and other compounds, resulting in localized inflammation that leads to irritation and discomfort. The symptoms of allergic reaction begin 5 to 10 minutes after allergen exposure, subside within an hour, and may return two to four hours later.
Source: http://www.pdrhealth.com/diseases/seasonal-allergies
Biblical Commentary on the Old Testament, by Carl Friedrich Keil and Franz Delitzsch, [1857-78], at sacred-texts.com

Increase in the Number of the Israelites - Their Bondage in Egypt - Exodus 1

The promise which God gave to Jacob on his departure from Canaan (Gen 46:3) was perfectly fulfilled. The children of Israel settled down in the most fruitful province of the fertile land of Egypt, and grew there into a great nation (Exo 1:1-7). But the words which the Lord had spoken to Abram (Gen 15:13) were also fulfilled in relation to his seed in Egypt. The children of Israel were oppressed in a strange land, were compelled to serve the Egyptians (Exo 1:8-14), and were in great danger of being entirely crushed by them (Exo 1:15-22). To place the multiplication of the children of Israel into a strong nation in its true light, as the commencement of the realization of the promises of God, the number of the souls that went down with Jacob to Egypt is repeated from Gen 46:27 (on the number 70, in which Jacob is included, see the notes on this passage); and the repetition of the names of the twelve sons of Jacob serves to give to the history which follows a character of completeness within itself. "With Jacob they came, every one and his house," i.e., his sons, together with their families, their wives, and their children. The sons are arranged according to their mothers, as in Gen 35:23-26, and the sons of the two maid-servants stand last. Joseph, indeed, is not placed in the list, but brought into special prominence by the words, "for Joseph was in Egypt" (Exo 1:5), since he did not go down to Egypt along with the house of Jacob, and occupied an exalted position in relation to them there. After the death of Joseph and his brethren and the whole of the family that had first immigrated, there occurred that miraculous increase in the number of the children of Israel, by which the blessings of creation and promise were fully realised.
The words פּרוּ ישׁרצוּ (swarmed), and ירבּוּ point back to Gen 1:28 and Gen 8:17, and יעצמוּ to עצוּם גּוי in Gen 18:18. "The land was filled with them," i.e., the land of Egypt, particularly Goshen, where they were settled (Gen 47:11). The extraordinary fruitfulness of Egypt in both men and cattle is attested not only by ancient writers, but by modern travellers also (vid., Aristotelis hist. animal. vii. 4, 5; Columella de re rust. iii. 8; Plin. hist. n. vii. 3; also Rosenmüller a. und n. Morgenland i. p. 252). This blessing of nature was heightened still further in the case of the Israelites by the grace of the promise, so that the increase became extraordinarily great (see the comm. on Exo 12:37). The promised blessing was manifested chiefly in the fact, that all the measures adopted by the cunning of Pharaoh to weaken and diminish the Israelites, instead of checking, served rather to promote their continuous increase.

"There arose a new king over Egypt, who knew not Joseph." ויּקם signifies he came to the throne, קוּם denoting his appearance in history, as in Deu 34:10. A "new king" (lxx: βασιλεὺς ἕτερος; the other ancient versions, rex novus) is a king who follows different principles of government from his predecessors. Cf. חדשׁים אלהים, "new gods," in distinction from the God that their fathers had worshipped, Jdg 5:8; Deu 32:17. That this king belonged to a new dynasty, as the majority of commentators follow Josephus (Note: Ant. ii. 9, 1. Τῆς βασιλείας εἰς ἄλλον οἶκον μεταληλυθυίας.) in assuming, cannot be inferred with certainty from the predicate new; but it is very probable, as furnishing the readiest explanation of the change in the principles of government. The question itself, however, is of no direct importance in relation to theology, though it has considerable interest in connection with Egyptological researches.
(Note: The want of trustworthy accounts of the history of ancient Egypt and its rulers precludes the possibility of bringing this question to a decision. It is true that attempts have been made to mix it up in various ways with the statements which Josephus has transmitted from Manetho with regard to the rule of the Hyksos in Egypt (c. Ap. i. 14 and 26), and the rising up of the "new king" has been identified sometimes with the commencement of the Hyksos rule, and at other times with the return of the native dynasty on the expulsion of the Hyksos. But just as the accounts of the ancients with regard to the Hyksos bear throughout the stamp of very distorted legends and exaggerations, so the attempts of modern inquirers to clear up the confusion of these legends, and to bring out the historical truth that lies at the foundation of them all, have led to nothing but confused and contradictory hypotheses; so that the greatest Egyptologists of our own days, - viz., Lepsius, Bunsen, and Brugsch - differ throughout, and are even diametrically opposed to one another in their views respecting the dynasties of Egypt. Not a single trace of the Hyksos dynasty is to be found either in or upon the ancient monuments. The documental proofs of the existence of a dynasty of foreign kings, which the Vicomte de Rougé thought that he had discovered in the Papyrus Sallier No. 1 of the British Museum, and which Brugsch pronounced "an Egyptian document concerning the Hyksos period," have since then been declared untenable both by Brugsch and Lepsius, and therefore given up again. Neither Herodotus nor Diodorus Siculus heard anything at all about the Hyksos, though the former made very minute inquiry of the Egyptian priests of Memphis and Heliopolis.
And lastly, the notices of Egypt and its kings, which we meet with in Genesis and Exodus, do not contain the slightest intimation that there were foreign kings ruling there either in Joseph's or Moses' days, or that the genuine Egyptian spirit which pervades these notices was nothing more than the "outward adoption" of Egyptian customs and modes of thought. If we add to this the unquestionably legendary character of the Manetho accounts, there is always the greatest probability in the views of those inquirers who regard the two accounts given by Manetho concerning the Hyksos as two different forms of one and the same legend, and the historical fact upon which this legend was founded as being the 430 years' sojourn of the Israelites, which had been thoroughly distorted in the national interests of Egypt. - For a further expansion and defence of this view see Hävernick's Einleitung in d. A. T. i. 2, pp. 338ff., Ed. 2 (Introduction to the Pentateuch, pp. 235ff. English translation).)

The new king did not acknowledge Joseph, i.e., his great merits in relation to Egypt. לא ידע signifies here, not to perceive, or acknowledge, in the sense of not wanting to know anything about him, as in Sa1 2:12, etc. In the natural course of things, the merits of Joseph might very well have been forgotten long before; for the multiplication of the Israelites into a numerous people, which had taken place in the meantime, is a sufficient proof that a very long time had elapsed since Joseph's death. At the same time such forgetfulness does not usually take place all at once, unless the account handed down has been intentionally obscured or suppressed. If the new king, therefore, did not know Joseph, the reason must simply have been, that he did not trouble himself about the past, and did not want to know anything about the measures of his predecessors and the events of their reigns.
The passage is correctly paraphrased by Jonathan thus: non agnovit (חכּים) Josephum nec ambulavit in statutis ejus. Forgetfulness of Joseph brought the favour shown to the Israelites by the kings of Egypt to a close. As they still continued foreigners both in religion and customs, their rapid increase excited distrust in the mind of the king, and induced him to take steps for staying their increase and reducing their strength. The statement that "the people of the children of Israel" (עם בּני ישׂראל, lit., "nation, viz., the sons of Israel;" for עם with the dist. accent is not the construct state, and בני ישראל is in apposition, cf. Ges. 113) were "more and mightier" than the Egyptians, is no doubt an exaggeration. "Let us deal wisely with them," i.e., act craftily towards them. התחכּם, sapientem se gessit (Ecc 7:16), is used here of political craftiness, or worldly wisdom combined with craft and cunning (κατασοφισώμεθα, lxx), and therefore is altered into התנכּל in Psa 105:25 (cf. Gen 37:18). The reason assigned by the king for the measures he was about to propose, was the fear that in case of war the Israelites might make common cause with his enemies, and then remove from Egypt. It was not the conquest of his kingdom that he was afraid of, but alliance with his enemies and emigration. עלה is used here, as in Gen 13:1, etc., to denote removal from Egypt to Canaan. He was acquainted with the home of the Israelites therefore, and cannot have been entirely ignorant of the circumstances of their settlement in Egypt. But he regarded them as his subjects, and was unwilling that they should leave the country, and therefore was anxious to prevent the possibility of their emancipating themselves in the event of war. - In the form תּקראנה for תּקרינה, according to the frequent interchange of the forms הל and אל (vid., Gen 42:4), נה is transferred from the feminine plural to the singular, to distinguish the 3rd pers. fem.
from the 2nd pers., as in Jdg 5:26; Job 17:16 (vid., Ewald, 191c, and Ges. 47, 3, Anm. 3). Consequently there is no necessity either to understand מלחמה collectively as signifying soldiers, or to regard תּקראנוּ, the reading adopted by the lxx (συμβῆ ἡμῖν), the Samaritan, Chaldee, Syriac, and Vulgate, as "certainly the original," as Knobel has done.

The first measure adopted (Exo 1:11) consisted in the appointment of taskmasters over the Israelites, to bend them down by hard labour. שׂרי מסּים bailiffs over the serfs. מסּים from מס signifies, not feudal service, but feudal labourers, serfs (see my Commentary on Kg1 4:6). ענּה to bend, to wear out any one's strength (Psa 102:24). By hard feudal labour (סבלות burdens, burdensome toil) Pharaoh hoped, according to the ordinary maxims of tyrants (Aristot. polit., 5, 9; Liv. hist. i. 56, 59), to break down the physical strength of Israel and lessen its increase - since a population always grows more slowly under oppression than in the midst of prosperous circumstances - and also to crush their spirit so as to banish the very wish for liberty. - ויּבן, and so Israel built (was compelled to build) provision or magazine cities (vid., Ch2 32:28, cities for the storing of the harvest), in which the produce of the land was housed, partly for purposes of trade, and partly for provisioning the army in time of war; - not fortresses, πόλεις ὀχυραί, as the lxx have rendered it. Pithom was Πάτουμος; it was situated, according to Herodotus (2, 158), upon the canal which commenced above Bubastis and connected the Nile with the Red Sea. This city is called Thou or Thoum in the Itiner. Anton., the Egyptian article pi being dropped, and according to Jomard (descript. t. 9, p. 368) is to be sought for on the site of the modern Abassieh in the Wady Tumilat. - Raemses (cf. Gen 47:11) was the ancient Heroopolis, and is not to be looked for on the site of the modern Belbeis.
In support of the latter supposition, Stickel, who agrees with Kurtz and Knobel, adduces chiefly the statement of the Egyptian geographer Makrizi, that in the (Jews') book of the law Belbeis is called the land of Goshen, in which Jacob dwelt when he came to his son Joseph, and that the capital of the province was el Sharkiyeh. This place is a day's journey (or, as others affirm, 14 hours) to the north-east of Cairo on the Syrian and Egyptian road. It served as a meeting-place in the middle ages for the caravans from Egypt to Syria and Arabia (Ritter, Erdkunde 14, p. 59). It is said to have been in existence before the Mohammedan conquest of Egypt. But the clue cannot be traced any farther back; and it is too far from the Red Sea for the Raemses of the Bible (vid., Exo 12:37). The authority of Makrizi is quite counterbalanced by the much older statement of the Septuagint, in which Jacob is made to meet his son Joseph in Heroopolis; the words of Gen 46:29, "and Joseph went up to meet Israel his father to Goshen," being rendered thus: εἰς συνάντησιν Ἰσραὴλ τῷ πατρὶ αὐτοῦ καθ' Ἡρώων πόλιν. Hengstenberg is not correct in saying that the later name Heroopolis is here substituted for the older name Raemses; and Gesenius, Kurtz, and Knobel are equally wrong in affirming that καθ' Ἡρώων πόλιν is supplied ex ingenio suo; but the place of meeting, which is given indefinitely as Goshen in the original, is here distinctly named. Now if this more precise definition is not an arbitrary conjecture of the Alexandrian translators, but sprang out of their acquaintance with the country, and is really correct, as Kurtz has no doubt, it follows that Heroopolis belongs to the γῆ Ῥαμεσσῆ (Gen 46:28, lxx), or was situated within it. But this district formed the centre of the Israelitish settlement in Goshen; for according to Gen 47:11, Joseph gave his father and brethren "a possession in the best of the land, in the land of Raemses."
Following this passage, the lxx have also rendered גּשׁן ארצה in Gen 46:28 by εἰς γῆν Ῥαμεσσῆ, whereas in other places the land of Goshen is simply called γῆ Γεσέμ (Gen 45:10; Gen 46:34; Gen 47:1, etc.). But if Heroopolis belonged to the γῆ Ῥαμεσσῆ, or the province of Raemses, which formed the centre of the land of Goshen that was assigned to the Israelites, this city must have stood in the immediate neighbourhood of Raemses, or have been identical with it. Now, since the researches of the scientific men attached to the great French expedition, it has been generally admitted that Heroopolis occupied the site of the modern Abu Keisheib in the Wady Tumilat, between Thoum = Pithom and the Birket Temsah or Crocodile Lake; and according to the Itiner. p. 170, it was only 24 Roman miles to the east of Pithom, - a position that was admirably adapted not only for a magazine, but also for the gathering-place of Israel prior to their departure (Exo 12:37). But Pharaoh's first plan did not accomplish his purpose (Exo 1:12). The multiplication of Israel went on just in proportion to the amount of the oppression (כּן = כּאשׁר prout, ita; פּרץ as in Gen 30:30; Gen 28:14), so that the Egyptians were dismayed at the Israelites (קוּץ to feel dismay, or fear, Num 22:3). In this increase of their numbers, which surpassed all expectation, there was the manifestation of a higher, supernatural, and to them awful power. But instead of bowing before it, they still endeavoured to enslave Israel through hard servile labour. In Exo 1:13, Exo 1:14 we have not an account of any fresh oppression; but "the crushing by hard labour" is represented as enslaving the Israelites and embittering their lives. פּרך hard oppression, from the Chaldee פּרך to break or crush in pieces. 
"They embittered their life with hard labour in clay and bricks (making clay into bricks, and working with the bricks when made), and in all kinds of labour in the field (this was very severe in Egypt on account of the laborious process by which the ground was watered, Deu 11:10), כּל־עבדתם את with regard to all their labour, which they worked (i.e., performed) through them (viz., the Israelites) with severe oppression." כל־ע את is also dependent upon ימררו, as a second accusative (Ewald, 277d). Bricks of clay were the building materials most commonly used in Egypt. The employment of foreigners in this kind of labour is to be seen represented in a painting, discovered in the ruins of Thebes, and given in the Egyptological works of Rosellini and Wilkinson, in which workmen who are evidently not Egyptians are occupied in making bricks, whilst two Egyptians with sticks are standing as overlookers; - even if the labourers are not intended for the Israelites, as the Jewish physiognomies would lead us to suppose. (For fuller details, see Hengstenberg's Egypt and the Books of Moses, p. 80ff. English translation). As the first plan miscarried, the king proceeded to try a second, and that a bloody act of cruel despotism. He commanded the midwives to destroy the male children in the birth and to leave only the girls alive. The midwives named in Exo 1:15, who are not Egyptian but Hebrew women, were no doubt the heads of the whole profession, and were expected to communicate their instructions to their associates. ויּאמר in Exo 1:16 resumes the address introduced by ויאמר in Exo 1:15. The expression על־האבנים, of which such various renderings have been given, is used in Jer 18:3 to denote the revolving table of a potter, i.e., the two round discs between which a potter forms his earthenware vessels by turning, and appears to be transferred here to the vagina out of which the child twists itself, as it were like the vessel about to be formed out of the potter's discs. 
Knobel has at length decided in favour of this explanation, at which the Targumists hint with their מתברא. When the midwives were called in to assist at a birth, they were to look carefully at the vagina; and if the child were a boy, they were to destroy it as it came out of the womb. וחיה for חייה, from חיי; see Gen 3:22. The ו takes kametz before the major pause, as in Gen 44:9 (cf. Ewald, 243a). But the midwives feared God (ha-Elohim, the personal, true God), and did not execute the king's command. When questioned upon the matter, the explanation which they gave was, that the Hebrew women were not like the delicate women of Egypt, but were חיות "vigorous" (had much vital energy: Abenezra), so that they gave birth to their children before the midwives arrived. They succeeded in deceiving the king with this reply, as childbirth is remarkably rapid and easy in the case of Arabian women (see Burckhardt, Beduinen, p. 78; Tischendorf, Reise i. p. 108). God rewarded them for their conduct, and "made them houses," i.e., gave them families and preserved their posterity. In this sense to "make a house" in Sa2 7:11 is interchanged with to "build a house" in Sa2 7:27 (vid., Rut 4:11). להם for להן as in Gen 31:9, etc. Through not carrying out the ruthless command of the king, they had helped to build up the families of Israel, and their own families were therefore built up by God. Thus God rewarded them, "not, however, because they lied, but because they were merciful to the people of God; it was not their falsehood therefore that was rewarded, but their kindness (more correctly, their fear of God), their benignity of mind, not the wickedness of their lying; and for the sake of what was good, God forgave what was evil." (Augustine, contra mendac. c. 19.)

The failure of his second plan drove the king to acts of open violence. He issued commands to all his subjects to throw every Hebrew boy that was born into the river (i.e., the Nile).
The fact, that this command, if carried out, would necessarily have resulted in the extermination of Israel, did not in the least concern the tyrant; and this cannot be adduced as forming any objection to the historical credibility of the narrative, since other cruelties of a similar kind are to be found recorded in the history of the world. Clericus has cited the conduct of the Spartans towards the helots. Nor can the numbers of the Israelites at the time of the exodus be adduced as a proof that no such murderous command can ever have been issued; for nothing more can be inferred from this, than that the command was neither fully executed nor long regarded, as the Egyptians were not all so hostile to the Israelites as to be very zealous in carrying it out, and the Israelites would certainly neglect no means of preventing its execution. Even Pharaoh's obstinate refusal to let the people go, though it certainly is inconsistent with the intention to destroy them, cannot shake the truth of the narrative, but may be accounted for on psychological grounds, from the very nature of pride and tyranny which often act in the most reckless manner without at all regarding the consequences, or on historical grounds, from the supposition not only that the king who refused the permission to depart was a different man from the one who issued the murderous edicts (cf. Exo 2:23), but that when the oppression had continued for some time the Egyptian government generally discovered the advantage they derived from the slave labour of the Israelites, and hoped through a continuance of that oppression so to crush and break their spirits, as to remove all ground for fearing either rebellion, or alliance with their foes.
<urn:uuid:d7871281-a478-443b-81ad-0124e001141c>
CC-MAIN-2013-20
http://www.sacred-texts.com/bib/cmt/kad/exo001.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00041-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96715
5,384
2.6875
3
Over 8,000 websites created by students around the world who have participated in a ThinkQuest Competition. The goal of the Reyes-Andres Network is to provide all Internet users visiting the website an opportunity to learn more about the background of computers. The Reyes-Andres Network also features tutorials and free programs for graphing calculators for use in mathematics and science classes. In addition, submitted as an Interdisciplinary website to the ThinkQuest Internet Challenge, the Reyes-Andres Network includes original poetry and downloadable electronic books which visitors may view for both educational and enjoyable purposes.
Team: Eugene (Tallwood High School, Virginia Beach, VA, United States); Howard (Tallwood High School, Virginia Beach, VA, United States); Josephine Reyes (Tallwood High School, Virginia Beach, VA, United States). Age category: 19 & under.
Categories: Computers & the Internet > Programming; Social Sciences & Culture > Languages & Language Arts > Writing
Aussie Researchers See Hope in Goji
The goji berry trend might be more than a trend. Faculty of Pharmacy researchers in Australia conducted in vitro tests to evaluate the potential of the Tibetan goji berry for diabetics with vision problems. According to AFN, University of Sydney Professor of Pharmaceutical Chemistry Basil Roufogalis found that the taurine naturally present in the goji berry has antioxidant properties that may offer some protection for the retina. The study looked at goji as a possible source of aid for diabetics with retinopathy, a complication of diabetes that can result in blindness. Roufogalis explained that "blood vessels build up in the retina and grow over the vision spot, which can result in vision loss." The research reportedly found that the Tibetan goji berry provided some layer of protection that could combat cell death resulting from too much glucose in the retina. The hope is that one day there will be enough data to warrant a human clinical trial.
PC Energy Conservation Tips - Do not run animated screen savers. These programs use additional power and often prevent a computer from going into "sleep" mode, which saves about 30% of the monitor’s energy consumption. Screen savers protect only the phosphorescent coating on the monitor (not usually a problem with color monitors). - Better yet, turn off your monitor whenever you won’t be using your computer for more than 15 minutes (such as when you’re at a meeting, at lunch, or at home). Shutting off the monitor will NOT prevent your computer from operating to check for viruses, perform backups or receive software upgrades, some of which are now commonly run overnight or on weekends. When you return to your computer, just press the monitor's power button and in a few seconds you’re right where you left off. - Keep your printer (and other peripherals) turned off until you need them. Most people do not print constantly all day long, but they leave their printer turned on all the time. - In dorm rooms we recommend you turn your entire computer system off overnight whenever possible. This option will save the most energy. - In offices you should check with your computer support person; however, the recommendation is that most personal computer system units should be left on (with monitors and peripherals off) at night and on weekends. This permits automated security and antivirus patch protection as well as application upgrade procedures to occur during non-business hours. - For more energy-saving ideas see http://www.uni.edu/energy
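The savings from these tips can be roughed out with a little arithmetic. The wattage, hours, and electricity price below are illustrative assumptions, not measurements of any particular monitor:

```python
# Rough estimate of the energy saved by switching a monitor off when idle.
# All figures are assumptions for illustration, not measured values.

MONITOR_WATTS = 30.0        # assumed draw of a typical LCD monitor while on
HOURS_OFF_PER_DAY = 16.0    # assumed: nights plus meetings and lunch breaks
DAYS_PER_YEAR = 250.0       # assumed working days
PRICE_PER_KWH = 0.12        # assumed electricity price in dollars

# watts x hours / 1000 = kilowatt-hours
kwh_saved = MONITOR_WATTS * HOURS_OFF_PER_DAY * DAYS_PER_YEAR / 1000.0
dollars_saved = kwh_saved * PRICE_PER_KWH

print(f"Energy saved: {kwh_saved:.0f} kWh/year")   # 120 kWh/year
print(f"Cost saved: ${dollars_saved:.2f}/year")    # $14.40/year
```

Even with these modest assumptions the habit adds up, and the savings scale directly with the number of machines involved.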
WHO Response to the 2009 floods emergency in Namibia: Preventing diseases, saving lives
From March 2009, north-central and north-eastern Namibia experienced the worst flooding in decades. The six regions affected were Caprivi, Kavango, Ohangwena, Omusati, Oshana and Oshikoto. Nearly 700 000 people (over 30% of Namibia's 2.1 million population) were affected; more than 56 000 people were displaced, 28 932 of whom were accommodated in a hundred relocation camps. Many of those affected by the 2009 floods had not yet recovered fully from the floods of 2008, reducing their resilience to cope with the new disaster. The Ministry of Health and Social Services (MoHSS) requested support from WHO to provide technical assistance to regional and district health officials to undertake a rapid needs assessment. WHO assisted the regional health management teams, particularly in the north-eastern parts of Namibia, to conduct regular coordination meetings. This helped to monitor the situation in flood-affected areas and take the necessary actions in time. This report examines the extent of the damage caused to the health sector as a result of the floods. It discusses the response by government and partners, particularly the contributions to this emergency by WHO. Furthermore, it identifies the challenges experienced in the response and provides recommendations for improving health services to ensure effective emergency preparedness and response capability for future disasters. While this report focuses on the flood disaster of 2009, the recommendations could be valuable for any other disaster, such as disease outbreaks, droughts and wildfires, if effectively implemented. The report shows that capacity building is required in life-saving skills across communities as well as in disease surveillance, emergency preparedness and response and planning across all regions affected.
Backed up against the Qionglai Mountains in the Sichuan Province of central China, the more than 2,000-year-old city of Chengdu grew dramatically between July 1990 and July 2000. This Landsat image of Chengdu is part of a study that is using high- and moderate-resolution satellite data to monitor patterns of urbanization across the Earth. Urban landscapes are so different from natural ones that they have the potential to alter regional and global climate, changing everything from temperature, to rainfall patterns, to the chemistry of the atmosphere. The more humans urbanize the landscape, the greater the possible impacts on climate will be. In the image above, yellow areas show the extent of the urban area in 1990, while orange areas show what was built up in the 10 years after that. In many cases, the urban build-up has moved out of the core of the city along roadways, which radiate out from the city like spokes on a wheel. A new roadway makes an orange ring around the city and is connected to the core by many new "spokes." Urban expansion has mostly been on the western side of the city, approaching the mountain foothills. Scientists conducting the study are approaching their mapping task with several sources of data: high-resolution satellite data, like the Landsat data pictured here; night-time city light data; data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), which provides less detail but observes larger areas at one time; and population density data. This fusion of data sources maximizes the strengths and minimizes the limitations of each data type, and it allows a more complete picture of urban growth in cities across the world. To read more about the project and to see images from additional cities, read NASA satellites watch world's cities grow.
To many people, sometimes even experienced doctors, the involuntary movements, limited mobility, abnormal muscular postures, and verbal tremors that are symptomatic of dystonia simply point to other neuromuscular disorders, such as Tourette’s syndrome. To make matters more confusing, dystonia is not just one medical condition, but a group of movement disorders that affect either a single muscle or group of muscles primarily in the arms, legs, or neck. Several distinct patterns of dystonia fall under the dystonia movement disorders umbrella. Symptoms generally involve: - Involuntary, long-lasting muscle contractions causing twisting, abnormal postures, or repetitive movements of a particular part of the body - Occasionally, movement is affected throughout the entire body - Speech problems, tremor, or uncontrollable eye blinking may occur In early-onset dystonia, symptoms first appear around age 12, usually in an arm or a leg. Dystonia can spread, affecting other parts of the body. For others, dystonia symptoms emerge in late adolescence or early adulthood. The older a person is when symptoms appear, the more likely dystonia will remain limited to a particular area. Research has found advocates from among one affected group—musicians. Because of the nature of their work, musicians are among the first to notice abnormal muscular changes when practicing or performing. One such performer, Leon Fleisher, a world-class touring pianist, was forced into retirement at the age of 36 when a very localized dystonia claimed the use of his right hand. He only returned to professional, two-handed piano play in 2003 after decades of misdiagnosis and failed treatments. His performance career was renewed after a drug treatment restored his fingers to their full, extended length. The gene DYT1 on chromosome nine appears to be linked to early-onset dystonia. DYT1 is responsible for making a damaging protein—torsin A. 
The protein interferes with the brain's ability to process a group of brain chemicals called neurotransmitters. These chemicals needed for normal muscle contraction include GABA (gamma-aminobutyric acid), dopamine, acetylcholine, norepinephrine, and serotonin. Other less common gene mutations have also been linked to dystonia. Other forms of dystonia are linked with injury, stroke, or environmental triggers, such as lack of oxygen during birth, certain infections, reactions to certain drugs, or heavy-metal or carbon monoxide poisoning. Dystonias may also appear as a symptom of other inherited diseases. Dystonias do not shorten life expectancy, except in rare cases. There is no universally effective treatment for all dystonia disorders. Instead, most people receive highly individualized treatment, including drugs, surgery, and physical therapy aimed at stopping or reducing muscular pain and spasm. Some frequently used treatments include:
Drugs aimed at altering neurotransmitter levels in the brain are often the first type of drug treatment. These include:
- Drugs that reduce acetylcholine
- Muscle relaxants
- Dopamine-boosting medicines
Botulinum toxin (Botox): small amounts of this drug may provide temporary relief of some dystonias that affect only a particular part of the body. Leon Fleisher resumed his musical career after receiving botox injections. Botox blocks the release of acetylcholine and, when effective, relieves symptoms for up to six months before more injections are needed.
If drug therapy is not successful, surgery may be the next step for people with severe symptoms. Deep brain stimulation can also be tried in certain cases. In this procedure, electrical pulses are transmitted to the region of the brain that is causing the contraction.
Last reviewed June 2012 by Brian Randall, MD
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © EBSCO Publishing. All rights reserved.
Current that flows back and forth, changing directions rapidly. AC current is typically used in households in the United States and Canada. It reverses directions 120 times per second or 60 full cycles. A measurement of electrical current; what you feel when you receive a shock. The higher the amperage, the more intense shock the animal will feel. Unit of flow of current. Used to train wild animals to avoid an electric fence. Turn off fence controller (charger). Smear an aluminum pie tin with the bait (peanut butter, honey, rancid bacon, molasses, etc.). Connect pie tin to an electric fence wire using metal wire. Locate several baited pie tins around the perimeter of the fence. After baiting is completed, turn fence controller (charger) on and monitor bait stations regularly. A term used to describe electric fence chargers that pulse electricity at regular intervals through a fence, typically at one-second intervals. An output capacitor is used to store direct current (DC) electricity between pulses through a fence. This energy is released through the output transformer in the form of a high energy pulse. Alternating current (AC) can't be stored using a capacitor. A material through which current will readily flow. All metals are conductors. Refers to a continuous output of alternating current (AC) rather than a pulsed or cycled output. Continuous current fences produce very low voltages and extremely low amperages in order to keep them safe. As a result, these fences do not work well on long, weedy or wet fences. Continuous current fences are not UL listed. It is the current, the duration and rate of its flow which causes the shock. Increasing the voltage increases the current whereas increasing the resistance decreases the current. Sturdy wooden posts driven deep into the ground to provide extra support for the tension put on a fence line as it changes direction. Corner posts are not only used at corners, but also for gates and end posts. 
Current that flows steadily in one direction, typically produced by batteries through a chemical reaction. A type of fence charger that does not require a grounding system to deliver an electrical shock. Direct-discharge fences are most effective on short, weed-free fences. A way of comparing the relative power of fence chargers. Ratings are based on a single strand of 17-gauge steel wire strung 36 inches above the ground under ideal, weed-free laboratory conditions. Any number of conditions that cause current to be drawn from a fence wire. Weeds touching the fence, broken insulators, rusty fence wire, and even wire splices all increase fence load and reduce the fence's voltage and amperage. Fence load is measured in ohms. The rods in the ground which are connected to the ground terminal on the charger. The ground collects the pulse through the earth when an animal touches the live wire and completes the circuit. Ground wire return system Used where dry or sandy soil conditions do not allow a traditional ground system to work. Consists of running a ground wire parallel to a hot fence wire, delivering a shock at the point where the animal touches the two wires. Necessary to create a complete electrical circuit: when the animal touches the electrified wire, the electricity travels through the animal, into the soil, back to the ground rods that are connected to the fence charger, resulting in the animal receiving a brief shock. A ground system consists of ground rods (3), hookup wire, ground rod clamps and line clamps. An affordable, long-lasting electrified fence system that is an excellent choice for perimeter fences, providing a barrier to contain or exclude animals. These sturdy, permanent fences require braced corner and end posts in wood along with special insulators, hardware, and tools that maintain constant high tension on metal wire. Total effective fence load. This is made up of Capacitance, Inductance and Resistance.
In terms of the charger, low impedance means low internal resistance of the charger. In terms of the fence line, this is the transfer of power without physical contact, from an electrified wire to a non-electric wire or gate. This is usually noticed by touching a wire on the conventional fence (or gate) and finding it "live". This phenomenon is more noticeable in damp weather conditions. A nonconductive material (plastic or ceramic), typically used to offset fence wire from a fence post. Insulators prevent the current from traveling through the post and into the ground, short-circuiting the system. Unit of energy. A joule is one watt for one second. A measurement of electrical energy used to rate low impedance fence chargers. The effective power the charger delivers to the fence, independent of other factors that can drain voltage. The higher the joules, the more intense shock the animal will feel. (1 joule = 1 watt of power for 1 second) Small losses of energy from the fence line to earth. These losses can be caused by seasonal vegetation growth, faulty insulators etc. A post used to support electric or non-electric fence wire. Line posts support the fence line, and have far less tension put on them than corner posts. As a result, they can be made from a variety of materials, including metal, wood, plastic and fiberglass. The wire connected to the charger power terminal which carries the current. Low impedance fence chargers increase the joules (energy or shock) on the fence line if weeds or other vegetation touch the line. Available in AC, DC and solar powered models. The tendency among certain species of animals to graze vegetation down to the dirt. May cause animals to reach vegetation outside the fence. Unit of resistance. Ohms are used to measure resistance to the flow of an electric current. A low ohms reading represents a heavy fence load, and a high ohms reading represents a light fence load.
On-time / Off-time On-time refers to the duration of the electrical pulse produced by a capacitive discharge fence. Off-time refers to the length of time between the pulses. Fences have electrical pulses that are only microseconds long, followed by one full second of off-time between each pulse. This long off-time enables an animal (or person) to easily break away from the fence. Pulse width refers to the duration of the electrical pulse produced by a capacitive discharge fence. (See On-time / Off-time) Resistance is any force that resists the flow of electricity, consuming power from a circuit by changing electric energy into heat. This is often called "the load" and is measured in ohms. A system for livestock grazing, using internal temporary enclosures (within a boundary fence) to control the specific areas where the animals graze. This allows the vegetation in the previous enclosures to grow back. Typically is 1-strand of wire at 40" or at animal's nose level. A large loss of voltage from the fence line to the ground. This can be caused by live wires touching the ground or ground wires. Solid-state fence chargers deliver a medium amperage shock in pulses of medium duration. They are best used to control shorthaired livestock, small animals, and pets where light weed conditions exist. A component that joins together separate strands of fence wire, tape or rope without breaking the fence's electrical circuit. A one to three-strand electric fence system that is used for rotational grazing or other short-term uses. It typically uses step-in poly posts or rod posts, and a DC or solar operated fence charger for portability and flexibility. A component used to tighten fence wires, typically polytape, to increase tension on a section of the fence line. A device that increases or decreases the voltage of alternating current. Unit of electrical pressure which creates the current flow. A measurement of electrical pressure. 
It functions similarly to water pressure in that it "pushes" amperage down the fence wire. A unit of measurement for electric power equal to voltage times amperage.
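Several of these glossary entries (volt, ampere, ohm, watt, joule) are tied together by two simple relations: Ohm's law (current = voltage / resistance) and the joule as one watt for one second. A minimal sketch, using made-up fence values for illustration rather than ratings of any real charger:

```python
# Sketch of the relationships among the electrical units in this glossary.
# The voltages and resistances below are made-up examples, not charger specs.

def current_amps(volts: float, ohms: float) -> float:
    """Ohm's law: increasing voltage increases current,
    increasing resistance decreases it."""
    return volts / ohms

def power_watts(volts: float, amps: float) -> float:
    """Power in watts = voltage times current."""
    return volts * amps

def energy_joules(watts: float, seconds: float) -> float:
    """A joule is one watt for one second."""
    return watts * seconds

# A light fence load is a HIGH ohms reading; a heavy load is a LOW one:
light_load_amps = current_amps(5000.0, 100000.0)  # 0.05 A on a clean fence
heavy_load_amps = current_amps(5000.0, 1000.0)    # 5 A: weeds, shorts, etc.

# Energy delivered in one brief pulse of a capacitive-discharge charger
# (assumed: 5000 V at 0.02 A for 0.0003 s):
pulse_joules = energy_joules(power_watts(5000.0, 0.02), 0.0003)
```

This is why the glossary pairs a heavy load with a low ohms reading: for a fixed charger voltage, less resistance means more current drawn from the fence, and correspondingly less shock reaching the animal.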
Oct 31, 2012 09:10 AM EDT By Renee Anderson
Antidepressants taken before and during pregnancy can lead to worse outcomes, such as babies with cardiac problems, autism or low birth weight, researchers say. The results come at a time when antidepressant use among women aged between 18 and 44 has increased considerably over the past few years. Lead author of the study Alice Domar and colleagues reviewed previous studies to examine the benefits and risks associated with taking antidepressants known as selective serotonin reuptake inhibitors (SSRIs) during pregnancy. "More broadly, there is little evidence of benefit from the antidepressants prescribed for the majority of women of childbearing age - and there is ample evidence of risk," the authors wrote in a statement. According to the researchers, taking SSRIs during infertility treatment reduces the chance of getting pregnant. Apart from that, it can lead to miscarriages, congenital abnormalities, preterm birth, hypertension and preeclampsia. They found Paxil, manufactured by GlaxoSmithKline, to be associated with cardiac defects in babies. Domar and team also found a higher risk of preterm birth among women who took antidepressants during pregnancy. In addition, antidepressants were found to increase the risk of giving birth to small babies with low birth weight and respiratory distress. Exposure to antidepressants in the womb was found to increase the risk of Newborn Behavioral Syndrome, a condition associated with persistent crying, restlessness and feeding difficulties. They also found that taking antidepressants during the first trimester increased the risk of giving birth to a child with autism spectrum disorders. After establishing the risks associated with antidepressant use before and during pregnancy, the researchers reviewed further studies and found cognitive behavioral therapy (CBT) to be an effective substitute for antidepressants in treating depression during pregnancy.
"There is enough evidence to strongly recommend that great caution be exercised before prescribing SSRI antidepressants to women who are pregnant or who are attempting to get pregnant, whether or not they are undergoing infertility treatment," Domar said. "We want to stress that depressive symptoms should be taken seriously and should not go untreated prior to or during pregnancy, but there are other options out there that may be as effective, or more effective than SSRIs without all the attendant risks." Results of the study have been published in the journal Human Reproduction.
What is Self-esteem?
Self-esteem is the value you have of yourself and your sense of self-worth. It’s when you have a healthy self-image and hold yourself in high regard without thinking you’re better than or more worthy than other people. Good self-esteem is honestly assessing your strengths and weaknesses and honoring and respecting yourself regardless of the outcome of your appraisal. Low self-esteem is focusing only on your weaknesses and not appreciating your good qualities and feeling you’re not worthy of self-respect. It’s allowing external influences to have a negative effect on your thinking and actions.
The Nature of Self-esteem
How you feel about yourself is transmitted to others in subtle ways and tells them how you expect to be treated. When your opinion of yourself is low, you will not treat yourself any better than others. You make bad decisions that have far-reaching consequences.
How to Build Confidence and a Healthy Self-Esteem
Self-confidence is one of the most important aspects of self-esteem. People with low self-esteem wonder how to build confidence and how to achieve the changes that will bring the freedom of living in their truth. The first step to gaining confidence is being more aware on a conscious level of the things you do and say on a daily basis. Here are some important qualities in how to gain confidence in your daily life: Appreciating your own individuality and what makes you unique is what gives you a strong self-image. It’s having a sense of your own distinctive qualities from that of friends, family and society. When you live within integrity, you’re living according to your values. An important part of self-esteem is matching your words with your actions. People who compromise their values and don’t honor what they believe at their core jeopardize their self-esteem and live with insecurity. Most of the time the reason people contradict their own integrity is for financial gain, power, status or simply acceptance.
Personal power is not about conquering other people. It’s about moving beyond your own perceived limitations, “hot buttons” and stale habits so that you can finally be in control of yourself. And being in control of yourself is what gives you the power to positively influence others. Overcoming insecurity allows you to take calculated risks in work and life. Learning to persevere despite the possibility of failure allows you to accomplish your goals because your self-esteem no longer hinges on the success or failure of one decision or one endeavor. Moving past the idea that you’re never good enough and speaking to yourself with positive self-talk is one of the most powerful and most difficult acts you can accomplish. Oftentimes, people with extremely low self-esteem have been abused as children, either emotionally or physically, and negative self-talk and self-criticism are reflective of the victims’ damaged attitude toward themselves and the world. Changing the constant disapproving tape in your head is life-changing. Having self-respect is honoring yourself, and it’s impossible to have respect for others until you first respect yourself. Often your own lack of respect manifests in behaviors that have negative consequences. Cultivate relationships with people who like and accept you for who you are and beware of those who put you down. Don’t waste your time or energy with people who will mentally and emotionally drag you down. People with a healthy self-esteem also have healthy relationships. Good supportive relationships are built on respect and trust. We choose partners who treat us like we think we deserve to be treated – good or bad. When our relationships, including business partners and co-workers, friends, family and romantic partners, are formed in accordance with our values, we build a strong foundation for them to thrive. Actively working on your self-esteem also sets a good example for your children.
Self-esteem is the basis for success in business and personal relationships. Achievement is accomplished when all of these qualities are active and strong. Getting Help Overcoming Insecurity and Building Self-Esteem Understanding all of these qualities is the awareness you need to build confidence and have a good opinion of yourself. You can unlearn the negative habitual thinking and behavior that is responsible for ingraining low self-esteem in you. A caring professional counselor can help. Reach out to a therapist who is skilled and experienced in helping with self-esteem issues.
Technology is the cornerstone of Distance Learning and the future of education in the coming years. Instructors and administrators are responsible not only for the content of their courses but also for the method of presenting that content on the Internet. The number of educational websites offering instructional material has been growing steadily. What Distance Learning now lacks is accessibility for all students. To be accessible, a website must meet strict guidelines and adhere to standardized code. If it does not, it becomes nothing more than an attractive building that lacks wheelchair access for the disabled. It's a building with beautifully designed, automatically opening doors, but the controls have been placed out of reach of the student with a disability who is trying to enter.

This booklet will take you step-by-step through the process of designing a course that meets the guidelines and restrictions for California Community Colleges [1]. The information will provide you with the tools necessary to design a fully accessible website or repair an existing one. You will be told where to go to solve specific problems and what resources are available within the Community College system to help you. Clarification is provided for the sometimes confusing State and Federal laws regarding accessibility and how those laws apply to public education websites within the State of California. This handbook is not intended as a replacement for the Distance Education: Access Guidelines for Students with Disabilities (1999), but it is your starting point. It's time for educators to make the commitment to improve accessibility in Distance Learning while utilizing media-rich technology. It's time for administrators to encourage compliance with the guidelines and provide assistance in the development of accessible websites for education.

[1] Distance Education: Access Guidelines for Students with Disabilities, August 1999; available in PDF format.
There is a wealth of information available in journal articles to support almost any international law topic. Journal articles can:

- offer important analysis and commentary on a topic;
- set out the history or chronology of events for a treaty or topical issue;
- furnish definitions and explanations of important legal concepts; and
- provide citations to treaties and other documents.

Relevant articles can be found in journals from a variety of disciplines: law, international relations, political science, economics, etc. There are several ways to locate articles:
...study of conditionals faces two interrelated problems: stating the conditions in which counterfactual conditionals are true and representing the conditional connection between the antecedent and the consequent. The difficulty of the first problem is illustrated by the following pair of counterfactual conditionals: If Los Angeles were in Massachusetts, it would not be on the Pacific...

...or as "not both p and not-q." The symbol "⊃" is known as the (material) implication sign, the first argument as the antecedent, and the second as the consequent; q ⊃ p is known as the converse of p ⊃ q. Finally, p ≡ q ("p is [materially] equivalent to q" or...
Statistics about New Zealand women include details about age, ethnic affiliation, language, religion, families and households, fertility, work (paid and unpaid) and income, education, housing, and where they live. Also available is information about women's health and disability status, life expectancy, and the ratio of women to men. New Zealand General Social Survey The New Zealand General Social Survey (NZGSS) provides information on the well-being of New Zealanders aged 15 years and over. It covers a wide range of social and economic outcomes, and shows how these outcomes are distributed across different groups within the New Zealand population. Reports and articles Search for more information.
No matter what you do with your computer, storage is an important part of your system. In fact, most personal computers have one or more of the following storage devices: Usually, these devices connect to the computer through an Integrated Drive Electronics (IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect to a computer. IDE is actually not the true technical name for the interface standard. The original name, AT Attachment (ATA), signified that the interface was initially developed for the IBM AT computer. In this article, you will learn about the evolution of IDE/ATA, what the pinouts are and exactly what "slave" and "master" mean in IDE.
INET(3)                    BSD Programmer's Manual                    INET(3)

NAME
     inet_addr, inet_aton, inet_lnaof, inet_makeaddr, inet_netof,
     inet_network, inet_ntoa, inet_ntop, inet_pton - Internet address
     manipulation routines

SYNOPSIS
     #include <sys/types.h>
     #include <sys/socket.h>
     #include <netinet/in.h>
     #include <arpa/inet.h>

     in_addr_t inet_addr(const char *cp);
     int inet_aton(const char *cp, struct in_addr *addr);
     in_addr_t inet_lnaof(struct in_addr in);
     struct in_addr inet_makeaddr(in_addr_t net, in_addr_t lna);
     in_addr_t inet_netof(struct in_addr in);
     in_addr_t inet_network(const char *cp);
     char *inet_ntoa(struct in_addr in);
     const char *inet_ntop(int af, const void *src, char *dst, size_t size);
     int inet_pton(int af, const char *src, void *dst);

DESCRIPTION
     The routines inet_aton(), inet_addr(), and inet_network() interpret character strings representing numbers expressed in the Internet standard '.' notation. The inet_pton() function converts a presentation format address (that is, printable form as held in a character string) to network format (usually a struct in_addr or some other internal binary representation, in network byte order). It returns 1 if the address was valid for the specified address family; 0 if the address wasn't parseable in the specified address family; or -1 if some system error occurred (in which case errno will have been set). This function is presently valid for AF_INET and AF_INET6. The inet_aton() routine interprets the specified character string as an Internet address, placing the address into the structure provided. It returns 1 if the string was successfully interpreted, or 0 if the string was invalid. The inet_addr() and inet_network() functions return numbers suitable for use as Internet addresses and Internet network numbers, respectively. The function inet_ntop() converts an address from network format (usually a struct in_addr or some other binary form, in network byte order) to presentation format (suitable for external display purposes).
     It returns NULL if a system error occurs (in which case, errno will have been set), or it returns a pointer to the destination string. The routine inet_ntoa() takes an Internet address and returns an ASCII string representing the address in '.' notation. The routine inet_makeaddr() takes an Internet network number and a local network address and constructs an Internet address from it. The routines inet_netof() and inet_lnaof() break apart Internet host addresses, returning the network number and local network address part, respectively. All Internet addresses are returned in network order (bytes ordered from left to right). All network numbers and local address parts are returned as machine format integer values.

INTERNET ADDRESSES (IP VERSION 4)
     Values specified using the '.' notation take one of the following forms:

           a.b.c.d
           a.b.c
           a.b
           a

     When four parts are specified, each is interpreted as a byte of data and assigned, from left to right, to the four bytes of an Internet address. Note that when an Internet address is viewed as a 32-bit integer quantity on a system that uses little-endian byte order (such as the Intel 386, 486 and Pentium processors) the bytes referred to above appear as "d.c.b.a". That is, little-endian bytes are ordered from right to left. When a three part address is specified, the last part is interpreted as a 16-bit quantity and placed in the rightmost two bytes of the network address. This makes the three part address format convenient for specifying Class B network addresses as "128.net.host". When a two part address is supplied, the last part is interpreted as a 24-bit quantity and placed in the rightmost three bytes of the network address. This makes the two part address format convenient for specifying Class A network addresses as "net.host". When only one part is given, the value is stored directly in the network address without any byte rearrangement. All numbers supplied as "parts" in a '.'
notation may be decimal, octal, or hexadecimal, as specified in the C language (i.e., a leading 0x or 0X implies hexadecimal; a leading 0 implies octal; otherwise, the number is interpreted as decimal).

INTERNET ADDRESSES (IP VERSION 6)
     In order to support scoped IPv6 addresses, getaddrinfo(3) and getnameinfo(3) are recommended rather than the functions presented here. The presentation format of an IPv6 address is given in RFC 2373: There are three conventional forms for representing IPv6 addresses as text strings:

     1.  The preferred form is x:x:x:x:x:x:x:x, where the 'x's are the hexadecimal values of the eight 16-bit pieces of the address. Examples:

               FEDC:BA98:7654:3210:FEDC:BA98:7654:3210
               1080:0:0:0:8:800:200C:417A

         Note that it is not necessary to write the leading zeros in an individual field, but there must be at least one numeral in every field (except for the case described in 2.).

     2.  Due to the method of allocating certain styles of IPv6 addresses, it will be common for addresses to contain long strings of zero bits. In order to make writing addresses containing zero bits easier, a special syntax is available to compress the zeros. The use of "::" indicates multiple groups of 16 bits of zeros. The "::" can only appear once in an address. The "::" can also be used to compress the leading and/or trailing zeros in an address. For example the following addresses:

               1080:0:0:0:8:800:200C:417A   a unicast address
               FF01:0:0:0:0:0:0:43          a multicast address
               0:0:0:0:0:0:0:1              the loopback address
               0:0:0:0:0:0:0:0              the unspecified addresses

         may be represented as:

               1080::8:800:200C:417A        a unicast address
               FF01::43                     a multicast address
               ::1                          the loopback address
               ::                           the unspecified addresses

     3.
An alternative form that is sometimes more convenient when dealing with a mixed environment of IPv4 and IPv6 nodes is x:x:x:x:x:x:d.d.d.d, where the 'x's are the hexadecimal values of the six high-order 16-bit pieces of the address, and the 'd's are the decimal values of the four low-order 8-bit pieces of the address (standard IPv4 representation). Examples:

               0:0:0:0:0:0:13.1.68.3
               0:0:0:0:0:FFFF:129.144.52.38

         or in compressed form:

               ::13.1.68.3
               ::FFFF:129.144.52.38

     The constant INADDR_NONE is returned by inet_addr() and inet_network() for malformed requests.

SEE ALSO
     byteorder(3), gethostbyname(3), getnetent(3), inet_net(3), hosts(5), networks(5)

     IP Version 6 Addressing Architecture, RFC 2373, July 1998.
     Basic Socket Interface Extensions for IPv6, RFC 3493, February 2003.

STANDARDS
     The inet_ntop and inet_pton functions conform to the IETF IPv6 BSD API and address formatting specifications. Note that inet_pton does not accept 1-, 2-, or 3-part dotted addresses; all four parts must be specified. This is a narrower input set than that accepted by inet_aton.

HISTORY
     The inet_addr, inet_network, inet_makeaddr, inet_lnaof, and inet_netof functions appeared in 4.2BSD. The inet_aton and inet_ntoa functions appeared in 4.3BSD. The inet_pton and inet_ntop functions appeared in BIND 4.9.4.

BUGS
     The value INADDR_NONE (0xffffffff) is a valid broadcast address, but inet_addr() cannot return that value without indicating failure. Also, inet_addr() should have been designed to return a struct in_addr. The newer inet_aton() function does not share these problems, and almost all existing code should be modified to use inet_aton() instead. The problem of host byte ordering versus network byte ordering is confusing. The string returned by inet_ntoa() resides in a static memory area.
MirOS BSD #10-current                June 18, 1997
Do you want to make your garden the best it can be? Do you want to learn more about how to conserve as much as possible while cultivating a thriving garden? The Peoples’ Garden at USDA was conceptualized to help Americans learn more about all things gardening — and this summer, we’re putting on weekly Healthy Garden Workshops to help everyone learn the ins and outs of a great garden.

July 3 Kickoff Workshop

At the July 3 Healthy Garden Workshop, “Watering Your Lawn and Garden,” USDA experts offered a few of the scientific tricks to ensure that your plants and lawn get enough water, but not too much, as well as how much water plants “drink” from the soil every week.
- It helps to create a weekly watering plan, to monitor the rainfall in your lawn/garden, and to subtract that amount of water from your weekly watering plan.
- Plants use approximately one inch of water from the soil each week during the summer growing season. Therefore, this amount needs to be replenished for plants to continue to grow and thrive. Depending on where we live, nature takes care of this for us by providing rain, and we supplement with additional water as needed.

Watering is a delicate balance for gardens. Too little water causes plants to wilt and die. Too much water means the plant cannot breathe; this can cause rotting and make the plant susceptible to diseases.

And that’s just a sample. For more information about these tips, and to learn more about gardening, visit the Peoples’ Garden page. At tomorrow’s Healthy Garden Workshop, you can learn about gardening in containers and window boxes (especially helpful for Americans living in cities and urban areas). The workshop will include discussion on finding the right container for the job, getting good soil, and nursing the container garden over time. Next week, on July 24, we’ll show you techniques for weeding and removing invasive plants.
At both of these activities, a “Sprouts” workshop for kids will focus on a series of activities for children that demonstrate how the food system works, from farm to fork. If you can’t make a workshop, consider a garden tour — they’re held every Tuesday and Thursday at 1 p.m. at the Garden. You can also follow ongoing updates from the Peoples’ Garden project on Twitter, or visit the Peoples’ Garden page online.
Realms of Heroism: Indian Paintings in the Brooklyn Museum - Dates: October 14, 1994 through January 8, 1995 - Collections: Asian Art June 1994: An exhibition of approximately 80 jewel-like Indian miniature paintings from the permanent collection of The Brooklyn Museum will open on October 14, 1994, and remain on view through January 8, 1995. Entitled Realms of Heroism: Indian Paintings from The Brooklyn Museum, it celebrates the publication of a fully illustrated catalogue raisonné that documents The Brooklyn Museum’s significant holdings of Indian miniature paintings. The exhibition will be organized around the theme of heroism as it is understood in a South Asian context, exploring the hero as a warrior and adventurer, as a Hindu deity, and as a secular ruler. A wide variety of historical portraits reveals the sometimes subtle means by which Indian rulers asserted their power and achievement. The exhibition will also explore the theme of the romantic hero and heroine in the South Asian tradition, from the pastoral exploits of the god Krishna to the depiction of lovers and emotional states found in Ragamalas and Nayika-nayaka literature. An orientation section will introduce the technique and historical context of these delicate, striking works. Indian miniature paintings were commissioned by royal patrons as illustrations to religious and secular texts. The Mughal emperors, who conquered much of India in the 16th century, established an imperial atelier where manuscripts were created depicting historical and legendary subjects. Among the Mughal paintings on view in the exhibition are four folios from the famous Hamza-nama series, made for the emperor Akbar in the mid-sixteenth century. The oversized illustrations to the Hamza-nama, a traditional Moslem historical epic, are celebrated for their vivid depiction of heroic deeds interspersed with carefully recorded observations of Indian life. The Mughal emperors were not the only patrons of Indian painting. 
Indigenous Hindu rulers called the Rajputs continued to govern small principalities in northern and central India after the establishment of the Mughal empire. The Rajputs were responsible for the creation of large numbers of manuscripts illustrating Hindu mythological subjects in a wide variety of styles. They also emulated the Mughals through commissions of large groups of dynastic portraits. Like paintings made for the Mughals, these works on paper were bound or collected into volumes and were viewed in intimate gatherings, often accompanied by performances of music or poetry for a highly refined aesthetic experience. Painted with watercolors on cotton or paper and highlighted with gold and silver, the images are extremely colorful and delicate. Their vibrancy is achieved by applying color in layers and then burnishing the sheets from behind until the colors are opaque. Beetle wings are sometimes applied to indicate areas of jewelry. The paintings are fragile and flake easily, presenting many problems for conservators; 42 weeks of treatment were required to prepare the 80 exhibition objects for viewing. Since its inception in 1914—unusually early for an American museum—The Brooklyn Museum’s Indian painting collection has amassed more than 275 paintings and 85 drawings from the early 15th through the 19th centuries. The exhibition and catalogue raisonné have been the outcome of years of documentation and scholarly research. Surveys of the collection began in 1973, aided by a 1982 grant from the National Endowment for the Arts. Research for the catalogue raisonné began as early as 1980 and reflects the expertise of many South Asian art historians. The catalogue represents numerous regional styles of Indian painting, incorporating new readings of inscriptions and other documentary and technical evidence. Approximately 75 color plates in the catalogue reproduce the intricate detail and vision of these paintings. 
A wide range of educational and public programs is planned in conjunction with the exhibition, including an Indian film festival, storytelling, and docent and teacher training. Programs will be designed to introduce aspects of South Asian tradition while focusing on the definition and significance of heroic behavior in Indian and other cultures. Amy G. Poster, Curator of Asian Art and Head of the Asian Art Department, is the author of the catalogue and curator of the exhibition.

October 1994: The Brooklyn Museum will celebrate the arts of India with a wide variety of public programs for the entire family that have been organized in conjunction with the major exhibition Realms of Heroism: Indian Paintings at The Brooklyn Museum. Among them are lectures, special gallery talks, drop-in programs for children, and a film series:

Special Gallery Talks (free with Museum admission)
Saturday, October 15, 1 p.m. Joachim Bautze, South Asian Institute, University of Heidelberg, “Issues of Connoisseurship in Indian Painting”
Sunday, October 23, 1 p.m. Joan Cummins, Exhibition Coordinator, Realms of Heroism, “Delicacy and Vigor: Indian Painting and the Model Prince”
Saturday, November 5, 1 p.m. Amy G. Poster, Curator of Asian Art and curator of Realms of Heroism, “The Concept of the Hero in Indian Painting”

Additional gallery talks for Realms of Heroism led by trained docents will be offered throughout the run of the exhibition. For information about dates and times, call (718) 638-5000, ext. 226.

(free with Museum admission)
Saturday, November 19, noon. Vidya Dehejia, Curator of Indian Art, Freer Gallery of Art and Arthur M. Sackler Gallery, Smithsonian Institution, Washington, D.C., “Once upon a Time: Storytelling in Indian Painting”
Saturday, December 3, noon. Dr. Milo C. Beach, Director, Freer Gallery of Art and Arthur M. Sackler Gallery, Smithsonian Institution, Washington, D.C.,
“The Mughal Prince as Hero: Painting and the Sons of Shah Jahan”
Saturday, December 10, noon. John Seyller, Associate Professor, Department of Art, University of Vermont, “The Adventures of Amir Hamza”

Film series: Thursday, September 29, 7 p.m., and thereafter every Saturday and Sunday at 2 p.m. through October 23. This series of films will highlight the theme of the hero in Indian cinema by exploring the role of the religions in Indian society, representations of morality and social mores, colonial influences, representations of Indian women, and narrative traditions as adapted for the screen. Guest speakers will introduce the series, which has been coordinated by L. Somi Roy of Roy/Emmons Associates.

Realms of Heroism: Indian Paintings at The Brooklyn Museum will also serve as the inspiration for several of the Museum’s regularly scheduled drop-in programs for children, Arty Facts (4-7), Saturdays and Sundays, 11 a.m., and What’s Up? (8-12), Saturdays and Sundays, 2 p.m. (free with Museum admission). Additional special programs for families in conjunction with the exhibition will include storytelling and dance performances. Call (718) 638-5000 for further information.

Realms of Heroism: Indian Paintings at The Brooklyn Museum will include approximately 80 masterpieces, several of them never before on public view, from The Brooklyn Museum’s collection of Indian miniature paintings. Among them are four folio pages from the rare Hamza-nama series created for the 16th-century Mughal Emperor Akbar, of which there are only about 120 left in the world. The exhibition celebrates the publication of a fully illustrated catalogue raisonné of the Museum’s important collection of 250 Indian miniature paintings.
A Catalyst article about diamonds. The element carbon exists in a number of allotropic forms, but diamonds have always held a special allure, whether it be for their hardness or for their transparency. The article examines how they can be made artificially and looks at some of their uses. This article is from Catalyst: GCSE Science Review 2007, Volume 17, Issue 4. Catalyst is a science magazine for students aged 14-19 years. Annual subscriptions to print copies of the magazine can be purchased from Mindsets.

HEALTH and SAFETY
Any use of a resource that includes a practical activity must include a risk assessment. Please note that collections may contain ARCHIVE resources, which were developed at a much earlier date. Since that time there have been significant changes in the rules and guidance affecting laboratory practical work. Further information is provided in our Health and Safety guidance.
ISBN-13: 9780324545678 / ISBN-10: 0324545673

Offering a uniquely modern presentation of macroeconomics, this brand-new text makes it easy for instructors to emphasize a solid microfoundations, real-business-cycle approach. In the all-new MACROECONOMICS: A MODERN APPROACH, leading economist and proven author Robert J. Barro couples his extraordinary command of growth, equilibrium, and business cycles with a focus on microfoundations to create a groundbreaking new macroeconomics textbook steeped in real-world application. Accessibly written and extremely student friendly, the book is packed with current policy and data examples, reflecting the author’s extensive research in the field. The book also includes captivating boxed features, challenging exercises, and innovative online resources like CengageNOW, which enables students to create personalized learning paths and equips instructors with tools to easily assign, grade, and record homework and quizzes. Covering growth theory more completely than any other text, MACROECONOMICS delivers a unified model of macroeconomics that serves well for economics majors and nonmajors alike.

Part I. INTRODUCTION.
1. Thinking about Macroeconomics.
2. National-Income Accounting: Gross Domestic Product and the Price Level.
Part II. ECONOMIC GROWTH.
3. Introduction to Economic Growth.
4. Working with the Solow Growth Model.
5. Conditional Convergence and Long-Run Economic Growth.
Part III. ECONOMIC FLUCTUATIONS.
6. Markets, Prices, Supply, and Demand.
7. Consumption, Saving, and Investment.
8. An Equilibrium Business-Cycle Model.
9. Capital Utilization and Unemployment.
Part IV. MONEY AND PRICES.
10. The Demand for Money and the Price Level.
11. Inflation, Money Growth, and Interest Rates.
Part V. THE GOVERNMENT SECTOR.
12. Government Expenditure.
14. The Public Debt.
Part VI. MONEY AND BUSINESS CYCLES.
15. Money and Business Cycles I: the Price-Misperceptions Model.
16. Money and Business Cycles II: Sticky Prices and Nominal Wage Rates.
Part VII. INTERNATIONAL MACROECONOMICS.
17. World Markets in Goods and Credit.
18. Exchange Rates.

Robert J. Barro
Born in New York City, Robert Barro moved to Los Angeles, where he studied undergraduate physics at Caltech, including classes from the famous Richard Feynman. He changed his focus to economics for graduate school at Harvard University. Dr. Barro returned to Harvard as a professor in 1987. He recently served as president of the Western Economic Association and vice president of the American Economic Association. In addition to academic research, Professor Barro is an accomplished writer for the popular press. He worked as a viewpoint columnist for BUSINESSWEEK from 1998 to 2006 and contributing editor of THE WALL STREET JOURNAL from 1991 to 1998.
By Patrick O'Driscoll and Larry Copeland, USA TODAY The Southeast's worst drought in more than a century is forcing parched states and communities into crisis measures to conserve water and fight for access to more. A region accustomed to plentiful rain from tropical storms and hurricanes is experiencing its second straight year of less rain in the summer and fall. "This idea of wait-and-see, because some (rain) might be around the corner, can really suppress timely responses," says Mike Hayes, director of the National Drought Mitigation Center. Urgent efforts range from shutting down small-town car washes in North Carolina to a total ban on outdoor watering in Atlanta. Georgia's top water official, environmental Commissioner Carol Couch, says industrial and commercial water users very likely will have to make "across-the-board reductions" next. Outdoor watering bans already cover the northern third of Georgia and dozens of cities, counties and towns in surrounding states. Farmers are selling cattle because pastures have dried up. Alabama's Elmore County had to bring in floating pumps and barges to extend its water intake pipe farther out into shrinking Lake Martin. Georgia might have to do the same at Lake Lanier, Atlanta's main water source. Although rain is due today across parts of the region, it will barely dampen the 16-month drought. Through September, it is the region's driest year in 113 years of record-keeping. In five of the six worst-hit states, rain totals this year are close to a foot below normal. It is the driest year on record for North Carolina and Tennessee, second-driest in Alabama and third-driest in Kentucky. A tree-ring study this summer of Tennessee's rainfall history shows this is the third-driest year for the state in at least 350 years, behind only 1839 and 1708. Georgia Gov. Sonny Perdue said this week that he will sue the Army Corps of Engineers unless the federal agency holds back more water in Lake Lanier. 
The corps, which by law must release water downstream to protect endangered aquatic species, says it is "exploring possible drought contingency options." By various estimates, the lake has only two to four months' supply left. Couch says if the water releases are not curbed, metro Atlanta could need water deliveries from the Federal Emergency Management Agency.

In Tennessee, towns below Normandy Dam south of Nashville convinced the Tennessee Valley Authority this week to begin "winter pool" storage of water a month and a half ahead of its usual Dec. 1 start to protect their dwindling supply. Monteagle, Tenn., is buying 350,000 gallons a day from three neighboring towns and enforcing mandatory curbs on water use.

Hayes says the severe conditions in the Southeast are busting myths that drought strikes only semiarid regions and that the West is more vulnerable than the rainy East. "If it can happen there, it can happen anywhere," he says.

Contributing: Jordan Schrader, Asheville (N.C.) Citizen-Times; Marty Roney, The Montgomery (Ala.) Advertiser; Leon Alligood, The Tennessean in Nashville; Ron Barnett, The Greenville (S.C.) News; Jessie Halladay, The Louisville (Ky.) Courier-Journal; Matt Reed, Florida Today in Melbourne, Fla.; Jennie Coughlin, The Daily News Leader in Staunton, Va.
June 9, 2008

Exclusive: Indonesia — A Civil War Between Islamists And Moderates?: Part One of Two

Indonesia is widely described as a "moderate" Islamic nation. In many ways this has been true. Recently, however, a conflict has been brewing between those who support moderate interpretations of Islam and those who support hardline and intolerant forms. Some commentators see this conflict as pushing Indonesia to the very brink of a civil war. Today and tomorrow, I will try to explain the background of this conflict, whose causes belong as much to politics as they do to religion.

Indonesia is certainly the most populous Muslim nation in the world. Its total population is around 235 million, with 85% of this figure being Muslim. The official language (Bahasa Indonesia) is a version of Malay, but other regional tongues exist on various islands. As an archipelago, Indonesia comprises a total of 17,508 islands, many of which were part of the Dutch East Indies. Indonesia sought independence from the Netherlands immediately following World War II. After 1949, the Dutch accepted Indonesia as a nation.

The first ruler of Indonesia was Sukarno, who had declared independence in August 1945. He was overthrown in a coup led by General Suharto (Soeharto), who ruled from March 1968 until he was forced to resign in May 1998. Under Suharto's rule, there was widespread corruption. Suharto's son Tommy (Hutomo Mandala Putra) grew rich from embezzlement. Even when he was found guilty of the murder of Syaifuddin Kartasasita (the judge who convicted him of corruption), Tommy Suharto served only four years in jail.

The current president of Indonesia is Susilo Bambang Yudhoyono, who has been in power since 2004. His government has been weak when dealing with the demands of Islamists. During Yudhoyono's presidency, many areas of Indonesia have introduced bylaws which enforce Islamist laws.
These laws were introduced following pressure from Islamist groups such as the Front Pembela Islam (Islamic Defenders' Front). Even though these bylaws are unconstitutional, Yudhoyono is either too politically weak or too indifferent to oppose them.

During the three decades that Suharto was in power, Islamist groups and movements were, along with communist groups, viciously suppressed. With Indonesia comprising varying cultural groups, the influence of totalitarians such as communists or religious supremacists would naturally lead to conflict. Two such groups came into existence following the end of Suharto's rule. The strident Islamism expressed by these groups has threatened to destroy the values of religious tolerance and pluralism promised by Indonesia's constitution, whose guiding principles are known as "Pancasila". Article 29, b, of the Indonesian constitution reads: "The State guarantees all persons the freedom of worship, each according to his/her own religion or belief." Both of these Islamist groups are said to have tacit support from senior figures within the military as well as the judiciary and police.

Laskar Jihad (Lashkar Jihad) was led by Jaffar Umar Thalib. This group, which allegedly was formed with the approval of members of the military and the government in 2000, was the main instigator of sectarian violence during the Moluccan War, which lasted from the end of 1998 until 2002. This war pitted fanatical Islamists against Christians, and at least 9,000 people, mostly Christian, were killed. The fighting was worst on the large island of Sulawesi and in the Moluccan islands (the Spice Islands).

Thalib urged his followers to wage an attack upon Christian villagers in Soya on the island of Ambon. On Friday April 26, 2002, Thalib spoke to Laskar Jihad followers outside Ambon's biggest mosque. He urged a religious war against Christians, saying: "From today, we will no longer talk about reconciliation.
Our … focus now must be preparing for war — ready your guns, spears and daggers." Two days later, Laskar Jihad invaded the mainly Christian village of Soya on Ambon Island. Men, women and children were stabbed, beaten to death, burned and decapitated. Even babies did not escape machete attacks.

The Soya massacre took place even though other Islamist groups had signed a peace deal with Christians on February 12, 2002. This deal, called the Malino Accord, was brokered by Yusuf Kalla (who is now the vice president of Indonesia) and was intended to put an end to the Moluccan War. Laskar Jihad refused to acknowledge the terms of the Malino Accord. Thalib's vigilantes had also driven away Christian landowners in Maluku province, sharing their lands as "booty" among Laskar Jihad and Muslims from outside the province.

Thalib himself had fought Soviets in Afghanistan from 1988 to 1989 and had met Osama bin Laden. He had been educated at the Mawdudi Institute in Lahore, Pakistan, before dropping out and joining the Afghan Mujahideen. He ran an Islamic boarding school (pesantren) called Ihya'us Sunnah Tadribud Du'at on the large island of Java. Thalib allegedly supervised an illegal Shari'a court which stoned a man to death; though he was arrested for this, he was never prosecuted. Following the Soya atrocity, Thalib was prosecuted for inciting religious violence but, bizarrely, was acquitted.

Laskar Jihad announced it was officially disbanding in October 2002, but in 2003 it was waging war against the native peoples of West Papua. This territory — the western end of New Guinea — was never ceded by the Dutch; it was annexed by Indonesia in 1963 and officially recognized by the UN as "Indonesian" in 1969. Very few indigenous West Papuans consider themselves to be Muslim.

FPI — The Islamic Defenders Group

While Laskar Jihad continues to operate in secret, away from the prying eyes of the media, the Front Pembela Islam has been blatantly courting publicity.
The Front Pembela Islam or Islamic Defenders Front was founded in August 1998, only three months after Suharto was ousted from power. The uniformed members of this group, in their white jackets and hats, appear indistinguishable from the vigilantes of Laskar Jihad. Their motives are the same — to impose a strict interpretation of Islam as the sole religion of Indonesia and to ignore or destroy the rights of those they deem to be non-Muslims. The BBC stated in 2003 of the FPI: "Unlike other groups it is not fighting for an Islamic state, but it does want to establish strict Sharia law." Yet its subsequent actions in enforcing Islamist local bylaws imposed on all citizens, including non-Muslims, belie the BBC's claims. At the time, the group had claimed that it was suspending its activities while its founder was awaiting trial for inciting his followers to carry out raids on social establishments.

The founder of the group is Al Habib Muhammad Rizieq bin Hussein Shihab, more commonly known as Habib Rizieq Shihab. From its inception, the FPI began to make its presence felt in the main cities of Indonesia. During the holy month of Ramadan, members of the group would attack bars and clubs that were seen to be flouting the conventions of Islam. In 2001, Rizieq Shihab organized a series of attacks against American interests, targeting businesses he believed were supportive of, or funded by, the United States. Even though the Saudi-educated Habib Rizieq Shihab could have received seven years for inciting his followers to violence, when he was found guilty he was jailed for only seven months.

Upon his release from Salemba Penitentiary in Central Jakarta on November 19, 2003, the FPI became more intransigent. The group, according to the now-defunct MIPT Terrorism Knowledge Base, apparently funds itself via extortion from businesses. In October 2004, during Ramadan, hundreds of FPI members attacked a restaurant and bar in the south of Jakarta, Indonesia's capital city.
They also raided a pool hall. Apparently, when the attacks took place, police who were nearby took no action against the vigilantes. Though there is little to distinguish them from the core group, the paramilitary wing of the FPI, which carries out the raids on bars, is known as the Laskar Pembela Islam (Islam Defenders' Army). The FPI as a whole now has a total of 200,000 members, who are based in at least 22 of Indonesia's 33 provinces.

On December 26, 2004, a massive tsunami devastated the province of Aceh, located on the northwestern tip of Sumatra Island. Relief workers came to the area to assist in the amelioration of the local population's plight. A less positive addition to the relief work was the arrival of Islamist groups. These included the Laskar Mujahideen, which had been involved in killing Christians during the Moluccan War. Additionally, the Indonesian Mujahideen Council, whose spiritual head is the controversial cleric Abu Bakar Bashir, arrived, as well as the Front Pembela Islam.

The arrival of Islamist groups had been spurred on by a grim announcement from the largest group of Indonesian clerics. On January 14, 2005, the Majelis Ulama Indonesia (Indonesia Ulemas Council or MUI) warned that there would be a Muslim backlash if any of the Christian relief workers in the tsunami-devastated region of Aceh attempted to proselytize. Fox News reported on January 21, 2005 on the intimidation of relief workers in Aceh by Islamists: "Hasri Husan, a leader of the Islamic Defenders Front, a militant Muslim group that is operating a refugee camp in Banda Aceh, made his feelings clear. 'We will chase down any Christian group that does anything beyond offering aid,' he said before making a slashing motion across his throat."

In July 2005, the Majelis Ulama Indonesia issued a "fatwa" containing 11 decrees, which decried activities involving interfaith, pluralist and "liberal" thought.
The fatwa declared that liberal interpretations of Islam, secularism and pluralism were un-Islamic and therefore forbidden. This ruling was seen by some as generating a climate of intolerance in Indonesia. On September 21, 2005, a community of Ahmadis was attacked in Sukadana in West Java. No individuals were hurt, but a mob of 1,000 fanatical Muslims carrying swords and sharpened bamboo stakes ran through the village. At least 70 homes and six mosques were badly damaged. Only five people were arrested. The attack upon the Ahmadi sect in 2005 mirrors very closely recent events that have taken place in Indonesia. In October 2005, Strategy Page reported that: "Armed men claiming to belong to organizations like the 'Islamic Defender Front' continue to attack Christians, threatening to burn down houses and kill people if, in one instance, Catholics do not stop holding prayer services in their homes."

The Ahmadiyah or Ahmadiyya are Muslims, but they are treated by orthodox Islam as heretics. They revere the founder of their sect, Ghulam Ahmad Qadiani (1835-1908), and because many Ahmadi believe their founder was a prophet, they are treated as heretics. They are barred from entering Mecca for the Haj pilgrimage, and in Pakistan blasphemy laws prevent them from proselytizing. In Bangladesh, political parties in the last coalition government supported attacks against the sect.

In January of this year, the MUI (Indonesia Ulemas Council) declared that the Ahmadi sect was "deviant." On Thursday January 3, 2008, a group claiming to represent 50 Islamic organizations petitioned the attorney-general of Indonesia, demanding that the Ahmadiyyah be abolished. The two main national Muslim groups, Nahdlatul Ulama and Muhammadiyah, which have respectively 40 million and 30 million members, apparently also supported the motion. The Indonesian Muslim Brotherhood (GPMI) sent Ahmad Sumargono as a delegate.
On Sunday April 20th this year, thousands of Muslims marched in Jakarta, demanding that the Ahmadiyah sect be banned. A statement read: "We call on President Susilo Bambang Yudhoyono to immediately issue a presidential decree disbanding the Ahmadiyyah organization, confiscate its assets and demand its members and followers to disband and return to the true teachings of Islam." Instead of pointing out that such calls to ban any religious group were in contravention of the terms expressed in the constitution, the president did nothing. A few days before the April 20th march, a government-sponsored committee had agreed that the Ahmadiyah were "deviant" and recommended that the group be officially abolished. The decision was approved by the attorney-general's office.

This is not the first time that President Yudhoyono has stood by while his government acts in ways that contradict the constitution. In March 2006, one of his ministers openly condemned the Ahmadi. Maftuh Basyuni, the Indonesian Minister of Religious Affairs, had said that the Ahmadiyah sect should discontinue calling itself "Islamic" and should declare itself a new religion altogether, adding: "If they refuse to do so, they should return to Islam by renouncing their beliefs." A month later, on April 17th, the minister repeated his comments. A group calling itself the National Alliance for Freedom of Religion and Faith (AKKBB) demanded that Maftuh Basyuni retract his comments within a week or face legal consequences. The minister ignored the deadline. Basyuni was educated in Saudi Arabia and appears to share that nation's contempt for "deviant" forms of Islam. A complaint was registered with the police against Maftuh Basyuni for "insulting and slandering… the members of the Ahmadiyah community," but no action appears to have been taken against him. Basyuni remains employed as Religious Affairs Minister in Susilo Bambang Yudhoyono's government.
The Religious Affairs Minister’s comments against the Ahmadiyah had come at a particularly sensitive time. In February 2006, a month before, a community of Ahmadis had been physically attacked on the island of Lombok, adjoining Bali. Almost 200 Ahmadis had been forced to live as refugees. One said of the minister’s comments: “It’s ridiculous to suggest that we form a new religion. We are Muslims who pray five times a day, fast during Ramadan, and believe in the same Quran.” 187 Ahmadi refugees later discussed claiming asylum in Australia. This year, the Indonesian government has allowed the resentments between orthodox Muslims and those they deem to be heretical to reach dangerously tense levels. On the morning of April 28th this year, a mob of 300 individuals attacked an Ahmadi mosque in Sukabumi district in West Java. The mosque was burned to the ground. Three days earlier, a group of Muslim activists grouped outside the mosque demanding it remove any mention of Islam from its sign board. On the afternoon of Sunday June 1, 2008 , the National Alliance for Freedom of Religion and Faith (AKKBB) held a rally in Jakarta to support the right of the Ahmadiyah sect to exist, free from persecution. The date was significant — as it was a national holiday called Pancasila Day. “Pancasila”, the principle of the constitution, means literally “five principles”, which are these : 1) Belief in one supreme God The Front Pembela Islam was also holding a rally on the same day, to protest against fuel price rises. The two groups met at Monas Square, where the National Monument is situated. Here the FPI launched an attack upon the members of the National Alliance for Freedom of Religion and Faith using bamboo sticks. Seventy people were injured, with seven of these seriously wounded. 
Witnesses claimed that members of the FPI had shouted: "If you are defending Ahmadiyya, you must be killed." On the following day, President Yudhoyono awoke from his political torpor to condemn the attacks made by the Front Pembela Islam. There were calls from inside the country and abroad for the FPI to be abolished.

Habib Rizieq Shihab had no remorse about the incident at Monas Square. He appeared before reporters and on June 2nd openly told his followers to prepare for war. He said: "I have ordered all members of the Islamic Force to prepare for war against the Ahmadiyah (sect) and their supporters. We will never accept the arrest of a single member of our force before the government disbands Ahmadiyah. We will fight until our last drop of blood." He added: "We will not accept Islam to be defiled by anyone. I prefer to be in prison or even be killed than accepting Islam to be defiled."

On Wednesday last week, 58 members of the Front Pembela Islam were arrested at their headquarters in Central Jakarta. Habib Rizieq Shihab accompanied the arrested individuals as they were taken to a police station. There, he too was arrested. One individual among the FPI's leadership, called Munarman, is still on the run.

The Indonesian police have finally acted to put a stop to the FPI, a group that has been openly practicing violence and intimidation. The actions come too little and too late. The current government has vacillated while extremists have eroded people's basic rights and freedoms, and now the country is in danger of succumbing to violence.

In Part Two, I will show how the Indonesian authorities have colluded with violent forces, rather than confronting them head-on. In some instances, it appears that the government and the military have deliberately encouraged a climate of tension and potential conflict.
June 13, 2008

Exclusive: Indonesia — A Civil War Between Islamists And Moderates?: Part Two of Two

In Part One I described how the Front Pembela Islam (Islamic Defenders' Front or FPI) had threatened to make war on the minority Islamic sect called the Ahmadiyah. On June 1st, FPI members violently attacked a procession of the National Alliance for Freedom of Religion and Faith (AKKBB), who support the rights of the Ahmadiyah. Several FPI members, including leader Habib Rizieq Shihab, were arrested on Wednesday June 3rd in a police operation that involved 1,500 officers. Most FPI members were released shortly afterwards, but Habib Rizieq Shihab and seven others remain in police custody.

The Ahmadiyah (also called Ahmadi or Ahmadiyya) revere their founder Mirza Ghulam Ahmad, with many regarding him as a prophet. This places them in the category of Muslim "heretics," as traditionally Mohammed is the last prophet of Islam. The Indonesian Ahmadiyah have recently officially stated that they regard their founder not as a prophet but as a pious Muslim. Their protestations have been ignored by the Indonesian government.

The FPI's threats against the Ahmadiyah worsened this year after the nation's leading group of clerics, the Majelis Ulama Indonesia (Indonesia Ulemas Council or MUI), declared that the Ahmadis were "deviant." On July 27, 2005, the same council had denounced all liberal and pluralist interpretations of Islam and condemned the Ahmadiyah, a fatwa that led to violence. The Ahmadiyah in Sukadana in West Java were attacked. Government bodies suggested that they would ban the Ahmadiyah movement, even though such an action contravened the 1945 constitution, which is based upon a set of principles known as Pancasila.

On Monday June 9th this week, about 5,000 Muslim protesters demonstrated in front of the presidential palace in Jakarta. They called for the Ahmadiyah to be disbanded.
They also called for the seven members of the FPI in police custody, including leader and founder Habib Rizieq Shihab, to be released. The group that protested on Monday is called the Peaceful Alliance against Islam's Defilement (ADA API). The group is comprised of various Islamist factions, including Hizb ut-Tahrir and the notoriously violent Forum Betawi Rempug (Betawi Brotherhood Forum or FBR). Noer Muhammad Iskandar, who led the demonstration on Monday, told the crowd: "Muslims' demand for disbandment of the deviant Ahmadiyah sect is not a violation of religious freedom because Ahmadiyah has defiled Islamic teachings by recognizing Mirza Ghulam Ahmad as the last prophet, instead of the Prophet Muhammad." Alliances between extremists have been a key feature of recent attempts to push Indonesian society towards Islamic "orthodoxy."

The Government Restricts Ahmadiyah

On the evening of Monday June 9th this year, the Religious Affairs Minister, Maftuh Basyuni, issued a decree. Basyuni was educated in Saudi Arabia (where the Ahmadiyah are banned from visiting Mecca) and has previously urged the Ahmadiyah to abandon their claims to be Muslim. On Monday, Basyuni's decree, backed by President Susilo Bambang Yudhoyono's cabinet, told the Ahmadiyah that they must stop spreading their religion or face five-year jail terms on charges of blasphemy. The decree was co-signed by Hendarman Supanji, the Attorney General.

The MUI (Indonesia Ulemas Council) has vowed to uphold the government's decree against the Ahmadiyah sect by spying on the group and reporting its activities. It issued a statement which read: "If Ahmadiyah disobeys the decree, or continues its deviant activities, we will report it to the authorities and recommend that the president disband Ahmadiyah." The MUI has deliberately attempted to undermine religious tolerance in Indonesia.
In May 2005, the MUI encouraged the arrest of three Christian women under the Child Protection Act for inviting Muslim children to a "Happy Sunday" event run by their church. The women were jailed for three years on September 1, 2005. The MUI first issued a fatwa against the Ahmadiyah in 1981, with another in 2001. In 2001, the secretary general of the MUI was Din Syamsuddin. Since 2005, Syamsuddin has been president of the "moderate" Muhammadiyah movement, which has 30 million members. He has recently attempted to be publicly diplomatic about the Ahmadiyah. In April this year, Syamsuddin said that the Ahmadiyah should be persuaded to return to conventional Islam. Syamsuddin is a potential candidate for next year's presidential elections.

The July 2005 fatwa from the MUI that condemned deviant, pluralist and liberal forms of Islam affected not only the Ahmadiyah. Christian communities — particularly in West Java — became targets of a group calling itself the Anti-Apostasy Alliance (AGAP). This Alliance includes the Front Pembela Islam, and exploited a 1979 ruling by former president Suharto to declare churches illegal. The SKB, or Joint Ministerial Decree, declared that religious buildings should have proper permits, and was originally introduced to prevent Islamists building mosques. The SKB stated that before a religious building could be constructed, the community's neighbors should be consulted. The MUI, which annually receives $600,000 from the Indonesian government, would pressure local people to disapprove of such buildings. In the month after the July 2005 fatwa by the MUI, at least 35 churches in West Java were closed down.

In March 2006, the SKB was revised. The revision made it more difficult for minority groups such as Christians and Ahmadiyah to construct places of worship. The law stated that a place of worship must have a minimum of 90 members and receive approval from 60 neighbors of another faith.
On Wednesday last week, when 59 members of the FPI were arrested, some individuals avoided capture. The leader of the FPI wing that led the attack on June 1st remained at large. This man, called Munarman, surrendered himself to police late on Monday night this week. He claimed that his mission to outlaw the "infidel" Ahmadiyah sect had achieved its goal.

The Ahmadiyah have been in Indonesia since the 1920s. To become an Ahmadi, a vow is taken to "harm no one." What seems bizarre to Western minds is that a group which is peaceful and has not initiated violence is outlawed, while a group (the FPI) that is openly violent, and has publicly called for a war to be made on the Ahmadiyah, remains "legal."

On February 14th this year, Front Pembela Islam cleric Ahmad Sobri Lubis addressed a large crowd at a rally in Banjar, West Java. A video of his performance (in Bahasa Indonesia) can be found on the internet. The language used by Sobri Lubis is uncompromising. "Kill! Kill! Kill!" Sobri Lubis told the rally. "It is halal to spill the blood of Ahmadiyah. If any of you should kill Ahmadiyah as ordered by us, I personally, as well as the FPI, will take responsibility." Lubis is the secretary general of the Front Pembela Islam. He urged followers to kill Ahmadiyah members because they defile Islam. He said of human rights that they were cat excrement.

Also attending the rally was Muhammad Al Khathath, head of the Forum Umat Islam (FUI). Abu Bakar Bashir also spoke at the rally. Bashir was jailed for giving consent to the 2002 Bali bombing, in which 202 people died. Bashir was released on June 13, 2006 and, following an appeal, his conviction was overturned by Indonesia's Supreme Court on December 21, 2006. Bashir formerly ran the Indonesian Mujahideen Council (Majelis Mujahidin Indonesia or MMI).

Calls for the deaths of those they oppose have been a hallmark of FPI activities for most of the time that the group has been in existence.
In October 2000, two years after being founded, armed members of the FPI patrolled Sukarno-Hatta International Airport. Their spokesman, Zainuddin, said: "If we find any Israelis, we will first try to persuade them to leave, but if they refuse, we will slaughter them."

Two months later, on December 13, 2000, FPI violence led to the death of a civilian. The group was intimidating residents of an alleged red light district in Cikijing, Subang regency, in West Java and raiding entertainment centers. The vigilantes found women whom they claimed were prostitutes. They cut the women's hair short and then began attacking homes in the neighborhood. When one young man objected, he was stabbed to death. The day after the stabbing, locals burned the house of Saleh Al Habsy, the local FPI leader. On that Friday (December 15, 2000), the FPI under the leadership of Alawy Usman attacked a police station in Cikoko, 55 miles east of Jakarta, the capital. Three police officers were seriously injured. Usman later claimed that a rock had been thrown from the police station as his vigilantes passed. The rock caused one member to fall. Assuming he had been shot, the mob attacked the police station. No one was charged for the fatal stabbing in Cikijing.

The FPI's threats to kill Christians have continued even after the violence that took place on Pancasila Day (June 1st) this year. On June 4th in Tangerang in West Java, church leader Bedali Hulu was threatened with death by FPI members. The threats happened as he visited his elderly mother-in-law. The FPI has been able to act with virtual impunity. Its attacks on business premises rarely brought arrests, and when arrests have happened, prosecutions rarely follow.

Islamic vigilante groups in Indonesia are connected with political figures or parties. In 1998, the FPI was linked with a voluntary militia called PAM Swakarsa. This militia was funded by B. J. Habibie, the President of Indonesia who succeeded Suharto.
PAM Swakarsa and the FPI were used by the government and military to harass and intimidate student opponents of the government and of the military figures supporting Habibie. PAM Swakarsa was founded in 1998 by Abdul Gafur, who was then deputy-speaker in the government. Gafur still plays a role in politics, albeit a corrupt one. The FPI is still said to be linked to the military. The FPI has close links with other fanatical and quasi-paramilitary factions in Indonesia, such as the MMI, which was founded by Abu Bakar Bashir. It is linked to the Forum Umat Islam, which was founded in 1999, when it was linked to President Habibie and was used to fight against students loyal to Megawati (Sukarno's daughter).

In 2006, the FPI took on a battle that had been initiated by the MMI (Majelis Mujahideen Indonesia) — the attack upon Indonesian Playboy. In January, Avianto Nugroho announced that he had gained the rights to publish an Indonesian version of the famous magazine, though he made clear that it would contain no nudes. The MMI chairman, Irfan Awas, declared that Playboy was pornographic and that its publication in Indonesia would damage the nation's morals, even without nudity. The first issue was intended to appear in March, but was delayed. The first Indonesian edition of Playboy, edited by Erwin Arnada, appeared on April 7, 2006. FPI members protested outside the magazine's editorial offices in Jakarta. Alawi Usman, who had led the 2000 attack upon Cikoko police station, said: "If within a week they are still active and sell the magazine, we will take physical action." Tubagus Muhamad Sidik, another FPI activist, said: "Even if it had no pictures of women in it, we would still protest it because of the name… Our crew will clearly hound the editors." Indonesian radio stations buzzed with callers, many of them complaining about Playboy's lack of raunchiness.
One caller quipped: "It's sinful to read Playboy if there's no nudity!" Less than a week after initial publication, FPI members violently attacked the offices of the magazine. On Sunday February 19, 2006, about 400 FPI members had tried to storm the American Embassy over the Danish cartoons. Stones had been thrown at the embassy. On April 12, 2006, about 300 FPI members stoned the building in South Jakarta where Playboy was put together. Attempts were made to smash through the iron gates outside the building, and policemen were attacked. The violence forced Velvet Media Group, the publisher of Playboy, to vacate its offices. The group eventually moved to Bali.

The editor of Indonesian Playboy, Erwin Arnada, was taken to court, charged with indecency. When one of the clothed models from the first edition, Andhara Early, appeared in the South Jakarta courthouse in January 2006, protesters insulted her. Andhara too was charged with indecency. As she left the building she was called a prostitute who would go to Hell. Others shouted: "I hope your daughter gets raped." Andhara Early and another model, Kartika Oktavini Gunawan, were acquitted. On Thursday April 5, 2007, Erwin Arnada was also acquitted.

The Front Pembela Islam is well-known for its campaigns of violence and intimidation. In February 2006, while the Danish cartoon crisis was going on, members of the FPI and the Anti-Apostasy Movement were intimidating foreigners in Bandung, West Java. 27 activists were arrested outside the Holiday Inn in Bandung. The activists were asking foreigners what they thought of the cartoons. "If they support the cartoons, we will have no other choice but to ask them to leave Indonesia," one activist said.

The Front Pembela Islam also influences politics in Indonesia at a local and national level. At the start of 2006, numerous local administrations introduced Islamic bylaws.
In Tangerang near Jakarta, a law was introduced that stated that any woman found alone outside after 7 pm was a prostitute. A Muslim woman, Lilis Lindawati, was one of the first to become a victim of this law. In late February 2006, as she waited for a bus to take her home, the pregnant wife and mother of two children was arrested. She had just finished work as a waitress, around 8 pm. She was placed in a cell and taken to court the following day. In court she was made to empty the contents of her purse. Lipstick fell out. Judge Barmen Sinurat told her: “There is powder and lipstick in your bag. That means you’re lying to say that you are a housewife. You are guilty. You are a prostitute.” The judge fined Mrs. Lindawati $45, but as she had only her bus fare home, she was forced to spend three days in jail. Mayor Wahidin, who introduced the law, is the brother of Hassan Wiraduya, the Indonesian foreign minister. He said of Mrs. Lindawati’s case: “She could not prove she is not a prostitute. It is true when my men arrested her she was not committing adultery, but why does she put on such make-up?” Mrs. Lindawati later sued the mayor of Tangerang, but whether she won is unknown. In Depok, south of Jakarta, similar laws were being introduced. These had been brought in after the local administration had consulted with the FPI and the Indonesian Ulemas Council (MUI). Indonesian researcher Syaiful Mujani has claimed that such bylaws are unconstitutional and illegal. In South Sulawesi, laws were introduced under which female civil servants are forced to wear Islamic clothing and government employees must be able to read and write Arabic. On Saturday, April 22, 2006, a meeting of the Indonesian Youth Circle claimed that Islamists and Muslim hardliners were threatening Indonesia’s democracy. Zuly Qodir of Muhammadiyah said: “Now the sectarian groups are pressing their agenda to change Indonesia into a theocratic state.
They seek to formalize Islam as the state ideology.” At that time, a controversial act was being introduced in the nation’s parliament, called the Anti-Pornography Bill, which would have standardized aspects of the Islamist bylaws throughout the nation. This proposed law was opposed by former president Kyai Haji Abdurrahman Wahid (Gus Dur). As a result, on May 23, 2006, FPI members forced him off a stage at a rally in Purwakarta, West Java. The bill would have outlawed kissing in public — resulting in a five-year jail sentence for those found guilty. Exposing certain areas of the body, such as the stomach, thigh or hip, could have invoked a 10-year jail sentence and a $50,000 fine. On the island of Lombok, Muslim women protested against the bill. Yenny Wahid, a Muslim women’s rights campaigner, said of the bill: “This is an attempt by some people to import Arab culture to Indonesia.” When women condemned the draft Anti-Pornography Bill they were harassed by the FPI’s allies, the Betawi Brotherhood Forum (FBR). The Front Pembela Islam helped to organize mass rallies in favor of the repressive bill, which would have destroyed the tourist trade in places such as Bali, and would have discriminated against Hindus, Christians and the indigenous peoples of West Papua. The bill was “watered down” in February 2007 but appears not to have been fully introduced into law. The potential “civil war” between moderate and hardline Muslims that has been highlighted by the Ahmadiyah/FPI problems reflects a more basic struggle — the struggle between Islamism and democracy. The current government is not, it seems, prepared to alienate or antagonize the Islamist minority. As a result, it has chosen to make the lives of a peaceful group — the Ahmadiyahs — more difficult. Faced with widespread demands to ban or outlaw the Front Pembela Islam, the government of Indonesia does nothing.
Many of the leading Islamists in Indonesia — Umar Jaffar Thalib of the Laskar Jihad, Abu Bakar Bashir, spiritual leader of the terrorist group Jemaah Islamiyah, and Habib Rizieq Shahib — are of Arab descent. They do not value Indonesia’s cultural diversity, nor do they value the Pancasila principles or the 1945 constitution. There are many in the Indonesian military who appear happy to see the country’s democracy break down so they can gain power under martial rule. The current president, Susilo Bambang Yudhoyono, appears to have no desire to uphold the principles of the constitution. He will be fighting a presidential election next year. When he was elected in 2004, Yudhoyono was believed to be firm in a time of crisis. That firmness is no longer visible. He has vacillated while others in his government, including the Attorney General Hendarman Supanji, have sought to remove Indonesia’s democratic foundations. Yudhoyono has become weak in the face of Islamic activism. In 2003, he wooed women voters with his voice, producing an album of love songs entitled “My longing for you.” Such a stunt now will do him no favors in the 2009 elections. He has bowed down to Islamist pressure, and failed to uphold his nation’s democracy and constitution. He has even apparently been hoodwinked by a mountebank who claimed to have a scheme to make energy from water. While Islamist bylaws were being introduced across Indonesia, sometimes following pressure from the Front Pembela Islam, Yudhoyono’s government did nothing. According to legal expert Denny Indrayana, sharia-based bylaws can be revoked by presidential decree: “Based on Law No. 32/2004, the government can make a decision 60 days after local administrations give bylaws for review.” The recent decision to severely curtail the activities of the peaceful and law-abiding citizens in the Ahmadiyah movement has struck a sour note inside Indonesia and beyond.
Already the group has suffered persecution in West Java and on the island of Lombok. Between 2005 and 2008 at least 25 Ahmadiyah mosques were destroyed. The decree has been criticized by Islamists such as Abu Bakar Bashir because it does not completely disband the Ahmadiyah. Human Rights Watch condemned the move and urged the Indonesian government to uphold the pluralist values of the constitution. Adnan Buyung Nasution is a prominent lawyer who acts as an advisor to President Yudhoyono. He said: “I would say this is the beginning of a further war between Indonesians who want to maintain a secular state, an open democratic society, and those who want to dominate (and turn) the country into a Muslim country.” The Indonesian rights group Kontras has also condemned the decree. Usman Hamid, coordinator of Kontras, has said: “The government has not been able to protect citizens from violence, from prosecutions committed by hard-line groups. This is a serious, serious problem in Indonesia… we have been able to achieve several political reforms, political freedom. But the case of Ahmadiyah undermines the image of reform even more starkly because religious freedom has been attacked after 10 years of reform in Indonesia.” The ideological war that is being fought now in Indonesia is between two diametrically opposed systems — Islamism and democracy. So far, the Islamists appear to be winning. Adrian Morgan is a British-based writer and artist who has written for Western Resistance since its inception. He also writes for Spero News. He has previously contributed to various publications, including the Guardian and New Scientist, and is a former Fellow of the Royal Anthropological Society.
This article was published in Australian Dictionary of Biography, Volume 8, (MUP), 1981 Alfred William Foster (1886-1962), judge, was born on 28 July 1886 at Beechworth, Victoria, eldest of the three surviving children of Alfred William Foster, tobacconist and commission agent, and his wife Sarah, née Brown, daughter of a Jewish draper. In the 1850s Foster's paternal grandfather, a Yorkshire man, had left his post as a police magistrate in Tasmania to bring his family to the gold diggings, near Beechworth. Alfred was educated there and matriculated from the Beechworth College at 14. As a youth he became interested in spiritualism and rejected Christian beliefs for which he claimed his studies revealed no scientific evidence. In 1906 Foster went to Melbourne to study law and there he joined the Victorian Rationalist Association. As a member of a conservative debating society he was chosen to debate socialism against a team from the Victorian Socialist Party which included John Curtin. Not only was Foster badly beaten in the debate but he was converted to socialism and subsequently joined the V.S.P. Foster signed the roll of the counsel of the Bar in June 1910 and spent the next few years as a struggling barrister. He became politically prominent with the outbreak of war in August 1914 as an outspoken opponent of Australia's involvement. When conscription became an issue Foster wrote articles and pamphlets, lectured, addressed street-corner rallies and defended fellow anti-conscriptionists charged with offences under the War Precautions Act. In 1917 he too faced charges for a fiery anti-conscription speech but escaped conviction. Much of his opposition to the Hughes government during the war years was directed at 'unjust and suppressive' censorship provisions. During the war Foster joined the Australian Labor Party. 
In 1917, following the split in the party over conscription, he was elected to the Victorian central executive; that year he stood unsuccessfully as the Labor candidate in the strongly Nationalist Federal electorate of Balaclava. In 1918 Foster founded the Y Club for 'sociable socialists'. Through this he became involved in campaigning for the One Big Union until the movement lost impetus. Seeking a broader base for his political ambitions, he joined the Food Preservers' Union and became its president and delegate at the annual conferences of the Victorian branch of the A.L.P. and a member of the Trades Hall Council. In 1922 and 1925 he stood as the endorsed Federal Labor candidate for Fawkner but was defeated. Advancement in Foster's legal career came with his appointment as union advocate to the 1920 royal commission on the basic wage. He demonstrated that the basic wage was quite inadequate to meet the cost of living, but the commission's recommended increase was so high that it was not implemented. Another valuable brief came his way in 1924 with his appointment as counsel assisting the royal commission inquiring into the 1923 police strike in Melbourne. Then in 1926 he was appointed as counsel to represent the Labor governments of New South Wales and Queensland in the Commonwealth Court of Conciliation and Arbitration's main hours case which reduced standard hours from forty-eight to forty-four a week in the engineering industry. When Foster became a judge of the County Court of Victoria in 1927 he had to give up all political party positions, including that of senior vice-president of the Victorian branch of the A.L.P. His work as a judge spanned the Depression years and many of the defendants in his court were victims of economic hardship. Foster enforced the laws but at the same time canvassed the need to reform many outdated provisions, calling also for medical assessment and treatment for sex offenders. 
He provoked a public outcry in 1934 when he told a boy witness, 'There is no hell, sonny'. Throughout the 1930s Foster worked for peace with an increasing sense of urgency. As president of the Victorian branch of the League of Nations Union he sought to publicize the need for support of the league as a force for peace. When war came he became Victorian president of the Sheepskins for Russia appeal and joined the Australia-Soviet Friendship League. In October 1942 the Curtin government set up a wartime Women's Employment Board and appointed Foster to head it. For the next two years the board set the wages, hours and conditions for more than 70,000 women, in most cases awarding 90 per cent of the male wage rate instead of the standard 54 per cent. Foster was transferred to the Commonwealth Arbitration Court in October 1944. In 1945 he conducted an inquiry into the troubled stevedoring industry and recommended changes which were incorporated into the Stevedoring Industry Act of 1947. His first major case was the standard hours hearing which spanned the twenty-two months to September 1947. During that time he became the senior puisne judge with the task of writing and delivering the judgment which awarded the 40-hour week. In assessing his arbitration work he always remained proudest of this decision. In the 1950 basic wage case he was dominant in awarding a huge £1 increase to the weekly wage of £7 to enable workers to share in the post-war prosperity. His participation in the 1959 basic wage case was notable for his suggestion that the legalistic 'burden of proof' was inappropriate in arbitration cases. At first the unions regarded Foster as their champion. But in 1949, during the crippling seven-week national coal strike, he gaoled eight officials and imposed heavy fines on their unions for their defiance of the Chifley government's emergency legislation to stop union funds from being used to assist the strikers.
In 1956 the arbitration system was overhauled and the functions of the court were split between the newly established Commonwealth Industrial Court and the Commonwealth Conciliation and Arbitration Commission. Foster, although the senior remaining arbitration court judge, was not appointed to head either body but became senior deputy president of the commission. He remained in charge of the maritime industry to which he had been assigned in 1952. His direct approach and availability at all times resulted in a speedier turn around of ships and a reduction of time lost through disputes. He instituted a 'hard-lying' allowance to encourage shipowners to modernize or replace dirty, old and obsolete vessels, and this was virtually achieved by 1958. He conducted an inquiry into seamen's conditions and in 1955 made a new seamen's award, the first since 1935. The award pleased seamen but shipowners complained of a double payment effect in the leave provisions. When in 1960 Foster made a new award to correct this anomaly, seamen were indignant. Demonstrations against him marred the last years of his arbitration work and obscured the substantial gains made by seamen during his administration of their industry. Foster's interests included tennis and golf, at which he excelled, the repairing of old and valuable clocks, carpentry—he made his own golf clubs—gardening and the study of mathematics. On 12 January 1916 he had married Beatrice May Warden, a fellow member of the Victorian Socialist Party. A son was born before her death from cancer in 1925. On 25 January 1927 Foster married Ella Wilhelmina Jones. He died at his Sandringham home on 26 November 1962, survived by his wife and their two sons and daughter. To the end of his life he described himself as a socialist, a pacifist and a rationalist. In accordance with his wishes there was no religious service before his cremation. 
Instead his friend, Sir John Barry, spoke of his tenacity, courage and unshakable integrity in the pursuit of truth. Constance Larmour, 'Foster, Alfred William (1886–1962)', Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/foster-alfred-william-6217/text10695, accessed 26 May 2013.
(c) Pocumtuck Valley Memorial Association, Deerfield MA. All rights reserved.
Women in the United States had been actively seeking the vote since the 1848 Women's Rights Convention in Seneca Falls, New York. Women were given the vote in Colorado in 1893, and in both Utah and Idaho in 1896. That same year, a method for showing what effect women's votes would have on the presidential election was proposed by the Postum Cereal Food Company. Women would write the name of their candidate on a postcard, have their name and address verified by either a banker or grocer, and send the postcard to the Postum Company in Battle Creek, Michigan. The votes would be tallied weekly and printed in newspapers across the country, with the final results published on November 7.
Solstices occur twice a year, when the tilt of the Earth's axis is most oriented toward or away from the Sun, causing the Sun to reach its northernmost and southernmost extremes. The name is derived from the Latin sol (sun) and sistere (to stand still), because at the solstices the Sun stands still in declination; that is, its apparent movement north or south comes to a standstill. The term solstice can also be used in a wider sense, as the date (day) on which such a passage happens. The solstices, together with the equinoxes, are connected with the seasons. In some languages they are considered to start or separate the seasons; in others they are considered to be centre points (in English, in the Northern hemisphere, for example, the period around the June solstice is known as midsummer, and Midsummer's Day is 24 June, about three days after the solstice itself). Similarly 25 December is the start of the Christmas celebration, which was a pagan festival in pre-Christian times, and is the day the sun begins to return to the northern hemisphere. The two solstices can be distinguished by different pairs of names, depending on which feature one wants to stress.
- Summer solstice and winter solstice are the most common names. However, these can be ambiguous, since the seasons of the northern hemisphere and southern hemisphere are opposites, and the summer solstice of one hemisphere is the winter solstice of the other. These are also known as the 'longest' or 'shortest' days of the year.
- Northern solstice and southern solstice indicate the direction of the sun's apparent movement. The northern solstice is in June on Earth, when the sun is directly over the Tropic of Cancer in the Northern Hemisphere, and the southern solstice is in December, when the sun is directly over the Tropic of Capricorn in the Southern Hemisphere.
- June solstice and December solstice are an alternative to the more common "summer" and "winter" terms, but without the ambiguity as to which hemisphere is the context. They are still not universal, however, as not all people use a solar-based calendar where the solstices occur every year in the same month (as they do not in the Islamic calendar and Hebrew calendar, for example), and the names are also not useful for other planets (Mars, for example), even though these planets do have seasons.
- First point of Cancer and first point of Capricorn. One disadvantage of these names is that, due to the precession of the equinoxes, the astrological signs where these solstices are located no longer correspond with the actual constellations.
- Taurus solstice and Sagittarius solstice are names that indicate in which constellations the two solstices are currently located. These terms are not widely used, though, and until December 1989 the first solstice was in Gemini, according to official IAU boundaries.
- The Latin names hibernal solstice (winter) and aestival solstice (summer) are sometimes used.
Solstice terms in East Asia
The traditional East Asian calendars divide a year into 24 solar terms (節氣). Xiàzhì (pīnyīn) or Geshi (rōmaji) is the 10th solar term, and marks the summer solstice. It begins when the Sun reaches the celestial longitude of 90° (around June 21) and ends when the Sun reaches the longitude of 105° (around July 7). Xiàzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 90°. Dōngzhì (pīnyīn) or Tōji (rōmaji) is the 22nd solar term, and marks the winter solstice. It begins when the Sun reaches the celestial longitude of 270° (around December 22) and ends when the Sun reaches the longitude of 285° (around January 5). Dōngzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 270°.
The solstices (as well as the equinoxes) mark the middle of the seasons in East Asian calendars. Here, the Chinese character 至 means "extreme", so the terms for the solstices directly signify the summits of summer and winter, a linkage that may not be immediately obvious in Western languages.
Heliocentric view of the seasons
The cause of the seasons is that the Earth's axis of rotation is not perpendicular to its orbital plane (the flat plane made through the center of mass (barycenter) of the solar system (near or within the Sun) and the successive locations of Earth during the year), but currently makes an angle of about 23.44° (called the "obliquity of the ecliptic"), and that the axis keeps its orientation with respect to inertial space. As a consequence, for half the year (from around 20 March to 22 September) the northern hemisphere tips toward the Sun, with the maximum around 21 June, while for the other half year the southern hemisphere has this distinction, with the maximum around 21 December. The two moments when the inclination of Earth's rotational axis has maximum effect are the solstices. The table at the top of the article gives the instances of equinoxes and solstices over several years. Refer to the equinox article for some remarks. At the northern solstice the subsolar point reaches 23.44° north, known as the Tropic of Cancer. Likewise, at the southern solstice the same thing happens for latitude 23.44° south, known as the Tropic of Capricorn. The subsolar point will cross every latitude between these two extremes exactly twice per year. Also during the northern solstice, places situated at latitude 66.56° north, known as the Arctic Circle, will see the Sun just on the horizon at midnight, and all places north of it will see the Sun above the horizon for 24 hours. That is the midnight sun or midsummer-night sun or polar day.
On the other hand, places at latitude 66.56° south, known as the Antarctic Circle, will see the Sun just on the horizon at midday, and all places south of it will not see the Sun above the horizon at any time of day. That is the polar night. During the southern solstice the effects on both hemispheres are just the opposite. At temperate latitudes, during summer the Sun remains longer and higher above the horizon, while in winter it remains shorter and lower. This is the cause of summer heat and winter cold. The seasons are not caused by the varying distance of Earth from the Sun due to the orbital eccentricity of the Earth's orbit. This variation does make a contribution, but it is small compared with the effects of exposure caused by Earth's tilt. Currently the Earth reaches perihelion at the beginning of January, during the northern winter and the southern summer. The Sun, being closer to Earth and therefore hotter, does not cause the whole planet to enter summer. Although it is true that the northern winter is somewhat warmer than the southern winter, the placement of the continents, ice-covered Antarctica in particular, may also be an important factor. In the same way, during aphelion at the beginning of July, the Sun is farther away, but that still leaves the northern summer and southern winter as they are, with only minor effects. Due to Milankovitch cycles, the Earth's axial tilt and orbital eccentricity will change over thousands of years. Thus in 10,000 years one would find that Earth's northern winter occurs at aphelion and its northern summer at perihelion. The severity of seasonal change — the average temperature difference between summer and winter in a location — will also change over time, because the Earth's axial tilt fluctuates between 22.1 and 24.5 degrees.
Geocentric view of the seasons
The explanation given in the previous section is useful for observers in outer space.
They would see how the Earth revolves around the Sun and how the distribution of sunlight on the planet would change over the year. To observers on Earth, it is also useful to see how the Sun seems to revolve around them. These pictures show such a perspective as follows. They show the day arcs of the Sun, the paths the Sun tracks along the celestial dome in its diurnal movement. The pictures show this for every hour on both solstice days. The longer arc is always the summer track and the shorter one the winter track. The two tracks are at a distance of 46.88° (2 × 23.44°) from each other. In addition, some 'ghost' suns are indicated below the horizon, as much as 18° down. The Sun in this area causes twilight. The pictures can be used for both the northern and southern hemispheres. The observer is supposed to sit near the tree on the island in the middle of the ocean. The green arrows give the cardinal directions.
- On the northern hemisphere the north is to the left, the Sun rises in the east (far arrow), culminates in the south (to the right) while moving to the right, and sets in the west (near arrow). Both rise and set positions are displaced towards the north in summer, and towards the south for the winter track.
- On the southern hemisphere the south is to the left, the Sun rises in the east (near arrow), culminates in the north (to the right) while moving to the left, and sets in the west (far arrow). Both rise and set positions are displaced towards the south in summer, and towards the north for the winter track.
The following special cases are depicted.
- On the equator the Sun is not overhead every day, as some people think. In fact that happens only on two days of the year, the equinoxes. The solstices are the dates on which the Sun stays farthest from the zenith, reaching an altitude of only 66.56°, either to the north or the south.
The only thing special about the equator is that all days of the year, solstices included, have roughly the same length of about 12 hours, so it makes no sense to talk about summer and winter. Instead, tropical areas often have wet and dry seasons.
- The day arcs at 20° latitude. The Sun culminates at 46.56° altitude in winter and 93.44° altitude in summer. In this case an angle larger than 90° means that the culmination takes place at an altitude of 86.56° in the opposite cardinal direction. For example, in the southern hemisphere the Sun remains in the north during winter, but can reach over the zenith to the south in midsummer. Summer days are longer than winter days, but the difference is no more than two or three hours. The daily path of the Sun is steep at the horizon the whole year round, resulting in a twilight of only about one hour.
- The day arcs at 50° latitude. The winter Sun does not rise more than 16.56° above the horizon at midday, while the summer Sun reaches 63.44° above the same horizon direction. The difference in the length of the day between summer and winter is striking, as is the difference in the direction of sunrise and sunset. Also note the different steepness of the daily path of the Sun above the horizon in summer and winter: it is much shallower in winter. Therefore not only does the Sun not reach as high, it also seems to be in no hurry to do so. But conversely this means that in summer the Sun is in no hurry to dip deeply below the horizon at night. At this latitude, at midnight the summer Sun is only 16.56° below the horizon, which means that astronomical twilight continues the whole night. This phenomenon is known as the grey nights, nights when it does not get dark enough for astronomers to do their observations. Above 60° latitude the Sun would be even closer to the horizon, only 6.56° away from it. Then civil twilight continues the whole night. This phenomenon is known as the white nights.
And above 66° latitude, of course, one would get the midnight sun.
- The day arcs at 70° latitude. At local noon the winter Sun culminates at −3.44°, and the summer Sun at 43.44°. Said another way, during the winter the Sun does not rise above the horizon; it is the polar night. There will still be a strong twilight, though. At local midnight the summer Sun culminates at 3.44°; said another way, it does not set; it is the polar day.
- The day arcs at the pole. The Sun is always 23.44° above or below the horizon, depending on whether it is the summer or winter solstice. In the latter case, that is enough for there not to be any twilight at all. All directions are north at the South Pole and south at the North Pole. There is also no south at the South Pole, no north at the North Pole, and neither east nor west is discernible at either pole. Due to atmospheric refraction, the Sun may already appear above the horizon when the real, geometric Sun is still below it.
Many cultures celebrate various combinations of the winter and summer solstices, the equinoxes, and the midpoints between them, leading to various holidays arising around these events. For the December solstice, Christmas is the most popular holiday to have arisen. In addition, Yalda (see winter solstice for more) is also celebrated around this time. For the June solstice, Catholic and Nordic Protestant cultures celebrate the feast of St. John from June 23 to June 24 (see St. John's Eve, Ivan Kupala Day), while Neopagans observe Midsummer. For the vernal (spring) equinox, several spring-time festivals are celebrated, such as the observance of Passover in Judaism. The autumnal equinox has also given rise to various holidays, such as the Jewish holiday of Sukkot. At the midpoints between these four solar events, cross-quarter days are celebrated. In many cultures the solstices and equinoxes traditionally determine the midpoint of the seasons, which can be seen in the celebrations called midsummer and midwinter.
Along this vein, the Japanese celebrate the start of each season with an occurrence known as Setsubun. The cumulative cooling and warming that result from the tilt of the planet become most pronounced after the solstices. In the Hindu calendar, two sidereal solstices are named Uttarayana and Dakshinayana. The former occurs around January 14 each year, while the latter occurs around July 14 each year. These mark the movement of the Sun along a sidereally fixed zodiac (precession is ignored) into Mesha, a zodiacal sign which corresponded with Aries about AD 285, and into Tula, the opposite zodiacal sign, which corresponded with Libra about AD 285.
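The noon altitudes quoted in the day-arc examples (16.56° and 63.44° at 50° latitude, −3.44° at 70° in winter) follow from two simple relations: the Sun's declination swings between about ±23.44° over the year, and the Sun's altitude at local solar noon is 90° minus the absolute difference between latitude and declination. A minimal sketch in Python; the sinusoidal declination model pinned to the March equinox (around day 80) is a rough approximation of my own, not taken from the article:

```python
import math

OBLIQUITY = 23.44  # Earth's axial tilt in degrees ("obliquity of the ecliptic")

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination (degrees) for a given day of the year.

    A simple sinusoid pinned to the March equinox (~day 80); the real
    declination differs by up to a degree or two because Earth's orbit
    is elliptical.
    """
    return OBLIQUITY * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)

def noon_altitude(latitude_deg: float, declination_deg: float) -> float:
    """Altitude of the Sun above the horizon at local solar noon (degrees).

    Values above 90 mean the Sun culminates on the opposite side of the
    zenith; values below 0 mean the Sun never rises (polar night).
    """
    return 90.0 - abs(latitude_deg - declination_deg)

# Reproduce the article's figures for 50 degrees north:
print(noon_altitude(50, -OBLIQUITY))  # winter solstice: about 16.56 deg
print(noon_altitude(50, +OBLIQUITY))  # summer solstice: about 63.44 deg
# And 70 degrees north in winter: the Sun stays below the horizon.
print(noon_altitude(70, -OBLIQUITY))  # about -3.44 deg (polar night)
```

The same formula also reproduces the 20° latitude case: a summer value of 93.44° minus 90° leaves the 86.56° culmination on the opposite side of the zenith described above.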
During the Baroque era, Domenico Scarlatti (1685-1757) was one of the most important and influential keyboard composers of the day. His works for harpsichord and organ feature typically complex interweaving melodies which continue to present technical challenges, including crossing hands, to performers who attempt his pieces. Scarlatti's most lasting contribution may be the influential forms he pioneered, including an Italian style of sonata that features two repeating sections, each complete with a change of key and distinct, recapitulating melodies. Through his travels to Portugal and Spain, Scarlatti added distinct influences to his compositions, including that of Spanish guitar music. It is possible that some of Scarlatti's innovations were inspired by his father Alessandro, an important composer in his own right.
What are omega-3 fatty acids? Omega-3 fatty acids are unsaturated fats that benefit the cardiovascular system. Although the body needs these fats, it cannot make them on its own, so we must get them from food and supplements. Fish is the main source of the heart disease-fighting omega-3 fats eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), but some plant-based foods also contain omega-3 in the form of alpha-linolenic acid (ALA), which also helps heart health. Flaxseed (flax) is the richest source of ALA and lignans in the North American diet and is an excellent source of fiber, high quality protein and potassium. Lignans are phytoestrogens and antioxidants that have been shown to help prevent certain diseases such as heart disease and cancer. Flaxseed contains 75 to 800 times more lignans than other plant foods.
How can I add flaxseed to my daily diet? Flaxseed can be added to almost any food. It has a nutty flavor that goes well in many meals. The seeds are reddish-brown or golden-yellow in color. The outer hull of the seed is very difficult to digest, so you should grind or mill the whole flaxseed to get the most nutrition from it. You can grind the seeds in a coffee grinder, blender or food processor, or you can buy ground or milled flaxseed at the store.
- Use flaxseed instead of fat in homemade baked goods. Substitute 1½ cups of ground flaxseed for ½ cup of butter or margarine. Caution: Using flaxseed can cause baked goods to brown faster.
- Use flaxseed instead of an egg when baking. Replace one egg with 1 Tablespoon of ground flaxseed and 3 Tablespoons of water.
- Sprinkle ground flaxseed on yogurt, cereal, soup or salad.
- Add ground flaxseed to shakes and smoothies.
How much flaxseed should I eat? Eating 2 Tablespoons of ground flaxseed per day is considered a healthy daily amount. Because flaxseed is high in fat, the ground form can become rancid or spoil quickly.
You can store ground or milled flaxseed in the refrigerator (35º–38º F) for up to 3 months. Whole flaxseed can be stored at room temperature for up to one year.
Chia seeds are another source of ALA. They are also a good source of fiber, protein, calcium, magnesium and phosphorus. For hundreds of years, this tiny seed was used by the Aztecs as their main energy source. This unprocessed, nutty-tasting seed can be made into a gel and added to foods as well as used as a substitute for whole grains.
How can I add Chia seeds to my daily diet? Unlike flaxseed, Chia seeds do not need to be ground for your body to absorb the nutrients. Some easy ways to add Chia seeds to your diet are:
- Sprinkle Chia seeds on yogurt, cereal or salad.
- Add Chia seeds to shakes or smoothies.
- Add Chia seeds to your favorite quick-bread batter.
- To boost the nutrition in homemade muffins or pancakes, substitute Chia powder for one-quarter of the flour called for in the recipe. You can buy Chia powder in stores or you can make your own by grinding the seeds in a coffee grinder.
- Chia seeds can absorb 10 times their own weight in water. So, they can also be made into gel to thicken puddings, sauces, fruit spreads or dips. You can make Chia gel by adding one-third cup of Chia seeds to 2 cups of water. Mix well for 3 to 5 minutes to avoid clumping. Store the gel in the refrigerator in a sealed jar.
- For vegan baking, replace one egg with ¼ cup of Chia gel.
How much Chia should I eat? Eating 1 to 2 Tablespoons of Chia seeds a day is considered a healthy daily amount. If you decide to eat flaxseed and Chia seeds every day, it is best to slowly add them to your diet until you reach the healthy daily amounts. Both seeds are high in fiber, and eating too much too quickly can cause stomach discomfort.
Other plant-based sources of Omega-3 fatty acids: Walnuts, soy foods, pumpkin seeds, and canola (rapeseed) oil are additional sources of Omega-3 fats.
These foods contain a lower concentration of ALA than flax and Chia seeds, but they can still help boost your overall ALA intake. In addition, these foods contain disease-fighting vitamins, minerals, antioxidants and dietary fiber, which are all part of a heart-healthy diet. For More Information Nutrition Program Preventative Cardiology and Rehabilitation Appointments: 216.444.9353 or 800.223.2273 EXT.49353 Hearing Impaired (TTY) Assistance: 216.444.0261 Reviewed: 09/11 #332171
Copyright © 1965 by Roger Lynds This photo was taken by Roger Lynds at Kitt Peak, Arizona, on the morning of 1965 October 29. It was a 4-minute exposure. The two stars to the left of the comet's head are Delta and Eta Corvi (magnitude 3.0 and 4.3, respectively), while the star a little ways up and just right of the tail is Gamma Corvi (magnitude 2.6). The tail extends into Crater in this picture, with the length being about 17°. (Special thanks to Jeannette Barnes (NOAO/Tucson) for relaying my request to use this picture to Roger Lynds). Kaoru Ikeya and Tsutomu Seki independently discovered this comet on 1965 September 18.8, within about 15 minutes of each other. It was then just west of Alpha Hydrae. The magnitude was estimated as 8, and the comet was described as diffuse, with a condensation. The first confirmation was obtained on September 19.79, when the Smithsonian Astrophysical Observatory station at Woomera, Australia, obtained a photograph showing the comet at magnitude 8. Comet quickly recognized as a sungrazer and brightened rapidly. It reached magnitude 5.5 by October 1 and magnitude 2 by October 15. On the latter date the tail was 5 degrees long. The comet was closest to the Sun (perihelion) on October 21 (0.008 AU). Became visible in broad daylight on October 21 to anyone who blocked the sun with their hand. Maximum magnitude may have been around -10 or -11. Japanese astronomers using a coronagraph on Mount Norikura said the comet was seen to disrupt into three pieces just 30 minutes prior to perihelion. Copyright © 1965 by F. Moriyama and T. Hirayama This photo was taken by F. Moriyama and T. Hirayama (Tokyo Astronomical Observatory, Mitaka, Japan) at the Norikura Corona Station on 1965 October 21. They used a 12-cm coronagraph and Fuji Panchroprocess plates behind a Mazda VG1B color filter. This was a 4-second exposure. Comet's tail was longest at the end of October and early November when observers reported lengths of 20 to 25 degrees. 
Two definite nuclei were photographed on November 4, with a third suspected. Comet last definitely detected on January 14, 1966, although images were suspected on Baker-Nunn plates exposed on February 12. The orbital period is 880 years. There is a chance this was a return of the great comet of 1106, which was seen in broad daylight in Europe.
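As a back-of-envelope cross-check (my addition, not from the source page): Kepler's third law, a³ = P² in solar units, ties the 880-year period quoted above to the size of the orbit, and combined with the 0.008 AU perihelion distance it gives the aphelion:

```python
# Kepler's third law in solar units: a^3 = P^2 (a in AU, P in years).

def semi_major_axis_au(period_years):
    """Semi-major axis (AU) for a body orbiting the Sun."""
    return period_years ** (2.0 / 3.0)

period = 880.0      # orbital period in years, as quoted above
perihelion = 0.008  # perihelion distance in AU, as quoted above

a = semi_major_axis_au(period)   # roughly 92 AU
aphelion = 2.0 * a - perihelion  # roughly 183 AU, far beyond the planets
```

The tiny perihelion against a ~183 AU aphelion is what makes a sungrazer: the orbit is an extremely elongated ellipse that skims the Sun once per revolution.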
Written for the KidsKnowIt Network by: Fossils are the preserved remains of plants or animals. For such remains to be considered fossils, scientists have decided they have to be over 10,000 years old. There are two main types of fossils, body fossils and trace fossils. Body fossils are the preserved remains of a plant or animal's body. Trace fossils are the remains of the activity of an animal, such as preserved trackways, footprints, fossilized egg shells, and nests. When asked what a fossil is, most people think of petrified bones or petrified wood. Permineralization is one process by which these form. For bone to be permineralized, the body must first be quickly buried. Second, ground water fills up all the empty spaces in the body; even the cells get filled with water. Third, the water slowly dissolves the organic material and leaves minerals behind. By the time permineralization is done, what was once bone is now a rock in the shape of a bone. Unlike what you see in cartoons, dogs wouldn't be interested in these bones. When an animal or plant dies, it may fall into mud or soft sand and make an impression or mark in the dirt. The body is then covered by another layer of mud or sand. Over time, the body falls apart and is dissolved. The mud or sand can harden into rock, preserving the impression of the body and leaving an animal- or plant-shaped hole in the rock. This hole is called a mold fossil. If the mold becomes filled over time with other minerals, the rock is called a cast fossil. A simple experiment can show you how this works. Take some clay and press a seashell or some other object into the clay. Pull the seashell out of the clay and you will see a detailed impression of your seashell in the clay. If, over time, the clay hardens into rock, the result would be a fossil mold. But really, who has millions of years to wait to make their own fossil? Here's the quick way. Pour plaster of Paris, dental stone, or other plaster into the mold.
Wait for it to harden and you have just made your own cast fossil. Another type of fossil is a resin fossil. Resin is sometimes called amber. Plants, mostly trees, secrete sticky stuff called resin. Sometimes insects, other small animals, or bits of plants get stuck in the sticky resin. The resin hardens over time and is preserved in rock, making a fossil. Quetzalcoatlus was one of the largest flying animals to have ever inhabited the Earth. Its wingspan was over 40 feet (12 m). Quetzalcoatlus' neck alone was 10 feet (3 m) long. This huge flying reptile is believed to have been a scavenger, picking at the carcasses of dead dinosaurs on the ground.
Breton National Wildlife Refuge, LA
Fish and Wildlife Service
The Breton National Wildlife Refuge was established in 1904 and is the second oldest refuge in the National Wildlife Refuge System. The refuge comprises a series of barrier islands including Breton Island and all of the Chandeleur Islands in St. Bernard Parish, Louisiana. It was formed from the remnants of the Mississippi River's former St. Bernard Delta, which was active 2000 years ago. The barrier islands' sizes and shapes are constantly altered by tropical storms, wind, and tidal action. The area above mean high tide is approximately 6,923 acres. In 1975, the refuge was established as a National Wilderness Area. In 1990, an agreement between the Louisiana Department of Wildlife and Fisheries and the U.S. Fish and Wildlife Service authorized the Service's management rights and law enforcement authority on additional State of Louisiana owned lands. The agreement increased the refuge by an additional 11,350 acres for a total of 18,273 acres. The objectives for the refuge are to provide sanctuary for nesting and wintering seabirds, protect and preserve the wilderness character of the islands, and provide sandy beach habitat for a variety of wildlife species. Twenty-three species of seabirds and shorebirds frequently use the refuge, and thirteen species nest on the various islands. The most abundant nesters are brown pelicans, laughing gulls, and royal, caspian, and sandwich terns. Over ten thousand brown pelicans have been recorded nesting on the refuge. Endangered and threatened species such as the least tern, brown pelican, and piping plover are common to the refuge. Waterfowl winter near the refuge islands and benefit from adjacent shallows, marshes, and sounds for feeding and protection during inclement weather. Redheads and lesser scaup account for most waterfowl numbers.
61389 Hwy. 434
Directions: Please call the refuge office for directions to the refuge. The only access to the refuge is by boat.
Get out the microscope, because we’re going through this poem line-by-line. Before it cloud, Christ, lord, and sour with sinning, - Our speaker is urging Christ to get hold of this bounty and lushness before it spoils, before it goes bad. The way it goes bad, we learn, is through sin. - (That's what happened with Eden. Forbidden fruit, original sin…) - The syntax is getting kind of jumbled. This seems to reflect an emotional turmoil. - All of a sudden a sense of anguish has entered the poem. We say anguish because there's a strong sense of urgency ("– Have, get, before it," from line 11) and pain (the hard c-sounds, and the way he keeps repeating and rephrasing – "before it cloy, / Before it cloud, Christ, lord, and sour with sinning"). - When we get to "Christ, lord," not only does it become quite clear that our speaker is Christian, but the poem also begins to sound more directly like a prayer. Innocent mind and Mayday in girl and boy, - It's the innocent minds that are threatened with the possibility of cloying and clouding and souring with sinning. So if we put it together with the line before, our speaker is asking Christ to save the innocent from sin. - The way the line before this one works, it seems to suggest spoilage in both directions: both of the Eden-like natural world and of the innocent children's minds. - Should we understand that the sweetness is what leads the innocent mind to sin? - It sounds pretty inevitable, kind of like the change of seasons. - There's definitely a lot of complicated emotion going on here. There's that sense of urgency and pleading, combined with a feeling that it's going to happen anyway – the kids will grow up and they won't be innocent anymore. They'll be jaded, spike their hair, and listen to punk rock. - We had those lambs racing into the poem with their innocence, and now we have that inevitable loss of innocence. Most, O maid's child, thy choice and worthy the winning. 
- "Maid's child" is probably referring to Jesus, born of the Virgin Mary (one of the meanings of "maid" is virgin). - So here's one way you could read line 14: "Jesus, it's up to you – won't you win over these innocent children, and save them from sin? It would be a very worthy thing to do, to win them to your goodness (and keep them free from sin)." - The syntax gets pretty confusing again. - The poem has definitely turned into a prayer. Though, honestly, it could have been one all along, just one that changes gears a couple times, from awe and praise, to anguish, despair, pleading, and finally… - This last line seems to acknowledge, as most prayers do, that the power (the "choice") is in the hands of God. - "Choice" definitely brings up a lot of questions and possibilities. What does it mean that it is God's choice to have or to allow sin? - The word "choice" might also bring up the idea of free will, and maybe that's the answer to the question we just asked: in order to allow free will, God has to allow sin. - Do there seem to be a lot of subtleties and multiple ways to read each line? A lot of things hinted at and no clear answer for how to understand everything? Yes, that's about right.
Photovoltaic Resources and Technologies Selecting, Implementing, and Funding Photovoltaic Systems in Federal Facilities: Learn how to select, implement, and fund a photovoltaic system by taking this FEMP eTraining course. This page provides a brief overview of photovoltaic (PV) technologies supplemented by specific information to apply PV within the Federal sector. Photovoltaic cells convert sunlight into electricity. Systems typically include a PV module or array made of individual PV cells installed on or near a building or other structure. A power inverter converts the direct current (DC) electricity produced by the PV cells to alternating current (AC) electricity. PV systems can be found across the globe, from the most isolated locations to the heart of the largest cities. A typical PV cell converts approximately 10% of the solar energy striking its surface into usable electricity. Visit the Department of Energy's (DOE) Solar Technologies Program for in-depth information about solar energy basics and technologies. PV arrays are viable sources of renewable energy in the Federal sector. Before conducting an assessment or deploying PV systems, Federal agencies must evaluate a series of questions. What are my energy goals? Energy goals range from meeting regulatory requirements to powering remote applications to increasing energy security. PV systems, if applied properly, are suitable for each. Regulatory Requirements: Electricity produced by solar energy falls under the Energy Policy Act (EPAct) of 2005 definition of renewable energy and can be used to meet EPAct 2005 renewable energy requirements. Remote Power: PV arrays can stand alone to provide intermittent power for remote applications, or be coupled with wind turbines, battery storage systems, backup generators, or other energy resources to deliver around-the-clock power for remote applications. Energy Security: Solar energy is natural and renewable. The energy source is found in abundance across the U.S.
and can be leveraged to increase energy continuity. What kind of energy do I use? Federal agencies must understand what type of energy is used before determining if photovoltaics are applicable. PV systems generate electricity and are not appropriate for mechanical or thermal power. When do I need the energy? Although solar resources can be quite predictable, PV cannot be guaranteed to generate power where and when it is demanded like a fossil fuel generator. For example, PV arrays must be connected to energy storage or backup equipment to provide electricity when sunlight is not available (e.g., evenings). How much power do I use/need to produce? The size and nature of an electric load must be well understood to properly select a packaged PV system or to design and specify a custom system. For any system, the following must be known: - Maximum power needed at any one time (watts) - Maximum daily energy requirements (kilowatt-hours) - Availability of solar resources - Cost of power alternatives Typical photovoltaic systems deployed by the Federal Government range from several watts to 1.1 megawatts. Where am I located? For a broad overview of your facility's solar resources, the National Renewable Energy Laboratory (NREL) provides solar energy resource maps of the U.S. Before initiating a project, solar resources in your area must be measured and verified. Resource maps are a good start, but resources vary at a micro level. It is important to consult an expert for a professional evaluation before implementing energy projects. Is there rooftop or open land available? Photovoltaic arrays are typically installed on building rooftops or adjacent to where the electricity is needed. If these areas are not available, PV may not be the best solution. It is important to consult an expert to determine whether PV arrays are a good fit for your Federal facility. What is my budget?
The installation cost of PV varies greatly depending on the application, system size, and whether it is prepackaged and preassembled or separate components that need to be integrated into a structure on site. What resources are available for operations and maintenance? Photovoltaic systems require very little maintenance. Most small PV systems take no more than 2 to 4 hours per year to maintain. A visual inspection of the system and simple battery maintenance should occur every 3 to 6 months. Visit the project planning section for detailed information on planning and deploying renewable energy projects. Federal case studies are available to provide specific examples of viable solar energy projects. Detailed information on solar energy resources and technologies is available through: DOE Solar Energy Technologies Program: Program providing information and resources on solar energy resources and technologies. American Solar Energy Society: Leading association of solar professional and grassroots advocates. Solar Electric Power Association: Compilation of more than 560 electric utilities, solar companies, and other industry stakeholders to form a central resource for unbiased and actionable solar intelligence. Solar Energy Industries Association: Organization working to expand the use of solar technologies, strengthen research and development, remove market barriers, and improve education and outreach for solar. Procuring Solar Energy: A Guide for Federal Facility Decision Makers: Guide to help Federal agencies turn their interest in solar energy projects into success installations through a concise, easy-to-understand, step-by-step process. 205 kW Photovoltaic (PV) System Installed on the U.S. Department of Energy's Forrestal Building: Fact sheet on the installation of a photovoltaic system providing renewable energy for DOE and demonstrating leadership for meeting Federal goals in the use of renewable energy technologies. 
Solar Ready Buildings Planning Guide: Checklist for building design and construction to enable solar photovoltaic and heating systems after the building is constructed. Building-Integrated Photovoltaic Designs for Commercial and Institutional Structures: Sourcebook for architects on building-integrated photovoltaics into commercial and institutional buildings. Counting on Solar Power for Disaster Relief: Technical assistance fact sheet on using solar cells to generate electricity during immediate and long-term crisis relief. Photovoltaics: Details how photovoltaic systems convert sunlight to electricity to meet various energy needs. Photovoltaics: Federal Technology Alert outlining photovoltaics as a proven technology for providing electricity in remote and difficult to access locations.
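The load questions above (peak watts, daily kilowatt-hours, available solar resource) feed directly into a first-cut array estimate. A minimal sketch; the 5 sun-hour figure and the 0.77 derate factor (covering inverter, wiring, soiling, and temperature losses) are illustrative assumptions, not FEMP guidance:

```python
# First-cut PV array sizing from a daily electric load.
# "Peak sun hours" summarizes the local solar resource (typically 3-6 h/day).

def array_size_kw(daily_load_kwh, peak_sun_hours, derate=0.77):
    """DC array rating (kW) needed to serve a daily AC load (kWh)."""
    return daily_load_kwh / (peak_sun_hours * derate)

# Example: a 40 kWh/day load at a site averaging 5 peak sun hours:
size = array_size_kw(40.0, 5.0)  # about 10.4 kW of PV
```

An estimate like this is only a starting point for the professional resource evaluation the page recommends; measured site data can shift the answer substantially.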
asy clifts" of grey freestone rock, Lewis called them. Freestone, according to a contemporary dictionary,1 denoted "a kind of grit, or sandstone" that could easily be split into slabs or blocks along natural fracture lines. Usually the term was applied to sandstone or limestone, but the conspicuous outcrops such as this one, which is adjacent to the hot springs on upper Lolo Creek, are granitic igneous rocks of the huge Idaho batholith that lies deeply submerged beneath the Bitterroot Mountains. Lewis and Clark must be forgiven for their errors in geological exploration, since the science was still young, and they were ill-prepared to deal with it. For more on the geology of the Bitterroot Range, see "The Rocks They Walked On." ur 21st-century obsession with personal cleanliness would have puzzled and perhaps even amused Lewis and Clark's generation, situated as it was some three-quarters of a century B.B.—"before bacteria" were discovered. Although intimations of the real causes of "dis-ease" had occurred to a few scientists in the late 17th century, it was not until Louis Pasteur broke the barrier in the early 1880s that set in motion the sciences of bacteriology and immunology, and made civilization aware of what had been bugging humans since the beginning of time. Meanwhile, from the 1790s on, people patronized the new commercial public bath houses in eastern cities, unaware of how unhealthful they really were. Household conveniences were better only insofar as they kept the bacteria in the family. Not until 1883 were the first cast-iron bathtubs with easy-to-clean enameled interiors introduced into private homes, although only the well-to-do could afford them at the time. By the early 1920s only one percent of all homes in the U.S. had indoor plumbing, and even then running hot water was a rare luxury. 
In northern latitudes, especially in cold weather, the best a person could do was stand within reach of a pan of hot water on the wood stove and wash up as far as possible, down as far as possible, and occasionally wash "possible." Or one could put a few inches of cold water in a copper tub, fetch a teakettle of boiling water to mix with it, then hop in—standing, kneeling or, depending on the size of the bather, sitting—and scrub briskly before it cooled below comfort level. Unless one could call upon servants or slaves for help—as Jefferson, Lewis, and Clark could—even that routine was easiest for children and young adults. For the lame and the elderly it was usually not worth the effort. Perfumes, whether bought from a pharmacist or concocted of sweet herbs and alcohol, were more convenient. The general treatment for body odor was simply to get used to it. A person could go for years without getting wet all over at one time. In fact, many people considered it unhealthful to immerse one's body in water, as evidenced by the disturbing, even frightening, observation that the flesh of one's fingers shriveled up after a few minutes of soaking. On the other hand, natural hot running water from a geothermal spring, which never cooled off and perpetually refreshed itself, represented the acme of pleasure and delight, not only for the sake of cleanliness but especially for the supposedly healthful effect of the minerals in the otherwise pure water. The extent of that benefit was easily recognizable by the water's smell and taste, both qualities usually being attributed to sulfur, whether or not there was actually any in it. Contemporary wisdom held that the smellier the water, the more healthful it must be. The worse the taste, the better. Private Joe Whitehouse of the Corps of Discovery noted, with perhaps some satisfaction, that several of the men drank the water from these hot springs and found that "it has a little sulpur taste and verry clear."
--Joseph Mussulman; 3/05 1. Noah Webster, A Compendious Dictionary of the English Language (Hartford, Connecticut, 1806). Funded in part by a grant from the Montana Cultural Trust.
Assess the feasibility of transboundary conservation in your region by using a new diagnostic tool developed for transboundary conservation planners.
To qualify for inclusion in this TBPA 2007 list the protected area (PA) had to:
- conform to the IUCN definition of a protected area (IUCN, 1994) and be designated either under national legislation or within international or regional conventions or initiatives (e.g. World Heritage Convention, UNESCO Programme on Man and the Biosphere, etc.);
- be included in the World Database on Protected Areas as a georeferenced entity - either a polygon outlining the PA boundary or a point indicating the centroid (latitude/longitude) of the PA;
and
-- be adjacent to an international boundary and adjacent to a protected area in a neighboring country (1st-order transboundary neighborhood), or
-- be directly adjacent to (or overlap partially and/or entirely with) the 1st-order transboundary sites identified above (these sites constitute the 2nd-order component of IAPA complexes), or
-- be contained by 2nd-order sites (these sites are also considered a 2nd-order component of IAPA complexes).
In addition to sites identified on the basis of these criteria through the GIS analysis, a number of non-adjacent couples or groups of sites are documented in Zbicz (2001) and Mittermeier et al. (2005). Cooperation between countries in managing these sites was evaluated on a case-by-case basis and relevant sites have been included in the list.
Estimation of the total area extent of a particular transboundary complex was made on the basis of the GIS analysis, which avoided double-counting of territories assigned two or more designations, e.g. by national legislation and under international convention. Therefore the total size of a particular internationally adjoining protected area is often smaller than the sum of the territories of the individual sites that make up the transboundary complex.
For TBPA/IAPA complexes represented by a single site (or by non-overlapping sites) on each side of the international boundary, only the official statistic on PA area was applied. Similarly, for TBPA/IAPA complexes lacking data on site boundaries (i.e. polygon data), official information on PA area was used wherever the GIS-based estimate of the total area for a group of sites exceeded the sum of the officially documented areas of the individual PAs in that group. The relevant calculations and checks were applied separately for every IAPA part belonging to a particular country. The total area of a TBPA (which includes sites belonging to two or more neighboring countries) was calculated as the sum of the relevant national parts of that TBPA.
Total Area of TBPA by Region (30.04.2007)
Region: TBPA area, km2
Central and South America: 1,424,697.66
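The double-counting rule described above — taking the GIS union of overlapping designations rather than their arithmetic sum — can be illustrated with a toy example. Real TBPA work uses GIS polygon overlays; here axis-aligned rectangles stand in for protected-area boundaries:

```python
# Combined area of two overlapping "protected areas" without double-counting,
# via inclusion-exclusion: area(A u B) = area(A) + area(B) - area(A n B).
# Rectangles are (xmin, ymin, xmax, ymax) in km.

def rect_area(r):
    x1, y1, x2, y2 = r
    return (x2 - x1) * (y2 - y1)

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

national = (0, 0, 10, 10)   # 100 km2 designated under national legislation
biosphere = (5, 0, 15, 10)  # 100 km2 under an international designation

naive_total = rect_area(national) + rect_area(biosphere)      # 200 km2
true_total = naive_total - overlap_area(national, biosphere)  # 150 km2
```

The 50 km2 gap between the naive and true totals is exactly why a TBPA's reported size is often smaller than the sum of its member sites' official areas.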
Chaturmas (Sanskrit: चातुर्मास, Cāturmāsa) is a period of four months beginning on Shayani Ekadashi - the 11th day of the first bright half of Ashadh (fourth month of the Hindu lunar calendar) - until Prabodhini Ekadashi - the 11th day of the first bright half of Kartik (eighth month of the Hindu lunar calendar). During this period, Hindu gods and goddesses are believed to be at rest. The sun enters the orbit of Karka (Cancer) and begins to move southwards in the month of Ashadh. The Hindu preserver-god Vishnu is believed to sleep on this day, hence the 11th of Ashadh is called Devashayani Ekadashi. He is believed to wake up on the 11th of Kartik, hence called Prabodhini Ekadashi. The period corresponds with the rainy season in India.
Significance in Hinduism
This period of four months is prescribed for penance, austerities, religious observances, recital of mantras, bathing in holy rivers, performing sacrifices, worship and charity. Fasting and purity during this period help maintain health. The period is considered inauspicious for marriages and other such ceremonies.
Significance for Sanyasis
The Sanyasis, or ascetics, observe Chaturmas for four fortnights, beginning on the full moon day of the month Ashadh, also known as Guru Purnima or Vyas Purnima, and ending on the full moon day of the month Bhadrapada. Sanyasis are supposed to halt during this period at one selected place, giving discourses to the public.
Major festivals within this holy period include:
- Krishna Janmashtami
- Raksha Bandhan
- Ganesh Chaturthi
- Navratri (Dasara - Durga Puja - Vijayadashami)
- ↑ Bhalla, Prem P. (2006). Hindu Rites, Rituals, Customs and Traditions. p. 293.
- ↑ "Spoken Sanskrit". http://spokensanskrit.de/index.php?script=HK&tinput=four&country_ID=&trans=Translate&direction=AU. Retrieved 2009-07-06.
- ↑ "Spoken Sanskrit". http://spokensanskrit.de/index.php?script=HK&tinput=month&country_ID=&trans=Translate&direction=AU. Retrieved 2009-07-06.
- ↑ 4.0 4.1 4.2 Bhargava, Gopal K; S. C. Bhatt.
Land and people of Indian states and union territories. 8. p. 506. - ↑ 5.0 5.1 Eleanor Zelliot, Maxine Berntsen, ed. The Experience of Hinduism. SUNY Press. p. 335. http://books.google.co.in/books?id=7PDr-QF4YmYC&pg=PA207&dq=Chaturmas&client=firefox-a. - ↑ Sampurna Chaturmas in Marathi
Von Drais' Laufmaschine
"Without doubt, the first bicycle ever made was by von Drais in Germany, in 1817." That's the certain belief of cycle historian JOHN PINKERTON. The laufmaschine, as it was called by its inventor, was built in Mannheim in Germany by Karl Friedrich Christian Ludwig Drais, Freiherr von Sauerbronn, and it carried him on his first two-wheeled journey on 13th July 1817. 'He had produced tricycles and other types of machines,' says Pinkerton, 'but this was the important one. A single-track vehicle was useful to von Drais so he could move along the narrow trails of the forestry estates that he worked on.' His design was copied all over the world. The English coachmaker Dennis Johnson copied the principle, although his design was very different. 'The von Drais machine was very Germanic. The framework was very correct with lots of straight lines,' says Pinkerton. 'When Johnson built his, he curved the main frame. The handle on the front and the rear forks were also nicely curved.' Johnson used metal for the forks and the steering mechanism to give additional strength where it was needed. His 'Pedestrian Curricle' was patented in June of 1818, less than a year after von Drais created his laufmaschine. The design principle for the 'hobbyhorse' was always the same: two wheels supporting a beam, on which there was a saddle. 'There are two important things about the design which are absolutely instrumental to the bicycle,' explains Pinkerton. 'Firstly, when you transfer the weight of an object onto a set of wheels it's a lot easier to move it about. 
Secondly, and most importantly, is that you can balance on two wheels only as long as you can steer. An unsteerable bicycle is unrideable with both wheels on the ground.' The construction of the laufmaschine was relatively simple. Two wooden wheels, with wooden spokes and wooden rims, and an iron band shrunk on the outside of each rim — called the tyre because it tied the whole wheel together. On some of the early hobby-horses the spokes were on a central line from the rim to the hub. But on the von Drais laufmaschine the spokes were staggered so that alternate spokes went from each side of the hub to the rim, as they do on modern bicycle wheels. This gave some triangulation and additional strength. The wheels were fitted into wooden forks attached to the main frame member - a wooden beam - and on that was a padded saddle made of leather stuffed with horse-hair. 'It [the saddle] didn't need to be wide because you didn't sit on it with your pelvic bones,' says Pinkerton. 'You were actually sitting on the bony part between the pelvic bones. If you get it wrong it can be a bit painful!' The rear forks were fixed but the front ones could turn. The head of the front forks passed through a hole in the main beam. On early models the steering mechanism wasn't attached to the top of the fork column. Johnson's hobby-horse had a steering control consisting of two pieces of metal starting at either end of the front axle, curving around the front of the frame and coming together at a handle above (but not attached to) the top of the fork column. Von Drais's laufmaschine used the same principle. 'They hadn't realised at the time that you could steer from the top of the steering head,' says Pinkerton. This improvement followed later, and Johnson is said to have made open-framed, ladies' hobby-horses in 1819 that featured direct steering. 'There is at least one hobby-horse in existence with sprung front forks!' adds Pinkerton. 'So, suspension isn't that new.' 
These machines were very expensive and, generally, only made to order. Drais's machine was intended for his own transport but, on the whole, they were little more than toys for the very rich. Its invention, and subsequent copies, had an enormous impact on this very small percentage of the population. 'It was a craze that lasted only a few years before it died out,' says Pinkerton. 'Nothing much happened after that until Macmillan produced his machine in about 1840.' 'To look at a hobby-horse you'd think it wouldn't be a lot of use,' comments Pinkerton. 'To get some idea how it would work, take the pedals off a normal bike and lower the saddle so you can put both feet flat on the floor. Then you can appreciate how wonderful it was as opposed to walking. In fact, I would go so far as to recommend this as a first step in learning to ride a bike. The "paddling" action is much the same as walking but, with most of your bodyweight supported by the wheels, you are carried at least half as far again with each stride. Try it and see!'
Global futures scenarios do not specifically or uniquely consider GHG emissions. Instead, they are more general stories of possible future worlds. They can complement the more quantitative emissions scenario assessments, because they consider dimensions that elude quantification, such as governance and social structures and institutions, but which are nonetheless important to the success of mitigation policies. Addressing these issues reflects the different perspectives presented in Section 1: cost-effectiveness and/or efficiency, equity, and sustainability. A survey of this literature has yielded a number of insights that are relevant to GHG emissions scenarios and sustainable development. First, a wide range of future conditions has been identified by futurists, ranging from variants of sustainable development to collapse of social, economic, and environmental systems. Since future values of the underlying socio-economic drivers of emissions may vary widely, it is important that climate policies should be designed so that they are resilient against widely different future conditions. Second, the global futures scenarios that show falling GHG emissions tend to show improved governance, increased equity and political participation, reduced conflict, and improved environmental quality. They also tend to show increased energy efficiency, shifts to non-fossil energy sources, and/or shifts to a post-industrial (service-based) economy; population tends to stabilize at relatively low levels, in many cases thanks to increased prosperity, expanded provision of family planning, and improved rights and opportunities for women. A key implication is that sustainable development policies can make a significant contribution to emission reduction. Third, different combinations of driving forces are consistent with low emissions scenarios, which agrees with the SRES findings. 
The implication of this seems to be that it is important to consider the linkage between climate policy and other policies and conditions associated with the choice of future paths in a general sense. Figure TS.1: Qualitative directions of SRES scenarios for different indicators. Six new GHG emission reference scenario groups (not including specific climate policy initiatives), organized into four scenario families, were developed by the IPCC and published as the Special Report on Emissions Scenarios (SRES). Scenario families A1 and A2 emphasize economic development but differ with respect to the degree of economic and social convergence; B1 and B2 emphasize sustainable development but also differ in their degree of convergence (see Box TS.1). In all, six models were used to generate the 40 scenarios that comprise the six scenario groups. Six of these scenarios, which should be considered equally sound, were chosen to illustrate the whole set. These six scenarios include marker scenarios for each of the scenario families, as well as two scenarios, A1FI and A1T, which illustrate alternative energy technology developments in the A1 world (see Figure TS.1). The SRES scenarios lead to the following findings: Box TS.1. The Emissions Scenarios of the IPCC Special Report on Emissions Scenarios (SRES) A1. The A1 storyline and scenario family describe a future world of very rapid economic growth, global population that peaks in mid-century and declines thereafter, and the rapid introduction of new and more efficient technologies. Major underlying themes are convergence among regions, capacity building, and increased cultural and social interactions, with a substantial reduction in regional differences in per capita income. The A1 scenario family develops into three groups that describe alternative directions of technological change in the energy system. 
The three A1 groups are distinguished by their technological emphasis: fossil intensive (A1FI), non-fossil energy sources (A1T), or a balance across all sources (A1B) (where balanced is defined as not relying too heavily on one particular energy source, on the assumption that similar improvement rates apply to all energy supply and end-use technologies). A2. The A2 storyline and scenario family describe a very heterogeneous world. The underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge very slowly, which results in a continuously increasing population. Economic development is primarily regionally oriented and per capita economic growth and technological change more fragmented and slower than in other storylines. B1. The B1 storyline and scenario family describe a convergent world with the same global population, which peaks in mid-century and declines thereafter, as in the A1 storyline, but with rapid change in economic structures towards a service and information economy, with reductions in material intensity and the introduction of clean and resource-efficient technologies. The emphasis is on global solutions to economic, social, and environmental sustainability, including improved equity, but without additional climate initiatives. B2. The B2 storyline and scenario family describe a world in which the emphasis is on local solutions to economic, social, and environmental sustainability. It is a world with continuously increasing global population, at a rate lower than in A2, intermediate levels of economic development, and less rapid and more diverse technological change than in the B1 and A1 storylines. While the scenario is also oriented towards environmental protection and social equity, it focuses on local and regional levels. An illustrative scenario was chosen for each of the six scenario groups A1B, A1FI, A1T, A2, B1, and B2. All should be considered equally sound. 
The SRES scenarios do not include additional climate initiatives, which means that no scenarios are included that explicitly assume implementation of the United Nations Framework Convention on Climate Change or the emissions targets of the Kyoto Protocol. Other reports in this collection
There are plenty of benefits to preschool - it can be a great place for kids to interact with peers and to learn valuable life lessons such as how to share, take turns, and follow rules. Preschool can also prepare kids for kindergarten and beyond. But going to preschool does come with its fair share of emotions, for both the parent and the child. For a kid, entering a new preschool environment filled with unfamiliar teachers and children can cause both anxiety and anticipation. For parents, there may be mixed emotions over whether the child is ready for preschool. The more comfortable you are about your decision to place your child in preschool and the more familiar the setting can be made for your child, the fewer problems you - and your little one - will encounter. Easing Your Child's Fears Spend time talking with your child about preschool even before it starts. Before the first day, gradually introduce your child to activities that often take place in a classroom. A child accustomed to scribbling with paper and crayons at home, for example, will find it comforting to discover the same crayons and paper in his or her preschool classroom. Visiting your child's first preschool classroom a few times before school starts can also ease the entrance into unfamiliar territory. This offers the opportunity to not only meet your child's teacher and ask about routines and common activities, but to then introduce some of those routines and activities to the child at home. While you're in the classroom, let your child explore and observe the class in his or her own way and choose whether to interact with other children. The idea is to familiarize your child with the classroom and to let him or her get comfortable. You can also use this time to ask your child's new teacher how he or she handles the first tear-filled days. 
How will the first week be structured to make the transition smooth for your child? Although it's necessary for you to acknowledge the important step your child is taking and to provide support, too much emphasis on the change may just make your child's anxiety worse. Young kids can pick up on their parents' nonverbal cues. If you feel guilty or worried about leaving your child at school, he or she will probably sense that. The more calm and assured you are about your choice to send your child to preschool, the more confident your child will be. The First Day When you enter the classroom on the first day, calmly reintroduce the teacher to your child, then step back and let him or her set the tone. This will allow the teacher to begin forming a relationship with your child. Your endorsement of the teacher will show your child that he or she will be happy and safe in the classroom. If your child clings to you or refuses to participate in the class, don't get upset - this may only upset your child more. Follow the guidelines described by the teacher beforehand, and go at your child's pace. Suggestions for leaving your child at preschool are simple but can be hard on a parent. Always say a loving good-bye to your child, but once you do, you should leave promptly. Never sneak out. As tempting as it may be, leaving without saying good-bye may make your child feel abandoned, whereas a long farewell scene might only serve to reinforce a child's sense that preschool is a bad place. A consistent and predictable farewell ritual can make leaving easier. Some parents wave from outside a certain classroom window or make a funny good-bye face, whereas others read a short book before parting. Transitional objects - a family picture, a special doll, or a favorite blanket - can also help comfort your child. 
Also, keep in mind that most children do well once their parents leave. Regardless of whether your child is eager or reluctant to go to preschool, make sure that a school staff member is ready to help with the transfer from your care to the classroom when you arrive in the morning. Some kids may jump right in with their classmates, whereas others might want a private cuddle and a story from a caregiver before joining the group. Many preschools begin with a daily ritual, such as circle time (when teachers and children talk about what they did the day before and the activities that are ahead for the day). Preschoolers tend to respond to this kind of predictability, and following a routine will help ease the move from home to school. Updated and reviewed by: Mary L. Gavin, MD. Date reviewed: September 2007. Note: All information is for educational purposes only. For specific medical advice, diagnoses, and treatment, consult your doctor. © 1995-2009 The Nemours Foundation/KidsHealth. All rights reserved.
Avoid Fall Plowing -- Leave Food and Cover for Wildlife Fall tillage practices, even reduced tillage techniques like disking and chisel plowing, can eliminate important winter food and cover for many wildlife species. "Waste grains and crop residue remaining in untilled crop fields following harvest provide important food and cover for pheasants, quail, partridge, turkey and deer," said Todd Bogenschutz, wildlife research biologist with the Iowa Department of Natural Resources. Studies of harvested untilled crop fields show wildlife consume 55-85 percent of the waste corn and soybeans between fall harvest and the following spring. Corn stubble and stalks remaining in untilled cornfields also provide concealment cover for pheasants, quail and partridge, so the birds are not so exposed to predators when feeding in the winter, Bogenschutz said. "Research shows even reduced tillage methods, such as disking and chisel plowing, reduce waste grains available to wildlife by 80 percent and reduce crop stubble by 50 percent or more," he said. A 1985 study showed untilled Illinois corn fields averaged 200 pounds of waste corn per acre versus 40 pounds per acre in corn fields that were disked or chisel plowed. Moldboard-plowed fields averaged 4 pounds per acre. Farmers and landowners can leave a free food plot for wildlife by simply not fall plowing their fields. "No-till farming is a great way to leave food and cover for wildlife. Leaving stubble is also a great way to capture soil moisture for next year," he said. For more information contact Todd Bogenschutz, Wildlife Research Biologist, Iowa Department of Natural Resources, 515-432-2823.
Scientists discover Earth-sized planet ORLANDO, Fla. (MCT) — University of Central Florida scientists have discovered a new planet outside our solar system that is the closest Earth-sized planet ever discovered. And they have named it after UCF. Planet UCF 1.01, an "exoplanet" orbiting a red dwarf star called GJ 436, is about two-thirds the size of Earth and only 33 light-years distant. "Cosmically speaking, that's right around the corner," said discoverer Kevin Stevenson, though that works out to 194 trillion miles.
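The figure of 194 trillion miles for 33 light-years can be checked with a quick back-of-the-envelope conversion. The sketch below assumes a speed of light of 186,282 miles per second and a Julian year of 365.25 days:

```python
# Convert the quoted distance of 33 light-years to miles.
MILES_PER_SECOND = 186_282             # speed of light in miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

miles_per_light_year = MILES_PER_SECOND * SECONDS_PER_YEAR  # ~5.88e12 miles
distance_miles = 33 * miles_per_light_year

print(f"{distance_miles / 1e12:.0f} trillion miles")  # → 194 trillion miles
```

The result agrees with the article's figure to the stated precision.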
The rendering model for WPF is quite unlike its predecessors in how it formats controls and other elements within a window. It also has a couple of options for how the contents of the window render. While it can work with fixed positioning, it can also work in flow document form, meaning that documents can flow within the document area and adjust themselves when the window resizes in various ways. Flow documents are a new feature of the Windows Presentation Foundation (WPF), giving developers another option for displaying content. A variety of elements are available that format the content in special ways, and this is a help too. You will recognize some similarities with HTML in the way it generates lists, tables, and the like. It also introduces other features, like figures and floaters, which add to the appeal of the content design. In addition, WPF changes the way that images are supported, and we'll touch upon that and how the framework takes an image path and converts it to the correct object. Flow document content, as I mentioned before, is a new feature of WPF. In its simplest form, the following code creates a flow document reader with an empty document: Listing 1: Flow Document Reader Content There are three levels of flow document reader controls built into the framework: the FlowDocumentPageViewer, FlowDocumentScrollViewer, and FlowDocumentReader controls. FlowDocumentReader is the most functional, and therefore this article will use this control. Built into the control are a zoom control, a page-layout control allowing you to choose the display format, and a text search control. Take a look at the example screenshot below: Figure 1: A FlowDocumentReader Control Example This control has the ability to reposition its content based on the size of the screen. If the content is larger, it may reposition the content into a multi-column format. 
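The XAML for Listing 1 did not survive extraction. A minimal sketch of what it likely contained — a FlowDocumentReader hosting an empty document, assuming it sits inside a Window or Page — is:

```xml
<!-- Listing 1 (reconstructed sketch): a FlowDocumentReader with an empty document. -->
<FlowDocumentReader>
  <FlowDocument>
    <!-- Block content (paragraphs, tables, lists) goes here. -->
  </FlowDocument>
</FlowDocumentReader>
```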
See the alternative screenshot below: Figure 2: A Resized Flow Document Reader With Multiple Columns The flow document is made up of a variety of objects. Primarily, there are two types of elements: block and inline elements. A block element is an element that can have multiple inline elements. One of the more common block elements is the Paragraph class, which represents a paragraph of text. This can be used to represent text in a flow document. The following is the makeup of some text within the document: Listing 2: Flow Document Content What isn't exactly obvious is that the Paragraph element uses the Run element as its child. The Run element is optional and doesn't need to be declared (it's implicitly created); however, when working with paragraphs in code, you will need to instantiate an instance of the Run class, which will contain the paragraph text. The Paragraph element breaks the text out into paragraphs in a document, as you would imagine. Within flow documents, you can't embed WPF controls directly. However, using either the InlineUIContainer or the BlockUIContainer allows a flow document to contain WPF controls, in inline or block fashion. The InlineUIContainer element can be used within a Paragraph or any other control that supports inline elements (because it is an inline element, it can be used as a child of a block element); the BlockUIContainer element can host any controls that can be used within a Section or any other control that supports block elements. Below is an example of the inline and block UI container classes. Listing 3: Block/Inline UI Containers In addition, flow documents support a table structure, which looks similar to an HTML table with a few added steps. The following is an example table: Listing 4: Tables in Flow Documents The TableRowGroup element groups a series of table rows together, instead of the rows being elements directly under the table. 
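The code for Listings 2-4 is likewise missing from the extracted text. Hedged sketches of the constructs described above — a Paragraph with an implicit Run, the two UI containers, and a table built around a TableRowGroup (the text content here is illustrative, not the author's) — might look like this:

```xml
<FlowDocument>
  <!-- Listing 2 sketch: a paragraph; the Run child is created implicitly in markup. -->
  <Paragraph>
    <Run>Flow documents reflow their content as the window resizes.</Run>
  </Paragraph>

  <!-- Listing 3 sketch: hosting WPF controls inline and as a block. -->
  <Paragraph>
    Click
    <InlineUIContainer>
      <Button Content="OK"/>
    </InlineUIContainer>
    to continue.
  </Paragraph>
  <BlockUIContainer>
    <Button Content="A block-level button"/>
  </BlockUIContainer>

  <!-- Listing 4 sketch: rows live inside a TableRowGroup, and cell
       content must be wrapped in a block element such as Paragraph. -->
  <Table>
    <TableRowGroup>
      <TableRow>
        <TableCell><Paragraph>Element</Paragraph></TableCell>
        <TableCell><Paragraph>Kind</Paragraph></TableCell>
      </TableRow>
      <TableRow>
        <TableCell><Paragraph>Paragraph</Paragraph></TableCell>
        <TableCell><Paragraph>Block</Paragraph></TableCell>
      </TableRow>
    </TableRowGroup>
  </Table>
</FlowDocument>
```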
Also note the use of the Paragraph element as a child of the TableCell element; the content must appear in the paragraph element. There is an interesting difference with the TableCell element approach; this is because the TableCell isn't a block or inline element (it doesn't inherit from those base classes); rather, it branches off from the root base class. Also, if you want to change the sizing of the table, the Table object has a Columns collection, where you can specify widths for each column, as such: Listing 5: Table Column Setup Similar to the Grid element, tables can have relative widths specified for them. Flow documents also support list-based structures. Lists work very similarly to HTML lists, but with more verbose setup, as such: Listing 6: Lists in Flow Documents The List has a MarkerStyle attribute that defines the type of list to render (an enumerated value). I'm sure that if you've read computer books or other books, you've seen quotes on the sides of the page, as a way to stand out to the reader as they browse through the page in an effort to drive home a point. These can be achieved in flow documents through the use of the Figure and Floater elements. Each has its own level of control as to what it can or can't do. In order for the floater to stand out and appear in a portion of the page, you have to provide a manual Width value and give it a HorizontalAlignment setting. The Figure element allows more control over positioning than the Floater, so it can allow a user to control how the figure appears in the page with more precision. For instance, the Floater element example puts the text on the right side, allowing the text to wrap around it: Listing 7: Quote Listing on the Right of the Document At the core, an image is pretty easy to set up in XAML code. The following is an image control that displays a JPEG picture. Listing 8: Basic Image Declaration That is pretty simplistic. 
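Listings 5-8 are also missing. The sketches below illustrate the four constructs just described: star-sized table columns, a decimal-marker list, a right-aligned floater the text wraps around, and an image declared with a string Source. The content strings and the file name `picture.jpg` are hypothetical stand-ins:

```xml
<FlowDocument>
  <!-- Listing 5 sketch: relative (star) column widths, as with the Grid. -->
  <Table>
    <Table.Columns>
      <TableColumn Width="2*"/>
      <TableColumn Width="*"/>
    </Table.Columns>
    <TableRowGroup>
      <TableRow>
        <TableCell><Paragraph>Wide column</Paragraph></TableCell>
        <TableCell><Paragraph>Narrow column</Paragraph></TableCell>
      </TableRow>
    </TableRowGroup>
  </Table>

  <!-- Listing 6 sketch: MarkerStyle selects the list rendering. -->
  <List MarkerStyle="Decimal">
    <ListItem><Paragraph>First item</Paragraph></ListItem>
    <ListItem><Paragraph>Second item</Paragraph></ListItem>
  </List>

  <!-- Listing 7 sketch: a floater with a manual Width and alignment,
       so the surrounding text wraps around it on the right. -->
  <Paragraph>
    <Floater Width="150" HorizontalAlignment="Right">
      <Paragraph FontStyle="Italic">A pull quote set off from the body text.</Paragraph>
    </Floater>
    The body text of the page continues here and flows around the floater.
  </Paragraph>

  <!-- Listing 8 sketch: the Source string is converted to an ImageSource.
       Images must sit in a UI container inside a flow document. -->
  <BlockUIContainer>
    <Image Source="picture.jpg" Width="200"/>
  </BlockUIContainer>
</FlowDocument>
```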
However, at its core, the Source property is something other than the plain string you would see in the ASP.NET Image control, for instance. Rather, it is an ImageSource object, which defines properties such as Width. Some of its derivatives are the BitmapSource and DrawingImage classes, which represent the image APIs for those object types. However, in the XAML markup, one can specify a string value. This is because the ImageSource class declares a TypeConverter attribute, and that converter class is responsible for converting the value in the designer. The ImageSource class also declares a ValueSerializer attribute, which specifies a class responsible for serializing the object to a string. Flow documents have a lot of capabilities, with all of the inline elements that are supported. There are a variety of elements, very similar to HTML, that provide a lot of functionality in WPF applications. As a sample, I've embedded portions of this article, and some added content, as a sample of flow documents. Resize the document and see how flexible flow documents can be.
With all the contemporary emphasis on modern sustainable architecture, sometimes we seem to forget that environmentally friendly architecture has existed for a long time. Built in 1980, Thorncrown Chapel was created to highlight its location, which was – and still is – an attractive natural setting for tourists in the area. The owner of the site, Jim Reed, hired well-known architect and Frank Lloyd Wright apprentice E. Fay Jones to design and build the project. The chapel is constructed from native timber to match the setting around it, and it was awarded the Twenty-five Year Award by the American Institute of Architects. The Thorncrown Chapel shows us how proper planning can reduce a building's impact on its site. The vertical and diagonal cross-tension trusses are made from lengths of pine cut to size so that they could be carried through the woods. The selection of materials was also an important consideration: all the timber came from local sources (this was before the Forest Stewardship Council), the floor is made of flagstone, and the building is lined with a rock wall that links it with its surrounding environment. But the Thorncrown Chapel's most important feature is the way it completely blends into its surroundings. The glazed facade turns what could be a rather heavy object in the middle of the forest into a light, almost invisible structure. The transparent facade allows visitors to experience the forest while being inside the building. The building changes with the weather and the surrounding forest, ensuring that every visit is unique.
Helixes Video Tutorials Many students find helixes difficult. They feel overwhelmed with helixes homework, tests, and projects. And it is not always easy to find a helixes tutor who is both good and affordable. Now finding helixes help is easy. For your helixes homework, helixes tests, helixes projects, and helixes tutoring needs, TuLyn is a one-stop solution. You can master hundreds of math topics by using TuLyn. At TuLyn, we have over 2000 math video tutorial clips, including helixes videos, helixes practice word problems, helixes questions and answers, and helixes worksheets. Our helixes videos replace text-based tutorials and give you better step-by-step explanations of helixes. Watch each video repeatedly until you understand how to approach helixes problems and how to solve them. - Hundreds of video tutorials on helixes make it easy for you to better understand the concept. How to do better on helixes: TuLyn makes helixes easy.
Overview
The Rio-Negro Plecostomus, also known as the Candy Stripe Peckoltia, comes from the rivers and tributaries of South America. It is dark brown to black with irregular golden vertical stripes. The rays of the fins are also golden with black stripes. Rio-Negro Plecos make good additions to any community aquarium. Planted aquariums with hardy, fast-growing plants, high aeration, and water movement make for a healthy environment. Rocks and driftwood help to accent a natural habitat and provide hiding spaces to reduce stress for the Candy Stripe Plecostomus. A minimum tank of 30 gallons is recommended to house this fish. The Rio-Negro Plecostomus has not been bred in an aquarium setting, and little is known about its breeding habits. Feeding the Rio-Negro Plecostomus is not difficult because it is not a picky eater. Feeding off the bottom of the aquarium, it gets most of its nutrition from leftover food and algae. If there is no algae or leftover food present, supplement with high-quality flake food, sinking carnivore pellets, freeze-dried bloodworms, and tubifex. Approximate Purchase Size: 1-3/4" to 3"
A software license (or software licence in Commonwealth usage) is a legal instrument (by way of contract law) governing the use or redistribution of software. All software is copyright protected, except software in the public domain. Contractual confidentiality is another way of protecting software. A typical software license grants an end-user permission to use one or more copies of software in ways where such use would otherwise constitute copyright infringement of the software owner's exclusive rights under copyright law. Some software comes with the license when purchased off the shelf, or with an OEM license when bundled with hardware. Software can also be in the form of freeware or shareware. Software licenses can generally be fit into the following categories: proprietary licenses, and free and open source licenses, which include free software licenses and other open source licenses. The features that distinguish them are significant in terms of the effect they have on the end-user's rights. A free or open source license makes software free for inspection of its code, modification of its code, and distribution. While software released under such a license, like the GNU General Public License, can be sold for money, the distribution cannot be restricted in the same ways as software with copyright and patent restrictions used by firms to require licensing fees. The hallmark of proprietary software licenses is that the software publisher grants a license to use one or more copies of software, but ownership of those copies remains with the software publisher (hence the use of the term "proprietary"). One consequence of this feature of proprietary software licenses is that virtually all rights regarding the software are reserved by the software publisher. Only a very limited set of well-defined rights are conceded to the end-user. 
Therefore, it is typical of proprietary software license agreements to include many terms which specifically prohibit certain uses of the software, often including uses which would otherwise be allowed under copyright law. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. As is usually the case with proprietary software licenses, this license contains an extensive list of restricted activities, such as reverse engineering, simultaneous use of the software by multiple users, and publication of benchmarks or performance tests.

With a free software license, in contrast to proprietary software licenses, ownership of a particular copy of the software does not remain with the software publisher. Instead, ownership of the copy is transferred to the end-user. As a result, the end-user is, by default, afforded all rights granted by copyright law to the copy owner. Note that "copy owner" is not the same as "copyright owner": while ownership of a particular copy is transferred, ownership of the copyright remains with the software publisher. Additionally, a free software license typically grants the end-user extra rights which would otherwise be reserved by the software publisher. A primary consequence of the free software form of licensing is that acceptance of the license is essentially optional: the end-user may use the software without accepting the license. However, if the end-user wishes to exercise any of the additional rights granted by a free software license (such as the right to redistribute the software), then the end-user must accept, and be bound by, the software license.
Open-source licenses generally fall into two categories: those that aim to preserve the freedom and openness of the software itself ("copyleft" licenses), and those that aim to give freedom to the users of that software (permissive licenses). An example of a copyleft free software license is the GNU General Public License (GPL). This license grants the end-user significant permissions, such as permission to redistribute, reverse engineer, or otherwise modify the software. These permissions are not entirely free of obligations for the end-user, however: the end-user must comply with certain terms in order to exercise the extra permissions granted by the GPL. For instance, any modifications made and redistributed by the end-user must be accompanied by their source code, and the end-user may not reimpose the removed copyright restrictions on a derivative work. Examples of permissive free software licenses are the BSD license and the MIT license, which essentially grant the end-user permission to do anything they wish with the source code in question, including the right to take the code and use it as part of closed-source software or software released under a proprietary software license.

In addition to granting rights and imposing restrictions on the use of software, software licenses typically contain provisions which allocate liability and responsibility between the parties entering into the license agreement. In enterprise and commercial software transactions, these terms (such as limitations of liability, warranties and warranty disclaimers, and indemnity if the software infringes the intellectual property rights of others) are often negotiated by attorneys who specialize in software licensing.
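The copyleft/permissive split described above can be captured in a toy lookup table. This is an illustrative sketch only, not legal advice, and it covers just the licenses named in the text:

```python
# Toy illustration (not legal advice): the two open-source license
# families described in the text, for the licenses it names.
LICENSE_FAMILIES = {
    "GPL": "copyleft",    # redistributed modifications must include source
    "BSD": "permissive",  # code may be reused, even in closed-source software
    "MIT": "permissive",
}

def family(license_name):
    """Return 'copyleft' or 'permissive' for a known license, else None."""
    return LICENSE_FAMILIES.get(license_name)

print(family("GPL"))  # copyleft
print(family("MIT"))  # permissive
```

A proprietary license would fall outside both families, so the lookup deliberately returns `None` for anything unlisted.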
The legal field has seen the growth of this specialized practice area due to unique legal issues with software licenses, and the desire of software companies to protect assets which, if licensed improperly, could diminish in value. In the United States, Section 117 of the Copyright Act gives the owner of a particular copy of software the explicit right to use the software with a computer, even if that use requires the making of incidental copies or adaptations (acts which could otherwise potentially constitute copyright infringement). Therefore, the owner of a copy of computer software is legally entitled to use that copy. Hence, if the end-user of software is the owner of the respective copy, then the end-user may legally use the software without a license from the software publisher. Because many proprietary "licenses" only enumerate rights that the user already has under 17 U.S.C. § 117, yet purport to take rights away from the user, these contracts may lack consideration. Proprietary software licenses often purport to give software publishers more control over the way their software is used by keeping ownership of each copy with the software publisher. If ownership is retained, Section 117 does not apply to the end-user, and the software publisher may then compel the end-user to accept all of the terms of the license agreement, many of which may be more restrictive than copyright law alone. Note that the form of the relationship determines whether it is a lease or a purchase; see, for example, UMG v. Augusto and Vernor v. Autodesk, Inc.
Climate Change: Arctic Passes 400 Parts Per Million Milestone
By Seth Borenstein

The world's air has reached what scientists call a troubling new milestone for carbon dioxide, the main global warming pollutant. Monitoring stations across the Arctic this spring are measuring more than 400 parts per million of the heat-trapping gas in the atmosphere. The number isn't quite a surprise, because it's been rising at an accelerating pace. Years ago, it passed the 350 ppm mark that many scientists say is the highest safe level for carbon dioxide. It now stands globally at 395. So far, only the Arctic has reached that 400 level, but the rest of the world will follow soon.

"The fact that it's 400 is significant," said Jim Butler, global monitoring director at the National Oceanic and Atmospheric Administration's Earth System Research Lab in Boulder, Colo. "It's just a reminder to everybody that we haven't fixed this and we're still in trouble."

Carbon dioxide is the chief greenhouse gas and stays in the atmosphere for 100 years. Some carbon dioxide is natural, mainly from decomposing dead plants and animals. Before the Industrial Age, levels were around 275 parts per million. For more than 60 years, readings have been in the 300s, except in urban areas, where levels are skewed. The burning of fossil fuels, such as coal for electricity and oil for gasoline, has caused the overwhelming bulk of the man-made increase in carbon in the air, scientists say. It's been at least 800,000 years — probably more — since Earth saw carbon dioxide levels in the 400s, Butler and other climate scientists said.

Readings are coming in at 400 and higher all over the Arctic. They've been recorded in Alaska, Greenland, Norway, Iceland and even Mongolia. But levels change with the seasons and will drop a bit in the summer, when plants suck up carbon dioxide, NOAA scientists said. So the yearly average for those northern stations likely will be lower and so will the global number.
Globally, the average carbon dioxide level is about 395 parts per million but will pass the 400 mark within a few years, scientists said. The Arctic is the leading indicator in global warming, both in carbon dioxide in the air and effects, said Pieter Tans, a senior NOAA scientist. "This is the first time the entire Arctic is that high," he said. Tans called reaching the 400 number "depressing," and Butler said it was "a troubling milestone." "It's an important threshold," said Carnegie Institution ecologist Chris Field, a scientist who helps lead the Nobel Prize-winning Intergovernmental Panel on Climate Change. "It is an indication that we're in a different world." Ronald Prinn, an atmospheric sciences professor at the Massachusetts Institute of Technology, said 400 is more a psychological milestone than a scientific one. We think in hundreds, and "we're poking our heads above 400," he said. Tans said the readings show how much the Earth's atmosphere and its climate are being affected by humans. Global carbon dioxide emissions from fossil fuels hit a record high of 34.8 billion tons in 2011, up 3.2 percent, the International Energy Agency announced last week. The agency said it's becoming unlikely that the world can achieve the European goal of limiting global warming to just 2 degrees based on increasing pollution and greenhouse gas levels. "The news today, that some stations have measured concentrations above 400 ppm in the atmosphere, is further evidence that the world's political leaders — with a few honorable exceptions — are failing catastrophically to address the climate crisis," former Vice President Al Gore, the highest-profile campaigner against global warming, said in an email. "History will not understand or forgive them." 
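The figures quoted above can be sanity-checked with a little arithmetic. This is a back-of-envelope sketch using only numbers already in the article (275 ppm pre-industrial, 395 ppm today, and the IEA's 34.8 billion tons of 2011 emissions, up 3.2 percent):

```python
# Back-of-envelope checks on the figures quoted in the article.
pre_industrial_ppm = 275
current_ppm = 395
rise_pct = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm * 100
print(f"CO2 rise since pre-industrial times: {rise_pct:.1f}%")  # ~43.6%

emissions_2011_gt = 34.8   # billion tons, per the IEA
growth = 0.032             # "up 3.2 percent" on the previous year
emissions_2010_gt = emissions_2011_gt / (1 + growth)
print(f"Implied 2010 emissions: {emissions_2010_gt:.1f} billion tons")  # ~33.7
```

In other words, atmospheric CO2 is up roughly 44 percent on pre-industrial levels, and the 2011 record implies 2010 emissions of about 33.7 billion tons.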
But political dynamics in the United States mean there's no possibility of significant restrictions on man-made greenhouse gases no matter what the levels are in the air, said Jerry Taylor, a senior fellow of the libertarian Cato Institute. "These milestones are always worth noting," said economist Myron Ebell at the conservative Competitive Enterprise Institute. "As carbon dioxide levels have continued to increase, global temperatures flattened out, contrary to the models" used by climate scientists and the United Nations. He contends temperatures have not risen since 1998, which was unusually hot. Temperature records contradict that claim. Both 2005 and 2010 were warmer than 1998, and the entire decade of 2000 to 2009 was the warmest on record, according to NOAA.

Copyright © 2012 The Associated Press. This article originally appeared here.
Low white blood cell count
By Mayo Clinic staff
Original Article: http://www.mayoclinic.com/health/low-white-blood-cell-count/MY00162

A low white blood cell count, or leukopenia, is a decrease in disease-fighting cells (leukocytes) circulating in your blood. The threshold for a low white blood cell count varies from one medical practice to another. Some healthy people have white cell counts that are lower than what's considered normal. A count lower than 4,000 white blood cells per microliter of blood is generally considered a low white blood cell count. The threshold for a low white blood cell count in children varies with age and sex.

A low white blood cell count usually is caused by one of the following:
- Viral infections that temporarily disrupt bone marrow function
- Congenital disorders characterized by diminished bone marrow function
- Cancer or other diseases that damage bone marrow
- Autoimmune disorders that destroy white blood cells or bone marrow cells
- Overwhelming infections that use up white blood cells faster than they can be produced
- Drugs that destroy white blood cells or damage bone marrow

Specific causes of low white blood cell count include:
- Aplastic anemia
- Certain medications, such as antibiotics and diuretics
- Hypersplenism, a premature destruction of blood cells by the spleen
- Infectious diseases
- Kostmann's syndrome, a congenital disorder involving low neutrophil production
- Myelodysplastic syndromes
- Myelokathexis, a congenital disorder involving failure of neutrophils to enter the bloodstream
- Other autoimmune disorders
- Other congenital disorders
- Parasitic diseases
- Radiation therapy
- Vitamin deficiencies

When to see a doctor
A low white blood cell count is usually found when your doctor has ordered tests to help diagnose a condition you're already experiencing. It's rarely an unexpected finding or simply discovered by chance. Talk to your doctor about what these results mean.
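The general adult threshold described above can be expressed as a simple check. This is an illustrative sketch only, not a diagnostic tool; as the text notes, thresholds vary between medical practices and, for children, with age and sex:

```python
# Illustrative only, not diagnostic: flag a count below the general
# adult threshold of 4,000 white blood cells per microliter of blood.
LOW_WBC_THRESHOLD = 4000  # cells per microliter (general adult figure)

def is_low_wbc(count_per_microliter):
    """Return True if the count falls below the general adult threshold."""
    return count_per_microliter < LOW_WBC_THRESHOLD

print(is_low_wbc(3500))  # True
print(is_low_wbc(6000))  # False
```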
A low white blood cell count along with results from other tests may already indicate the cause of your illness, or your doctor may suggest other tests to further check your condition. Because a chronic very low white blood cell count makes you vulnerable to infections, discuss precautions with your doctor to avoid catching contagious diseases. Always wash your hands regularly and thoroughly. You may also be told to wear a face mask and avoid anyone with a cold or other illness.

Sources:
- Kumar V, et al. Robbins and Cotran Pathologic Basis of Disease. 8th ed. Philadelphia, Pa.: Saunders Elsevier; 2010. Accessed Oct. 27, 2012.
- Kliegman RM, et al. Nelson Textbook of Pediatrics. 19th ed. Philadelphia, Pa.: Saunders Elsevier; 2011. Accessed Oct. 27, 2012.
- Marx JA, et al. Rosen's Emergency Medicine: Concepts and Clinical Practice. 7th ed. Philadelphia, Pa.: Mosby Elsevier; 2010. Accessed Oct. 27, 2012.
Inferring How and Why Characters Change

Grades: 3–5
Lesson Plan Type: Standard Lesson
Estimated Time: Three 50-minute sessions

Objectives:
- Infer character traits
- Support inferences with evidence from the text
- Infer how a character changes across a text
- Explain why that character may have changed

Session 1
1. Begin by gathering students together for a minilesson. Introduce the idea that good readers get to know and understand the characters in their books. This understanding helps readers comprehend the text and enjoy the books they are reading. You can talk about books you have read aloud or even movies that students are familiar with to model this concept.
2. Begin to read aloud a short story with a strong main character who changes during the course of the story. "A Bad Road for Cats" by Cynthia Rylant is used as a model throughout the lesson, but you may use any short story you wish. In "A Bad Road for Cats," the reader is introduced to a poor, harsh woman named Magda who is searching for her lost cat. As Magda goes through the process of searching and eventually finding her cat, she begins to show kindness and compassion for the young boy who found and cared for the cat.
3. Ask students to think about the main character, Magda, as you read. What does she look like? How does she act? How do other characters in the story react to her? These questions can be listed on a chart for students to refer to, or you can show them the categories on the character map portion of the interactive Story Map.
4. Stop reading when you feel that students have enough information to answer the questions and come up with a predominant character trait for the main character. If you are using "A Bad Road for Cats," a good place to stop is after Magda reads the "4 Sal. CAT" sign.
5. Model for students how you are thinking about the character and responding to the questions. For example, you might model how you visualized the character in the story. You can also model how you infer character traits from your responses to the questions. It is helpful to have the story on an overhead so that you can explicitly model how to use information from the story to infer character traits.
6. As a class, decide on a predominant character trait for the main character. Write this on chart paper.
7. At this point, send students back to their independent reading texts and ask them to think about the characters in their own books in the same way as you have been thinking about Magda. Have students complete a character map for the main character in their independent texts, either online on the Story Map or on paper if you have printed the map in advance.
8. During independent reading, you can confer with several students or small groups of students about their characters. During this time, you might meet with a small group of readers and have them apply these strategies to another short story at their instructional level (see list of possible stories in Materials and Technology).
9. Gather at the end of the independent reading time (after about 30 to 40 minutes) so students can share what they have discovered about the characters in their books and what strategies they used to come to these conclusions. Have two or three students share the character traits they discovered and the evidence from the text to support these inferences.
You can also have partners share their findings with each other so that more students can share and you can listen in and assess their understanding of the concept.

Session 2
1. Finish reading the story you started in Session 1, and ask students to once again think about the questions on the chart paper or interactive character map, just like they did for the first part of the story. At the end of the story, ask students to reconsider the same questions and complete a new chart or character map on the same character (Magda, for example).
2. Ask students what they notice when they place these two character maps side by side. Model for students your thoughts about Magda and how she has changed since they first met her in Session 1. Show students how you are inferring (i.e., taking evidence from the text and combining it with your own experiences and knowledge) to understand how the character changed. Demonstrate how to complete the How and Why Characters Change graphic organizer. Leave the "Why the Character Changed" section blank for now.
3. Have students discuss their own observations about Magda at the end of the story and how they think she has changed. You might want to have students discuss these observations with partners or in small groups.
4. Provide each student with the How and Why Characters Change graphic organizer. Ask students to continue reading their independent reading books and think about how their main characters have changed. Have students complete the "At the Beginning" section of the organizer when they have enough information to do so; the "At the End" section should be completed when they near the end of the story. You might also have students again complete the interactive character map for their characters at the end of the story, compare the two character maps, and then complete the How and Why Characters Change organizer. During this time, you can confer with individual students or work with students in small groups.
Note: If students are reading longer texts, you can have them think about how the character changes across several chapters.

5. At the end of the reading time, have students gather and share (possibly with partners) what they have noticed about character change in their own books.

Session 3
1. Return to the partially completed How and Why Characters Change graphic organizer and review Magda's traits at the beginning of the story, the end of the story, and how she changed throughout. Ask students to think about why Magda might have changed the way she did. What would cause this sort of transformation? Ask students to brainstorm several possibilities and support their ideas with evidence from the story or their own experiences. Reinforce the fact that, as readers, they are inferring why the character has changed. Ask students to decide on the most likely reason for Magda's change and add that to the chart.
2. Have students return to their independent reading books. Ask them to review their own How and Why Characters Change sheet and start thinking about why their characters might have changed throughout the story. Confer with students as they read to determine their understanding of the characters in their stories, focusing on their ability to infer how and why the characters changed. When students finish reading their stories, ask them to complete the "Why the Character Changed" section. This assignment may go beyond one session.
3. Once again, you may want to gather a small group of students to read a short story at their instructional level and focus on how and why the characters in the story changed.
4. At the end of the session, gather students and ask them to share their thoughts on why their characters might have changed in the stories they are reading. Ask students to reflect on how thinking about characters in this way helps them to better understand and enjoy the stories they are reading.
Extensions
- Students can study other characters in their books, in addition to the main character, and complete the graphic organizers.
- Students can use the Character Trading Cards tool to create trading cards for characters they are studying. They might exchange these with each other to learn about each other's characters or use them as writing prompts. For example, they can take one character and write about how he or she changed across a story and why.
- Students can study how characters change across a series of texts. Possible series include the Ramona Quimby series by Beverly Cleary, the "Fudge" books by Judy Blume, the Dimwood Forest series by Avi, or J.K. Rowling's Harry Potter series.
- Students can use similar charts and graphic organizers to develop dynamic characters for their own narrative stories.
- Students can think about how and why they have changed in certain circumstances and connect this to the reading they are doing in class.

Assessment
- Provide students with a short story in which a character changes. Ask them to read the story independently (you will have to make sure it is a text that all students can read) and respond to the following questions, citing evidence from the text to support their responses.
  - Describe what the main character was like at the beginning of the story.
  - Describe what the main character was like at the end of the story.
  - How did the main character change?
  - Why do you think he or she changed in that way?
  - How has understanding character change helped you to become a better reader?
- Assess graphic organizers and character maps using the How and Why Characters Change Rubric.
- Review observations and conference notes taken during these sessions.
Needham was named after a town in Suffolk, England, called Needham Market. This area of Massachusetts was settled mostly by East Anglians, and many of the local towns are named after English towns in Suffolk, Essex and Norfolk (in fact, Needham is in Norfolk County). Needham was originally part of Dedham, the county seat, which was named after the English town of Dedham (in Essex). Thus, when Needham became a separate village from Dedham in 1711, it was also given the name of a familiar East Anglian village. None of the early residents of Needham bore the surname "Needham".

The word "Needham" comes from the Anglo-Saxon, and means "Lower Village" (Germanic nieder- = nether or lower, and -ham, signifying a village or hamlet).
The science and mystery of Hendra virus

Hendra virus was first observed in 1994 but Australia's leading researchers are still quick to list what they do not know about it. Scientists identified the previously unknown virus after the death of Queensland horse trainer Vic Rail and 13 of his horses at Hendra, a Brisbane suburb, in 1994. The virus was later named for the suburb. After extensive testing of a range of animal species, the CSIRO identified flying foxes (fruit bats) as its natural host. A CSIRO-developed vaccine for horses was released in November 2012.

Despite their progress, scientists remain puzzled by several key elements in the story of Hendra:
- Why was 2011 the worst year for cases of Hendra?
- What is the trigger for the movement of the virus from fruit bats to horses?
- Is it one or more of Australia's four flying fox species that are responsible for its spread?
- How are fruit bats immune to the effects of such a virulent virus?

What is Hendra?
Hendra is a zoonotic disease, meaning it is able to move from animals to humans. It was originally called 'equine morbillivirus' but was renamed Hendra and became the first of a new genus, henipavirus, within the paramyxoviridae family. It is closely related to the Nipah virus, which does not exist in Australia. CSIRO scientists have recently (mid-2012) identified a third related virus, now called Cedar virus. The discovery is described as a 'lightbulb moment' in Hendra research, with scientists keen to discover the genetic difference between the two. While Hendra virus has proven to be deadly to both horses and humans, Cedar virus is harmless.

The evidence to date shows Hendra can be transmitted from flying fox to horse, from horse to horse and from horse to human. In 2011, a dog tested positive to Hendra for the first time. Scientists say it is known that other species may be susceptible to Hendra, but until now a case had not been seen beyond the laboratory.
Further research is now under way into how the virus affects dogs and whether they can transmit it to humans. Fruit bats show no sign of illness when infected with Hendra, although when they shed the virus, it is highly virulent. In fact, Hendra has been described as one of the most virulent viruses in the world. It causes a range of symptoms in horses; they are typically fast acting and death comes rapidly from either respiratory or neurological symptoms. Of horses infected with Hendra, 75 per cent die as a result of the virus. Horses found to have Hendra are typically killed to minimise any further spread of the virus. While the transfer of the virus from horses to humans is rare, four of the seven human cases since 1994 have resulted in death.

Hendra virus symptoms
Symptoms in horses typically include:
- a sudden fever;
- laboured breathing;
- frothy or blood-stained nasal discharge; and
- neurological changes such as loss of vision, muscle twitching or loss of balance.

Symptoms in humans include:
- a flu-like illness that can develop into pneumonia or encephalitis.

CSIRO scientists suggest Hendra is a stable virus, quite unlike strains of the human flu, which change and adapt. It is also known to be fragile in the environment, has little chance of living in a decomposing animal and is not easily transmitted. It is easily killed by soap or detergents, heat or in a dry environment. Close direct contact is needed for transmission to occur and it is not spread by droplets in the air like human flu or the highly contagious equine influenza.

Hendra is thought to be a very ancient virus within fruit-bat populations. Since research into Hendra began, the virus has been found in all four mainland species of fruit bats. Recent research shows Hendra virus present in 30 per cent to 60 per cent of bat urine samples collected. It is thought that only some fruit bats carry high enough levels of the virus to allow it to move between species, a phenomenon known as 'spillover'.
How spillover occurs is unknown but bat urine, saliva, aborted bat fetuses or reproductive fluids from bats may be involved. While the majority of Hendra cases have been recorded east of the Great Dividing Range in Queensland and northern NSW, Hendra has the potential to be found wherever fruit bats occur.

Hendra was first observed in September 1994 when Vic Rail and some of his horses were affected by a sudden and unknown illness. Several days later, Mr Rail and 13 horses had died. A second person became infected with the virus, but survived. Another seven horses were infected with the virus. They did not die from Hendra but were put down to prevent a relapse of the infection. The Queensland Department of Primary Industries collected samples from the property in Hendra on the outskirts of Brisbane and sent them to the CSIRO in Victoria for analysis. The CSIRO's Australian Animal Health Laboratory isolated and identified a virus in what is thought to be record time. The virus was unknown anywhere else in the world.

The second human death from Hendra came in October 1995, 13 months after the first case. A 35-year-old Mackay farmer was the third person to be infected with the virus. It was only after his death from a relapse that an investigation uncovered the death of two horses on his property in August 1994, a month earlier than the Hendra outbreak near Brisbane. The horses were found to have died from Hendra.

Two Queensland vets died from the virus in separate incidents in 2008 and 2009. Dr Ben Cunneen was infected after treating an infected horse at a veterinary clinic in Redlands, south-east of the Brisbane CBD. An outbreak at a Cawarral property near Rockhampton killed four horses and 55-year-old vet Dr Alister Rodgers. There have been seven human infections since the virus was first detected 17 years ago; four people have died as a result.

Hendra outbreaks in 2011 and 2012
There was an unprecedented spike in the number of Hendra cases in 2011.
Twenty-one horses died and many hundreds were tested in NSW and Queensland. Before 2011, NSW had recorded only one horse death from Hendra, in Murwillumbah in 2006. During 2011, there were 10 confirmed cases of Hendra in NSW on eight properties. Hendra was found in Wollongbar, Macksville, Lismore, Mullumbimby, Ballina, and South Ballina. There have been no new cases of Hendra in NSW in 2012.
- NSW Primary Industries: Latest Hendra information

In Queensland in 2011, Hendra was confirmed in nine locations, resulting in the death of 11 horses. Hendra has been found in Beaudesert, Boonah, Park Ridge, Kuranda, Hervey Bay, Boondall, Logan Reserve, Chinchilla and the Gold Coast. So far this year (2012), Hendra outbreaks have been confirmed in six locations in Queensland including in Cairns, Rockhampton and Mackay. A number of people who've come into contact with sick horses are being monitored for signs of the virus.
- Queensland Primary Industries: Latest Hendra information

Scientists are unsure why 2011 was a particularly bad year for Hendra outbreaks. One theory is that Cyclone Yasi and the summer season flooding in Queensland over 2010/2011 may have destroyed food sources and fruit-bat habitats in northern Queensland, forcing bats south. CSIRO scientists believe stressed fruit bats may shed more of the virus. They may have spread the virus to southern fruit-bat populations. It is also thought infected animals shed higher levels of a virus when they are first exposed. The equine industry and Government authorities are on high alert in 2012 after the deaths of two horses in Rockhampton and Townsville in May.

The transfer of Hendra virus from bats to horses and from horses to humans is considered rare. There is no evidence to suggest that Hendra can be transferred directly from fruit bats to humans. The virus is not highly contagious. Close direct contact with an infected horse is required for transmission to other horses or carers.
Hendra has an incubation period of five to 16 days, and horses can shed the virus about two to three days before showing symptoms. In humans, Hendra usually comes to light as a flu-like illness that can develop into pneumonia or encephalitis. People who have recovered from Hendra virus may relapse, sometimes many months after the first infection. If you are a horse owner, you are encouraged to assess your property for trees that may attract bats, observe the seasonal changes in those trees, and isolate horses from any area where bats may roost during the day. - Australian Horse Industry Council: Managing bats and trees Trees to be aware of include, but are not restricted to, a range of fig trees, melaleucas, eucalypts, wattles and passion fruit vines. Flying foxes are also attracted to flowering or fruiting trees with soft fruits, like paw-paw, mangoes, lilly pillies, grevilleas and some palms. Evidence from June and July 2011 indicates that all cases of Hendra occurred in paddocks with access to flowering or blossoming trees attractive to bats. Horses should be fenced away from fruiting or flowering trees where fruit bats may be present. They should be removed from areas where bats roost during the day. Food and water for horses should be kept well clear of the bat habitat, and should be under cover where possible. Horse feed should not include food like apples, carrots or molasses, which may attract bats. Horse owners are encouraged to change boundary fencing to avoid nose-to-nose contact with neighbouring horses. Research from the 2011 Hendra outbreaks indicates that infected horses are unlikely to pass on Hendra to stable or paddock mates unless there has been close contact. Vets or horse owners should follow strict quarantine procedures if a horse is suspected of being ill. Ideally, avoid all contact until the illness has been investigated. Take great care with hygiene and personal protection, including the use of full protective clothing.
If you notice symptoms of Hendra virus in your horse, quarantine the animal and contact your vet immediately. Contact the Emergency Animal Disease Hotline on 1800 675 888. - Equine Veterinarians Australia: 12-step guide to minimising Hendra risks The science on Hendra In November 2012, a Hendra vaccine for horses was released. CSIRO tests showed the vaccine had prevented infection in horses. The horse vaccine is seen as a tool in breaking the cycle of Hendra transmission from horses to humans. It is thought that a vaccine for humans is still many years away, although work is underway to develop better treatments for people who do contract Hendra virus. In mid-2012 CSIRO scientists identified a previously unknown related virus, now called Cedar virus after its discovery in Cedar Grove, south of Brisbane. The discovery is described as a 'lightbulb moment' in Hendra research, with scientists keen to discover the genetic difference between the two viruses. While Hendra virus has proven to be deadly to both horses and humans, Cedar virus is harmless. It is hoped a comparison of the two viruses will determine what makes Hendra so deadly. Scientists are also focusing on a better understanding of fruit bats. They are studying bat ecology and bat immune systems to understand how Hendra maintains itself, how the virus emerges from bats and what causes it to switch hosts. They are looking for biological signs in bats that will help to predict potential Hendra 'spillover' events. This information could be used to offer an early warning to horse owners about likely outbreaks of Hendra. The CSIRO is looking particularly at the black flying fox, whose habitat mirrors the geography of outbreaks in Queensland and NSW. A study of bat urine from all bat species is also under way to determine whether one species naturally carries higher rates of the virus than others. Hendra is a notifiable disease in Australia.
Report suspicions to Biosecurity Queensland on 13 25 23 or the Emergency Animal Disease Watch Hotline on 1800 675 888.
Lesson Plans for Secondary School Educators Unit One: Introducing Tolkien and His Worlds "Rumpelstiltskin" (Brothers Grimm) With the help of a strange little man, a young woman accomplishes the impossible task of spinning straw into gold, but she must promise to give him her firstborn child in return. This famous fairy tale features two eucatastrophes: the appearance of Rumpelstiltskin and the discovery of his secret name. "The Devil with the Three Golden Hairs" (Brothers Grimm) To prove himself worthy of the King's daughter, a young man must enter Hell and bring back three golden hairs plucked from the Devil's head. This tale contains several classic motifs, including the lucky child, the impossible task, the well-earned reward, and the attempt to elude fate, as the King seeks to circumvent a prophecy concerning his daughter's future husband. "Orpheus and Eurydice" (Greek myth) When Eurydice, wife of the great musician Orpheus, dies from a snakebite, her husband descends into the underworld and, through the beauty of his singing, persuades the rulers of Hades to release her. He is permitted to take Eurydice back with him on one condition: he must not turn around and look at her until they have reached the surface. Apart from the biblical narrative of Lot's wife, "Orpheus and Eurydice" is the most famous example of a story centered on the "forbidden action" motif. "Creation of the World" (Norse myth) In this excerpt from "Voluspo" ("The Sibyl's Vision"), a wise woman relates how Othin (Odin) and his fellow gods created the world from the body of the frost-giant Ymir. These cryptic verses, written down in Old Icelandic but originating in the ancient Teutonic oral tradition, begin a collection of poems called "The Elder Edda," a crucial document in northern myth and a fount of inspiration for Tolkien.
"Khodumodumo" (African folktale) Hiding from a shapeless ogre who is devouring every creature in its path, a woman gives birth to a boy who immediately grows to adulthood, slays the beast, and cuts the people and animals free from its body. Many variations of the "swallowing monster" motif occur in African folklore. The idea of a live person being recovered from a vanquished beast is familiar to us from the story of Little Red Riding Hood. In the European tradition, Hercules and the Irish hero Cuchulain are other examples of the child who displays adult powers. "The Charmed Ring" (Hindu folktale) A merchant's son is thought a fool for spending his inheritance to save the lives of three animals. But these very creatures help him gain a magic ring and the hand of a beautiful princess. The idea of doing the right thing for its own sake characterizes a folktale type called "the grateful dead," in which a hero starting on a journey gives his last penny so an anonymous corpse can receive a decent burial. Soon the traveler is joined by a companion, sometimes in animal form, who helps him gain his desires and is eventually revealed to be the ghost of the buried stranger. This theme of unforeseen benefits accruing to unselfish actions appears in The Hobbit and is especially prominent in The Lord of the Rings. "Thomas the Rhymer" (Scottish ballad) Thomas goes willingly to Faerie when the Queen of Elfland entices him to spend seven years with her. During their marvelous journey, the lady shows him three roads: one to Heaven, one to Hell, and one to Elfland. He chooses the third path. Upon returning home, Thomas becomes a famous prophet, called "True" for the accuracy of his predictions. This ballad, like those about Robin Hood in England and John Henry in America, is supposedly based on actual events. Unit One Content Comments for Teachers
There is little faith in the reform of the UN system; nonetheless, the United Nations Conference on Sustainable Development, to be held in Rio de Janeiro in 2012 – also known as Rio +20 – is not only to set the stage for a green economy, but also to provide an impetus for the institutional reform of the UN environmental sector. The ministerial-level advisory group brought together by the UN Environmental Program (UNEP) is preparing the reforms. The state of the discussion is analyzed here by Barbara Unmüßig. The consensus in the family of nations is great: the international environmental architecture is in urgent need of reform, for it is incapable of handling global environmental crises. As UNEP Deputy Executive Director Angela Cropper (see note) writes, the current “International Environmental Governance” (IEG) system reveals “little rationality, methodology or connection between various parts. Rather, we find immensely complex disorder of more than 500 environmental agreements, disengaged institutions and bodies, and unsupported commitments.” Business as usual not an option: The institutional fragmentation of UN environmental activities and agreements, their haphazard coordination and inefficiency, and their underfunding are all familiar issues. The structures of environmental governance within the UN have always been a topic of controversial discussion; that has not made the UN any more effective. On the contrary, new agreements, programs, and funds have continually been added, which have in fact aggravated the coordination problem. In addition to the hundreds of environmental agreements, there are now “44 different UN institutions with mandates for environmentally related activities”, as Nils Simon of the German Institute for International and Security Affairs (SWP) in Berlin has ascertained (see note). 
This fragmentation, Cropper points out, has caused “multiple overlaps and gaps, as well as additional costs which are overstretching human resources, especially in developing countries.” At the Johannesburg Summit in 2002 – the “Rio +10 Conference” – the call for fundamental structural reform was raised. Some European governments for the first time raised the issue of a world environmental organization, as a far-reaching response to the fragmentation of UN structures. NGOs reacted positively to the idea, since they hoped to create a counterweight to the World Trade Organization (WTO); nonetheless, it was politically impossible to implement. Neither the US under the Bush administration, which throughout the decade was implacably skeptical of new organizations, nor the majority of emerging and developing countries warmed to the idea. The latter feared that environmental and development tasks would drift apart within the UN system. Governments have for some time now been announcing that the status quo is no option. How did this change of mood come about? With the mandate from the UN General Assembly, various international consultation processes have been initiated since 2006. Even if they have not arrived at any concrete conclusion, they have laid the foundation for an analysis of errors in the system of environmental governance, and led to the realization that business as usual is no longer justifiable. The Obama administration has now cautiously signaled a willingness to change. China too can imagine moderate improvements. But all that still leaves us a long way from a guarantee of true reform steps. Since 2009, the Consultative Group has, at the urging of the UNEP Governing Council, presented five reform options. At the meeting of the Consultative Group in Helsinki in November 2010, there was no consensus. All options were to be examined further. In February 2011, the Governing Council of UNEP again addressed the reform proposals.
Here, there were great differences between the EU and Switzerland on the one hand, which favored the establishment of a new, powerful UN environmental organization based on UNEP, and the USA, China, Russia, India and Argentina on the other. At issue is what the added value of such a centralized UN environmental organization would be. Further-reaching proposals are viewed with great skepticism by many emerging and developing countries. They are blocking the attempt to push through a new umbrella organization. The major exception is Brazil, which has been calling for a UN umbrella organization for the environment and sustainable development since 2007. As the host of the Rio +20 Conference, the Brazilian government wants to present a respectable performance. At the last preparatory meeting for the Conference at the beginning of March in New York, Brazil renewed its recommendations for a UN umbrella organization in an explicit statement. Under this concept, the mandates and roles of ECOSOC, UNEP and the CSD are to be newly conceived and newly defined, which is seen as necessary if the coherence, coordination and effectiveness of the present system of UN environmental governance is to be improved. The foundation of “UN Women”, in which the fractured UN women's policy programs and organizations were brought together, is to serve as an example. Pragmatism and a huge step forward – both at the same time? The only thing that is clear right now is that the issue of reform of environmental governance will stay on the agenda for Rio +20. All the differences between the various actors have broken out anew. After all the failed attempts at reform of the past decade, the urge for a pragmatic solution is emerging ever more clearly. The reform will have to be realistic, and also politically acceptable for all the actors to be brought into the boat under the UN consensus principle. That sounds like the all-too-familiar lowest common denominator.
It could however result in an upgrading of UNEP – although some developing countries are already asking critically what that would really mean: more resources and a broader mandate? The expansion of UNEP would not be wrong. The basic problem would remain, however: poor coordination between the many environmental agreements on the one hand and, on the other, the other organizations of the UN system and such additional international structures as the international financial institutions and the WTO. Nonetheless, such a step would constitute proof that at least small reforms of the UN system are possible. The Consultative Group will continue to work on the various options. As regards the reform options it has raised, it also refers to the High Level Panel on Global Sustainability that UN Secretary-General Ban Ki-moon has convened in preparation for the Rio +20 Conference. According to Ban Ki-moon, this panel is to “think big” and present ambitious yet pragmatic plans for sustainable development in the twenty-first century. The report of the twenty-one-member panel is to be presented in the fall of 2011. It is to be hoped that this panel will also address institutional reforms, in order to better mandate and equip the UN to solve global environmental crises, since the various blockades in the UNEP Governing Council are now emerging openly once again. The central causes of the current weakness of international environmental policy and its structural incapacities are rooted in conflicts between various interests, and in problems of distribution and power within the very heterogeneous community of nations. A limited reform of environmental governance, too, will require political will on the part of policymakers. For this reason, many observers today – more than a year before the Rio +20 Conference – believe that a further-reaching reform of UN environmental governance will only come about if there is movement at the various top levels of governments.
Where constellations of interests have solidified, as in the case of the UN climate negotiations, or with regard to climate and development funding, such small reforms in the area of environmental governance are not really very helpful. For the various interest groups, what is at issue is not a clearly defined policy goal, such as a low-carbon economy, a sustainable agricultural resource policy, or the “only” desirable governance structure; what is at issue is their own contradictory interests. About Barbara Unmüßig: From 1996 to 2001, Barbara Unmüßig chaired the supervisory board of the Heinrich Böll Foundation, and was elected president of the foundation in May 2002. Her numerous contributions to periodicals and books have covered international trade and finance, international environmental issues, and gender policy. This analysis can also be found under http://www.boell.de/intlpolitics/security/foreign-affairs-security-global-environment-governance-rio20-11709.html. The original German version was published in 'Weltwirtschaft & Entwicklung'.
I want to talk a little bit about gratitude today. I think it's vital for all of us to fully understand the meaning of gratitude and how much impact it can have on all of us. Do you know the meaning of gratitude, and how much do you value it? The definition of gratitude is: a feeling, emotion or attitude in acknowledgement of a benefit that one has received or will receive. Gratitude is being thankful and ready to show appreciation for and to return kindness. While interacting with other people we have a great impact on their overall well-being. Every gesture we make and every word we exchange with others will have some effect, either good or bad. Just talking to another person over the phone for a few minutes will leave the other person with more positive or negative energy. So, we should interact with other people with this in mind and behave and speak according to the way we want to be perceived. Recently I did a post on the importance of positive thinking and on how you can create more positive energy in your life, which will in turn increase your overall well-being. I highly recommend using positive thinking quotes. Gratitude is also a huge magnet for more happiness and well-being. By showing gratitude you will be attracting both more positive people and circumstances to you. A pathway to well-being Many studies have been done in the past on gratitude and the effect it has on other areas of our lives. Results have shown that gratitude is related to well-being, and people who practice gratitude seem to be happier with their lives, more optimistic, and to have better relationships with others when compared to people who dwell on daily difficulties and hassles.(1) Results also show that there is a link between gratitude in teenagers and being optimistic, having good social relationships and being pleased and happy about life.(2) With this in mind we should put more emphasis on practicing gratitude early in life.
We should encourage our children to be positive in all their endeavours and to think and be optimistic in life. We should also practice what we preach; that's actually quite important. We should strive to be good role models for our children and interact with others in the same way we encourage them to. Otherwise we are without a doubt sending mixed messages that might be very confusing in their eyes. When we express our gratitude in our actions we are also contributing to other people's happiness. A gesture of gratitude is also a gesture of kindness. When someone does something nice for you or gives you a compliment, you probably experience some happiness. Just think about it for a minute. When someone took the time to do something special for you, maybe wrote something extra special in your birthday card or complimented you on your wardrobe or on baking a delicious cake – how did you feel? Chances are you felt happy and glad. So why not do the same for others? When you feel grateful for something, say so! Say thank you or smile when someone opens the door for you or compliments you. A simple smile will go a long way; it's much more powerful than many people think. When someone smiles at you, your first reaction is to smile back. When you smile you show your gratitude, and both you and the person you smile at will experience a sense of happiness. This will make your day just a bit better and you'll be happier. It's amazing how much impact gratitude really has. How can you show your gratitude? There are many simple ways to practice gratitude; here are just a few to get you started on your "daily gratitude journey": - Call or email someone just to say "thanks". - At the end of each day, take a few minutes to think about what you are thankful for. Maybe you met a friend, or something you did went really well. Think about it and maybe even write it down in a special gratitude journal where you keep your thoughts about gratitude.
- Exhibit patience, even when you are in a hurry. - Have an attitude of gratitude in all your relationships. - When you're at the register at the supermarket, look the salesperson in the eye, and even at their nametag so you can say "Thanks, Annie". This will surely put a smile on Annie's face. - Memorize a song about gratitude or a gratitude poem, one that reminds you to be thankful. Recite or sing it daily for a week, or at least think about it. - Instead of frustration and thoughts about things that you do not own but want to possess, think more along the lines of being happy and grateful for the things that you do have. On that note I want to share with you one of my favorite gratitude quotes: "Just think how happy you would be if you lost everything you have right now, and then got it back again." Frances Rodman This list of gratitude gestures could go on and on. How many of these have you already done today? Maybe I'll put together a longer list soon and post it here on the site. In the meantime I would love to hear some of your ideas on expressing gratitude to others. What does gratitude mean to you? Please put your thoughts and ideas in the comments below, I would be very grateful ;) Hope you have a wonderful day, today and every day. - Emmons RA, McCullough ME (2003). Counting blessings versus burdens: An experimental investigation of gratitude and subjective well-being in daily life. Journal of Personality and Social Psychology, 84(2): 377–389. - Froh JJ, et al. (2009). Gratitude and subjective well-being in early adolescence: Examining gender differences. Journal of Adolescence, 32(3): 633–650.
Getting along with a roommate can be challenging at times. It takes more than a little effort and diplomacy to work out the natural frictions that may develop in any such arrangement. The challenges of sharing refrigerator space, issues regarding timely rent and utility payments, discussions about mutual respect for privacy, or arguments over appropriate noise levels all pale in comparison to the difficulties of dealing with a depressed roommate. Depression is a medical condition. While everyone experiences sadness, lack of energy, or even despair occasionally, depression goes well beyond the blues. We now know that depression is probably linked to problems with brain chemicals called neurotransmitters, namely serotonin and norepinephrine. These neurotransmitters act as chemical messengers, traveling from one brain cell to another to propagate signals that play a role in normal mood regulation. Among people with clinical depression, levels of these chemicals are believed to be so low that mood becomes uncontrollably disordered. A depressed person is likely to exhibit behaviors that would challenge even the most accommodating of roommates. He or she may be sullen, sad, angry, irritable, lethargic, or even abusive. Some depressed people cry uncontrollably, with little apparent provocation, while others are likely to lash out in anger. Some eat too little, or too much, while others sleep too much — or struggle to sleep. It’s important to realize that a depressed person is not him or herself. Despite any evidence to the contrary, he or she is not necessarily being belligerent, or acting out, or being aggressive, or slovenly, or selfish, or irresponsible by choice. If your roommate is depressed, chances are he is incapable of behaving like an ideal — or even a marginally acceptable — roommate, because he simply cannot control his behavior any longer. 
Short of cutting your losses and moving out, your best option is to bear with your roommate and try to encourage him to get help. Depressed people often do not realize — or do not wish to admit — that they have a real problem and need medical help. But medicine offers the best hope of recovery. Once upon a time, people with depression were labeled melancholic, or called lunatics. They were either shunned or shut away in asylums where they languished, with no treatment, in often appallingly inhumane conditions. Fortunately, we now recognize that people with depression are not “crazy.” Rather, they suffer from a largely treatable illness that deserves our compassion, understanding and patience. With medication and/or psychotherapy, most cases of major depression are now treatable. In some instances alternative treatments, such as electroconvulsive therapy, may be necessary. The important point is that people with depression can get better with proper medical care. What’s a concerned roommate to do? If your roommate appears anxious, confused, indecisive, constantly sad, irritable or restless, he or she may be depressed. Rather than criticize her behavior, consider encouraging her to get help. Don’t take seemingly antisocial behavior personally. If a depressed roommate fails to clean up after herself, or to respect your wishes regarding shared tasks, etc., it doesn’t necessarily mean she’s disrespecting you. She may simply be too depressed to behave as she normally would. Encourage your roommate to seek help. If he refuses, consider contacting a family member who can help get him the medical attention he needs. Depressed people (especially men) often turn to drugs or alcohol in an attempt to feel better. Rather than improving the situation, drugs or alcohol (or both) invariably make things worse. While it’s one thing to share a friendly drink with a roommate, enabling ill-advised behavior in a sick person is another thing altogether. Be advised. 
If your roommate talks about suicide, or threatens to commit suicide, it’s important to take immediate action. Threats should not be viewed as idle bids for attention; they may be genuine signs of an impending suicide attempt. Call your roommate’s doctor, a suicide hotline, or 911. If the threat seems credible, stay with your roommate until help arrives, and remove any potential means of harm from the immediate environment. Guns, for instance, are used to commit suicide more often than any other method. While depressed women are more likely to attempt suicide, far more men than women die from suicide in the U.S. Of course, you need to look to your own needs, too. Assuming responsibility for the wellbeing of a depressed person takes time and energy. At some point, you may need to consider your options. If your roommate refuses to seek treatment, or will not take prescribed medications, and has rejected help from his or her own family members, you may need to consider making other living arrangements.
I got some education in Java, and everything you see in this program is something I was introduced to. There is a linked list with 4 of each card from 1 to 10. Can someone show how I could add all the face cards (valued at 10), and maybe even give me step-by-step instructions on how to exchange the 1 for an 11 and have the 11 turn into 1 when the hand goes over 21? If anyone could provide help I would very much appreciate it. That's a lot of code to expect someone to go through, especially with such vague questions. I'll give it a shot, though... Your lines 31-40 seem to count from 1 to 10, and then for each value, add it four times to the deck. So that would give you 40 cards. You now need to add 12 more cards, all valued at 10, right? I'd just do another loop, and add them in. If you want my HONEST opinion, I'd throw this out and start over. I'd make a 'card' class that holds (among possible other things) a rank, a suit, and perhaps a value. I'd build a deck class that contains cards. I'd design the constructor to initialize the deck in a sorted order, and provide it a shuffle() method. I'd design a 'hand' class that holds cards. It can return how many cards are in the hand, and also return the value of the hand. So it would compute the value. It would basically add up all the values, assuming each ace was worth "11". If the hand was over 21, and I had one or more aces, I'd then subtract 10 and see if the value was now 21 or under, looping for each ace in the hand... Note - just typing that out implies I may need to keep track of how many aces are in the hand as well... Never ascribe to malice that which can be adequately explained by stupidity. fred rosenberger wrote: . . . keep track of how many aces are in the hand as well...
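The ace-handling approach described in that reply can be sketched in a few lines. This is an illustrative example, not the original poster's code; the class and method names (BlackjackHand, value()) are invented for the sketch. Each ace is first counted as 11, and aces are demoted to 1 one at a time while the hand is bust:

```java
import java.util.ArrayList;
import java.util.List;

public class BlackjackHand {
    // Ranks: 1 = ace, 2-10 = pip cards, 11-13 = jack/queen/king.
    private final List<Integer> ranks = new ArrayList<>();

    public void add(int rank) {
        ranks.add(rank);
    }

    public int value() {
        int total = 0;
        int aces = 0;
        for (int rank : ranks) {
            if (rank == 1) {          // ace: count as 11 for now
                aces++;
                total += 11;
            } else if (rank >= 11) {  // face cards are all worth 10
                total += 10;
            } else {
                total += rank;
            }
        }
        // Demote aces from 11 to 1, one per loop pass, while the hand is bust.
        while (total > 21 && aces > 0) {
            total -= 10;
            aces--;
        }
        return total;
    }

    public static void main(String[] args) {
        BlackjackHand hand = new BlackjackHand();
        hand.add(1);   // ace
        hand.add(13);  // king
        System.out.println(hand.value()); // 21: the ace counts as 11
        hand.add(5);
        System.out.println(hand.value()); // 16: the ace has dropped to 1
    }
}
```

Because value() recomputes the total from scratch on every call, the "11 turns back into 1" behaviour falls out automatically as cards are added; no card object ever has to change its stored rank.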
The value of collecting evidence from GPS devices has been well established over the last several years. GPS evidence has played a major role in several high profile cases ranging from terrorism to homicide to kidnapping. Most of the time as investigators, we tend to focus on collecting evidence as part of criminal investigations, however GPS evidence can play a significant role in many other types of investigations such as accident reconstruction and search and rescue cases. Most investigators think in terms of being able to obtain GPS evidence in the form of the “breadcrumb trail” known as trackpoints, but much more data is available from these devices. This article will provide some basic information on the types of evidence and devices an investigator may come across. Standard GPS Data There are four main types of data that are constantly available across almost all GPS devices. These data types can be divided into two categories: system level information and user inputted data. - Trackpoint: A trackpoint is a location stored by the unit as a record of where the GPS has been. When the GPS unit is turned on, and has acquired satellites, it will begin to record an "electronic breadcrumb trail." The trackpoints are created automatically by the unit and cannot be changed by the user. The unit, by default, automatically decides how often to create trackpoints. The user may also specify to create track points based on a specific time or distance interval. - Track Log: The track log is the complete list of trackpoints that the unit has created. This track log is created such that if a user wants to retrace his or her steps, it is possible to perform a TrackBack. The unit will then navigate the user from point to point in the track log to take the user back to his or her starting location. - Waypoint: A waypoint is a location that a user stores in the GPS. 
This location can be a point where the user was physically present and wanted to store, or it can be a location that the user enters into the unit from coordinates, as an address, or by selecting a point of interest (POI) to which the user wants to navigate in the future.
- Route: A route is a series of waypoints that the user wants the unit to navigate in a specific order. The advantage of using a route is that upon arrival at an intermediary waypoint, the unit automatically starts navigating the user to the next waypoint in the route.

Generally speaking, system-level information like trackpoints can be used to prove actions, as it shows that a device has been to a specific location. User data like waypoints can be used to show intent: user-inputted data does not prove that the device has been to the location specified in the waypoint, but it can show intent to go there.

There are four main categories of GPS devices, or portable navigation devices: automotive, aviation, maritime, and handheld, the most popular being automotive devices. The handheld category includes a range of devices used for hiking, biking, geocaching, fitness, golf, etc. There are also four basic types of devices in the portable navigation marketplace: simple, smart, hybrid, and connected. Smart devices are the most widespread, as they are easily accessible to consumers at mainstream retail outlets.

Simple devices are basic in nature and used to navigate from point A to point B. They may or may not have the ability to store maps or plot a location on a map. They are generally capable of storing trackpoints, track logs, waypoints, and routes. On average they will hold 10,000 trackpoints and will have a serial or USB connection.

Smart devices generally fall into the automotive category and are USB mass storage devices. They normally have at least 2 GB of internal data storage and an SD card slot.
They are more consumer friendly and have features like point-of-interest lookup, the ability to save favorite locations like home or office, a built-in picture viewer, and an MP3 player. They also store the same GPS information as a simple device: trackpoints, track logs, waypoints, and routes. Not all smart devices will save trackpoints, but a vast majority will.

Hybrid devices have the same characteristics and features as a smart device but add a Bluetooth radio that allows the GPS device to connect to a mobile phone and be used for hands-free calling. Devices that have been connected to a mobile phone and used for hands-free calling will generally have call logs (incoming, outgoing, and missed), an address book (normally imported from the mobile phone), the MAC addresses of the last ten mobile phones connected, and sent and received SMS messages.

Connected devices have the same characteristics and features as hybrid devices but with one additional capability: an embedded GSM cellular radio and SIM card with GPRS data service enabled. Connected GPS devices offer real-time online content, from fuel prices to Google searches to live traffic updates. These services require a subscription, and to encourage users to buy into these high-end devices, companies will offer the first 1-2 years of service for free.

Trackpoints are the Holy Grail in GPS forensics. They are the electronic breadcrumb trail that tells an investigator exactly where and when the device was in a specific location. With trackpoints, criminal acts can be pinpointed down to almost the exact second a crime was committed. Almost all GPS devices collect trackpoints, but even without them, GPS devices still hold a significant amount of data.
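As a minimal sketch of how a track log can be examined once exported, the code below parses trackpoints (position plus timestamp) from a GPX file and then finds the point recorded closest in time to an event timestamp, such as an entry from a call log. GPX is one common interchange format, not necessarily what a given device uses internally; the file contents, coordinates, and timestamps here are purely illustrative.

```python
# Sketch: extract trackpoints from a GPX export and find the trackpoint
# nearest in time to an event (e.g., a call-log entry). The sample GPX
# data and the event time below are hypothetical.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def parse_trackpoints(gpx_text):
    """Return a list of (timestamp, lat, lon) tuples from a GPX document."""
    points = []
    for trkpt in ET.fromstring(gpx_text).iter(GPX_NS + "trkpt"):
        t = trkpt.find(GPX_NS + "time")
        when = datetime.strptime(t.text, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        points.append((when, float(trkpt.get("lat")), float(trkpt.get("lon"))))
    return points

def nearest_point(event_time, points):
    """Return the trackpoint closest in time to event_time."""
    return min(points, key=lambda p: abs((p[0] - event_time).total_seconds()))

SAMPLE = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="38.8977" lon="-77.0365"><time>2011-04-01T14:03:10Z</time></trkpt>
    <trkpt lat="38.8895" lon="-77.0353"><time>2011-04-01T14:05:42Z</time></trkpt>
    <trkpt lat="38.8893" lon="-77.0502"><time>2011-04-01T14:09:05Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

track = parse_trackpoints(SAMPLE)
call_time = datetime(2011, 4, 1, 14, 6, 0, tzinfo=timezone.utc)  # hypothetical call-log entry
print(nearest_point(call_time, track))  # the 14:05:42 trackpoint is closest in time
```

The nearest-timestamp lookup is the essence of pairing a call log with a track log: the call record supplies the time, and the track log supplies the approximate location at that time.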
Waypoints and routes show the locations to which the user intended to navigate or has navigated, along with a timestamp for when each location was put into the device. Hybrid devices that have been connected to a mobile phone will contain much of the same information that an investigator would find on the phone itself: call logs, SMS messages, and contacts. These can prove very valuable, particularly when paired with a track log: together, the call logs and track logs allow an investigator to see what time a phone call was made and from what location. Because some of these devices are USB mass storage devices, any type of file could be found. Pictures, videos, documents, password files, encrypted containers: anything that can be stored on a computer can be stored on a USB mass storage GPS device. Connected devices add the complexity of having online content associated with them. Web history such as Google searches and white-pages lookups can be critical information when assembling details for an investigation.

In closing, GPS forensics is still an emerging field in the mobile devices community. As device manufacturers continue the race to win consumers and battle to convince customers they still need a dedicated navigation system, the sources of location-based data relevant to an investigation will only continue to grow. True GPS forensics used to be limited to dedicated navigation systems but has moved into the realm of geo-referenced metadata: GPS forensics specialists now find themselves analyzing smartphones, cameras, tablets, and personal trackers, all for location-based information.

Ben LeMere is a Senior Forensic Specialist and currently serves as a contractor, through Basis Technologies, for the U.S. Government as a certified Computer Forensic Examiner, where he specializes in mobile device exploitation. He has more than 14 years of military and federal government service, and his career has afforded him extensive technical, analytical, and operational experience.
Ben also serves as a technical consultant and instructor for BerlaCorp. He is widely recognized as a subject matter expert in GPS forensics and was responsible for developing and implementing one of the first GPS forensic analysis programs for the Department of Homeland Security.
TV Profanity Leads to Teen Aggression

Swearing in television programs and video games can lead adolescents to adopt the coarse language and can also influence aggressive behavior, according to a study published Monday in the journal Pediatrics. "We didn't know this before and I was really surprised because we've got all these ratings for television, film and video games for profanity," said study author Sarah Coyne, Ph.D., assistant professor of family life at Brigham Young University and researcher of media and human development. She added that a lot of the time, the ratings are incorrect. "I think as a society we've gotten really lax concerning profanity," she added. "I think it's in part because we hear it all over the media." Researchers surveyed 222 children ages 11 to 15 from a large Midwestern middle school; 135 of the participants were girls. The students were asked about their favorite shows and games, including how often they watch television and play the games. They were asked how much profanity they thought they were exposed to and about their feelings about profanity. Researchers determined that exposure and their stance on profanity were significantly related. Coyne said the statistics point to a "trickle-down effect." "So maybe you watch television, play video games with a lot of profanity and kind of you get more used to it," she said. "You get more desensitized to it, you become more accepting of it, then you kind of start using it in your own life and then kind of show the lack of respect for people." The study found aggression could be presented physically, such as hitting, kicking or punching. However, it could also show in the form of relational aggression, like gossiping or spreading rumors about someone. "I think that parents should be a little bit more aware of what's out there in the programs our kids are watching, and the video games they're playing," Coyne added. "They could be a little more vigilant in terms of profanity exposure."
She adds that television and video games need to be more accurately labeled for profanity.
REVIEW AND CONCLUSIONS OF THE VALUES OF PALANGA PARK

There is a unique combination of values in Palanga park. It was created by the French landscape architect Edouard André, joining natural and cultural values. Here mythology is connected with science, legends continue and supplement living history, and a unique relict pine forest blends with rare flora chosen by experts known all over the world. Ancient archaeological art merges with the contributions of modern artists. The most famous historical symbol is the hill of Birute, designated an archaeological monument of national significance. The values of Birute hill inspired later works of art (the chapel, the image of Lourdes, the composition "Tau, Birute" ("For you, Birute")). A very suggestive place in the composition is given to the palace, designed by the architect F. Schwechten in neo-Renaissance style and built in 1897. Soil for forming its terrace was taken from a nearby marshy pit, establishing the picturesque ponds of the park with an island. Seen from the ponds, the palace is very notable: raised to a height of 4.5 metres, its open compositions maintain perfect visibility. A precondition of the park's design was the acquaintance of F. Tiskevicius with E. André and F. Schwechten, who had collaborated in the Grand Duchy of Poznan, the native region of Antanina Tiskeviciene, where they restored the palace and park of Samostrzele. F. Tiskevicius then invited F. Schwechten and E. André to Palanga. E. André's son René helped his father with the project (his article about this work is an important source). Planting lasted several years at the end of the 19th century, with help from the Belgian planter Buyssens de Coulon. It is necessary to accentuate the attentiveness of father and son André to the relict Palanga pine forest: the most important aim was to leave the forest untouched, to keep its strength and majesty. The wisdom of the project should also be stressed: according to it a marsh was drained and a pond established in its place.
The soil for seedlings was improved with peaty black earth taken from that pond. A dendroflora assortment unique for Lithuania was applied in Palanga park: about 500 species of trees, shrubs and climbing plants. For these serious reasons Palanga park is one of the parks of landscape standard. On its example my teacher, the dendrologist L. Cibiras, later grounded a comprehensive method of research into old green plantations. It is a pity that the original list of the assortment used by E. André has not yet been found. Only half of its species survived: some of the plants did not pass the test of the climate, while others were ruined by vandalism and neglect during the two World Wars and the hard post-war years. Research sources from the post-war years in Palanga park record 255 species and forms (among them 43 spontaneous and 212 introduced, which include 35 new, 12 doubtful and 2 …). It seems that comprehensive exploration and the search for dendrological sources would ground the renovation of Lithuania's parks designed by E. André with authentic planting. For that reason the dendroflora collection of Palanga park is a gold fund for Lithuanian dendrology. Palanga park is unique considering its architectural, dendrological and artistic values, as well as its archaeological and historical heritage, and so it deserves international importance (it was created by specialists of different nations). That is why, on the occasion of the centenary, a conference should send a resolution to Lithuania's Government and the ICOMOS service suggesting that Palanga park be included in the World Heritage list.

Prepared by: Labanauskas K. The review and conclusions of the values of Palanga park // Lietuvos zeldynu ateitis / sud. Regimantas Pilkauskas. Vilnius: Publishing Office of the Vilnius Academy of …, 2001. P. 38-39.
A medical and industrial isotope production factory in Belgium has been shut down after an unusual release of one of its products, iodine-131, through its chimney stack. The emission began during the weekend of 23-24 August, and the factory's operator, the Institute of Radioelements (IRE), informed safety regulators at 5.30pm on 25 August. An official from the Federal Agency for Nuclear Control (Fanc) travelled immediately to the plant, at Fleurus about 40 km south of Brussels, and ascertained that 40 GBq of activity had been released through the chimney. It was then decided that the factory should be shut down. Iodine-131 is a product the plant produces for medical diagnosis and therapy applications. It also manufactures the radioisotopes xenon-133, yttrium-90 and rhenium-188 for similar uses, as well as molybdenum-99/technetium-99m, which is widely used in diagnostic imaging. Fanc's French counterpart was informed of the incident on 27 August, while an alert via Europe's Ecurie system was sent very early this morning. Sharing information from the alert, Spanish authorities said a person who remained at the facility's perimeter fence would receive a maximum radiation dose of 0.10 mSv, one tenth of the standard regulatory annual dose limit for a member of the public. For comparison, the typical annual limit for workers in a nuclear plant is around 20 mSv, with up to 50 mSv allowed in a single year. Workers at IRE received no additional exposure. Despite the low levels of radiation involved, Fanc discovered some elevated readings on grass near the plant and advised nearby residents not to eat leafy vegetables or use rainwater from their gardens. Drinking fresh milk from the area was discouraged. Iodine-131 has a half-life of eight days. Not far from the Belgian border, France's Chooz nuclear power plant has heightened its environmental monitoring but has not detected anything abnormal.
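The eight-day half-life mentioned above means the released activity decays away quickly. A back-of-the-envelope sketch of the standard exponential-decay formula, using the 40 GBq figure reported here (the function name is illustrative):

```python
# Radioactive decay: A(t) = A0 * 0.5 ** (t / t_half).
# Iodine-131 half-life is about 8 days; initial activity 40 GBq.
def activity(a0_gbq, t_days, t_half_days=8.0):
    """Remaining activity in GBq after t_days of decay."""
    return a0_gbq * 0.5 ** (t_days / t_half_days)

print(round(activity(40, 8), 2))   # one half-life: 20.0 GBq remains
print(round(activity(40, 80), 4))  # ten half-lives: about 0.0391 GBq
```

After roughly ten half-lives (under three months) the released iodine-131 is reduced by a factor of about a thousand, which is why contamination advisories for short-lived isotopes tend to be temporary.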
Because the incident involved a release of radioactivity beyond the plant boundary, it has been given a preliminary rating of Level 3, a 'serious incident', on the International Nuclear Event Scale (INES). This could be subject to revision, however, as that rating would normally require greater potential radiation exposure. The cause of the release is as yet unknown.
Researchers Look Inside Molecules

For their look into the nanoworld, the Jülich researchers used a scanning tunneling microscope. Its thin metal tip scans the specimen surface like the needle of a record player and registers atomic irregularities and differences of approximately one nanometer (a billionth of a millimeter) via minuscule electric currents. However, even though the tip of the microscope is only an atom wide, it has not until now been able to take a look inside molecules. "In order to increase the sensitivity for organic molecules, we put a sensor and signal transducer on the tip," says Dr. Ruslan Temirov. Both functions are fulfilled by a small molecule made up of two deuterium atoms, also called heavy hydrogen. Since it hangs from the tip and can be moved, it follows the contours of the molecule under study and influences the current flowing from the tip of the microscope. One of the first molecules studied by Temirov and co-workers was the compound perylene tetracarboxylic dianhydride. It consists of 26 carbon atoms, eight hydrogen atoms and six oxygen atoms forming seven interconnected rings. Earlier images only showed a spot with a diameter of approximately one nanometer, without any contours. Much like an X-ray image, the Jülich scanning tunneling microscope shows the molecule's honeycombed inner structure, which is formed by the rings. "It's the remarkable simplicity of the method that makes it so valuable for future research," says Prof. Stefan Tautz, Director at the Institute of Bio- and Nanosystems at Forschungszentrum Jülich. The Jülich method has been filed as a patent and can easily be used with commercial scanning tunneling microscopes. "The spatial dimensions inside molecules can now be determined within a few minutes, and the preparation of the specimen is based predominantly on standard techniques," says Tautz. In the next step, the Jülich scientists are planning to calibrate the measured current intensity as well.
If they are successful, the measured current intensities may allow the type of atoms to be directly determined. After publishing initial images produced with the new method in 2008, the research group of Tautz and Temirov has now been able to explain the quantum mechanical principle of operation of the deuterium at the tip of the microscope. Their results were supported by computer-assisted calculations by the working group of Prof. Michael Rohlfing at the University of Osnabrück. The so-called short-range Pauli repulsion is a quantum-physical force between the deuterium and the molecule which modulates the conductivity and allows fine structures to be measured very sensitively. The Jülich method can be used to measure the structure and charge distribution of flat molecules, which can serve as organic semiconductors or as part of fast and efficient future electronic devices. Large three-dimensional biomolecules such as proteins can be examined as soon as the techniques have been refined.

Image caption: The Jülich method makes it possible to resolve molecular structure where only a blurred cloud was visible before. Credit: Forschungszentrum Jülich
CNS Research Story

Chemical Non-Lethal Weapons -- Why the Pentagon Wants Them and Why Others Don't

While some experts hail "non-lethal weapons" as a "silver bullet" forever changing the face of warfare, others point out they are only "less-lethal". Moreover, some are chemical in nature and may be incompatible with international law, particularly the Chemical Weapons Convention.

By Ingrid Lombardo, Graduate Research Assistant, East Asia Nonproliferation Program
8 June 2007

On January 27, 2007, the Pentagon demonstrated the Active Denial System (ADS) at Moody Air Force Base in Valdosta, Georgia. Unlike traditional weapons, the ADS does not kill or harm its targets; instead, it emits a highly concentrated energy beam that creates the sensation of unbearable heat, repelling people from its path without harming them. The release of the ADS is part of a wider effort on the part of the U.S. Department of Defense to design non-lethal weapons (NLW)--weapons that may temporarily stun, calm, or disable combatants without causing permanent injury or death. While some experts hail NLW as a "silver bullet" that will forever change the face of warfare, other experts point out that these new systems are only "less-lethal" than previous methods. Moreover, some NLW--particularly calmatives and malodorants--are chemical in nature, and their use in warfare may not be compatible with international law, particularly the guidelines of the Chemical Weapons Convention (CWC). This paper examines NLW and the Pentagon's research into the feasibility of incorporating them into U.S. military strategy. While other countries have expressed interest in NLW, in particular the United Kingdom and Russia for law enforcement purposes, this report focuses on the consideration of NLW by the U.S. military, which is the global leader in NLW research and development.
In order to underscore the degree of the government's interest in NLW, this paper describes three separate Defense Department-sponsored research programs conducted by the Institute for Non-Lethal Defense Technologies, the Monell Chemical Senses Center, and the Joint Non-Lethal Weapons Directorate. The report then analyzes the arguments for and against NLW, providing a summary of the possible benefits and risks associated with their use in the field. Finally, the paper examines the question of whether or not the CWC allows for use of NLW in military conflicts. This study concludes that more research is needed before NLW can be safely incorporated into U.S. wartime strategy. While international law forbids the use of NLW in traditional combat situations, it does not specifically exclude the use of NLW by military for peace-keeping functions, such as mitigating hostage situations, suppressing POW riots, controlling over-anxious refugees, and maintaining order in occupied territories.

What are NLW and Who Wants Them?

According to the Institute for Non-Lethal Defense Technologies at Pennsylvania State University, a non-lethal weapon is a weapon or piece of equipment whose purpose is to affect the behavior of an individual without injuring or killing the person. NLW are also intended not to cause serious damage to property, infrastructure, or the environment. Originally, the term "non-lethal" was applied strictly to equipment and tools used by police for the purpose of riot control. However, the term has changed over time to include technologies used by both military and police to handle hostile individuals, manage crowds, control prisoners, and aid in hostage rescue. A 2003 report issued by the U.S. National Research Council indicated that NLW are under consideration by the U.S. military and law enforcement with several purposes in mind. The Pentagon is currently seeking to expand the list of non-lethal technologies at its disposal.
The agents under consideration include, but are not limited to: acoustic systems that can create uncomfortable sounds; webs that can entangle people and automobiles; and non-penetrating projectiles. Also under consideration are chemical-based weapons such as malodorants (which create offensive smells that can clear an area) and calmatives (drugs that can alter the mindset and motivations of target individuals). (For a list of non-lethal technologies currently under U.S. Government consideration, see the sidebar.) A number of institutes in the United States are working on the development of NLW, including research programs at Pennsylvania State University and the Monell Chemical Senses Center, described below. Both these programs include research on chemical-based non-lethal technologies.

Research Program on Calmatives: Institute for Non-Lethal Defense Technologies, Pennsylvania State University, University Park, Pennsylvania

The Institute for Non-Lethal Defense Technologies (INLDT) provides research services, support, education, and training on non-lethal technology for both military and law enforcement. Its overall goal is to provide military and law enforcement with the tools necessary to be more effective in their operations, especially in situations where lethal force is unnecessary. While most of the INLDT's work is classified, some of the institute's reports have been made available through the Freedom of Information Act. One such report is The Advantages and Limitations of Calmatives for Use as a Non-Lethal Technique, published on October 3, 2000. According to the Executive Summary of that report, the purpose of the INLDT study was to: "assess the use of pharmaceutical agents as calmatives with potential use as non-lethal techniques."
The report continues that "pharmaceutical agents, or calmatives, with a profile of producing a calm-like behavioral state were considered highly appropriate for consideration in the design, enhancement, and implementation of non-lethal techniques." Convulsants were also considered. Within this report, INLDT researchers identified drugs that would induce a state of mild sedation in targets but not cause hypnosis, coma, or death. The compounds that were considered to have a high potential for use as NLW are listed in the table below. The report recommended the use of pharmaceutical drugs by the military and police in their operations, noting in its final conclusion that "the development and use of non-lethal calmative techniques is achievable and desirable."

Table: Compounds considered to have a high potential for use as NLW

Research Program on Malodorants: Monell Chemical Senses Center, Philadelphia, Pennsylvania

In addition to studies on calmatives, the Monell Chemical Senses Center has conducted U.S. government-sponsored studies on the development of a class of chemical NLW called malodorants. Searching for smells that could serve as people repellants, Monell scientists combined natural and synthesized compounds to create offensive odors such as those of excrement and rotting flesh. Using perishable food items, animal carcasses, blood, sulphur, and other ingredients, they created smells that would cause nausea, vomiting, disorientation and panic in test subjects. Though Monell was able to create effective malodorants, their studies did not include designing methods for the deployment and weaponization of the compounds.

Additional Research on NLW: Joint NLW Directorate (JNLWD), Department of Defense (DOD)

The Joint NLW Directorate of the Department of Defense has been working on improving NLW technologies since 1996. A 2002 report entitled An Assessment of NLW Science and Technology cites a "clear and growing" need for military options other than lethal force.
Among other techniques, the report calls for increased research and development on "calmatives and malodorants for controlling crowds and clearing facilities." Because these weapons would be new to the U.S. arsenal, the Pentagon recognizes that the full implications of their use are not yet understood. In order properly to assess the feasibility of deploying these kinds of weapons, the JNLWD report calls for "more research to understand biomechanical and physiological response mechanisms" in target individuals and the "effects on individuals and groups associated with repeated exposure." JNLWD researchers were optimistic that if the full implications of NLW, including calmatives and malodorants, could be understood, then these new weapons could become an accepted component of U.S. wartime strategy. Overall, the JNLWD report concludes that "the development and deployment of more capable NLW should be given a higher priority."

Why the Pentagon Wants Them -- the Case for Chemical NLW

Those supporting the development of NLW argue that their use by military and police provides certain advantages, including societal acceptance, fewer fatalities, and flexibility of response when lethal force is inappropriate. NLW could prove particularly valuable when military targets are hidden among civilian populations. According to senior officials in the U.S. military, the decreased fatalities brought about by using NLW could make the use of force more publicly acceptable. Andy Mazzara, who directs the research program at Penn State, and who formerly headed the Joint NLW Program, also argued that such weapons will be recognized as "more humane" than conventional deadly force employed during the police rescue of hostages, because they can mitigate the crisis without causing death. Proponents further argue that non-lethal methods, such as calmatives and malodorants, are preferable to the use of blunt trauma and other painful methods.
According to the INLDT report: "One area of consideration is that blunt trauma has an incidence of organ damage, which may include the eyes, liver, kidney, spleen, heart and brain that may be permanent or even deadly...In contrast, a pharmaceutical agent may be administered in a discrete manner to a selected individual or a drug agent may be selected with a known duration of effect." Though opponents point out that the clandestine use of drugs on the battlefield carries its own set of risks (see the section on the Dubrovka Theatre hostage crisis below), insiders speculate that this technique would still be more acceptable to domestic and international audiences than lethal force. According to proponents of the development of NLW, traditional weapons offer police few options when dealing with non-compliant individuals. The INLDT points out that law enforcement is typically restricted to the two options of threatening or applying deadly force, whereas NLW provide a "wider range of choices" and allow "police the flexibility to act appropriately when circumstances may limit the use of lethal means." In a situation where law enforcement officials may be reluctant to resort to deadly force, having access to a range of non-lethal options would increase their ability to carry out their jobs. In situations in which combatants are interspersed with civilian populations, as was seen in U.S. interventions in Panama, Somalia, Haiti, and Bosnia, proponents argue that a "robust capability" in the realm of NLW would aid troops operating in these types of conflicts. In theory, a non-lethal weapon could be administered to incapacitate a large group of people; then forces could go in and separate the military targets from their civilian counterparts--the latter of whom would recover unharmed. 
Pentagon officials make the argument that this technique might have been effective in battles against Saddam Hussein's forces, which were notorious for using civilians as human shields and then blaming U.S. forces for their deaths. In conflicts where civilians or hostages are interspersed with military targets, NLW could prove to be an important tool in the protection of "noncombatants, human shields, and those forced to take up arms." Further illustrating the point, a Council on Foreign Relations (CFR) task force evaluating U.S. and coalition forces in Iraq concluded that a "wider integration of NLW into the U.S. Army and Marine Corps could have reduced damage, saved lives, and helped to limit the widespread looting and sabotage that occurred after the cessation of major conflict in Iraq." One might also speculate as to whether NLW could play a role against insurgent fighters in Iraq that are currently destabilizing the country. On NLW in general, the CFR task force concluded, "incorporating the NLW capabilities into the equipment, training, and doctrine of the armed services could substantially improve U.S. effectiveness in conflict, post conflict, and homeland defense."

Why Others Do Not Want the Pentagon to Have Them - the Case Against Chemical NLW

As the U.S. military further examines the possibilities for developing and incorporating NLW such as calmatives and malodorants into their war-fighting strategy, critics have been vigorously formulating and espousing the case against their use. One of the objections raised by opponents of NLW development is the point that NLW do not always work the way they are supposed to and that they can and do cause death. A well-documented example of how NLW can become lethal was the Moscow Dubrovka Theater incident in 2002. In that now infamous case, Chechen terrorists stormed the theater during a musical performance and took over 800 hostages.
The terrorists demanded an end to the war in Chechnya, and the Russian authorities negotiated with them for over two days without reaching an agreement. Russian authorities decided to pump the opiate fentanyl into the theater to incapacitate the hostages and hostage-takers alike. The move ended the siege, allowing the police to apprehend all the captors; however, the gas also caused 127 of the hostages to die from respiratory failure. Chemical NLW appear on the surface to be an ideal solution for many law enforcement and military problems; by simply dispersing the agent in the air, dangerous episodes, like a riot or a hostage situation, could be ended relatively peacefully. In practice, opponents argue, NLW would not likely perform as well as expected in most instances. One particular fear is the risk of death, particularly when using incapacitating agents, where the margin of error between knocking someone out and killing them can be very small. As Robin Coupland of the International Committee of the Red Cross points out, "the only difference between a drug and a poison is the dose." Rendering a person unconscious is a very delicate process; it becomes even more complicated when dealing with a large heterogeneous crowd of people varying in age, height, and weight, and situated at various distances from the dispersal mechanism. Injury or death in these situations would be hard to avoid. In addition to questions of dosage, others point out that the administration of anesthesia demands careful monitoring for apnea (stopped breathing) or airway obstruction. Children, the elderly, pregnant women, and the handicapped are in particular danger of suffering adverse effects from incapacitation. A conflict situation is not conducive to the high level of monitoring necessary to ensure target safety.
Opponents of the development of new chemical NLW also point to the imperfect safety record of already accepted chemical technologies, such as the riot control agents tear (CS) gas and pepper (OC) spray. Chemical riot control agents are currently restricted to domestic law enforcement purposes, and are considered to be relatively benign, but their use does have its risks. A report from the U.S. Department of Justice analyzed 63 cases in which suspects who had been exposed to OC spray by U.S. law enforcement officials died afterwards in custody. According to the report, "The study of in-custody deaths concluded that pepper spray contributed to death in two of the 63 cases, both involving people with asthma." In addition to questions of safety, critics of the development of NLW for war fighting purposes further point out that the use of chemical NLW in warfare might become a "slippery slope" leading to the re-deployment of traditional chemical weapons. It has been noted that the most notorious cases of traditional chemical weapons use in history, including World War I, Manchuria, Ethiopia, Yemen, and the Iran-Iraq war, began with tear gas and escalated from there. Consistent with this model, during the Vietnam War in the early 1960s, the U.S. military considered switching from CS (tear) gas to fentanyl, after attacks launched against Viet Cong officers transporting supplies along the Ho Chi Minh trail often killed assisting peasants. Ultimately, authorities decided against weaponizing the opiate for use in combat, but the option was considered. Opponents of the use of NLW for war fighting also argue that the use of these weapons by powers such as the United States could lead to their proliferation to other nations. Steve Wright, director of the Omega Foundation, an affiliate of Amnesty International, points out that developing these weapons is therefore "dangerous and irresponsible."
According to Wright, these agents could easily fall into hostile hands and be turned against U.S. forces. Mark Wheelis, of the University of California at Davis, further points out that if the United States and the United Kingdom develop and deploy non-lethal chemical weapons, these weapons will proliferate to other countries that may not choose to use the weapons responsibly. Malodorants have been singled out for particular proliferation concern. Though designed to be non-toxic, malodorants have been used as masking agents for lethal chemical weapons. In World War I, for example, noxious smells were used to camouflage mustard gas; in some cases malodorants were used to create the fear that lethal gas was being dispersed. U.S. CW experts noted after WWI that "malodorous compounds" had been "useful to mask the presence of other 'gases' or to force the enemy to wear respirators when no other 'gases' [were] present." If these types of malodorants were to fall into the hands of "rogue" states or terrorist groups, their use could cause significant problems for U.S. and other allied forces.

NLW and the CWC

Apart from the safety and lethality issues NLW pose, many critics also argue that the use of chemical-based NLW is in direct violation of international law--particularly the CWC. According to a 2003 editorial in the CBW Conventions Bulletin: "It is hard to think of any issue having as much potential for jeopardizing the long-term future of the Chemical and Biological Weapons Conventions as does the interest in creating special exemptions for so-called 'non-lethal' chemical weapons." Many experts have argued that use of chemical NLW for anything other than domestic riot control would be illegal under the CWC and that CW can never be used by the military under any circumstances. Under this argument, therefore, the current research by the United States on weapons explicitly intended for military use and as incapacitating agents would be in violation of the Convention.
However, a detailed examination of the language of the CWC points to a much more ambiguous answer with regard to NLW research and development. It is accurate to say that chemical-based NLW, such as those discussed in this paper, can be considered CW if they are toxic chemicals, even if they are not intended to cause death or injury. (For the definition of CW, see Article II of the CWC.) According to the convention, an agent is considered a toxic chemical if its effects include "temporary incapacitation;" the CWC forbids the use of toxic chemicals in warfare. Furthermore, chemical-based NLW likely fall under the CWC's definition of riot control agents, because they "produce rapidly in humans sensory irritation or disabling physical effects which disappear within a short time following termination of exposure." The CWC specifically prohibits the use of riot control agents "as a method of warfare." Since chemical-based NLW would fall under the CWC's definitions of toxic chemicals and riot control agents, they cannot be used by military forces of CWC state parties in traditional military conflicts. However, how these agents may be used by military troops serving purposes other than fighting in traditional battles is not as clear-cut. The CWC allows for the use of chemical agents for "military purposes not connected with the use of chemical weapons and not dependent on the use of toxic properties of chemicals as a method of warfare." Some examples of situations in which the CWC would not specifically forbid the use of chemical agents include mitigating hostage situations, maintaining order in prisoner of war camps, distributing emergency supplies to over-anxious civilians, or maintaining a presence during the staging of civil processes such as the holding of elections, opening of schools and hospitals, or other activities that might incur a hostile response.
When looking at forces working outside of their home country that are tasked with "keeping the peace," international law generally defines the term "law enforcement" as: maintaining public order and safety during occupations; controlling prisoners of war; and peacekeeping, either under a consensual agreement between the country and the peacekeeper providers, or as authorized by the UN Security Council. If agents such as malodorants and calmatives were to be used by military forces in these circumstances, it would not necessarily be in violation of the CWC. Therefore, while it may have been unlawful for the U.S. military to use devices like pepper spray, calmatives, or malodorants during its invasion of Iraq in 2003, now that the traditional combat phase of the conflict is over, occupying troops would not necessarily be forbidden from using chemical NLW to maintain order. For several decades, the U.S. Department of Defense has conducted research into NLW. While few objections have been raised over non-chemical NLW, the chemical technologies have generated controversy. They have been opposed by many groups and individuals for reasons such as their unpredictability in real-life situations, occasional unintended lethality, risk of escalation to lethal chemical weapons, risk of abuse if obtained by hostile forces, and allegations that their use by the military would constitute a violation of the CWC. Bearing in mind the health and safety risks of these agents, it is clear that more research would be needed before chemical NLW could be safely deployed. With regard to the compatibility of research and use of these agents with the guidelines of the CWC, chemical NLW, like riot control agents, clearly cannot be used as part of traditional combat operations. That being said, military forces could use these agents for the purposes of maintaining order, control, and peace in controlled territories.
For that reason, the ongoing research on these agents by the United States and other state parties to the CWC should not be seen as a violation of the treaty.
squinch, in architecture, a piece of construction used for filling in the upper angles of a square room so as to form a proper base to receive an octagonal or spherical dome. It was the primitive solution of this problem, the perfected one being eventually provided by the pendentive. Squinches may be formed by masonry built out from the angle in corbeled courses, by filling the corner with a niche placed diagonally, or by building an arch or a number of corbeled arches diagonally across the corner. In Islamic architecture, especially in Persia, where it may have been invented, the squinch took the form of a succession of corbeled stalactites. It was also commonly used in the early churches of Europe and the East.
When Keynes went to America

The first Bretton Woods meeting was intended to establish a postwar money regime and secure funds fo

The night the Mount Washington Hotel opened in 1902, its builder, the New Hampshire coal and railroad magnate Joseph Stickney, raised a glass to “the damn fool who built this white elephant”. With its octagonal towers and 300 yards of wooden verandah, its 234 rooms each with its own bath, its telephone and mail system, and its interminable corridors, set in endless New Hampshire wilderness, this colossal monument to the Gilded Age somehow survived the Depression and wartime shortages to its appointment with financial history in July 1944. As allied armies fought their way into Normandy, some 730 finance ministers, delegates and clerks from all 44 allied countries, including China and the Soviet Union, gathered for three weeks at the Mount Washington to plan the postwar monetary and trading order. The United Nations Monetary and Financial Conference, better known from the hotel's railway stop and mail address as the Bretton Woods conference, established a currency regime and two powerful institutions, the International Monetary Fund and the World Bank. The role of Bretton Woods in the postwar recovery is, as always with economists, disputed but the name still evokes, for men such as Gordon Brown or Nicolas Sarkozy, an idea of order in a chaotic financial world. The gestation of the Bretton Woods conference, as the long-serving US diplomat Dean Acheson put it, "about doubled that of elephants". It arose in the minds of two men of different temper and background but equal brilliance and arrogance: the British economist John Maynard Keynes and Harry Dexter White of the US Treasury. At their backs, like a ghost, was the German banker who served the Nazis till he fell out with Hitler in 1938: Hjalmar Schacht.
The Victorian system for settling international transactions, known as the international gold standard, had come to grief in the Depression of the 1930s. A succession of countries, led by Britain, detached their currencies from gold rather than be forced by a fixed exchange-rate to cut demand and add further to unemployment. Britain erected a trade tariff round the British empire, known as Imperial Preference, while other countries devalued their currencies to export at any price. By the summer of 1941, when Keynes retired to his country house in Sussex to think about a successor to the international gold standard, Britain was in a desperate plight, in debt not just to the US but to the countries playing host to her armies, such as India and Egypt. Without currency controls, Britain was bankrupt. Keynes envisaged a sort of supernational bank in which trading accounts would be settled not in gold, but in a sort of artificial or bank money that would be available to members as an overdraft facility according to their share of world trade. Behind it would stand the greatest creditor nation, the United States. As Keynes's biographer, Professor Robert Skidelsky, writes: "Provided all countries were guaranteed sufficient quantities of reserves, it might be possible to dismantle the trade barriers which had grown up in the 1930s and during the war and restore the single world which had vanished in 1914." In devising this plan, Keynes admitted to drawing on Schacht's ingenious use of bilateral clearing arrangements to permit the Third Reich to continue importing raw materials for its military build-up in the 1930s. In Washington, Dexter White, director of monetary research at the US Treasury, was also thinking about "future currency arrangements" but from a different viewpoint.
From President Roosevelt down, the US could not care less about preserving the British empire. The US wanted currency convertibility and open markets for its exports as soon as possible. The compromise between the Keynes and White plans, which were published in 1943, became known as the Bretton Woods System. The process began in an atmosphere of mistrust. At his first meeting with Henry Morgenthau, the US treasury secretary, Keynes tactlessly suggested that Britain would use US military aid to build up its cash balances. Keynes and his staff objected to the number of lawyers on the US side and made snide remarks about "rabbinics", by which they meant the precision and subtlety of the Jewish officials at the Treasury such as White and Edward Bernstein. Eventually, Keynes and White devised a system in which only the US dollar would exchange at a fixed rate into gold. The allies had to make their currencies convertible into these gold dollars within 1 per cent of a fixed rate, but could draw on short-term assistance from a stabilisation fund to which all members subscribed and the US, naturally, subscribed most. In addition to this fund, now christened the International Monetary Fund, White and his staff had devised a bank to finance the rebuilding of war-damaged economies. This International Bank for Reconstruction and Development still forms the core of what is now known as the World Bank. Lord Keynes was by now ailing and could not bear the thought of working through the Washington summer. With great courtesy, the Americans agreed to hold the drafting meetings in Atlantic City on the New Jersey shore and the main conference in the cool of New Hampshire. Arriving with Keynes by train on 30 June, Lydia Lopokova, the Russian ballerina whom Keynes had married in 1925, found chaos: "The taps run all day, the windows do not close or open, the pipes mend and unmend and no one can get anywhere." 
They were lodged in the room above Morgenthau, and for three weeks the US treasury secretary was disturbed by Lady Keynes's dancing exercises. With much of the main work done, the conference itself consisted mostly of a British rearguard action to delay the convertibility of its debts and much detail of a mind-numbing complexity. Desperate to get away and rest, Keynes took the meetings on the bank at a breakneck pace. As Acheson reported: "Keynes . . . knows this thing inside out so that when anybody says Section 15-C he knows what that is, but before you have an opportunity to turn to Section 15-C and see what he is talking about, he says, 'I hear no objection to that', and it is passed." On 19 July, Keynes collapsed on the hotel stairs, and word spread that he had had a heart attack. According to Skidelsky, the German newspapers ran adulatory obituaries. On 22 July, Keynes had recovered enough to propose acceptance of the conference's final act. As he left the room, many of the delegates stood and sang "For He's a Jolly Good Fellow". Within two years, Keynes was dead and White survived only two years longer, bedevilled in his last years by allegations of disloyalty in his dealings with the Soviet Union. Some economists, such as Milton Friedman, have questioned whether Keynes and White were correct in their analysis and, even if they were, whether Bretton Woods was the solution. Others argue that such measures as the $3.75bn American loan to Britain in 1945, the $13bn Marshall Plan of 1948 and the 30 per cent devaluation of sterling in 1949 did more to revive Europe. The system of semi-fixed exchange rates just about survived the 1960s but the US, under pressure from financing the war in Vietnam, abandoned gold convertibility in 1971. The two Bretton Woods institutions, the IMF and the World Bank, have been criticised for imposing quasi-colonial conditions on third world borrowers. The IMF is also undercapitalised in the face of the current financial crisis. 
When Gordon Brown calls for a new Bretton Woods, he is evidently not calling for a currency peg or an infrastructure bank but for a halcyon age of idealism and Anglo-American amity - above all for that ideal or hero of modern times embodied in John Maynard Keynes, the economist as saviour. James Buchan's latest novel is "The Gate of Air", published by the MacLehose Press
Since the Convention on Biological Diversity came into force in 1993, access and benefit-sharing (ABS) agreements have been used to facilitate the implementation of bioprospecting projects. While many of these agreements have been negotiated and signed among private and government parties in countries that lack national ABS policies, there are also cases where they have been established under these policies and with the involvement of government agencies that usually enforce a lengthy and slow application process. Empirical evidence shows that national ABS policies have thwarted or delayed access to genetic resources in a few countries (see Brush and Carrizosa 2004 for some examples). Because of reduced access, many companies increased their reliance on existing collections of organisms and the potential of modern biotechnology techniques to develop drugs from scratch. This, in turn, discouraged some pharmaceutical, agricultural, and biotech organizations from collecting genetic resources in biodiversity-rich countries. This situation was particularly evident during the 1990s (ten Kate and Laird 1999). Over the last seven years, several commentators have underscored the fact that despite its decade-long commercial development, combinatorial chemistry has failed to put on the market novel drug candidates for the treatment of common diseases, including cancer. Consequently, companies are looking for unexplored groups of organisms such as extremophiles, endophytes, marine organisms, and microorganisms as sources for novel genes and molecular structures.
The interest of the industry in these species has also been encouraged by streamlined ABS policies from countries such as Costa Rica, Australia, Samoa, and Thailand (Carrizosa 2004), the increasing possibility of negotiating ABS agreements in countries that lack national ABS policies (Brush and Carrizosa 2004), and recent research that continues to demonstrate the importance of natural products for the pharmaceutical industry (Newman et al. 2003). These are clear indicators of a renaissance in interest by pharmaceutical and biotech companies in an old-fashioned bioprospecting approach in regions where ABS regulations are permissible and clear. This renaissance is strengthened by scientific advances in the identification of molecular targets for diseases, modern screening techniques, gene technology, large-scale culturing of microorganisms, chemical purification techniques, and structure elucidation of natural compounds. Some pharmaceutical and biotech companies (e.g., those involved in the International Cooperative Biodiversity Group Program) that are going back to the bioprospecting field are aware of potential benefit-sharing obligations and are prepared to share monetary and nonmonetary benefits derived from bioprospecting ventures. Most of these companies may not be willing to sign ABS agreements that include significant up-front payments such as the famous Costa Rican National Institute of Biodiversity (INBio)-Merck agreement. But they are certainly disposed to provide a share of the royalties, milestone payments, and short-term compensation packages that include training and transfer of technology. They are also looking for counterparts that have realistic expectations for benefit sharing and can add value to the resources collected. This paradigm is reflected in the ABS agreements signed by INBio (see Costa Rican Chapter No.
5, this volume) and in the 2005 three-year agreement between Novartis and Thailand's National Center for Genetic Engineering and Biotechnology (BIOTEC) aimed at developing new drugs based on genetic resources found in Thailand. In June 2006, Novartis, encouraged by early positive results, renewed the agreement with BIOTEC until May 2011 (BIOTEC Press Release, July 16 2008). Another example of this trend was the 2002 collaborative research and benefit-sharing agreement signed between the Japanese pharmaceutical company Nimura Genetic Solutions (NGS) and the Forest Research Institute Malaysia (FRIM) for the collection of soil microorganisms. In late 2002, this relationship was strengthened through the establishment of a subsidiary of NGS under the auspices of FRIM (GRAIN, 2002). Similarly, in the last eight years, AstraZeneca has invested about A$100 million in the development of a Natural Products Discovery Unit in collaboration with Griffith University in Australia. Substances discovered by this joint venture were the source of several patent applications in 2003. Furthermore, in March 2008, Griffith University reported that its partnership with AstraZeneca continues and scientists are currently targeting the development of two promising lead compounds identified from the high-throughput screening of an extensive collection of 45,000 plants and marine invertebrates and their extracts. Today, the number of small- and medium-sized biotech and pharmaceutical companies whose core business is the discovery of novel pharmaceutical lead compounds is also increasing, and most of them are providing some of the big pharmaceutical and biotech companies with extracts of natural products. For example, the Australian-based company Cerylid Biosciences Ltd has a very extensive library that contains 750,000 extracts. About 80 to 90% of these samples have been collected in Australia and the rest comes from countries such as Malaysia (Sarawak) and Papua New Guinea.
Cerylid is also an example of the many firms that have obtained biological samples through collectors such as the Royal Botanic Gardens and the Australian Institute of Marine Sciences. These and other collectors usually establish benefit-sharing agreements with the provider or owner of the resource and a local government agency that include short-term payments and the promise of royalties if products are developed and commercialized. These are examples of ‘best practices’. Nevertheless, it is also important to keep in mind that there are also companies that take the opposite approach, with activities that fall within the realm of biopiracy. While some organizations have recently experienced a renaissance in their interest in genetic resources, others' interest has not flagged. Some have been committed to both the potential offered by these resources, and the ideals of benefit sharing with providers of these resources, for many years. Research organizations such as the United States National Cancer Institute (NCI), aware of the potential of natural products as a source of treatments for cancer, have continuously and consistently commissioned botanical gardens and universities to collect biological samples of plants and terrestrial and marine microorganisms from over 35 countries for the last 40 years (see NCI Chapter No. 6, this volume). About four years before the CBD was drafted, the NCI pioneered the use of Letters of Collection (LOC) that proposed benefit-sharing terms in the event of the licensing and development of a promising drug candidate. So far 14 countries have signed LOCs. Nevertheless, the NCI is committed to the terms of the LOC irrespective of whether or not an official agreement has been signed (pers. comm. G. Cragg, 18 April 2005). Biological samples collected by the NCI are stored in its Natural Products Repository in Frederick, MD (USA).
Pharmaceutical companies such as Aphios Corporation have signed Material Transfer Agreements with the NCI (in 2004) in order to access its natural products repository, and they are required by the NCI to comply with the terms of LOCs if products are developed and marketed from the samples covered by these agreements. The NCI efforts and the CBD mandate have inspired a major international bioprospecting effort called the International Cooperative Biodiversity Groups (ICBGs). Since 1993, the ICBGs have facilitated the participation of 14 major biotech and pharmaceutical companies in bioprospecting projects carried out, currently being implemented, or in planning stages in over 20 countries. These projects have delivered mixed results and accomplishments (Rosenthal 1999, Brush and Carrizosa 2004, Larson-Guerra et al. 2004, http://www.fic.nih.gov/programs/icbg.html). The Panamanian ICBG (see Chapter No. 7, this volume) describes the scientific implications of the contract negotiation of the ICBG in Panama, one of the most successful ever implemented. Terrestrial organisms, particularly plants and microorganisms, have been the basis of early developed biotechnology products and continue to be the source of new products, albeit with declining rates of success. Terrestrial microorganisms, for example, have yielded over 120 of today's most important medicines; however, intensive studies of soil microorganisms repeatedly yield species which produce previously described compounds (Jensen and Fenical 2000). Consequently, many scientists have turned their attention to the potential offered by marine organisms and microorganisms, including the so-called extremophiles that are found in extreme habitats where most organisms are not able to survive.
Furthermore, in the last few years, scientists have accumulated enough evidence to demonstrate that terrestrial and marine organisms that were thought to be the source of active compounds are just the hosts of microorganisms that are the true producers of these compounds (see NCI Chapter No. 6, this volume). This finding has interesting implications for the sustainable supply of compounds needed for clinical trials and the development of end products. This chapter first provides an overview of the potential offered by marine organisms, extremophiles, and symbionts that are renewing the interest of bioprospecting efforts worldwide. Increasing scientific evidence reveals the role of symbionts as the real producers of natural products. This and other findings will have key implications for the development of ABS agreements. Following this review, the impact of science and modern technologies on the discovery process of natural products is examined. Finally, the chapter concludes with an overview and analysis of selected scientific issues that are likely to influence the negotiation of ABS agreements in the future. Global estimates of marine diversity vary between 500,000 and 10 million species, and with regard to drug discovery this diversity is just beginning to be examined. The oceans only began to attract interest from the pharmaceutical industry in the 1950s, with the discovery of two sponge-derived nucleosides that years later served as lead structures for the development of commercially important drugs such as the anti-viral ara-A and the antileukemia drug ara-C (Proksch et al. 2002 and 2003). But the high rate of discovery of interesting compounds and potential products generated in the last two decades has been the result of complex technological advances in diving technology as well as in molecular biology.
Such potential was acknowledged in the mid-nineties through a report from the Biotechnology Research Subcommittee (1995) of the National Science and Technology Council of the United States that underscored the importance of marine organisms as a source of new and improved products for the pharmaceutical, crop protection, and bioremediation industries, among others. In recent years, thousands of active compounds have been extracted from marine organisms that include bryozoans, nudibranchs, sea hares, sponges, soft corals, and tunicates. In January 2006 Marinlit, a database of marine natural products literature, reported that about 15,100 compounds had been derived from 3,088 marine species. Three years later, the number of compounds registered by the database has increased to 22,000 compounds derived from 3,355 species (http://www.chem.canterbury.ac.nz/marinlit/marinlit.shtml). In spite of this amazing and growing diversity of compounds, only a few approved pharmaceuticals derived from marine organisms (e.g., cytarabine and vidarabine) have reached the market (Kijjoa and Sawangwong 2004). Nevertheless, as Faulkner (2000) argues, ‘pharmacological research involving marine organisms is intrinsically slower and has disadvantages compared with a program based on synthesis, but the number and quality of the leads generated more than justify research on marine pharmacology.’ A handful of such lead compounds have contributed to the development of over 15 marine products derived mostly from invertebrates (sponges, tunicates, mollusks, and bryozoans) that are currently in clinical trials, mostly in the areas of cancer, pain, and inflammatory disease (see Table 1, this chapter). In addition, since the identification of new compounds is progressing, as suggested by the Marinlit database, the potential for new drugs is not only promising, but it is becoming a reality.
Revolutionizing compounds like ziconotide (also known as Prialt®), isolated from the cone snail Conus magus, came out of the pipeline of clinical trials a couple of years ago. The European Union and the USA Food and Drug Administration approved ziconotide for the treatment of severe chronic pain in February 2005 and December 2004, respectively. This is the first compound in over 40 years that has been added to the repertoire of drugs for treating severe pain. Ziconotide is a thousand times more potent than morphine and it works by preventing neurotransmitter release at the synapse, thus blocking pain sensation (Garber 2005). On the other hand, most compounds do not get to market. For example, didemnin B, isolated from a tunicate found in the western Caribbean Sea, went through Phase II clinical trials but was abandoned during human trials due to its high toxicity. Similarly, girolline and jaspamide, isolated from the Melanesian sponge Pseudaxinyssa cantharella and the Indo-Pacific sponge Jaspis splendens, respectively, were also withdrawn from clinical trials due to their extremely toxic side effects (Arif et al. 2004). The wealth of bioactive metabolites isolated from marine invertebrates that usually lack morphological defenses is a clear indicator of the importance of these compounds for the survival of the species. It has been demonstrated that chemical defense is an effective strategy to fight off predators or to ward off other species competing for space or food (Proksch and Ebel 1998). Therefore, most drug candidates from the sea have been isolated from sessile invertebrates that inhabit coral reefs in tropical or subtropical waters where there is great competition for space and food and significant pressure from predators such as fishes.
Deep-diving technologies and remotely operated vehicles have also opened the possibility to collect and examine the pharmacological potential of extremophiles, or organisms that live in extreme environments, such as the deep-water sponge Discodermia dissoluta. This sponge is the source of discodermolide, a secondary metabolite that has shown potent anti-tumor activity against human lung cancer cells and breast cancer cells and is currently in clinical trials (see Table 1, this chapter) (Proksch and Ebel 1998). Scientists have found that the cytotoxicity of compounds from marine organisms clearly surpasses that of compounds of terrestrial origin. Therefore, it is no surprise that marine natural products have found their stronghold in the area of anti-cancer chemotherapy (see Table 1, this chapter). Kosan Biosciences, Pharma-Mar, and Eisai Medical Research Inc., for example, have in preclinical and clinical trials several anti-cancer drug candidates from marine genetic resources (http://www.clinicaltrials.gov/ct/show/NCT00100932, http://www.pharmamar.com/es/pipeline/). Marine organisms also have great potential as a source of compounds for other industries that include cosmetics, agribusiness, and orthopedics. Chitin and chitosan have been used in several areas of technology for many decades. Chitin, a polysaccharide, is abundantly available from the shells of arthropods such as shrimp and crab. Chitosan is a biopolymer derived from chitin. These two compounds have multiple applications in drug delivery, cosmetic formulation, surgical wound dressing, hypertension treatment, textiles, and dietary supplements. The skeletons of corals of the family Isididae have also been used as orthopedic implants in bone-grafting surgeries (Maxwell 2005). The pseudopterosins are a group of anti-inflammatory and analgesic compounds isolated from the Caribbean sea whip (Pseudopterogorgia elisabethae) that have cosmetic applications.
The company Estée Lauder brought one of the pseudopterosins to market in record time as an additive in the cosmetic line Resilience. It should be noted that economic benefits have not been shared with the Bahamas, which is the source country of samples of the Caribbean sea whip (NBSAP 1999, pers. comm. R. Newbold, 28 October 2005). Many eukaryotes14 are themselves involved in a variety of intimate associations with other organisms ranging from symbiotic to pathogenic. In the last decade, scientists have accumulated significant evidence suggesting that bacterial symbionts are responsible for the production of a wide range of natural products isolated from eukaryotes such as plants (Piel 2004). There are many highly evolved groups of microorganisms, known as endophytic microorganisms, residing in the living tissues of plants. Endophytic microorganisms such as fungi and bacteria are found in every plant on earth (over 300,000 species of higher plants) and they produce a great variety of substances that ensure the protection and survival of the host plant. Only grasses and their endophytes (e.g., Neotyphodium sp.) have been studied extensively with respect to their endophytic biology (Piel 2004). Isolation and culturing of individual endophytes have led to the identification of a great variety of substances that include antibiotic, antimycotic, antidiabetic, antioxidant, insecticidal, immunosuppressant, and anticancer compounds. Some of the most interesting compounds produced by endophytic microbes include cryptocin, cryptocandin A, jesterone, oocydin, isopestacin, the pseudomycins, and ambuic acid (Strobel et al. 2004). In addition, some plants that generate bioactive natural compounds have associated endophytes that generate the same product. This is the case of the fungus Taxomyces andreanae that was isolated in 1993 from the yew tree Taxus brevifolia. Both the fungus and the tree produce the famous anticancer agent taxol (Suffness 1995).
This might be related to a genetic recombination of the endophyte with the host that occurred during the course of the evolution of these organisms. Therefore, if endophytes can produce the same compound as the host plant, this has important implications that might facilitate the sustainable supply of the compound at industrial levels. It is recognized that a microbial source of a high-value product may be easier and more economical to produce, thereby reducing its market price. However, a great deal of uncertainty exists about the relationship between what an endophyte can produce under in vitro conditions and what it may produce in nature. All aspects of the biology and the relationship between endophytes and their hosts remain a vastly unknown and under-investigated field (Strobel et al. 2004). In the marine realm, invertebrates such as the sponge Dysidea herbacea contain bioactive compounds of great pharmaceutical interest that can also be found in associated organisms. The sponge tissue is loaded with Oscillatoria spongeliae, a cyanobacterial symbiont which comprises about 50% of the cellular volume of the sponge. Further analysis has shown that the same bioactive compound can be isolated from the symbionts (Bewley and Faulkner 1998). Circumstantial evidence for a microbial origin of natural products also exists for many other marine invertebrates. For example, the active compound isolated from the mollusk Dolabella auricularia is also found in the blue-green alga Symploca hydnoides (Harrigan et al. 1998). Clearly, isolation and cultivation of the microbial producers of active compounds provide an alternative to facilitate the sustainable supply of these compounds. This can be a viable and cost-effective approach provided that appropriate microbial culture techniques are available. However, this does not seem to be the case for some marine symbionts.
In the last decade scientists have been particularly interested in a largely unexplored group of microorganisms that thrive in extreme environments. Some estimate that there are about 2 million species of bacteria in the sea and close to 4 million species of these organisms in a ton of soil (Curtis et al. 2002). Similarly, there are an estimated 1.5 million species of fungi worldwide, of which only about 100,000 have been described (Hawksworth 2004). Diversa Corporation, Genencor International, Novozymes, and Vicuron Pharmaceuticals are just a handful of companies that have taken advantage of this diversity. They collect samples of bacteria and fungi that have multiple applications in the pharmaceutical, biotech, agribusiness, chemical, cleaning, and food industries. These companies also have great interest in the so-called extremophiles (or extreme-loving organisms), which include bacteria, archaea,15 protists,16 and eukaryotes that live under extreme conditions that would usually kill other creatures. Scientists have identified several categories of extremophiles that include the following: Acidophiles or acid-loving organisms live in habitats that present pH values less than 2 (e.g., the archaeon Ferroplasma acidiphilum can catalyze the accelerated dissolution of sulfidic minerals in industrial tank bio-leaching operations) (Okibe et al. 2003). Alkalophiles or alkaliphiles thrive in alkaline conditions at pH values higher than 10 (e.g., a species of Streptomyces collected from the soda mud flats on the shores of the alkaline Lake Nakuru in Kenya is the source of a cellulase isolated by a Dutch academic researcher, and later commercialized by Genencor International to create the popular stonewashed look in denim jeans) (http://www.genencor.com/wt/print/biodiversity). Barophiles or piezophiles are organisms that need high pressures to grow.
Recovered at great ocean depths, some of these organisms require pressures hundreds of times greater than that at Earth's surface to survive (e.g., Photobacterium profundum is found where pressures reach 25 megapascals and it is an excellent model for studying adaptation to cold temperatures and high pressures) (Vezzi et al. 2005). Anaerobes are organisms that do not require oxygen to carry out respiration. Some strict anaerobes are actually inhibited from growing in the presence of oxygen (e.g., the bacterium Bacillus infernus or ‘bacillus from hell’ is not only anaerobic but also thermophilic; it was obtained at a depth of approximately 2,700 m below the land surface) (Boone et al. 1995). Halophiles or salt-loving organisms inhabit environments consisting of 20 to 30% salt (e.g., the bacterium Halobacterium halobium has a protein known as bacteriorhodopsin which is light sensitive and is used in optical switches) (Roy et al. 2002). Psychrophiles or cold-loving organisms (e.g., the bacterium Polaromonas vacuolata, found in Antarctica, grows best at 4°C and cannot survive at temperatures above 12°C). Some of these organisms have enzymes that work at refrigerator temperatures and might have applications in the food industry. They also help clean up Arctic oil spills (Madigan and Marrs 1997). Thermophiles and hyperthermophiles are heat-loving organisms that grow at temperatures of 50°C and above (e.g., the frequently cited bacterium Thermus aquaticus, found in the 1960s in a hot spring in Yellowstone National Park (US) and source of the enzyme Taq polymerase used in the multimillion-dollar polymerase chain reaction (PCR) technique for the amplification of DNA) (Brock 1997). Extremophiles can be found in both terrestrial and marine ecosystems and some of them have also been discovered in the most unusual places and circumstances. In 1956, the bacterium Deinococcus radiodurans was found in cans of meat that had been exposed to supposedly sterilizing doses of radiation.
This is the most radiation-resistant organism known. It can withstand exposure to radiation levels up to 1.5 million rads (500 rads is lethal to humans). A recombinant strain of this bacterium has been engineered to degrade organopollutants in radioactive, mixed-waste environments (Cavicchioli and Thomas 2000). Genetic engineering techniques used to create this strain of bacterium paved the way for several technologies that have facilitated the discovery of natural products in the last decade. The next section provides an overview of the role of these and other technologies. In the last 40 years or so, scientists have defined the underpinnings of the scientific process of discovery of natural products (i.e., chemical compounds with pharmaceutical, agrochemical, or other industrial uses), which usually begins with the isolation of crude extracts from biological organisms that are purified through a technique known as pre-fractionation. This technique basically increases the concentration of the chemical compounds. The purified extracts are then tested in biological assays in order to identify chemical compounds that are active against a human or plant disease. Subsequently, the chemical compound can either: a) be isolated, purified, and used as a drug or agrochemical; b) require structural modification to increase potency and specificity; or c) be used to develop analogs that are structurally less complex and easier to synthesize in the laboratory (ten Kate and Laird 1999, Rosenthal et al. 1999). In the last two decades, modern technologies have not only improved the diversity and accuracy of screens but also facilitated and accelerated steps (b) and (c). Furthermore, developments in genomics, bioinformatics, and novel genetic engineering techniques have turned bacteria into factories for the production of large quantities of natural products.
This section presents an overview of the linkages between these and other techniques that promote the mutation and evolution of genes and their contribution to the production of natural products in future decades. In the last few decades, as underscored in the previous section, scientists have standardized the scientific process of discovery of natural products. In 1995, a major contribution to the field occurred with the availability of the first complete genome sequence of a free-living organism, the bacterium Haemophilus influenzae, which opened the field of microbial genomics. Since then, over 100 microbial genomes have been completely sequenced and published and another 200 are estimated to be in progress worldwide. Beyond sequencing, there have been major advances in the field of functional genomics where whole genomes are being characterized in more detail using proteomics and microarray technologies. DNA microarrays, for example, allow for the identification of genes that are turned on or off under different environmental conditions on a genome-wide scale. Also, comparative genome hybridization (CGH) studies that employ DNA microarrays are revealing the extent of diversity across arrays of related and unrelated microbial species (Nelson 2004). Developments in genomics, proteome analysis, and bioinformatics have also enabled scientists to gain a better understanding of the chemical pathways and reactions in living organisms, which has led to the identification of new targets for drugs. The targets are proteins encoded by genes involved in causing the disease. Once the genetic basis of a disease and the proteins involved in its phenotype have been elucidated, these proteins can be used as targets in high-throughput screening (HTS) for drug development. Advances in gene technology have also allowed the speeding up of screening programs for new compounds through the development of more sophisticated in vitro assays.
For example, genes encoding receptor proteins or enzymes targeted by certain classes of drugs may be cloned and expressed on a large scale for use in high-throughput in vitro assays against which thousands of plant extracts may be screened (Schmid 2003). As well as finding a suitable target, an important part of the challenge of designing an assay is to find a way to detect whether the compound being tested for its potential effect as a drug does or does not produce the desired result on the target. Thus, an assay usually involves some indicator, a chemical which changes color or reveals in some manner whether the potential drug molecule has interacted biologically or chemically with the target, for example, by killing a cell or rendering an enzyme inactive. There are mechanism-based and whole-organism assays. Mechanism-based assays use individual biochemicals such as enzymes or receptors isolated from cells that will reveal specific biological activity when combined with the chemical to be screened. In this case a collection or library of chemicals to be tested is built and maintained. Each of the chemicals will be tested many times against an ever-changing array of mechanism-based assays. Mechanism-based assays are often changed every three months (Kingston et al. 1999). On the other hand, whole-organism assays operate in vivo and expose an entire cell to the chemical being screened, enabling the potential drug to operate through a range of different mechanisms during the one test. In this approach the assay remains the same. New assays are continually being developed and very few, if any, plants have been screened using all the techniques now available. Also, the pattern of disease distribution is not static. As some diseases are brought under control, others gain prominence and new ones evolve.
In addition to an increasing understanding of genes, targets, and assays, advances in miniaturization and automation of HTS have accelerated the discovery process of new pharmaceuticals. This means that many more biochemical compounds can be screened more rapidly and effectively. HTS can test over 1.1 million compounds in six months (Schmid 2003). Combinatorial chemistry is part of an increasing set of tools and procedures to expedite the discovery process of pharmaceuticals and agrochemicals. Combinatorial chemistry allows the generation of a huge number of chemical compounds for screening. It is based on the idea that all but the smallest organic molecules can be thought of as made up of modules which can be assembled in many ways. By going through all the possible combinations, a huge number of molecules can be created from a small number of starting modules. Combinatorial chemistry techniques have been used to create large numbers of organic molecules, called libraries, that can be screened at one time. In the past, chemists traditionally made one compound at a time. For example, compound A may have been reacted with compound B in order to produce compound AB, which may have been isolated and purified through crystallization, distillation, or chromatography. In contrast, combinatorial chemistry offers the potential to make every combination of compounds A1 to Ax with compounds B1 to Bx. The range of combinatorial techniques is quite diverse and these compounds can be made individually, in parallel, or as mixtures, using either solution- or solid-phase techniques (Schmid 2003). These techniques have allowed an exponential increase in productivity never seen before. In the last century, scientists may have reported the existence of several million biochemical compounds, but today, using combinatorial chemistry techniques, it is possible that new discoveries will surpass that total in a relatively short period of time.
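The combinatorial logic described above (every compound A1 to Ax reacted with every compound B1 to Bx) can be sketched in a few lines. The module names below are invented placeholders, not real reagents; the point is only to show how library size grows multiplicatively with the number of building blocks.

```python
from itertools import product

# Hypothetical building-block sets standing in for the "A" and "B" modules
# discussed in the text (illustrative names only).
modules_a = ["A1", "A2", "A3", "A4"]
modules_b = ["B1", "B2", "B3", "B4"]

# Every pairwise combination A_i + B_j yields one member of the library.
library = [a + "-" + b for a, b in product(modules_a, modules_b)]

print(len(library))  # 4 x 4 = 16 compounds from only 8 starting modules
# With three positions of 100 modules each, the library would hold
# 100 ** 3 = 1,000,000 candidate structures.
```

The multiplicative growth is the whole appeal: a modest shelf of building blocks yields a screening library far larger than anything assembled one compound at a time.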
Furthermore, in the 1970s a traditional chemist was able to produce about four compounds in a month at a cost per compound estimated to be about US$7,500. Today, using combinatorial chemistry techniques, the same chemist can produce over 3,000 compounds in the same period of time at a cost per compound of about US$12 (Borman 1998). This is possible not only due to a convergence of chemistry and biology but also because of fundamental advances in miniaturization and robotics. This relatively new field has captured the attention of scientists in the pharmaceutical, biotechnology, agrochemical, and other industrial areas. After almost two decades, however, combinatorial chemistry has led to few successful products, as its poor record of novel product development demonstrates. Furthermore, half of the ten best-selling drugs are derived from secondary metabolites originally isolated from microorganisms or plants. Organic chemistry has not caught up with the capacity of nature to create new structures with a complex molecular diversity. Chemists have the building blocks but they need the directions in order to put them together in a manner that provides benefits to society. The natural world offers the manual. Some argue that organisms making natural products have been conducting combinatorial chemistry and screening for activity for hundreds of millions of years, long before humans adopted a similar strategy (Firn and Jones 1998). The deadly South Pacific cone snail, for example, uses a highly effective peptide toxin to paralyze its prey. This toxin is a mixture of 100 or more peptides produced by the combinatorial scrambling of amino acids that has taken place over 30 to 50 million years of the evolutionary history of the cone snail. There are more than 500 species of cone snails, each able to produce more than a hundred unique toxins, and they are yielding new treatments for pain, epilepsy, and incontinence (see section ‘Marine organisms’, this chapter).
Some argue that the number of possible new drug and agrochemical targets (e.g., proteins produced by genes that cause disease) has already outgrown the number of existing compounds that could potentially serve as drug candidates. Nonetheless, classical combinatorial chemistry has its limits when it comes to synthesizing new molecules. Also, rational drug design, although successfully used to develop HIV protease inhibitors, is still in its infancy. Naturally occurring compounds account for about one-third of the products that comprise the US$500 billion industry (ten Kate and Laird 1999). Natural products will remain valuable for the pharmaceutical, biotechnology, and agrochemical industries due to their wide structural diversity, their excellent adaptation to biologically active structures, and their genetic diversity. Furthermore, in the last few years, recombinant DNA techniques, popularly termed ‘gene cloning’, ‘genetic engineering’, or ‘synthetic biology’, have taken advantage of this genetic diversity and offered unlimited opportunities for creating new combinations of genes and natural products. Genetic engineering is the formation of new combinations of heritable material by the insertion of nucleic acid molecules, produced by whatever means outside the cell, into any virus, bacterial plasmid, or other vector system so as to allow their incorporation into a host organism in which they do not naturally occur but in which they are capable of continued propagation. In essence, gene technology is the modification of the genetic properties of an organism by the use of recombinant DNA technology. Genes are the biological software that drives the growth of organisms. Recombinant genes found in wild biodiversity used to be more important for agriculture17 than for the pharmaceutical or biotechnology industries but this is changing.
The transfer into plants or microbes of genes from viruses, bacteria, animals, and plants is becoming a standard practice in the pharmaceutical, agrochemical, food, cleaning, and other biotechnology industries. In these industries, recombinant DNA research and development does not require the same amount of random screening carried out in traditional bioprospecting practices. Recombinant pharmaceuticals, agrochemicals, and other biotechnology products are primarily the result of a product-oriented engineering approach (Schmid 2003). In this context, bioprospecting has become a strategy to accumulate and develop libraries of novel genes and proteins from plants, animals, and microbes that are used according to specific needs and circumstances. For example, leeches have been used in traditional medicine to treat thrombosis since ancient times. The active principle from their saliva, the protein hirudin, is now an ingredient of numerous ointments and gels used against varicosis and hemorrhoids. Genetic engineering techniques have facilitated the development of recombinant hirudin that is now produced by Escherichia coli (Schmid 2003). Today, the search for plants, animals, and microbes with pharmaceutical, agrochemical, and other industrial uses offers many opportunities for the discovery of genes coding for enzymes and proteins involved in natural-product biosynthesis, many of which might be expected to have a broad substrate tolerance. The addition of these genes to organisms with existing rich natural-product diversity should generate even more chemical diversity, producing chemical structures that currently lie beyond the scope of combinatorial chemistry. This is the premise of the relatively new field of synthetic biology, which involves taking genes and their metabolic pathways found in nature and grafting them into the genetic code of a microbe.
The microbe or host organism reproduces and expresses the added genes through the production of natural products. The term ‘synthetic’ comes from the fact that the resulting natural product comes out of an organism with a genetic code that is not ordinarily found in nature. For example, the malaria-fighting compound artemisinin is naturally produced by the wormwood (Artemisia annua), a plant indigenous to Africa and Asia, but in very low quantities. Scientists at the University of California, Berkeley are trying to increase the production level of artemisinin in order to reduce its cost for poor consumers by extracting the artemisinin-producing genes from the wormwood plant and inserting them into the common yeast used in breads and beer. In early 2006, after almost three years of work, the scientists proved that the yeast can produce artemisinic acid, a chemical precursor of artemisinin. Now chemists can use a simple and inexpensive purification process to turn artemisinic acid into the drug artemisinin. Although the yeast is capable of producing artemisinic acid at a higher level of productivity than the wormwood plant, industrial scale-up is required to raise artemisinic acid production to a level high enough to reduce the cost of artemisinin therapies (Ro et al. 2006). This process is likely to take two to four years (Hoffman 2006). A few years ago, DuPont scientists pursued a similar synthetic biology experiment by transplanting six genes from two different microorganisms into one microbe. The microbe produced four different enzymes that together turn affordable, corn-derived glucose into propanediol, the key ingredient of Sorona, a soft, static-resistant polymer DuPont markets as an alternative to polyester and nylon. Before this procedure was designed, DuPont scientists were using petroleum instead of the affordable glucose for the production of Sorona.
This new technology allowed not only a reduction in costs but also in toxic byproducts (Weintraub 2004). Similar initiatives have been pursued by the enzyme industry. Genencor International, for example, obtained extremophile bacteria that had been collected by academic researchers (see section ‘Extremophiles’, this chapter) in a highly alkaline lake in East Africa, whose genes were used to create enzymes for the laundry detergent Tide. The extremophile genes responsible for making these enzymes were genetically engineered into the commonplace bacterium E. coli, which was then grown massively in giant brewers' vats. It should be noted that Genencor International has over 15,000 strains of microbes stored in deep-freezers in Palo Alto, CA and the Netherlands. Such potential has delivered 11 products that involve the use of living material, enzymes, and proteins to develop cleaner and cheaper ways of making industrial chemicals. ‘Directed evolution’ is a procedure used in genetic engineering to evolve proteins or RNA with desirable properties not found in nature. Directed evolution is usually guided toward a predetermined goal, resulting largely in the accumulation of adaptive mutations, whereas natural evolution accumulates adaptive and neutral mutations. The type of properties targeted in in vitro evolution often goes beyond requirements that would make biological sense. The directed evolution technique involves the following three steps: 1) Diversification: the gene encoding a protein of interest is mutated or recombined at random in order to develop a large library of gene variants; 2) Selection: the library is tested for the presence of mutants that exhibit the desired properties using an assay or screen; and 3) Amplification: the mutants identified by the assay are replicated in order to allow scientists to understand the type of mutations that have occurred. Directed evolution can be carried out in living cells (in vivo) or directly in DNA (in vitro).
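The three-step cycle of diversification, selection, and amplification can be illustrated with a deliberately simple in silico sketch. Everything here is a toy assumption: the "target" sequence stands in for whatever property an assay would measure, the fitness function is just a position-by-position match count, and no real wet-lab protocol is implied.

```python
import random

random.seed(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
TARGET = "MKTAYIAKQR"               # hypothetical sequence the toy "assay" rewards

def fitness(seq):
    # Toy assay: count positions matching the desired profile.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    # 1) Diversification: random point mutations create a gene variant.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
start = fitness(parent)
for generation in range(200):
    # Build a small library of variants around the current best sequence.
    library = [mutate(parent) for _ in range(50)]
    # 2) Selection: screen the library and keep the best performer.
    best = max(library, key=fitness)
    # 3) Amplification: the winner becomes the parent of the next round.
    if fitness(best) > fitness(parent):
        parent = best
    if fitness(parent) == len(TARGET):
        break

print(f"fitness improved from {start} to {fitness(parent)} of {len(TARGET)}")
```

Because only strictly better variants replace the parent, improvement is monotone, mirroring how directed evolution accumulates adaptive mutations round after round rather than wandering neutrally as natural evolution can.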
Unlike in vivo directed evolution, in vitro experiments can generate large DNA libraries. As genome sequencing projects continue to grow, directed evolution promises to become a principal route for search and discovery. This technique indeed offers a totally new dimension for bioprospecting. The first successful examples of protein or amino acid sequence improvements were the results of screening genes from the wild. Calcitonin is a peptide hormone that inhibits the release of calcium ions and phosphate from the bones and has therapeutic uses for osteoporosis. Research on related hormones from animals revealed that the calcitonin from salmon is more active and has a longer half-life within the human body than the human peptide structure. Protein engineering based on computer simulation, together with combinatorial chemistry techniques, provides very powerful tools that can take advantage of the genetic diversity offered by the natural world in order to develop new biotechnology products (Otten and Quax 2005). Companies such as Genencor International and Diversa Corporation have managed to isolate DNA from environmental samples without culturing and to accelerate the evolution of its genes through a technique known as site-directed mutagenesis. This is a technique in which a mutation is created at a defined site in a DNA molecule. Site-directed mutagenesis generates diversity by specific, random, or cyclic mutagenesis approaches. Thus, scientists are able to generate large, information-rich libraries of unique molecules. The selection and screening possibilities are knowledge based, high throughput, and product oriented. The libraries generated are screened for the targeted properties and the best candidate is selected. They perform protein engineering augmented by knowledge derived from structures determined by x-ray crystallography, computational homology modeling, rapid protein characterization, and structure/function relationship analysis to create new products.
Scientists have determined over 100 structures for different enzymes including proteases, lipases, amylases, and cellulases. If scientists are unable to find an enzyme in nature to solve a specific problem, they are able to develop it by imitating evolution (i.e., molecular evolution). They replicate mutation and recombination under laboratory conditions. By forcing enzymes to evolve, many new enzyme products are discovered. The company Diversa Corporation, for example, uses a genomic approach in which DNA is isolated directly from environmental samples without culturing. Using Diversa Corporation's gene site-saturation mutagenesis (i.e., a variation of site-directed mutagenesis) and tunable gene reassembly, this company has been able to evolve genes in order to create multiple variants based on the original nucleic acid. These genes are then screened for the characteristics and activity required for the end product or application. The resulting nucleic acid is then included in Diversa Corporation's proprietary environmental gene libraries, which are then screened for a host of various products. Diversa Corporation's unique proprietary approach to discovering and evolving novel genes has created environmental libraries comprising millions of genomes. For example, Diversa Corporation marketed a custom enzyme (Luminase) for bleaching paper. The enzyme was derived from organisms collected in a soil sample found near geysers in Russia and was then engineered to work at different temperatures and alkalinity levels (Kretz et al. 2004). Recent scientific findings, such as the role of most symbionts (e.g., algae, bacteria, and fungi) as the true producers of natural products, and novel technologies that can turn genetically altered bacteria into factories for the production of natural products, suggest that both users and providers of genetic resources need to negotiate ABS agreements that reflect these scientific developments and trends.
This section underscores key implications of these and other scientific issues that are relevant for bioprospecting ventures. These implications are described in the context of the following activities, which are usually addressed by most ABS agreements: a) identifying biological samples, b) supplying biological samples, and c) transferring technology and building capacity. Successful providers of genetic resources have relied on their expertise in identifying biological samples as a strategy to add value to samples and to protect their identity. Organizations such as the Costa Rican National Biodiversity Institute (INBio) (see Costa Rican Chapter No. 5, this volume) have developed a barcoding technology to tag and track specimens. This is a key component of the INBio inventorying process and information system, which is particularly well developed for plants. Since INBio has already identified over 90% of all Costa Rican plants, this provides a comparative advantage in the negotiation of ABS agreements relating to collection of plant species, because there is an implicit assumption that the identity of the plant will be the same as the identity of the source of the natural product. Nevertheless, increasing evidence indicates that in many cases the producers of natural products are not the plants and animals themselves but the fungi, algae, bacteria, and other microbes that live in association with these organisms (see section ‘Symbionts: Are They the True Sources of Natural Products?’, this chapter). This discovery presents a new technical challenge to providers of these resources if they want to offer the identity of the sample as an element that adds value to the bioprospecting process and the negotiation of ABS agreements. Many of the chemotaxonomic and genomic techniques available to identify algae, fungi, and bacteria are very expensive and are currently available only in a few well-equipped laboratories based in developed countries.
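Specimen tagging of the kind INBio performs can be illustrated with a hypothetical barcode scheme: a collection prefix, a zero-padded serial, and a Luhn check digit so that a mistyped tag is caught at data entry. The format and prefixes here are invented for illustration; they are not INBio's actual identifier scheme.

```python
def luhn_check_digit(number: str) -> str:
    """Standard Luhn check digit for a string of decimal digits."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 0:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_specimen_id(prefix: str, serial: int) -> str:
    """Compose a tag such as CR-PLT-000123-0 (hypothetical format)."""
    body = f"{serial:06d}"
    return f"{prefix}-{body}-{luhn_check_digit(body)}"

def validate(specimen_id: str) -> bool:
    """Recompute the check digit from the serial field and compare."""
    *_, body, check = specimen_id.split("-")
    return luhn_check_digit(body) == check

tag = make_specimen_id("CR-PLT", 123)
print(tag, validate(tag))  # CR-PLT-000123-0 True
```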
Several chemotaxonomic and DNA fingerprinting methods for the classification of microbes are available and relatively useful, but each has specific limitations and is data dependent. For example, bacterial phylogenetic classification is based on sequence analysis of the small-subunit 16S ribosomal RNA molecule or its genes (Priest 2004). A major limitation of this approach is that small ribosomal subunit sequencing is not suited to the large numbers of isolates that bioprospecting initiatives could provide. Therefore, this method is often combined with high-throughput methods such as Fourier-transform infrared spectroscopy. The Center for Microbial Ecology of Michigan State University promoted this concept by establishing a publicly available database that facilitates the identification of bacteria by providing the scientific community with ribosomal RNA phylogenetic trees and ribosome-related data (http://rdp.cme.msu.edu/). The identification and classification of bacteria and other prokaryotes (i.e., organisms without a cell nucleus) is markedly data dependent, and the field is still relatively data poor. Moreover, classification procedures are in a constant state of change with each influx of new technology and new data. Prokaryotic systematics is wrestling with the imbalance between high-throughput sequencing and the concept of polyphasic taxonomy.18 It should be emphasized that bacterial taxonomy is currently reliable only at the level of broad phylogenetic groups (well delineated by even partial 16S sequences) and at the species level for certain well-studied taxa such as the genus Mycobacterium. For many genera, identification of species remains a major problem, as exemplified by the genera Nocardia and Rhodococcus (Goodfellow and O'Donnell 1993). Chemotaxonomy is another approach that has been useful in identifying plants, bacteria, and other microorganisms.
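The 16S-based identification described above reduces, in spirit, to comparing a query sequence against a reference set and reporting the closest match above a similarity threshold. This sketch assumes pre-aligned, equal-length fragments (real pipelines align first), and the 97% cutoff is only a commonly quoted species-level convention, not a firm rule.

```python
def percent_identity(a: str, b: str) -> float:
    """Ungapped percent identity between two equal-length aligned sequences."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def classify(query, references, threshold=97.0):
    """Return (name, identity) for the closest reference, or (None, identity)
    when the best hit falls below the threshold."""
    name, ident = max(
        ((n, percent_identity(query, s)) for n, s in references.items()),
        key=lambda pair: pair[1],
    )
    return (name, ident) if ident >= threshold else (None, ident)

# Toy 20-bp 'reference 16S fragments'; real 16S genes run ~1,500 bp.
references = {
    "Mycobacterium_sp": "ACGTACGTACGTACGTACGT",
    "Nocardia_sp":      "ACGTTCGTACGAACGTACCT",
}
print(classify("ACGTACGTACGTACGTACGT", references))  # ('Mycobacterium_sp', 100.0)
```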
Chemical data from the analysis of whole organisms and cell components, using methods such as gas, thin-layer, and high-performance liquid chromatography, have been used extensively to classify microorganisms according to the discontinuous distribution of specific compounds. Chemotaxonomic analyses of macromolecules, especially amino acids and peptides, isoprenoid quinones, lipids, polysaccharides and related polymers, proteins, and enzymes, were used to classify innumerable taxa prior to the introduction of 16S rDNA sequencing. Chemotaxonomic data proved to be of particular value in the classification of the actinomycetes and coryneform bacteria, which initially was essentially morphological in concept; data from amino acid and sugar analyses prompted an extensive reappraisal of the classification of these taxa (Goodfellow and O'Donnell 1993, Priest 2004). Chemotaxonomy also contributes to polyphasic taxonomic characterization, and it will continue to be important with the availability of high-throughput chemical fingerprinting methods for characterization and identification such as Fourier-transform infrared spectroscopy, pyrolysis mass spectrometry, matrix-assisted laser desorption-ionization time-of-flight mass spectrometry, and electrospray-ionization mass spectrometry. These high-throughput chemical fingerprinting methods offer the possibility of integrating the genomic and phenotypic characterization of organisms, which is important if one is to understand much of the current data and to exploit technology to solve the major problem of rapid and reliable identification of microbes. In general, good congruence has been found between the discontinuous distribution of chemical markers and the positions of the corresponding taxa in the phylogenetic tree, as clearly shown for the actinomycetes (Priest 2004).
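The ‘discontinuous distribution of specific compounds’ that chemotaxonomy exploits can be mimicked by comparing presence/absence marker sets with a Jaccard similarity. The marker names below (mycolic acids, menaquinone MK-9, diaminopimelic acid isomers) are real chemotaxonomic characters, but the isolates and their profiles are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two chemical-marker sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Hypothetical isolates scored for a few classic actinomycete markers.
profiles = {
    "isolate_1": {"mycolic_acids", "MK-9", "LL-DAP"},
    "isolate_2": {"mycolic_acids", "MK-9", "arabinose"},
    "isolate_3": {"ubiquinone_Q10", "meso-DAP"},
}

def nearest(name, profiles):
    """Closest other isolate by shared chemical markers."""
    others = {k: v for k, v in profiles.items() if k != name}
    return max(others, key=lambda k: jaccard(profiles[name], others[k]))

print(nearest("isolate_1", profiles))  # isolate_2 shares the most markers
```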
This technology, despite its limitations, is important, particularly in bioprospecting projects where taxonomy has to be assessed, because it increases the likelihood of securing a good bioprospecting deal. An organism, however, needs to be identified only once a promising lead natural product has been found. Negotiating the inclusion of molecular and genomic taxonomy efforts into agreements is an important option for providers of genetic resources. Nevertheless, if synthetic, semi-synthetic, or genetically engineered derivatives are the final product, then the user will not need to re-supply by acquiring additional samples; in this case, the identification of the original biological sample is relevant only for scientific purposes. Increasing evidence shows that many of the compounds isolated from marine organisms are produced by symbiotic microorganisms. In addition, scientists are focusing on the potential offered by microorganisms, including extremophilic bacteria, fungi, and algae. This finding is consistent with INBio reports regarding international requests to collect microorganisms in Costa Rica (see Costa Rican Chapter No. 5, this volume). The advanced development of a drug or agrochemical usually requires access to large quantities of the source raw material to produce sufficient drug for preclinical/clinical and product development. As previous sections indicate, new scientific techniques have been developed to grow fungi, algae, bacteria, and other microorganisms in in situ and ex situ conditions. Many of these microbes live in symbiosis or association with other organisms such as plants and marine invertebrates. In most cases, the symbionts can be isolated and cultured in the laboratory in order to obtain large quantities of the active compound. In other cases, both organisms, the host and the symbiont, have to be grown together in in situ conditions through mariculture or aquaculture techniques.
Marine sponges, for example, are known to harbor cyanobacterial symbionts that produce secondary metabolites with pharmaceutical potential. A few years ago, scientists assessed the technical and economic potential of using marine sponges for large-scale production of such compounds in two cases: a) the anticancer molecule halichondrin B from Lissodendoryx sp. and b) avarol from Dysidea avara for its antipsoriasis activity. An economic and technical analysis was done for three potential production methods: a) mariculture, b) ex situ culture (in tanks), and c) cell culture. The conclusions indicated that avarol produced by mariculture or ex situ culture could become a viable alternative to currently used pharmaceuticals for the treatment of psoriasis. Production of halichondrin B from sponge biomass was found not to be a feasible process, mainly because of the extremely low concentration of the compound in the sponge (Sipkema et al. 2005). On the other hand, some marine chemical products are naturally more amenable to economical production via laboratory synthesis or semi-synthesis. This is usually related to the overall complexity of the model compound and/or the number and nature of the steps in the biosynthetic pathway. For example, the structural simplicity of compounds such as dolastatin (originally derived from the sea hare Dolabella auricularia, but found to be cyanobacterial in origin; Luesch et al. 2002) makes them prime candidates for total synthesis. In contrast, a natural product such as ecteinascidin 743 (ET-743), with 60 or more steps required for complete synthesis (Luesch et al. 2002), may never be economically produced in its entirety by synthetic chemists. Analogs of this complex compound, however, can be produced through a semi-synthetic strategy that starts with and builds on one or more precursor molecules.
For example, efficient semi-synthetic production of ET-743 has been attained by using the closely related compound safracin B as a starting point. This natural product is produced by an easily culturable pseudomonad bacterium, allowing sustainable and cost-effective semi-synthetic production of ET-743 (Luesch et al. 2002). Table 1 lists selected natural products derived from marine organisms that are currently being cultured in in situ and ex situ conditions and manufactured via laboratory synthesis or semi-synthesis. Genomic approaches have also been developed to ensure a sustainable supply of natural products. For example, scientists are working on approaches to:
- Isolate an organism's genes so that they can subsequently be used to produce the natural product in another organism (e.g., synthetic biology). For example, the bioprospecting program of the Bermuda Biological Station for Research is developing techniques for cloning genes from the host macrofauna and associated microbial symbionts of sponges and other marine invertebrates and inserting them into laboratory bacterial strains (http://www.sciencemag.org/cgi/content/abstract/sci;1093857v1). The hope is that targeted natural products can be sustainably produced using such a strategy, even if the microbial agents responsible for them cannot be cultured or even identified.
- Facilitate the identification and expression of gene clusters from microbes (e.g., bacteria such as the actinomycetes) that do not produce metabolites under natural conditions (Streit and Schmitz 2004).
- Evolve genes that can later be screened against a desired property for a specific product (see section ‘Genetic Engineering and Bioprospecting’, this chapter).
- Screen for a diversity of enzymes in a microbial community. This process, metagenomics, is a creative approach to screening for a diversity of enzymes and is close to the idea of screening a biodiversity library.
It is thought to be an elegant strategy because it does not rely on the cultivation of microorganisms; instead, DNA or mRNA is isolated directly from an environmental sample, purified, digested, and cloned into suitable cloning vectors to construct complex environmental libraries. These gene libraries are screened using either sequence-based techniques or activity assays. Ideally, cultivation-independent approaches enable microbiologists to exploit the biological potential of a microbial community in its totality (Streit and Schmitz 2004). While production of marine and microbe-based natural products via laboratory synthesis and genetically engineered approaches gets around the need to re-supply samples, the complexity of the molecular structure of compounds and the cutting-edge techniques employed will, no doubt, continue to present their own formidable challenges and limitations. Therefore, in many cases the only re-supply alternative will come from the cultivation and/or recollection of the organism itself under ABS agreements. In any case, providers of genetic resources should seek access to the know-how and equipment needed to re-supply biological samples through any of the scientific technologies described above. In addition, ABS agreements that involve supplying live samples of microbes and other organisms must be carefully evaluated, because in most cases the sample itself is sufficient to provide endless quantities of the active compound or natural product. Contractual provisions should be negotiated in order to obtain as much information as possible regarding the future use of these samples, including reporting and auditing protocols. It must also be emphasized that the country remains the owner of the samples and should be compensated in case of future benefits.
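The cultivation-independent workflow just described (extract environmental DNA, digest it into fragments, clone into vectors, screen the library by sequence) can be sketched in miniature. The 16-bp ‘motif’, fragment size, and vector flanks below are toy stand-ins for real restriction digestion, cloning vectors, and probe design.

```python
import random
import re

random.seed(7)  # reproducible demo

def shear(dna, insert_size=60):
    """Randomly sample cloning-sized fragments from environmental DNA."""
    n = len(dna) // insert_size * 4  # roughly 4x coverage
    starts = (random.randrange(0, len(dna) - insert_size) for _ in range(n))
    return [dna[s:s + insert_size] for s in starts]

def build_library(fragments, left="GAATTC", right="GAATTC"):
    """'Clone' each fragment between toy vector flanking sites."""
    return [left + f + right for f in fragments]

def sequence_screen(library, motif):
    """Sequence-based screen: keep clones whose insert matches a conserved motif."""
    pattern = re.compile(motif)
    return [clone for clone in library if pattern.search(clone)]

# Toy environmental sample: random background with one embedded target-gene motif.
background = "".join(random.choice("ACGT") for _ in range(500))
env_dna = background[:250] + "GGATCCTTGGCTCCGG" + background[250:]
hits = sequence_screen(build_library(shear(env_dna)), "GGATCCTTGGCTCCGG")
print(len(hits), "clones carry the target motif")
```

An activity assay would replace `sequence_screen` with a functional test of each clone's expressed protein; the library construction step is the same.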
If the organism can be cultured and useful genes can be identified and isolated, there would not be any dependence on the original source, hence no need to re-supply or to negotiate prices per sample. On the positive side, this would prevent the environmental impact caused by collecting large amounts of the resource in its original habitat. As underscored above, the total synthesis or semi-synthesis of a drug may be possible; nevertheless, the structural and stereochemical complexity of most natural compounds often precludes the development of economically feasible large-scale total syntheses. Similarly, pursuing the development of natural products derived from gene clusters or from microbes grown under laboratory conditions through synthetic biology and other genetic engineering techniques can be a dead-end initiative: in most cases these are knowledge-intensive, multi-year enterprises (see section ‘Genetic Engineering and Bioprospecting’, this chapter). Nevertheless, there are companies that have successfully applied these cutting-edge technologies to the development of natural products (Brush and Carrizosa 2004). The cutting-edge technologies used in the activities described in this chapter are expensive and difficult for developing countries and their scientific bodies to obtain. Indeed, if those technologies are closely held and used exclusively by the user, they may be completely inaccessible to the countries that provide genetic resources, even if other capacity issues were not a barrier to direct laboratory bioprospecting. Nevertheless, non-proprietary gene and molecular technology is being transferred to providers of genetic resources in the form of protocols, equipment, and training negotiated in the context of ABS agreements (see Costa Rican, NCI, and Panama chapters, this volume).
For example, since 1991 INBio has increased its capacity by negotiating technology transfer and training provisions with partners such as Phytera, Diversa Corporation, the International Cooperative Biodiversity Groups (ICBG), the Global Environment Facility (GEF), Merck & Co, and the Costa Rica-United States of America Foundation for Cooperation. Consequently, INBio has been able to establish the following laboratories, which provide important added value to present and future bioprospecting ventures:
- Plant biotechnology laboratory: carries out sterilization procedures and preparation of media, and includes transfer and culture rooms used for micro-propagation of plant material.
- Molecular biology laboratory: performs DNA extraction and PCR.
- Microbiology laboratory: provides isolation and culture of bacteria and actinomycetes.
- Mycology laboratory: carries out activities ranging from isolation to taxonomic identification of fungi.
- Chemical laboratory: carries out extraction, fractionation (BioXplore Technology), and nuclear magnetic resonance services.
- Informatics unit: provides tailor-made databases according to each agreement (BioXplore Technology) (see Costa Rican Chapter No. 5, this volume).
INBio's relationship with users of genetic resources has set a precedent followed by other providers of genetic resources. For example, the Panamanian ICBG (see Chapter No. 7, this volume) shows that local organizations have gained access to novel biotechnologies for bioassays and nonradioactive visualizing techniques. These technologies have allowed these organizations to carry out experiments almost independently of a large laboratory and supply-chain infrastructure, making them ‘portable’ or analogous to ‘field techniques’. Ten years ago, such technology was not so portable. In Panama, this technology has made it possible to support the training and outfitting of in-country scientists through negotiated ABS agreements.
Ten years ago, the emphasis would have been more on up-front payments or royalties, because trained Panamanian scientists and parascientists would not have had a place to work in Panama. The Costa Rican and Panamanian experiences are clear examples of how advances in science and technology affect the options available to negotiators of ABS agreements. One of the greatest challenges in exploiting the enormous potential of marine and terrestrial natural products is the difficulty of finding sustainable means of production for compounds of interest. Achieving this goal, however, may make it harder for providers to ensure that users obtain ABS permissions and share benefits arising from genetic resources that originated in their country. Having a sustainable supply is critical if a chemical is to be marketed as a drug, agrochemical, or other product. Reliable production is also a necessity to support the research needed to study and understand novel compounds before commercial potential can even be evaluated. Today, important developments in chemistry, molecular biology, and genomics provide a comprehensive menu of technologies that address supply and product-development issues and contribute to the identification of microbial samples. Furthermore, technologies that mutate genes in order to develop new products (see section ‘Harvesting the Potential of Microorganisms Through Site-Directed Mutagenesis’, this chapter) should raise not only monetary but also ethical concerns among providers of genetic resources. These scientific and technological developments should influence the negotiation of supply, benefit-sharing, monitoring, and other relevant provisions of present and future ABS agreements. Pharmaceutical and biotechnology companies seeking access to microbes are also provoking anxiety in source countries over samples that do not require re-supply for development.
There is no longer the control point that results from the need to recollect. In plants, the ability to do synthetic biology raises similar concerns (see section ‘Generating Chemical Diversity through Bioprospecting and Synthetic Biology’, this chapter). Furthermore, providers of genetic resources are concerned about potential income and technology-transfer opportunities lost to scientific endeavors such as Craig Venter's efforts to decode and complete genome sequences of organisms (information on Venter's research can be reviewed at http://www.jcvi.org/). There is also some concern that, by making this information public, these researchers are jeopardizing the ability of countries to protect the value of genetic resources over which they have undisputed sovereign rights. The implication is that the free international flow of gene sequences may ultimately make control of genetic resources irrelevant. Since these efforts will increase dramatically in the future, source countries may want to strengthen and accelerate efforts to develop local capacity in order to use their genetic diversity before it becomes public and loses its economic potential. Scientific and technological developments are also the core and competitive advantage of companies based in industrialized countries. These companies are concerned that their competitive edge will be compromised if proprietary bioassays, molecular biology approaches, and genomic technologies, as well as the nature of any specific leads or the financial terms of an agreement, are shared with parties peripheral to the ABS agreement. Consequently, transfer of technology to providers of genetic resources is unlikely to include state-of-the-art equipment and know-how. Nevertheless, many companies are willing to transfer basic gene technology, which, in contrast to natural-compound chemistry, does not require particularly expensive investments in laboratory equipment.
Table 1 Selected marine natural compounds currently under development as drugs (compiled from Fenical 2006, Maxwell 2005, Kijjoa and Sawangwong 2004, Proksch et al. 2002 and 2003, Haefner 2003, and Faulkner 2000).

Bewley C.A. and D.J. Faulkner. 1998. Lithistid sponges: Star performers or hosts to the stars? Angew. Chem. Int. Ed. 37:2162–2178.
Biotechnology Research Subcommittee. 1995. Biotechnology for the 21st Century: New Horizons. Committee on Fundamental Science, National Science and Technology Council. Washington D.C. USA.
Boone D.R., Y. Liu, Z.J. Zhao, D.L. Balkwill, G.R. Drake, T.O. Stevens, and H.C. Aldrich. 1995. Bacillus infernus sp. nov., an Fe(III)- and Mn(IV)-reducing anaerobe from the deep terrestrial subsurface. International Journal of Systematic Bacteriology 45:441–448.
Borman J. 1998. Combinatorial chemistry. Chemical & Engineering News, April 6. http://pubs.asc.org/hotartcl/cenear/980406/comb.html.
Brock T.D. 1997. The value of basic research: Discovery of Thermus aquaticus and other extreme thermophiles. Genetics 146:1207–1210.
Brush S.B. and S. Carrizosa. 2004. Chapter 3. Implementation pathways. pp. 67–78 in Carrizosa S., S.B. Brush, B. Wright, and P.E. McGuire (eds.) Accessing biodiversity and sharing the benefits: Lessons from implementing the Convention on Biological Diversity. IUCN, Gland, Switzerland and Cambridge, UK.
Brul S. and C.K. Stumm. 1994. Symbionts and organelles in anaerobic protozoa and fungi. Trends in Ecology & Evolution 9:319–324.
Carrizosa S. 2004. Chapter 1. Diversity of policies in place and in progress. pp. 9–50 in Carrizosa S., S.B. Brush, B. Wright, and P.E. McGuire (eds.) Accessing biodiversity and sharing the benefits: Lessons from implementing the Convention on Biological Diversity. IUCN, Gland, Switzerland and Cambridge, UK.
Cavicchioli R. and T. Thomas. 2000. Extremophiles. pp. 317–337 in Lederberg J., M. Alexander, B.R. Bloom, D. Hopwood, R. Hull, B.H. Iglewski, A.I. Laskin, S.G. Oliver, M. Schaechter, and W.C. Summers (eds.) Encyclopedia of Microbiology, 2nd edition. Academic Press, San Diego, CA USA.
Curtis T.P., W.T. Sloan, and J.W. Scannell. 2002. Estimating prokaryotic diversity and its limits. Proc. Natl. Acad. Sci. USA 99:10494–10499.
Faulkner J. 2000. Marine pharmacology. Antonie van Leeuwenhoek 77:135–145.
Fenical W. 2006. Marine pharmaceuticals: Past, present and future. Oceanography 19(2):110–119.
Firn R.D. and C.G. Jones. 1998. Avenues of discovery in bioprospecting. Nature 393(6686):617.
Garber K. 2005. Peptide leads new class of chronic pain drugs. Nature Biotechnology. Published online 1 April 2005. http://www.nature.com/news/2005/050328/pf/nbt0405-399_pf.html#t1.
Goodfellow M. and A.G. O'Donnell. 1993. The root of bacterial systematics. pp. 3–54 in Goodfellow M. and A.G. O'Donnell (eds.) Handbook of new bacterial systematics. Academic Press, London UK.
Haefner B. 2003. Drugs from the deep: marine natural products as drug candidates. Drug Discovery Today 8(12):536–544.
Harrigan G.G., H. Luesch, W.Y. Yoshida, R.E. Moore, D.G. Nagle, V.J. Paul, S.L. Mooberry, T.H. Corbett, and F.A. Valeriote. 1998. Symplostatin 1: A dolastatin 10 analogue from the marine cyanobacterium Symploca hydnoides. J. Nat. Prod. 61:1075–1077.
Hawksworth D.L. 2004. Fungal diversity and its implications for genetic resource collections. Studies in Mycology 50:9–18.
Hoffman I. 2006. Cheaper malaria drugs in works. April 13, Oakland Tribune. Oakland, CA USA.
Jensen P.R. and W. Fenical. 2000. Marine microorganisms and drug discovery: Current status and future potential. pp. 6–29 in Fusetani N. (ed.) Drugs from the sea. Karger, Basel Switzerland.
Kijjoa A. and P. Sawangwong. 2004. Drugs and cosmetics from the sea. Mar. Drugs 2:73–82.
Kingston D.G.I., M. Abdel-Kader, B.N. Zhou, S.W. Yang, J.M. Berger, H. van der Werff, J.S. Miller, R. Evans, R. Mittermeier, L. Famolare, M. Guerin-McManus, S. Malone, R. Nelson, E. Moniz, J.H. Wisse, D.M. Vyas, J.J. Kim Wright, and S. Aboikonie. 1999. The Suriname International Cooperative Biodiversity Group Program: Lessons from the first five years. Pharmaceutical Biology 37:22–34.
Kretz K.A., T.H. Richardson, K.A. Gray, D.E. Robertson, X. Tan, and J.M. Short. 2004. Gene site saturation mutagenesis: A comprehensive mutagenesis approach. Methods in Enzymology 388:3–11.
Larson-Guerra J., C. López-Silva, F. Chapela, J.C. Fernández-Ugalde, and J. Soberón. 2004. Chapter 6. Mexico: Between legality and legitimacy. pp. 123–152 in Carrizosa S., S.B. Brush, B. Wright, and P.E. McGuire (eds.) Accessing biodiversity and sharing the benefits: Lessons from implementing the Convention on Biological Diversity. IUCN, Gland, Switzerland and Cambridge, UK.
Luesch H., G.G. Harrigan, G. Goetz, and F.D. Horgen. 2002. The cyanobacterial origin of potent anticancer agents originally isolated from sea hares. Current Medicinal Chemistry 9(20):1791–1806.
Madigan M.T. and B.M. Marrs. 1997. Extremophiles. Scientific American, April.
Maxwell S. 2005. Medicines from the deep: The importance of protecting the high seas from bottom trawling. Natural Resources Defense Council, Marine Conservation Biology Institute. Washington D.C. USA.
NBSAP. 1999. The Commonwealth of the Bahamas: National Biodiversity Strategy and Action Plan. Submitted to the United Nations Environment Programme. http://www.biodiv.org/world/map.aspx.
Nelson K.E. 2004. Chapter 25. Genomics. pp. 250–259 in Bull T.A. (ed.) Microbial diversity and bioprospecting. ASM Press, Washington D.C. USA.
Newman D.J., G.M. Cragg, and K.M. Snader. 2003. Natural products as sources of new drugs over the period 1981–2002. J. Nat. Prod. 66:1022–1037.
Okibe N., M. Gericke, K.B. Hallberg, and D.B. Johnson. 2003. Enumeration and characterization of acidophilic microorganisms isolated from a pilot plant stirred-tank bioleaching operation. Appl. Environ. Microbiol. 69(4):1936–1943.
Otten L.G. and W.J. Quax. 2005. Directed evolution: selecting today's biocatalysts. Biomol. Eng. 22:1–9.
Piel J. 2004. Metabolites from symbiotic bacteria. Nat. Prod. Rep. 21:519–538.
Priest F.G. 2004. Chapter 5. Approaches to identification. pp. 49–70 in Bull A.T. (ed.) Microbial diversity and bioprospecting. ASM Press, Washington D.C. USA.
Proksch P., R.A. Edrada, and R. Ebel. 2002. Drugs from the seas – current status and microbiological implications. Appl. Microbiol. Biotechnol. 59:125–134.
Proksch P., R.A. Edrada-Ebel, and R. Ebel. 2003. Drugs from the sea – opportunities and obstacles. Mar. Drugs 1:5–17.
Proksch P. and R. Ebel. 1998. Ecological significance of alkaloids from marine invertebrates. pp. 379–394 in Roberts M.F. and M. Wink (eds.) Alkaloids, biochemistry, ecology and medicinal applications. Plenum, New York, NY USA.
Ro D.K., E.M. Paradise, M. Ouellet, K.J. Fisher, K.L. Newman, J.M. Ndungu, K.A. Ho, R.A. Eachus, T.S. Ham, J. Kirby, M.C.Y. Chang, S.T. Withers, Y. Shiba, R. Sarpong, and J.D. Keasling. 2006. Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature 440:940–943.
Rosenthal J.P., D. Beck, A. Bhat, J. Biswas, L. Brady, K. Bridboard, S. Collins, G. Cragg, J. Edwards, A. Fairfield, M. Gottlieb, L.A. Gschwind, Y. Hallock, R. Hawks, R. Hegyeli, G. Johnson, G.T. Keusch, E.E. Lyons, R. Miller, J. Rodman, J. Roskoski, and D. Siegel-Causey. 1999. Combining high-risk science with ambitious social and economic goals. Pharmaceutical Biology 37:6–21.
Roy S., C.P. Singh, and K.P.J. Reddy. 2002. Analysis of all-optical switching in bacteriorhodopsin. Current Science 83(5):623–627.
Schmid R.D. 2003. Pocket guide to biotechnology and genetic engineering. WILEY-VCH. Weinheim, Germany.
Sipkema D., R. Osinga, W. Schatton, D. Mendola, J. Tramper, and R.H. Wijffels. 2005. Large-scale production of pharmaceuticals by marine sponges: sea, cell, or synthesis? Biotechnol. Bioeng. 90(2):201–222.
Streit W.R. and R.A. Schmitz. 2004. Metagenomics – the key to the uncultured microbes. Current Opinion in Microbiology 7(5):492–498.
Strobel G.B., S. Daisy, U. Castillo, and J. Harper. 2004. Natural products from endophytic microorganisms. J. Nat. Prod. 67:257–268.
Suffness M. 1995. Taxol: Science and applications. CRC Press, Boca Raton, FL USA.
ten Kate T.K. and S.A. Laird. 1999. The commercial use of biodiversity: Access to genetic resources and benefit-sharing. Earthscan. London, UK.
Vandamme P., B. Pot, M. Gillis, P. de Vos, K. Kersters, and J. Swings. 1996. Polyphasic taxonomy, a consensus approach to bacterial systematics. Microbiol. Rev. 60(2):407–438.
Vezzi A., S. Campanaro, M. D'Angelo, F. Simonato, N. Vitulo, F.M. Lauro, A. Cestaro, G. Malacrida, B. Simionati, N. Cannata, C. Romualdi, D.H. Bartlett, and G. Valle. 2005. Life at depth: Photobacterium profundum genome sequence and expression analysis. Science 307(5714):1459–1461.
Weintraub A. 2004. Biotech heads for the factory floor. August 2, Business Week.

1 Regional Technical Advisor for Biodiversity, United Nations Development Programme/Global Environment Facility.
2 Today, most ex-situ collections have benefit-sharing obligations with the countries that provided their biological resources, and these obligations usually extend to all users of these resources (Carrizosa 2004).
3 Combinatorial chemistry was born in the 1980s when Mario Geysen invented the pin method, in which simultaneous synthesis of diversified peptides gave rise to the first combinatorial libraries.
4 Natural products are defined as chemical compounds derived from biological sources.
5 A review of the origin of drugs over a 22-year period (1981–2002) indicated that 60% and 75% of drugs in the areas of cancer and infectious diseases, respectively, are of natural origin (Newman et al. 2003).
8 On 11 June 1993, a joint venture agreement was signed between Griffith University and Astra Pharmaceuticals Pty Ltd, Sydney, Australia, a subsidiary of Astra AB of Sweden. Astra AB merged with pharmaceutical giant Zeneca in 1999 to form AstraZeneca. This joint venture is today known as AstraZeneca R&D Griffith University.
10 Australia (Museum of the Northern Territories, 2002), Bangladesh (Bangladesh National Herbarium, Dhaka, 1994), Cambodia (Forest and Wildlife Research Institute, Department of Forestry and Wildlife, Phnom Penh, 2000), Ecuador (The AWA Peoples Federation, 1993), Gabon (Centre National de la Recherche Scientifique et Technologique, Libreville, 1993), Ghana (University of Ghana, Legon, 1993), Laos (Research Institute of Medicinal Plants, Ministry of Public Health, Vientiane, 1998), Madagascar (Centre National D'Applications des Recherches Pharmaceutiques, Antananarivo, 1990), Palau (Government of Palau, 2002), Papua New Guinea (University of Papua New Guinea, Port Moresby, 2001), Philippines (Philippines National Museum, Manila, 1992), Sarawak-Malaysia (State Government of Sarawak, State Department of Forests, 1994 and Sarawak Biodiversity Center, 2002), Tanzania (Traditional Medicine Research Institute, Muhumbili University College of Health Sciences, University of Dar Es Salaam, 1991), and Vietnam (Institute of Ecology and Biological Resources, National Center for Natural Science and Technology, Hanoi, 1997).
11 These companies include: American Cyanamid Company, Anti-Cancer Inc, Bristol Myers-Squibb Pharmaceutical Research Institute, Diversa Corporation, Dow Agrosciences, Glaxo Wellcome, Eisai Pharmaceutical Research, INDENA SpA, Molecular Nature Ltd, Novartis Oncology, Phenomenome Discoveries Inc, Phytomedics Inc, Searle-Monsanto, and Wyeth Pharmaceuticals.
12 Argentina, Cameroon, Chile, Costa Rica, Fiji, Jamaica, Jordan, Kyrgyzstan, Laos, Madagascar, Mexico, Nigeria, Panama, Papua New Guinea, Philippines, Peru, Samoa, Surinam, Uzbekistan, and Vietnam.
13 In contrast, in the terrestrial environment, plants exceed animals with regard to the production of secondary metabolites (Proksch et al. 2003).
14 Organisms whose cells have chromosomes with nucleosomal structure, separated from the cytoplasm by a two-membrane nuclear envelope, and compartmentalization of function in distinct cytoplasmic organelles.
15 This is a unique group of microorganisms. They appear to be living fossils, the survivors of an ancient group of organisms that bridged the gap in evolution between bacteria and the eukaryotes (multicellular organisms). The name archaea comes from the Greek archaios, meaning ancient.
16 Members of Protista, which is the kingdom of eukaryotic unicellular, colonial and multicellular (without tissue specialization) organisms. It includes the Protozoa, unicellular eukaryotic algae and some fungi (myxomycetes, acrasiales and oomycetes).
17 Classical breeding has traditionally used genes from wild ancestors of cultivated plants and animals to promote pest resistance and develop new and improved crop variations or livestock breeds.
18 Polyphasic taxonomy considers all available phenotypic and genotypic data of bacteria and integrates them in a consensus type of classification, framed in a general phylogeny derived from 16S rRNA sequence analysis. In some cases, the consensus classification is a compromise containing a minimum of contradictions. It is thought that the more parameters become available in the future, the more polyphasic classification will gain in stability (Vandamme et al. 1996).
Fast-growing South–South trade and investment is an opportunity to ramp up developing countries’ abilities to master market-useful technologies and to bolster their abilities to innovate new products and services, an UNCTAD report says. The Technology and Innovation Report 2012, subtitled Innovation, Technology and South–South Collaboration, was released today. South–South economic cooperation is one of the major global economic developments of the past two decades. Exchanges between developing countries accounted for 55 per cent of global trade in 2010, as compared to 41 per cent in 1995, and this trend is already leading to useful diffusions of technology and innovative capacity, the Report says. Increased South–South exchange can lead to greater technological sharing, in a variety of ways. A first important channel is the import of goods, the Report says, which are used by importing countries to improve their production processes through copying and reverse engineering. Global production networks and foreign direct investment (FDI) are other factors that could promote transfers of technology and technological development in countries. The rise in the percentage of capital goods within South–South trade is therefore a very encouraging sign, UNCTAD affirms. Data in the Report show that not only have developing countries increased their share of capital goods imports, they are also the main source of high-technology capital goods for all other countries of the South. The share of developing countries’ imports from other developing countries has steadily increased, from 35 per cent in 1995 to 54 per cent in 2010. Trade in such products not only helps to expand economic activity and shift consumption patterns; it also shows that developing countries are increasingly offering competitive products in a variety of industries and involving a range of technologies. 
Similarly, rising South–South investment, which is increasingly related to activities in the services economy, bodes well for technology and innovation activities in the developing world, as services are generally based on advanced knowledge and technology, the Report notes. The share of developing countries in total outward FDI rose from 15 per cent in 2005 (at $132 billion) to 27 per cent in 2010 (at $400 billion), a large share of which is directed to other developing countries. Nevertheless, these benefits, which allow countries to technologically learn, innovate and integrate themselves into global production networks, both in low-cost manufacturing and in high-tech sectors, currently tend to be focused on only some countries. For example, East Asia accounts for the largest share of FDI outflows among developing countries, and most services-related FDI is directed towards other East, South-East, and South Asian countries, the Report says. Similarly, a large part of manufacturing FDI from these sources is directed at the electronics and automobile sectors of East Asian countries. By discussing this, the Report highlights an important problem, namely that the developments in South–South trade and technological exchange remain uneven, and that the existing technological divide prevents many countries from participating in and benefiting from South–South exchange. This situation does not mean that there are no technological collaborations elsewhere in the developing world; it does mean, however, that by comparison there are far fewer of them. Such collaboration, where it has occurred, is nonetheless promising, the Report notes. For example, the Pan-African e-Network Project is an initiative led by the Government of India, undertaken in partnership with the 53 members of the African Union. 
The Brazilian National Service for Industrial Training (SENAI) has so far provided international technical assistance through 48 international partnerships with 25 countries, leading 29 projects, with five in sub-Saharan Africa. There also are some important collaborations in the field of health and pharmaceuticals between Quality Chemicals (Uganda) and Cipla Pharmaceuticals (India), and between Brazil and Mozambique for the production of anti-retroviral drugs to battle HIV/AIDS. Similarly, the “Lighten Up Africa” project is a joint collaboration between China and 10 African countries, supported by the United Nations Industrial Development Organization (UNIDO), to help set up hydropower stations in Africa. The challenge is to expand these benefits to countries that currently are largely left out. UNCTAD calls for governments to take astute and cooperative steps to promote local technological learning, so that their firms can partake of the opportunities presented by increasing South–South exchange. It also calls for governments in all countries to provide clear incentives to firms to engage in technological sharing. In addition, there is a need to better coordinate State-led efforts to spur entrepreneurship with ongoing scientific and technological research. Currently, many countries, especially the least developed countries, are unable to capitalize fully on the existing and emerging opportunities in trade and technology due to the low absorptive capabilities of their firms and organizations. Full Report - http://unctad.org/en/PublicationsLibrary/tir2012_en.pdf Overview - http://unctad.org/en/PublicationsLibrary/tir2012overview_en.pdf
An electronic filter for any 60 Hz interference. The most common type of interference heard in recordings, public address systems, and cordless phones comes from faulty grounding or radio interference. Sometimes a hum and sometimes a buzz, it repeats 60 times a second because it is caused by the power grid or some device in it. Simple filters will remove a smooth 60 Hz hum but pass a spike that repeats 60 times per second as a buzz. The constant frequency of 60 Hz in the power grid means the interference also repeats with a frequency of 60 Hz. The buzz filter adds a signal to the input which is the inverse of anything repeating at 60 Hz. It consists of:
1) Computer memory sufficient to store at least one second of sound.
2) Enough CPU to perform the calculations described below in real time.
3) Digital-audio chips for I/O.
The algorithm:
Allocate 10 strings of memory, each sufficient to hold 1/60 second of sound.
Create an index pointer to the strings.
Sum 10 consecutive 1/60-second segments to initialize the total string.
Wait for the next 1/60-second cycle start.
Index to the next string in the memory array.
Recall the string from the indexed memory and subtract it from the total string.
Recall the input string from the buffer, store it at the indexed memory, and add it to the total string.
Divide the total string by 10.
Invert the total string.
Store it into the output buffer.
Return to the wait step.
The output buffer is the source of a signal which is the inverse of any repeated 60 Hz sound. If this is added to the input signal through an amplifier, the resulting output will be free of any signal which repeats 60 times per second. Due to the averaging process, white noise is added to the output. The amplitude of the white noise is proportional to the amplitude of the sound signal, so it will not be heard in silent parts. The amplitude of the white noise can also be reduced by sampling a larger number of 1/60-second sound segments. Ten segments were used in the example, which gives a rise-decay time of 0.17 second. 
If 100 segments were sampled there would be a rise-decay time of 1.7 seconds. The retail device might be quite small, powered by the sound signal, with audio in and out jacks as the only visible features. Sealed chips in plastic with no moving parts are durable and cheap. Buzz filter technical notes: Today the functions of a buzz filter require at least a desktop computer. Computers will certainly become small and fast enough to bring the buzz filter down to a reasonable price. More intelligent noise-reduction computers are presently installed in cars to cancel repetitive noises made by the car by playing inverted sounds through the car stereo speakers.
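The buffer-cycling loop above is, in signal-processing terms, a moving-average comb filter. Here is a minimal offline sketch (my own reconstruction in Python with NumPy; the original describes dedicated hardware, and the 48 kHz sample rate is an assumption, chosen so that 1/60 second is a whole number of samples):

```python
import numpy as np

def buzz_filter(signal, sample_rate=48000, n_segments=10, mains_hz=60):
    """Cancel any waveform that repeats at mains_hz by subtracting the
    running average of the last n_segments periods (a comb filter)."""
    seg_len = sample_rate // mains_hz        # samples per 1/60 s (800 here)
    ring = np.zeros((n_segments, seg_len))   # the 10 "strings" of memory
    total = np.zeros(seg_len)                # running sum of stored segments
    out = signal.astype(float).copy()        # unprocessed tail passes through
    idx = 0
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        total -= ring[idx]                   # subtract the oldest segment
        ring[idx] = seg                      # store the newest segment
        total += seg                         # and add it to the total
        idx = (idx + 1) % n_segments
        # the average is the 60 Hz-periodic component of the input;
        # adding its inverse removes anything repeating at 60 Hz
        out[start:start + seg_len] = seg - total / n_segments
    return out
```

After a warm-up of n_segments periods (the 0.17-second rise-decay time mentioned in the text), any exactly 60 Hz-periodic component is cancelled, while non-repeating sound passes through with only the small averaging noise added.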
On this scrap of island in Barataria Bay on Louisiana's Gulf Coast, a pelican egg lies bleaching in the sun, separated from its mother by waves that washed over its nest in a spring storm. Chilled to the bone, the tiny bird died inside the shell, a victim of the forces that are causing this island and much of Louisiana's coastline to crumble into the water. The coastline is eroding and sinking at a rate of 35 to 50 square miles a year, and Queen Bess is going even faster. In the 1950's the island had 45 acres; now it has shrunk to 20. And to the brown pelican, an endangered species that is Louisiana's state bird, Queen Bess is not just another island but a choice habitat. With 342 pairs of pelicans, it is one of the state's major rookeries. Pelicans breed at home. Richard Martin, a state zoologist, says a bird will return to the place it was born, even as that place is washed away to nothing. ''You'll see birds actually sitting on nestings,'' he said, ''as waves lap at their breasts.'' The Army Corps of Engineers and state officials are now working together to rebuild Queen Bess. A dike of seashells is being built into the bay, and mud dredged from the bay beyond will be used to extend the island to the edge of the dike. If the plan works, Queen Bess will grow by 15 or 20 acres. The project will cost $561,000, with $400,000 paid by the Army and $161,000 by the state, partly with money from a new Wetland Conservation and Restoration Fund. The fund, created last year by Louisiana voters to protect the coastline, gets money from taxes on oil and gas produced in Louisiana's oil patch. Once Louisiana had as many as 85,000 brown pelicans. But in the early 1960's the birds vanished; scientists suspect they were killed by Endrin, an agricultural pesticide that found its way into the food chain. ''Louisiana,'' Mr. Martin said, ''has the dubious distinction of being the only state in the country to have its state bird extirpated.'' The pelicans came back in trucks. 
In 1968 state scientists began bringing them in from Florida and depositing them along the coast. Since then, the birds have taken tenuous hold despite hurricanes and hard freezes. Aerial surveys show ''maybe 1,600 to 1,700 nesting pairs,'' Mr. Martin said. But Louisiana brown pelicans are still on the Federal list of endangered species. Pelicans are also listed as endangered in Texas and Mississippi and along the Pacific Coast. Mr. Martin said they had returned ''to more or less historic levels'' along the Atlantic Coast and in Florida and Alabama, where they were hurt by DDT. This year, for the first time since the birds vanished from Louisiana, pelicans struck out on their own. Twenty-seven pairs nested on a small, remote island, Mud Lump, near the mouth of the Mississippi. The pelicans may have flown there from Queen Bess. ''We were real excited,'' Mr. Martin said. But the Mud Lump pioneers' eggs disappeared. On June 1, Federal and state agents found 83 eggs stashed in a cubbyhole on a Vietnamese commercial fishing boat. Five people were arrested. ''The word on the street,'' Mr. Martin said, is that the eggs are highly sought as delicacies. Queen Bess birds will nest again in early November. In late September the pelicans took possession of the new shell dike when it was just a few hours old. They preened in the sun as porpoises played in the bay. But erosion will continue, because there is no cure short of turning back time and unleashing the Mississippi River from its levees. Those levees stopped the floods that carried sediment, building and replenishing the land. The river carries less sediment now. Canals for shipping and petroleum exploration are chewing up marshes, salt water is intruding and land is sinking. Sea levels are rising all over the world. ''We'd be lying to the public if we said we can stop coastal erosion,'' said Bill Savant, a state conservationist on a visit to Queen Bess. ''But we can slow it down. 
Unfortunately, we should have been slowing it down 30 years ago.''
Tonsillitis is an inflammation of the tonsils. Your tonsils are situated on either side of the back of your throat. They are part of the body's immune system and work by trapping viruses and bacteria travelling through the body. Tonsillitis is spread by airborne droplets, hand contact and kissing. Symptoms normally show within two to four days after contact with the infection. If you have a sore throat that lasts for more than a few days or is very painful with a marked difficulty in swallowing, you should consult your GP. In the majority of patients, the throat infection is caused by a virus. A viral infection cannot be treated with antibiotics and patients should take painkillers to bring down any temperature and kill any pain. You may also wish to use a mouthwash to reduce discomfort. In a small minority of people, tonsillitis caused by bacteria is treated with antibiotics. If antibiotics are prescribed it is important that you finish the course to ensure that the infection is cured. Tonsillectomies, where the tonsils are surgically removed, may be necessary for those who suffer severe, repeated bouts of tonsillitis that do not respond to traditional methods of treatment. However, nowadays this practice is relatively rare.
Bright city lights along the coastline and interior delineate the eastern coast of the United States at night. Known as the “city that never sleeps,” New York City with its population of more than 8 million residents (in 2000) is the largest and brightest metropolitan area along the coast. The metropolitan area straddles the Hudson River and spreads eastward over Long Island. Philadelphia is the second largest city in this image, situated south of New York (lower left in this scene). One of the most richly historic of U.S. cities, Philadelphia is where the Declaration of Independence was signed in 1776. The crew of the International Space Station took this image from a vantage point well to the northeast of the cities, with the camera pointed westward back towards New York City and the coast. The result is that the perspective is highly distorted but still recognizable. Low clouds have formed over the waters of the Atlantic and have settled into some of the valleys of the Appalachian Mountains to the northwest. Astronaut Photograph ISS006-E-18382 was taken with a Nikon D1 digital camera and is provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
You may be aware that any surface can be approximated by a surface built from triangle shapes. The smaller the triangles, the better the approximation between the actual and the "meshed" surface. This principle is used in engineering to split technical surface shapes (like sheet metal) into a "mesh" of triangles. On this approximated surface you can work with numerical algorithms to calculate the stresses and deformations that you get when applying force to the structure; the method is called FEM (Finite Element Method) because the complete surface is split into "finite", i.e. limited, elements. (Similar methods exist for 3D bodies, but they are of no interest here.)
The nice thing about meshing is that you can use it to model just any surface shape you want, with any precision you need. Fortunately, the precision needed in paper modelling is far less than what you need in FEM calculations, but nevertheless the calculations could become tedious. This page shows how to apply the method on a paper model, step by step, as I made the model. I put the algebra that may be necessary for a manual meshing process (and some more) onto a dedicated page, but actually it is not needed to make a model. CAD (computer aided design) software does the job much faster and neater, and very good programs are available very cheap today- free, in fact.
A very nice and intuitive tool I found is Google SketchUp, a free polyhedron-modelling system. With SketchUp you build your model line by line, plane by plane, with an accuracy that is amply sufficient for paper models. It would not be enough for most engineering work, which is why I would not call it a CAD system (neither does Google), but if you see the work others already did with the tool you could be fooled. The good thing is that SketchUp works very, very intuitively, and you can progress step by step towards very sophisticated work once you have mastered the basics.
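The refinement idea behind meshing (smaller elements, better approximation) can be seen one dimension lower with a circle and inscribed regular polygons. This toy snippet is my own illustration in Python, not part of the original page:

```python
import math

def inscribed_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r."""
    return 2 * n * r * math.sin(math.pi / n)

# the true circumference is 2*pi*r; the error shrinks as n grows,
# just as a finer triangle mesh hugs a curved surface more closely
for n in (6, 24, 96):
    print(n, 2 * math.pi - inscribed_perimeter(n))
```

The same trade-off governs terrain modelling: more, smaller triangles capture the rock more faithfully, at the cost of more pieces to cut and glue.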
However, be warned that it is next to impossible to derive a correctly scaled printable output from the freeware version of the tool. To get that, you need the professional version that includes the "layout" tool, and this version comes at about EUR 381 (as of July 2009), which is more than I am prepared to spend for my hobby just now. But SketchUp certainly is addictive, so one day, maybe... maybe soon! For the work described here I used Google SketchUp 7 (freeware edition, which means that I worked from unscaled JPEG exports instead of PDFs). By the way: If you want to do serious work with SketchUp I can very much recommend the books by Bonnie Roskes (about $70 for the set of basic and advanced exercises as PDF books). They are comprehensive, accurate, fun to work with and, in my opinion, worth every cent. I note that her website changed recently; please google for her new web address.
Let's assume we want to design a paper model of the Muckle Flugga lighthouse (aka North Unst lighthouse, see also the satellite image). This lighthouse certainly is most impressive if built together with the rocks it stands on; otherwise the architecture is not that special (though the location certainly is- it is the northernmost building of the British Isles). Here are some impressions of what this lighthouse and the rocks it stands on look like:
View of the complete group of rocks that lie before the coast of Unst, one of the Shetland Islands.
Impressive view from the water level, showing the slab-like structure of many rocks. The light's height above the water is given as 66 metres.
Good aerial view of the rock and the lighthouse
Again, a good view from the waterline, with more details of the lighthouse visible
A smaller view that shows more details of the buildings... but I forget the page theme already. It is not about lighthouses.
I will use these images, Google Earth aerial views and some maps as my reference (the latter not included for copyright reasons).
I work in a sequence of steps that are explained, one by one, below. Please keep in mind that this page is about terrain modelling, so the focus is on modelling the rock, not the lighthouse itself. I am no professional paper model designer, so there may be easier ways to perform certain steps- if you find a way to improve on the way I worked, please let me know! Some lighthouse facts:
In this step I decide on the layout of the design, i.e. which planes and contours I need to have, which can be adapted if necessary, which details are the most important and which may be omitted. In many cases the easiest approach to this step will be to just copy the contour lines from a topographic map, decide on the most characteristic points, mark them on the contour lines and measure the X- and Y-co-ordinates. The big advantage of using contour lines is that all points on the same contour line have the same Z co-ordinate (=height), and the line on a map is even labelled with the actual height. This method can often be applied if the terrain is moderate, i.e. "hilly" rather than "rocky". A more elegant way to obtain the contours would be to import an image of the map into a CAD system and draw the contours from the image, afterwards moving (elevating) the contours to their correct height. Note that the example in the picture is just meant to illustrate the idea; it is not an example of a good terrain meshing. However, it shows some of the principles, for example that larger planar faces need not be split into triangles but can remain as polygons. For several reasons, this approach does not work very well for the Muckle Flugga model: So I started with a few images found on the internet and an aerial view of the rocks. I identified the main contours in the pictures and marked them in the aerial view.
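When digitising contours by hand, the bookkeeping amounts to two simple transforms: every point on a contour inherits the contour's labelled height as its Z value, and real-world metres shrink to model units by the scale factor (at the 1:1000 scale used for this model, 1 m becomes 1 mm). A small helper sketch (my own, in Python; the function names are illustrative, not from the original page):

```python
SCALE = 1000.0  # model scale 1:1000

def metres_to_model_mm(metres):
    # metres -> millimetres (x1000), then shrink by the scale factor
    return metres * 1000.0 / SCALE

def elevate(contour_xy_m, height_m):
    """Lift a digitised 2D contour (real-world metres) to a 3D polyline
    in model millimetres at its labelled elevation."""
    z = metres_to_model_mm(height_m)
    return [(metres_to_model_mm(x), metres_to_model_mm(y), z)
            for x, y in contour_xy_m]

# at 1:1000 the 66 m height of the light becomes 66 mm on the model
print(metres_to_model_mm(66.0))
```
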
By the way, I decided to use 1:1000 scale, and I did so for very practical reasons: The maximum base size was to be A4 (297x210 mm) because the model had to be transported (I was on holiday when I started this). I also decided to model just the actual lighthouse rock and, just for fun, one small rock immediately beside it, omitting the other rocks in its immediate vicinity.
The characteristic contours as I saw them in the aerial image.
After that, I elevated the lines and faces to their correct height. I derived heights mainly from the photos (measured and estimated), with the main additional information being the height of 66 m given on the internet as the height of the light above the water.
Some of the contours and points already elevated to their correct height.
The photos were always used to compare the impression given by them with what the model shows. I now proceeded to add triangle surfaces to complete the model, and then simplified some nearly-planar surfaces into really planar ones to improve the impression of how the rock structure was captured (and to simplify the model, of course). Interlude: SketchUp makes it really simple to create planar surfaces. When I was done with the rock modelling, I checked the result against the images again to see whether the characteristics of the landscape had been captured in the 3D model, and then applied some necessary corrections. This is the result that I got after a few iterations and that seemed good enough to me:
For this model things are not very complicated. I decide to use a column-like base structure for the lighthouse platform polyhedron and to attach the cliff sides to this pillar. This should be enough to get the necessary structural strength for this small model. (Note: Later I added some more internal structures to make assembly easier.)
The basic columns that will give the model its static structure
If you do not use a CAD system you need to calculate the line lengths from the point co-ordinates and then transfer them to the drawing board. CAD makes this step obsolete. Ah, the wonders of 3D graphics technology!
Now we must decide which surfaces will form the individual pieces of the assembly, i.e. where the cuts and the fold lines will go. Working with 3D CAD means that we can just select the faces we want to keep together on the model sheet, copy them, and unfold them step by step into the drawing surface (I used the X-Y plane, i.e. top view). I did all the unfolding manually, but there are some plug-in programs available for SketchUp that assist you in the task.
Part of the triangulated rock copied and unfolded into the X-Y plane
The problem here is to choose the sections in a way that there are no overlapping faces after unfolding them- you can't overlap your cardboard when printing, after all. Otherwise this is a straightforward, creative, if sometimes tedious process. If overlapping occurs there are two ways out of it: Reconsider which faces you keep together, or split the section, adding one more assembly step for the model builder. You can add glue tabs now or later, as you wish (I did it later and improvised them for the check build). The glue tabs will probably have to be corrected later anyway, so maybe it is best to do them after the print sheets have been assembled and scaled to print size, and after your first test model has been built.
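For the manual (non-CAD) route, the core operation is flattening each triangle while preserving its three edge lengths, which can be computed straight from the point co-ordinates. A sketch of that step (my own construction using the law of cosines; SketchUp or an unfold plug-in does the equivalent for you):

```python
import math

def unfold_triangle(p0, p1, p2):
    """Flatten a 3D triangle into 2D, preserving all three edge lengths:
    p0 maps to the origin and p1 onto the positive x-axis."""
    a = math.dist(p0, p1)  # edge lengths from the 3D coordinates
    b = math.dist(p0, p2)
    c = math.dist(p1, p2)
    # law of cosines gives the x-coordinate of p2's image;
    # the y-coordinate follows from b being its distance to the origin
    x = (a * a + b * b - c * c) / (2 * a)
    y = math.sqrt(max(b * b - x * x, 0.0))
    return (0.0, 0.0), (a, 0.0), (x, y)
```

Unfolding a whole strip of triangles repeats this step edge by edge, which is exactly where overlaps can appear and sections must be split.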
I marked the rock sections with different colours before "exploding" the model; this helped a lot to see what was what, even a few hours after the work was done. I also used the colours as a reference for glue tabs. The rock sections colour-marked and exploded, waiting to be unfolded. The places where the different rock sections will be placed. Some (gray) auxiliary Structures have been added, but they were not used in the test model. And now comes the part that was more difficult than I expected: Creating scaled, printable sheets with correct glue tabs and an intelligent sheet layout. Something always seemed to go wrong. Which is precisely one of the reasons why you build a test model. The print sheets in a somewhat-final version. Glue tabs are colour-coded (blue goes to base, buff to other parts of the model- see attached colour code blob to see to which). "X" surfaces will be invisible in the finished model. Freeware SketchUp does not support precisely scaled output (only the Pro version does), so I had to improvise with JPEG exports of the sheets. This works well if you create all JPEGs from the same zoom factor, in one session, but it is next to impossible to re-export a sheet in exactly the same size. I imported the JPEGs into Macromedia Freehand (alas, a very dead tool by now) and printed from there. Sketchup has very nice features to add colour or photo-quality textures to your model, but photo matches will not survive unfolding the model. Since the Pro version lets you export into formats for professional graphics design programs, such programs may be the better choice of tool if you go for a real print edition. The moment of truth. Does this work as I imagined it? Do I have access to all the gluing tabs, and do the tabs fit to where they are supposed to go? Do the unavoidable tolerances build up to huge gaps, or can I hide them in the section boundaries? Here are some images of the progress of my "white model". 
I did this in my holiday hotel room, without a proper toolset, and the model was printed on very flimsy paper (80 or 100 g/m2, I believe, i.e. ordinary all-purpose printer paper). Still, it worked.
The central "column" with the first rock sections attached. The "X" faces will be hidden by other parts of the model. The base sheet is a photoshopped aerial image, overlaid with the rock base contour.
Same building stage, other view. Note the building contours on the top platform.
The next images are after completion of the rock structure. The moment of suspense came, of course, when fitting the last part. Can you guess which one that was?
The view from the South-East
The North face of the rock; the complete future building arrangement is in view.
North-East view with buildings in place
Same model, seen from the South-East side
The buildings in a close-up view. Well, you cannot expect too much detail at this scale (1:1000, the lighthouse itself is about 18 mm high from plateau base to top).
If you encounter problems, you need to go back to the 3D modelling or the print sheets and correct things that do not fit; I was lucky to have no problems that I could not fix without resorting to new prints. The white "check" model is finished now. For publishing work, this would be the time to go back to the design board and apply colours, textures and more detail. The final step before actually publishing the model would be to prepare the final print sheets, which is a major task in itself if you need to achieve professional results. I cannot give a lot of advice for that kind of work. Finally, two pictures showing the same views, once from the SketchUp virtual model and once from the finished white model:
The virtual rock and plateau
The white model of same. Similarities are by no means unintended.
When I was done, I found the result too pretty to throw away, so I proceeded to colour the surfaces.
I wanted to concentrate on the terrain-building technique here, so I just painted the completed model instead of adding texture and colour to the print sheets and doing it all again. I used water colours, and as you can see I am no expert in this. Doing it with bad brushes (no good ones to be had where I worked) did not help, either. And of course there is the problem of what colour those rocks really have; the pictures I had differed considerably, and I had to decide on some compromise.
The South-East side with the small companion rock. This is the only side that shows some green on some of the pictures.
North-Western view, the side looking out to the open sea
The big cliffs on the South-Western tip of the rock
Closer view on the buildings. In fact they had to be reinforced after the initial construction because during winter gales the waves swept the plateau and damaged walls and buildings. It must have been a frightful experience for the lighthouse wardens!
Last but not least, a comparison again between the finished model and the original images I worked from:
The aerial view, compared to the Google Earth image
The big rocks on the West-South-Western side
The complete rock from the sea side. The image background shows an adjacent rock I did not include in the model.
Well, that's it! All in all, I have done worse. Have fun with your own models, and if you decide to build Muckle Flugga, by all means tell me about your experience. As the author of this page I take no expressed or implied responsibility for the content of external links; opinions expressed on such pages are not necessarily mine. The web space provider is not responsible for the contents of this page or any linked pages. Last change 2013-03-18.
Your comfy couch. The wooden chairs in your kitchen. The paint you spent months picking out. Cleaning supplies piled under the bathroom sink, and even the sink itself. Nearly every item and surface in your home emits toxins into the air that you breathe. According to the EPA, concentrations of these airborne pollutants are up to 10 times higher indoors than out – levels high enough that they can cause short- and long-term health problems such as cancers, depression, and decreased kidney and immune function. In the past couple of years, companies and scientists have launched several research initiatives aimed at studying and solving the problem. Johnson & Johnson, the pharmaceutical behemoth that manufactures everything from baby shampoo to glucose management systems, announced it would phase out toxins from all products by 2015. Scientists at universities and private companies are also developing new technologies that could give our clothing or smartphones the power to monitor dangerous chemical levels wherever we are. Click the slideshow above for PopSci's breakdown of some of the chemicals to watch out for, easy low-tech ways you can mitigate dangerous levels, and innovative technologies under development that will help you reduce your overall exposure.
What's The Difference Even after reaching dry land, Yonah does not verbally acknowledge that he had received a Divine command; neither does he set off on his mission. Hashem speaks to Yonah again… but why was that necessary? Now that Yonah humbled himself in the belly of the fish, should he not simply fulfill the original command? Was it because the first command lapsed? Or was it because it had changed? An examination and comparison of the first and second command reveal significant differences between them. Arise and go to Nineveh, the great city and cry against it for their evil has come unto me. Arise and go to Nineveh, the great city and cry to it the call that I speak to you. In the later version, there is no mention of the wickedness of the people of Nineveh; neither is the prophet told to go against the city but rather to speak unto it something that he already knows. Granted, "evil" may not refer to the sins of the city but to the evil that G-d has planned to bring upon it (see Ibn Ezra), as some have argued on the basis of similar usages in Exodus 32,14 ('and G-d repented from the evil that He spoke against His people') or even in the book of Yonah itself (1,7 -'let us throw lots and we will know on whose account this evil has come upon us' and 3,10 'and G-d repented of evil that He spoke to do to them'). I tend to side with the interpreters, such as Malbim and Metsudos, who understood it as referring to the wickedness of the inhabitants of Nineveh. It seems to me that this interpretation is strongly supported by the implied parallel to the other great ancient city, one that is alluded to several times in the course of our story - the city of Sodom (see Genesis 20, 21-22). Both of them were great metropolises that deserved destruction; when we discuss what the actual sins of Nineveh may have been, we will return to this point. What has changed? Where did their wickedness go? 
G-d no longer seems as antagonistic to this city as in the beginning; instead He asks the prophet to call it to repentance. This question led some commentators, for example Abarbanel, to conclude that during the delay caused by Yonah's escape, G-d changed His mind. For some reason He became more favorably disposed to Nineveh, offering it another chance and no longer actively seeking its destruction. If so, this would be a fine example of Divine irony, for Yonah's escape availed only to bring closer that which he sought to prevent. In this view the second version of the command is necessary because the first one is no longer operative. One might suggest a different explanation. Perhaps the original command included within it two different imperatives. It allowed for the possibility of repentance, but its focus was on the stern message of coming annihilation. The same is true of the second prophecy. This is pointed out by Rashi to 3,4. And Yonah began to come into the city one day's walk and he called out and he said: "Forty days more and Nineveh is overturned". Rashi: Overturned means destroyed. He did not say "destroyed" because "overturned" has two meanings, one good and one evil. If they do not repent - destroyed. If they do repent it will be overturned for the men of Nineveh will turn over from evil ways to ways of goodness. As we have previously discussed (see Malbim, Abarbanel to 1,2 and Responsa Radvaz 2, 842), Yonah did not fully perceive or completely understand the full depth and content of the original prophecy. Its full meaning escaped him, for he was committed to Justice over Compassion. His spiritual point of view and assumptions were such that only the message of destruction came through loud and clear. The pain, suffering, and his own near death led to a process of growth that awakened in the soul of the recalcitrant prophet a measure of empathy for the hapless inhabitants of Nineveh.
Only now was he able to hear fully, though not yet fully accept, the other side of Hashem's message. The rest of the book of Yonah is about the growth of this realization and Yonah's struggle to reconcile it with his previous world-view, in short, his engagement with Divine Mercy as the underlying element of Divine …

Text Copyright © 2004 by Rabbi Dr. Meir Levin and Torah.org.
In an eclectic discipline such as Neuroscience, models are built using many different research paradigms. In Chapter 8 of Thomas Kuhn's 'The Structure of Scientific Revolutions', he writes about the response of the scientific communities to crises in science. Kuhn suggested that a paradigm was either successful, in which case there would be an opposing paradigm (or paradigms), or else the paradigm was static and became a research tool. If we consider a Neuroscience model which borrows from several paradigms, how will Kuhn's insights influence our understanding of this? Kuhn's insights can be restated as 'If a research paradigm is successful it will face competing research paradigms; otherwise the unopposed paradigm will become an inactive science'. If the Neuroscientist constructs an interdisciplinary model, then the Neuroscientist will borrow from several research paradigms. This leads to several possibilities according to the above statement. The model may incorporate a combination of active and inactive research paradigms. For the active paradigms, the Neuroscientist will need to choose one of the competing paradigms. In contrast, if the model borrows from inactive research paradigms then no choice is needed as the dominant paradigm is unopposed. This latter possibility is more straightforward in terms of model building. However, if we return to the first example, what happens when a Neuroscience model borrows from active paradigms? Firstly, the Neuroscientist must choose between competing paradigms and validate this choice. Secondly, the validity of this model will be contingent on the paradigm debate within the research community. If the opposing paradigm prevails then the model becomes invalidated. Contrasting again with the second example of a model which borrows from inactive sciences – this model is more robust because the state of flux in the research community is absent.
In practical terms however the research paradigms relevant to Neuroscience are numerous, and we can ask what we can properly consider a research paradigm. If we look at the actions of Serotonin on mood in the Limbic System, the phenomenon can be broken down into several components. The question of whether Serotonin is a Neurotransmitter that acts on neurons relates to a paradigm which can be considered inactive. The ability of Serotonin to act as a neurotransmitter is not seriously challenged. A Medline search using the term "Serotonin Neurotransmitter" returns 100,379 articles. Searching through the first 20 abstracts, none of the papers challenged the basic assertion that Serotonin is a neurotransmitter. Restricting the search to reviews using the term "Serotonin Neurotransmitter Review" retrieved 12,319 articles. Looking at the first 20 abstracts, this paper suggests additional roles for Serotonin in platelets and via an action on Liver Serotonin receptors. However this does not challenge the theory that Serotonin acts as a Neurotransmitter. We can easily find contemporary studies that support the theory that Serotonin acts as a neurotransmitter. Searching Medline using the term "Serotonin receptor depression" retrieves this paper in which Positron Emission Tomography was used. The researchers show that 5-HT1A receptor binding changes after treatment with a class of medication that increases Serotonin levels in the extracellular (synaptic) space – the Serotonin Reuptake Inhibitors. The central assumptions in this study are fairly straightforward, including Serotonin's action as a neurotransmitter. At this stage it is not too far-fetched to say that researchers take it as read that Serotonin is a neurotransmitter and have moved on with their inquiries, which in terms of the broader literature on Serotonin have become ever more esoteric.
Turning next to the relationship between Serotonin and the brain’s emotional centre – the Limbic System, this paper looks at the research evidence which shows that almost every type of Serotonin receptor is present in the Hippocampus. This discussion occurs in the field of Histology, the study of the microscopic properties of cells and tissues. This in turn borrows from a number of other research paradigms in order to build working models that are used to interpret the data. There are a large number of papers retrieved using the search term “Hippocampus Serotonin Receptor” although the central question of whether there are Serotonin receptors reliably found in the human Hippocampus is less clear without a more detailed analysis of the abstracts and papers. Finally what can we say about the relationship between Serotonin, Mood and the Hippocampus? (limiting the Limbic system question to the Hippocampus). Using the search term “Serotonin Receptor Hippocampus Mood” did not retrieve any studies. However the PubMed interface automatically generated an alternative search term which utilised other terms as well as the OR operator to yield 403 results with varying degrees of relevance. These papers used a variety of different models. Again a superficial examination of the results did not show a clear answer to the question of whether mood was related to the Serotonin receptors in the Hippocampus. In addition, the research studies were complex and some were in vitro which meant that limited conclusions could be made regarding mood. The analysis of one simple example above shows that the complex theoretical problems in understanding the psychopharmacological aspects of mood in relation to the Limbic Cortex are not resolved by simply considering the debate between two or more opposing research communities with different research paradigms. Instead there are many research paradigms. 
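The Medline counts quoted above could also be gathered programmatically rather than through the web interface. Below is a minimal sketch using NCBI's public E-utilities `esearch` endpoint; the endpoint URL and the `db`, `term`, and `retmode` parameters are part of the real E-utilities API, but the helper function name is my own, and applying it to this post's searches is an illustration, not something the author actually did.

```python
from urllib.parse import urlencode

# Base URL of NCBI's E-utilities esearch service (a real, documented endpoint).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term: str, db: str = "pubmed", retmax: int = 20) -> str:
    """Build an esearch URL; fetching it returns JSON whose
    esearchresult.count field holds the total article count."""
    params = urlencode({"db": db, "term": term, "retmode": "json", "retmax": retmax})
    return f"{EUTILS_ESEARCH}?{params}"

# One of the searches described in the text:
print(build_esearch_url("Serotonin Neurotransmitter"))
```

Fetching the resulting URL (for example with `urllib.request`) returns the current article count, which will of course have drifted from the figures quoted above.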
The central theories in these paradigms are robust and the research (perhaps the 'normal science' that Kuhn refers to) becomes increasingly esoteric. By combining these research paradigms it becomes difficult to establish a clear causal pathway between receptor activation in one brain region and changes in mood. The problem is that science works best when it takes a small part of the world under carefully controlled conditions and the scientist is able to manipulate a few variables leaving all other conditions invariant. In this regard, physicists have been lucky! The question of whether we can relate mood to changes in Serotonin in the Hippocampus is partly a 'real world question'. To understand the relation to mood we must measure the person's mood and how it changes over time. We cannot isolate a few molecules or a tissue. We must see the whole person. As soon as that is done, it becomes very difficult to produce controlled conditions. Ecologically valid studies require that the person is evaluated in the natural environment. Under those conditions there are large numbers of other factors that may influence mood. For instance there may be changes in the activity of Serotonin or other neurotransmitters in other areas of the brain, the optimal time period for evaluation may be unclear, there may be diurnal changes in mood, physical activity levels may alter, hormonal changes, dietary changes, the metabolism of Serotonin may fluctuate due to various factors, relationships with other people may influence affect and mood, and so on. Perhaps it is the question of 'real world evaluation' which is the central problem for Neuroscience research and indeed for Psychiatric Research. Nevertheless when significant results are found this means that the observed effects are being seen despite this 'real world' problem. That in turn means that despite such challenges the scientists have been able to reliably identify real and important phenomena.
If we take the analogy of science as a magnifying glass looking at nature, however, the more esoteric studies are probably testing the resolution of the magnifying glass. Sometimes they exceed the resolution and produce artefacts, while at other times they get it just right. Appendix 1 – Review of Chapter 8 Chapter 8 of Thomas Kuhn's 'The Structure of Scientific Revolutions' is titled 'The Response to Crisis'. Whereas in Chapter 7 Kuhn focuses on how the crisis in science arises, in this chapter he elaborates on how the scientific community responds to this crisis. He makes the interesting point that in criticising one theory the scientist must propose an alternative, otherwise this is not the pursuit of science. What is also interesting is that he suggests that when this competitive process ends, the branch of science becomes static and, in the example he gives, it becomes a 'research tool'. Kuhn suggests that there are always discrepancies even in the most successful of paradigms. With a move towards crisis there are increasingly divergent explanations and there is a loss of identity within the field. Indeed Kuhn maintains that all crises involve a blurring of paradigms. The crises are closed in one of three ways. In the first case, the crisis is handled. In the second scenario there is a resistance to radical approaches. In the final scenario the crisis leads to the emergence of a new candidate for paradigm. Kuhn then goes on to discuss commentators on the field who refer to Gestalt theory, in which a visual perception is dependent on the whole rather than part of an object. So if the reader looks at the cube below, the lower square face can be interpreted either as sitting at the front of the cube or the back of the cube. In both cases the square takes on a different meaning within the whole object that is perceived. In the same manner Kuhn suggests that new paradigms lead to a different way of seeing a body of empirical facts.
He is quick to point out however that this is a crude analogy and that scientists do not quickly switch back and forth between paradigms. Nevertheless it illustrates the essence of his arguments well. Alan De Smet, 'Multistability' (Public Domain) Kuhn then goes on to say that the scientist, having identified the anomaly central to a crisis, will go on to explore the anomaly and to better characterise it. In crisis, speculative theories multiply and increase the chance of a successful paradigm being reached. He also suggests that philosophical inquiry into assumptions can challenge some of the tenets of the current paradigm. Kuhn finishes by commenting that many scientists who lead scientific revolutions are deeply immersed in crisis, and are either very young or new to the field undergoing change, which he interprets to mean that their thinking has not been shaped by the component rules of a paradigm. However, Charles Darwin would be a notable exception, having published 'On the Origin of Species' at a mature age and with a comprehensive knowledge of the related fields in biology. Nevertheless there are numerous counterexamples, and the main result of this chapter is that Kuhn provides the reader with very effective tools for thinking about science in transition. * One thought I had here was that in the very early stages of a science there must be a lot of theories that are initially developed but which are quickly shaped by the experimental facts. In this way many theories would exist before quickly falling to experimental findings, in which case there would be a 'survival of the fittest' among theories which are tested against each other. This has a number of implications. Firstly, a philosophical system might define this pre-science phase in which a large number of theories exist without being tested against the experimental facts.
The brain's analytical and other abilities are used as an alternative to hypothesis testing in the real world in order to generate 'realistic' solutions based on experience and intuition. As time proceeds, and assuming the system has an efficient or effective 'memory' and scientific inquiry produces a growing body of empirical facts, the competitive process in which proponents of different models challenge each other's models and refine their own leads to 'fitter' models (using evolutionary terms). However these models are adapted to the empirical facts, which in turn are a byproduct of the initial inquiries in this area. In this manner, mathematics might offer the best 'starting conditions' for this philosophical inquiry, as these starting conditions give philosophical inquiry the least opportunity for diverging from reality using such an approach. Secondly, fitter theories might well diverge significantly from an explanation of reality depending on their starting conditions, although there might be other phenomena which curtail that line of inquiry as this divergence becomes more evident. What this would also mean is that the development of the most effective scientific theories is not only a measure of how effectively a theory fits with the empirical data, but is also a marker of how effectively a theory keeps the focus on the empirical data in which the theory initially flourished, as well as a measure of how effectively the theory recruits and retains proponents.
Max Weber (1864-1920), who was a German sociologist, proposed different characteristics found in effective bureaucracies that would effectively conduct decision-making, control resources, protect workers and accomplish organizational goals. Max Weber's model of Bureaucracy is oftentimes described through a simple set of characteristics, which will be described in this article. Max Weber's work was translated into English in the mid-forties of the twentieth century, and was oftentimes interpreted as a caricature of modern bureaucracies with all of their shortcomings. However, Weber's work was intended to supplant old organizational structures that existed in the earlier periods of industrialization. To fully appreciate and understand the work of Max Weber, one therefore has to keep the historical context in mind, and not "just" see his work as a caricature of bureaucratic models. Below, some characteristics of the bureaucratic model are presented. Each characteristic is described in relation to the traditional features of administrative systems it was intended to succeed. Fixed division of labor The jurisdictional areas are clearly specified, and each area has a specific set of official duties and rights that cannot be changed at the whim of the leader. This division of labor should minimize arbitrary assignments of duties found in more traditional structures, in which the division of labor was not firm and regular, and in which the leader could change duties at any time. Hierarchy of offices Each office should be controlled and supervised by a higher ranking office. However, lower offices should maintain a right to appeal decisions made higher in the hierarchy. This should replace a more traditional system, in which power and authority relations are more diffuse, and not based on a clear hierarchical order. A bureaucracy is founded on rational-legal authority.
This type of authority rests on the belief in the "legality" of formal rules and hierarchies, and in the right of those elevated in the hierarchy to possess authority and issue commands. Authority is given to officials based on their skills, position and the authority placed formally in each position. This should supplant earlier types of administrative systems, where authority was legitimized based on other, and more individual, aspects of authority like wealth, position, ownership, heritage etc. Learn more about Max Weber's types of authority Creation of rules to govern performance Rules should be specified to govern official decisions and actions. These formal rules should be relatively stable, exhaustive and easily understood. This should supplant old systems, in which rules were either ill-defined or stated vaguely, and in which leaders could change the rules for conducting the daily work arbitrarily. Separation of personal from official property and rights Official property rights concerning e.g. machines or tools should belong to the office or department - not the officeholder. Personal property should be separated from official property. This should supplant earlier systems, in which personal and official property rights were not separated to the needed extent. Selection based on qualifications Officials are recruited based on qualifications, and are appointed, not elected, to the office. People are compensated with a salary, and are not compensated with benefices such as rights to land, power etc. This should supplant more particularistic ways of staffing found in more traditional systems, where officials were often selected due to their relation with the leader or their social rank. Benefices such as land, rights etc. were also common ways of compensating people, which were to be replaced by a general salary matching qualifications. Clear career paths Employment in the organizations should be seen as a career for officials.
An official is a full-time employee, and anticipates a lifelong career. After an introduction period, the employee is given tenure, which protects the employee from arbitrary dismissal. This should supplant more traditional systems, in which employees' career paths were determined by the leader, and in which employees lacked the security of tenure. Max Weber viewed these bureaucratic elements as solutions to problems or defects within earlier and more traditional administrative systems. Likewise, he viewed these elements as parts of a total system, which, combined and instituted effectively, would increase the effectiveness and efficiency of the administrative structure. The bureaucratic structure would to a greater extent protect employees from arbitrary rulings from leaders, and would potentially give a greater sense of security to the employees. Additionally, the bureaucratic structure would create an opportunity for employees to become specialists within one specific area, which would increase the effectiveness and efficiency in each area of the organization. Finally, when rules for performance are relatively stable, employees would have a greater possibility to act creatively within the realm of their respective duties and sub-tasks, and to find creative ways to accomplish rather stable goals and targets.
HaShem brought us out of Egypt with a Mighty Hand and an Outstretched Arm (Deuteronomy 26:8) When Does Passover Begin? According to Biblical law, Passover is determined by the Jewish lunar calendar, and begins on the eve of the fifteenth day of the month of Nisan. The English date varies from year to year, falling in March or in April.
Softening Water Is A 4-Step Process
- The body of a water softener is a tank filled with resin beads. These beads are covered with sodium ions. As hard water passes through, the resin beads act like a magnet, attracting the calcium and magnesium ions (hardness) in exchange for the sodium ions.
- Eventually the resin beads become saturated with mineral ions and have to be "re-charged." The process is called regeneration, and is conducted by the control valve on the top of the tank. The control valve is the brain of the system.
- During regeneration, a strong brine solution is flushed through the resin tank, bathing the resin beads in streams of sodium ions which replace the accumulated calcium and magnesium ions (hardness). In a single tank system this normally happens when you are asleep.
- The brine solution, carrying the displaced calcium and magnesium ions, is then flushed down the drain by fresh water. The regenerated resin beads can be used again and again.
Bridging occurs in the brine tank when salt sticks together, creating a "bridge" that keeps the salt from dropping down to the water in the bottom of the tank. You can eliminate bridging by using the appropriate salt for your softener. No, salt is used in your water softener only to regenerate or clean the resin beads that actually take the hardness out of your water. This regeneration should not make your water taste salty. An efficiently operating water conditioner adds approximately 7.5mg of sodium per quart of water for each "grain per gallon" of hardness removed. For example, if your water contains ten grains per gallon of calcium/magnesium, the softened water would contain 75.0mg per quart of added sodium. A slice of bread contains approximately 114mg of sodium and an 8oz glass of milk contains 120mg of sodium. The more often your softener regenerates, the more often you'll need to add salt. A general rule of thumb is to check your softener once a month.
To maintain consistently soft water, keep your salt level at least half-full at all times, but do not overfill. There are 3 basic types of softener salt: rock salt, solar salt (crystals) and evaporated salt (pellets). Rock salt is a naturally occurring mineral, which is obtained from underground salt deposits by traditional mining methods. Its chemical purity runs from 98% to 99% sodium chloride. It has a water insoluble level of about 0.5% to 1.5%. Solar salt is a natural product created by the evaporation of seawater or inland brine sources. It has a sodium chloride content of 99.5% or higher, and a water insoluble level of less than 0.03%. It is most commonly sold in a crystal form, but also may be sold in the form of compressed pellets or blocks. Evaporated salt is manufactured by solution mining underground bedded salt deposits, dissolving the salt to form brine and then evaporating the moisture using energy in the form of natural gas or coal. Evaporated salt has a sodium chloride content ranging from 99.6% to 99.99%. Water insoluble matter generally is less than 0.01%. Solar salt contains slightly more water insoluble material than evaporated salt (pellets). If your system regenerates frequently these insoluble materials will build up in the brine tank and need to be cleaned out. If your regeneration time is less frequent these products could be used interchangeably. Rock salt will work in a softener; however, because of the relatively high level of water insoluble matter it is not recommended. If used, the brine tank will need to be cleaned several times a year depending on the purity of the salt. Unless the salt product being used is high in water-insoluble matter, or there is a serious malfunction, it is usually not necessary to clean out the brine tank. If you are on our salt delivery plan, Erkens Water inspects your brine tank for this condition. No, solar salt is a natural product made by evaporating seawater.
It is collected much like an agricultural crop and may contain minute amounts of dirt, small pebbles, and other naturally occurring materials. Since these materials are of a different density than the brine in the bottom of the brine tank, they are generally left behind or flushed from the resin during the rinse cycle that follows regeneration. Normally, blocks are used in specially designed salt holding tanks. For proper operation, the water level in the holding tank is raised to keep the blocks submerged for maximum brine formation. If you want to switch to salt blocks you may have to reset the water level in the salt keeper. The smell of rotten eggs is generally caused by hydrogen sulfide gas that may be present in the water supply. Softener salt does not remove this odor or the gas. Contact Erkens Water for options to remove this type of odor from your water. Check the salt at the water level to see if a solid mass has developed (called a "bridge"), or if fine "mushy" salt is lying at the bottom of the tank (called mushing). If a bridge has developed, carefully break up the mass to allow it to drop into the water below. If mushing, remove the good pellets, scoop out the "mushed" salt, and reload the good pellets. If neither of these conditions is the cause, call Erkens Water to inspect your softener. The best practice is to purchase a product that is specially designed for snow and ice removal. Generally the salt crystals are smaller in products that are designed for snow and ice. Erkens Water carries salt for this purpose. Please call for details. Potassium Chloride may be used for ion-exchange resin regeneration. It is a different type of salt that uses potassium in the ion exchange process instead of sodium. It is a more expensive product. Usually you will see discolored water at the faucet after regeneration. Another sign is poor service flow through the softener. You may also see tiny round particles coming out of the faucets.
Normally, if serviced regularly, a softener's life span will be approximately 20-25 years.

This is an example of water-insoluble matter from salt or the water supply. This water-insoluble matter may have the appearance of a brown or black sludge or appear oily. While a certain amount of this is normal, if it is excessive the tank will need to be cleaned.

The Water Quality Association has performed studies that indicate that the brine discharged from a water softener will in no way harm a properly placed septic tank with an adequate septic field. Direct discharge of either sodium or potassium chloride brine on a lawn or garden should be avoided; over a long period of time the sodium or potassium chloride brine will build up in the soil.

Evaporated salt ranges from 99.7 to 99.99% pure sodium chloride. Solar salt is typically 99.6 to 99.8% sodium chloride. Rock salt used for water conditioning may run from 95 to more than 98.5% sodium chloride, depending on the source.

Rock or solar water-softening salt tends to be coarse and will work well for this purpose. Pellets are too coarse and should not be used.

In most cases no; however, certain water softeners are designed for specific water-softener products. Mixing of coarse and fine products (for example, pellets and rock salt) is not recommended, as it may create bridging. It is recommended that you allow your softener to go empty (or nearly empty) of one type of salt before adding another.

The water level should be set according to your owner's manual or at your water-conditioning technician's recommendation. The salt level should be a minimum of 3 to 4 inches above the water level, unless otherwise directed by the owner's manual or water-conditioning technician.

Loosen any encrusted salt that may be adhering to the perimeter of the salt tank, making sure that any large pieces are broken up. Distribute the salt evenly across the salt tank. Make sure the water level is appropriate for optimum operation.
As with food considerations, water-softening salts are not intended for human or animal feeding. The particle size is inappropriate for small animals. Also, water-softening salt may have additives that are not suitable for animal feeds.

No, salt is used only to regenerate, or clean, the resin in the softener tank. Salt does not directly soften the water.

Both do the same job: they replace calcium and magnesium on the softener resin during the regeneration process. When you use sodium chloride, sodium will be added to the soft water during use, and when you use potassium chloride, potassium will be added to the soft water. People whose physicians have advised them to eliminate sources of sodium from their drinking water normally use potassium chloride. In some people who have kidney or other renal problems, potassium can aggravate those problems. Most healthy people (>97%) can use sodium chloride without trouble, and sodium chloride is less expensive. If you have any questions, consult your physician.

It depends on the hardness of your water, but on average less than 3% of your sodium intake comes from drinking softened water. It is estimated that the average person consumes the equivalent of two to three teaspoons of salt a day from various sources. Assuming a daily intake of 5 grams (5,000 milligrams) of sodium in food and the consumption of three quarts of water (i.e., coffee, tea, fruit juices, and drinking water), the contribution of sodium (Na+) from the home water-softening process is minimal compared to the total daily intake of many sodium-rich foods. The formula for calculating the amount of additional sodium is:

mg of Na per quart of softened water = grains of hardness × 7.5 mg Na per grain of hardness

Yes; most bottled-water containers are universal, and Erkens Water bottles fit most popular makes and models of coolers.
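The sodium formula above is easy to turn into a quick calculation. The sketch below is illustrative only; the function name and the 10-grain example values are mine, not Erkens Water's:

```python
def added_sodium_mg(grains_hardness, quarts=1.0):
    """Sodium added by softening, per the FAQ's rule of thumb:
    7.5 mg of Na per quart for each grain of hardness removed."""
    return grains_hardness * 7.5 * quarts

# Example: fairly hard 10-grain water, three quarts consumed per day
daily_mg = added_sodium_mg(10, quarts=3)
print(daily_mg)         # 225.0 mg
print(daily_mg / 5000)  # 0.045 -- about 4.5% of a 5,000 mg daily sodium intake
```

For softer water (the more typical case), the fraction drops well below the "less than 3%" average quoted above.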
Tackling social factors to save lives in India Health inequalities persist amid a booming economy. Patralekha Chatterjee reports. Inside the primary health centre at Gharuan village in the north Indian state of Punjab, a family is excited about its newest member. Ram Kaur looks fondly at her grandson, barely hours old, lying beside his mother. Although she herself had given birth at home, she encouraged her daughter-in-law Karamjeet to give birth at a health centre. “Many women die in childbirth because they do not make it to a hospital or a health centre in time. I did not want that to happen in my family,” she says. Karamjeet is fortunate. In India, only 41% of births take place in a health facility and only one in seven babies born at home is delivered by a skilled birth attendant, according to the most recent figures. Still, thousands of women die every year in childbirth or just afterwards across this vast country of 1.2 billion because they live too far from a health facility to get antenatal care and because they can’t afford the transport and other costs linked to hospitalization. Also – unlike Ram – family elders often have more faith in traditional birth attendants, who are not always able to handle obstetric emergencies. A community health worker convinced Karamjeet’s family of the benefits of institutional delivery. Such workers are vital for improving health in communities with little education. “In the beginning, I was scared,” says Karamjeet. “But after a few visits to the health centre for check-ups the fear disappeared.” She smiles after receiving her US$ 15 (700 Indian rupees) cheque under the Janani Suraksha Yojana (Motherhood Protection Scheme), a federal government cash assistance programme which requires recipients to undergo at least three antenatal check-ups and give birth in a health facility. The health worker who persuaded Karamjeet’s family is one of a national cadre of village-based workers (accredited social health activists). 
These workers also advise villagers on sanitation, hygienic practices (including hand washing), contraception, immunization and other health issues. They form the backbone of a flagship government programme launched in 2005 known as the National Rural Health Mission. The Planning Commission, the government agency responsible for the country's Five-Year Plans, has pledged to spend more on health in the next plan, which begins next year. And, as funds for health are set to increase, public health advocates are calling for more to be spent tackling the social factors that determine health. “While some initiatives have been taken to address the social factors that impact health … the government has not adequately invested in employment,” says Mirai Chatterjee, the director of social security for the Self-Employed Women's Association. “Since 1990 we have seen mostly ‘jobless growth’. This has led to increasing inequalities,” she says. For Nata Menabde, the World Health Organization (WHO) Representative to India, despite the fact that the health system is there to serve everyone who needs it, the rich often have better and easier access to services than poor and vulnerable populations. “This is, among other reasons, because the poor have reduced access to information and take more time to find their way to needed services within the system. Therefore, having services available to all is not enough. Extra efforts are needed to ensure that all people can benefit from them in an equitable manner.” “A significant body of evidence shows that living conditions and poverty in a broader sense are important determinants of health. Can WHO improve health without addressing poverty and living conditions? 
Although poverty has many dimensions – not all of which fall within the scope of WHO’s mandate – WHO can contribute to concerted actions towards poverty reduction by bringing evidence to the attention of policy-makers and making them aware of the links between those determinants and health outcomes,” Menabde says. “There are promising signs of change,” says Chatterjee, who was a member of the Commission on Social Determinants of Health, a group of policy-makers, researchers and activists set up by WHO in 2005. She is now a member of the High-Level Expert Group appointed by India’s Planning Commission in October 2010 to develop plans for universal health coverage. “The High-Level Expert Group has noted that universal health coverage will only be possible if there is accompanying action on the social determinants of health,” says Devaki Nambiar, a member of the Group’s secretariat. These, she says, include “food and nutrition security, social security, water and sanitation, work and income security as well as … gender, caste and religion”. Nambiar says that such action is needed “within a broader macroeconomic policy context that prioritizes equity”. It remains unclear what form such initiatives would take. Meanwhile, the merits of some current attempts to tackle social factors affecting health remain in doubt. Nambiar questions whether cash incentive schemes really go to the heart of the matter. “Equity advocates feel that cash incentives take away from a rights or entitlements-based approach by incentivizing certain behaviours,” she says. And it has yet to be seen whether Janani Suraksha Yojana, launched in 2005, will reinforce recent improvements in maternal health. The number of rural women giving birth in hospitals across India has increased from 34% in 1998–99 to 41% in 2005–06, according to official government figures. This trend is evident in Punjab. “Institutional births are going up,” says Ashok Nayyar, director of Health Services Family Welfare there. 
He adds that the maternal mortality ratio in the state has fallen from 192 deaths per 100 000 live births in 2004–06 to 172 in 2007–09. Punjab tops up the federal government 700-rupee cash incentive with its own scheme offering an additional 1000 rupees to mothers – like Karamjeet – who give birth in a health facility. It also offers them free transport to these facilities. Chatterjee points to other attempts to tackle the social factors that affect health. For example, the national union, to which she belongs, represents over 1.3 million poor, self-employed women mostly in the informal sector and offers its members social security schemes and health insurance according to their ability to pay. She says that more girls are going to school since India's flagship programme for universal education and the Right to Education Act (2009) came into force. Literate women are more receptive to community health workers’ messages. Chatterjee also cites the Right to Information Act (2005), under which citizens have the right to request and receive government information in a timely manner. “People have started asking questions citing the Right to Information Act, asking why basic health services are not reaching them,” says Chatterjee. Maternal mortality has been declining across the country, from 254 deaths per 100 000 live births in 2004–06 to 212 in 2007–09. To further reduce these deaths, the Ministry of Health and Family Welfare launched the Maternal Death Review in 2010 to track key factors underlying such deaths. A key component is a verbal autopsy, consisting of a questionnaire that queries relatives or others who were caring for the deceased at the time of her death on the non-medical circumstances surrounding the death. This can help to identify the factors leading to death to allow the health system to take corrective measures. Punjab is one of several Indian states implementing the Maternal Death Review. 
Another initiative is the Delhi Sehri Swastya Yojna (Delhi Healthy Urban Project), a recently launched partnership between Sulabh International Academy of Environmental Sanitation and Public Health, a nongovernmental organization, and local authorities. It takes a holistic approach to improving health. The increase in noncommunicable diseases in recent years in India also strengthens the case for greater emphasis on the social and cultural factors that determine health. “One good example of how various departments can come together is the fight against tobacco. There is a national level Inter-Ministerial Task Force on Tobacco Control, which has the finance ministry and the health ministry on board. If we want to really control noncommunicable diseases, we have to tackle their social roots,” says Jarnail Singh Thakur, a noncommunicable diseases expert at the WHO Country Office in India. “The school should be the starting point for health promotion,” Thakur says. WHO, he notes, is working in partnership with other United Nations agencies as well as academia, nongovernmental organizations and the public health community to advocate on this issue. These may be steps in the right direction. But the recent turmoil across India also shows that there is a groundswell of rising aspirations, with people calling on politicians to tackle corruption. Karamjeet, the young mother from Punjab, one of India’s affluent states, shows what can be achieved. The challenge is to make such cases the norm.
Humidifiers and Vaporizers

A humidifier is a device that blows cool to lukewarm mist (vapor) into the air to increase humidity (moisture) in a room. A vaporizer is a device that releases a cool to hot mist into the air to help increase humidity or to help with breathing. These devices tend to be used when a person is sick or when air in the house is dry (low indoor humidity). When a person's skin and mucous membranes are dry, the dryness can aggravate an illness of the head, neck, or chest. Or it can lead to chapped lips, a dry throat, or dry and itchy skin. Mist therapy with one of the devices can help provide relief. These devices differ in a few ways.

eMedicineHealth Medical Reference from Healthwise
Spanish-American War: 1 Day Lesson In this lesson, you will use contrasting newspaper accounts of the explosion of the Maine to guide students in thinking about how an author’s word and information choices influence the message and tone of the text. Students will view a 3-minute movie to establish context, use a graphic organizer to compare the articles, and in writing, take a position about which account is most believable. Students will learn that the way reporters employ language and evidence can result in vastly different accounts of the same event. Plan of Instruction: Step 1: 5 minutes: Warm up Put the following headlines on the board: - Search for Missing Bride Continues - Cold Feet Suspected in Case of Missing Bride - Bride Missing! Recent Fight With Groom’s Family Have each student respond in writing: - How do these headlines differ? - Consider the wording and how a reader might respond to each article. Step 2: 8 minutes: Discussion - What does each headline imply? - If these were all articles, which would you have wanted to read first? - Which do you think would have been the most reliable story? Why? - Why might different newspapers choose to present the same event so differently? Step 3: Transition Today we are going to be comparing two newspaper accounts of an event that happened in 1898. First we will watch a short movie that introduces the event to you. Step 4: 3-8 minutes: Show movie Show Maine movie and discuss. Step 5: 20 minutes: Read, analyze, discuss Each student reads the Journal document and fills out the first three questions on the organizer. Check-up – ask students to share answers in whole-class discussion. Ask students to quote from the article to support their answers. Students read Times document and fill in first three columns of organizer. In pairs or small groups, students check their answers and then answer the fourth column question together. Step 6: 10 minutes: Whole class discussion - Do you know what happened to the Maine? 
- What evidence do you have for your answer? Give an example where the reporter uses solid evidence to support a claim made in the article. - Do you think these articles would have been received differently by their readers in 1898? How so? - What effect might the Journal article have had on its readers? - What effect might the Times article have had on its readers? - How significant do you think the Maine explosion was to the American people at this time? Why? Step 7: Assessment Writing prompt: Which account is more believable? Why? First section: Compare the evidence used by both papers to support their claims that the Maine was blown up by attack or by unknown causes. Which uses stronger evidence? Use at least three specific examples/phrases/words from the articles to support your position. Second section: Does this difference in accounts matter? Why or why not?
Potential Oil Production from the Coastal Plain of the Arctic National Wildlife Refuge: Updated Assessment

This Service Report, Potential Oil Production from the Coastal Plain of the Arctic National Wildlife Refuge: Updated Assessment, was prepared for the U.S. Senate Committee on Energy and Natural Resources at the request of Chairman Frank H. Murkowski in a letter dated March 10, 2000. The request asked the Energy Information Administration (EIA) to develop plausible scenarios for Arctic National Wildlife Refuge (ANWR) supply development consistent with the most recent U.S. Geological Survey (USGS) resource assessments. This report contains EIA projections of future daily production rates using recent USGS resource estimates.

The Coastal Plain study area includes 1.5 million acres in the ANWR 1002 Area, 92,000 acres of Native Inupiat lands, and State of Alaska offshore lands out to the 3-mile limit, which are expected to be explored and developed if and when ANWR is developed (Figure ES1). About 26 percent of the technically recoverable oil resources are in the Native and State lands. The Coastal Plain region, which comprises approximately 8 percent of the 19 million-acre ANWR, is along the geologic trend that is productive in the Prudhoe Bay area, 60 miles west. This is the largest unexplored, potentially productive onshore basin in the United States. The 1002 Area is now closed to exploration and development, although Native and State lands are open.

The USGS made the following estimates in 1998 of technically recoverable oil and natural gas liquids from the ANWR Coastal Plain: By comparison, total 1998 U.S. proved reserves of crude oil were estimated to be 21 billion barrels, and the 1993 estimate of undiscovered technically recoverable oil for the onshore lower 48 States (that would come from tens of thousands of small fields) was about 23 billion barrels.
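The acreage figures quoted above can be cross-checked with back-of-the-envelope arithmetic (this sketch is mine, not part of the EIA report):

```python
anwr_total_acres = 19_000_000   # total ANWR acreage cited in the report
area_1002_acres = 1_500_000     # ANWR 1002 Area within the Coastal Plain study area

# Share of ANWR represented by the 1002 Area
share = area_1002_acres / anwr_total_acres
print(f"{share:.1%}")  # 7.9% -- consistent with the report's "approximately 8 percent"
```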
EIA postulated yearly development rates of the resources without specifying the effect of various levels of oil prices and technology advances, and then projected daily production rates based on the USGS estimates, as follows:
The study, led by Karen Wilson, MD, MPH, of Pediatrics at the University of Rochester Medical Center's Golisano Children's Hospital, found children living in apartments had 45 percent higher exposure to passive smoking. For the study, the researchers analysed data on cotinine, an alkaloid found in tobacco and also a metabolite of nicotine, in more than 5,000 children ages 6 to 18 in a national database. Cotinine levels were found to be highest in children who were under age 12, black, and of a family with income below the federal poverty level.

Children are believed to be more vulnerable to passive smoking, and exposed children are at higher risk for various illnesses including respiratory infections, asthma, and sudden infant death syndrome, among others. But not all children suffer the same. Another study, led by Schultz E.N. and colleagues from the University of Western Australia in Perth, Australia, suggests genetics plays a role in the vulnerability of a child to certain tobacco smoke-induced health conditions. The researchers reported in the Nov 2010 issue of the Journal of Asthma that the glutathione S-transferase (GST) enzymes play a critical role in the detoxification of tobacco smoke compounds, which boost the risk of asthma among other things. Genetic variation in the GST genes influences a child's ability to detoxify the smoke pollutants, according to the study.

Cigarette smoke contains more than 7,000 chemicals; hundreds of them are toxic and more than 70 are cancer-causing agents, according to a report released recently by the Surgeon General.
[Session 130]

D. E. Osterbrock (UCO/Lick Obs/UCSC)

Long before he "discovered" the two stellar populations, Walter Baade was a pioneer in research on supernovae and their remnants. In 1927, while still in Germany, Baade emphasized what he called "Hauptnovae" (chief novae) as highly luminous, potential distance indicators. He joined the Mount Wilson staff in 1931, bringing the "secret" of the Schmidt camera with him, and encouraged Fritz Zwicky to carry out a supernova search with one at Palomar. Baade and Zwicky used the term "supernova" in their 1933 joint paper. Zwicky began a systematic search in 1936, and Baade followed up with the 100-in reflector to derive light curves. He confirmed that Tycho's "nova" of 1572 and the Crab nebula had been supernovae in our Galaxy. Baade advised N. U. Mayall, at Lick, on his spectroscopic study of the Crab nebula. In 1933, after Hitler came to power, Rudolph Minkowski had to leave Germany. Baade managed to get him a Mount Wilson staff position. Minkowski then did the spectroscopic observations of supernovae, beginning in 1937. Within a few years he and Baade were able to distinguish type I and II supernovae. Baade's further work on supernovae included historical research in Latin, Italian, and German, as well as filter photography. He searched hard for a remnant of SN 1885 in M 31, but never succeeded in finding it. After World War II the Crab nebula was found to be a strong radio source, and Baade and Minkowski used the 200-in to identify other supernova remnants, beginning with Cas A. Baade collaborated closely with Jan Oort and his student, Lo Woltjer, in their studies of the Crab nebula. After Baade retired in 1958, Minkowski continued supernova research for more than a decade; one of his favorite objects was the expanding Cygnus Loop.
Photo #: NH 83003

Admiral Graf Spee (German Armored Ship, 1936)

Ship's port bow, taken while she was at Montevideo, Uruguay in mid-December 1939, following the Battle of the River Plate. Note crew members working over the side to repair damage from an eight-inch shell fired by the British heavy cruiser Exeter. The notation "The 'Moustache'" refers to the false bow wave painted on Admiral Graf Spee's bows. The original photograph came from Rear Admiral Samuel Eliot Morison's World War II history project working files.

U.S. Naval Historical Center Photograph.

Image posted 3 August 2006
- The science of numbers; the art of computation by figures.
- A book containing the principles of this science.
- The mathematics of solving addition, multiplication, subtraction, and division.
- Method of computing using addition, subtraction, multiplication, or division.
- A branch of mathematics usually concerned with the four operations (adding, subtracting, multiplication and division) of positive numbers.
- A subcategory of mathematics that includes the study of number, concepts, and basic operations. Arithmetic is the foundation of elementary-level mathematics programs.
- "The art of computing using addition, subtraction, multiplication, and division."
- The science of addition and multiplication (subtraction and division are included, since they are the inverse operations of addition and multiplication).
- The branch of pure mathematics dealing with the theory of numerical calculations.
- A general area of math dealing with addition, subtraction, multiplication, and division.
- The simplest part of mathematics. When you study arithmetic you learn about addition, subtraction, multiplication, and division (called operations). Some more advanced ideas are included in arithmetic, but those are the big four. They are the foundation for all higher mathematics.
- A branch of mathematics that involves combining numbers by addition, subtraction, multiplication and division.
- The mathematics of integers, rational numbers, real numbers, or complex numbers under addition, subtraction, multiplication, and division.
- Mathematics: Arithmetic is the first kind of mathematics normally studied by beginners. It is essentially the art of numeric computation according to rules pertaining to the combination of two or more numbers and applications of that art. The operations involved are addition, subtraction, multiplication, and division. The results obtained are, respectively, a sum, a difference, a product and a quotient.
- The four fundamental operations of addition, subtraction, multiplication, and division. Sometimes the operation of taking the square root is included. The arithmetic unit of a computer is usually fitted with the capability of performing only those operations. All other "computing" must be built up of combinations of them, plus storing and retrieving numbers from the memory.
- Arithmetic is the study of addition, subtraction, multiplication, and division.
- Arithmetic or arithmetics (from the Greek word αριθμός = number) is the oldest and most elementary branch of mathematics, used by almost everyone, for tasks ranging from simple daily counting to advanced science and business calculations. In common usage, the word refers to a branch of (or the forerunner of) mathematics which records elementary properties of certain operations on numbers. Professional mathematicians sometimes use the term arithmetic (Davenport, Harold, 1999).
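Several of the definitions above note that subtraction and division are the inverse operations of addition and multiplication. That relationship can be demonstrated directly (a trivial sketch; the values 48 and 6 are arbitrary):

```python
a, b = 48, 6

total = a + b      # addition yields a sum
product = a * b    # multiplication yields a product

# Subtraction and division undo addition and multiplication, respectively
assert total - b == a
assert product / b == a

print(a + b, a - b, a * b, a // b)  # 54 42 288 8
```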
Located on 140 acres of remote farmland in the Okanogan Highlands in eastern Washington, Sally and Roger Jackson raise a small number of sheep, goats, and cows, using their milk for farmstead cheese production. Having initially sold their cheeses just in Washington state, their business has grown over the years and limited quantities of cheese are now shipped to various locations around the United States. Cheesemaking takes place in the small, detached make room on the farm. Milk is heated in a large pot over a wood-burning stove which sits in a corner of the room. Adjacent is an area used for storing the chestnut and vine leaves that have become synonymous with Sally Jackson's cheeses. The sheep's and cow's milk versions are wrapped in chestnut leaves, while the goat cheese is wrapped in vine leaves, all of which are gathered by hand from a local chestnut grove where the vines grow around the perimeter. The leaves are soaked in alcohol prior to wrapping, both to give the leaves flexibility and to impart flavor to the cheeses. Wrapped in leaves and tied with string, Sally's cheeses are instantly recognizable for their rustic appearance. Unnamed, the sheep's and goat's milk versions are simply referred to as Sally Jackson's Sheep's/Goat's Milk Cheese Wrapped in Chestnut/Vine Leaves. The cow's milk version is known as Renata, named after one of Sally's cows. After draining, the young cheeses are wrapped in chestnut leaves from a local chestnut grove and matured for approximately eight to ten weeks. The flavors of Renata are rich and buttery and nutty, with distinct earth and mushroom tones. The interior paste is a pale, butter-yellow, becoming darker towards the rind.
Massive Lockheed Martin Solar Arrays To Be Launched To International Space Station

SUNNYVALE, Calif., 21-AUG-06 -- The second of four pairs of massive solar arrays and a Solar Alpha Rotary Joint (SARJ), built by Lockheed Martin [NYSE: LMT] at its Space Systems facility in Sunnyvale, will be launched aboard the space shuttle Atlantis to the International Space Station (ISS) as early as August 27, 2006. Atlantis’ launch window extends through September 13, 2006. During the 11-day STS-115 mission, astronauts will connect the package of giant solar arrays and the rotary joint – incorporated into an integrated truss segment – to the Station. A second rotary joint and a third pair of solar arrays will be delivered to ISS on STS-117.

“The second pair of solar arrays will nearly double the power available to the Space Station, and we’re very proud to play a role in this vitally important international mission,” said Brad Haddock, Lockheed Martin ISS program director. “The first arrays have performed superbly, and beyond expectation, and we’re confident that this addition to ISS will further harness the Sun's energy for the Space Station and provide the power required for many years to come.”

The Space Systems ISS solar arrays are the largest deployable space structure ever built and are, by far, the most powerful electricity-producing arrays ever put into orbit. When the Station is completed, a total of eight flexible, deployable solar array wings will generate the reliable, continuous power for the on-orbit operation of the ISS systems.

INTERNATIONAL SPACE STATION SOLAR ARRAY BLANKET – A solar array blanket for the International Space Station (ISS) is seen here fully deployed at Lockheed Martin Space Systems Company of Sunnyvale, Calif. Two blankets comprise each solar array wing.
(Photo Credit: Russ Underwood, Lockheed Martin Space Systems) The eight array wings were designed and built under a $450 million contract from the Boeing-Rocketdyne Division in Canoga Park, Calif., for delivery to the Boeing Company and NASA. Each of the eight wings consists of a mast assembly and two solar array blankets. Each blanket has 84 panels, of which 82 are populated with solar cells. Each panel contains 200 solar cells. The eight photovoltaic arrays thus accommodate a total of 262,400 solar cells. When fully deployed in space, the active area of the eight wings, each 107 by 38 feet, will encompass an area of 32,528 sq. ft. and will provide power to the ISS for 15 years. The SARJ, 10.5 ft in diameter and 40 inches long, will maintain the solar arrays in an optimal orientation to the sun while the entire space station orbits the Earth once every 90 minutes. Drive motors in the SARJ will move the arrays through 360 degrees of motion at four degrees per minute. The joints must rotate the arrays smoothly without imparting vibrations to the laboratories and habitation modules on the station that would impact microgravity-processing activities. At the same time, 60 kW of power at 160 volts and multiple data channels are carried across each joint by copper "roll rings" contained within. In addition to the arrays and SARJ, Space Systems in Sunnyvale designed and built other elements for the Space Station. The Thermal Radiator Rotary Joints (TRRJ) – each five and a half feet long and three feet in diameter – were launched in 2002. The two joints maintain Space Station thermal radiators in an edge-on orientation to the Sun that maximizes the dissipation of heat from the radiators into space. Space Systems also produced the Trace Contaminant Control System – launched to ISS as an element of the U.S.
Destiny Laboratory module in 2001 – an advanced air processing and filtering system that ensures that over 200 various trace chemical contaminants, generated from material off-gassing and metabolic functions in the Space Station atmosphere, remain within allowable concentration levels. It is an integral part of the Space Station's Cabin Air Revitalization Subsystem. Lockheed Martin Space Systems Company (LMSSC), a major operating unit of Lockheed Martin Corporation, designs, develops, tests, manufactures and operates a variety of advanced technology systems for military, civil and commercial customers. Chief products include a full range of space launch systems, including heavy-lift capability, ground systems, remote sensing and communications satellites for commercial and government customers, advanced space observatories and interplanetary spacecraft, fleet ballistic missiles and missile defense systems.
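The counts in the release are internally consistent, and the arithmetic can be checked directly. The sketch below uses only the numbers quoted above (wings, blankets, panels, cells, wing dimensions, and the 90-minute orbit):

```python
# Cross-check of the ISS solar array figures quoted in the release.
wings = 8
blankets_per_wing = 2
populated_panels = 82      # of 84 panels per blanket, 82 carry cells
cells_per_panel = 200

total_cells = wings * blankets_per_wing * populated_panels * cells_per_panel
print(f"{total_cells:,} solar cells")        # matches the stated 262,400

wing_length_ft, wing_width_ft = 107, 38
total_area_sqft = wings * wing_length_ft * wing_width_ft
print(f"{total_area_sqft:,} sq ft deployed") # matches the stated 32,528 sq ft

# SARJ: one full 360-degree rotation per 90-minute orbit
deg_per_min = 360 / 90
print(f"{deg_per_min} degrees per minute")   # matches the stated four deg/min
```

Each quoted figure falls out of the others, which suggests the release's numbers were derived rather than rounded independently.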
<urn:uuid:bd7d0b97-8abb-425e-a3b9-aece37bfe4d2>
CC-MAIN-2013-20
http://www.lockheedmartin.com/us/news/press-releases/2006/august/MassiveLockheedMartinSolarArraysBeL.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.894865
938
2.515625
3
Talking About Tides Astrobiology Magazine presents its latest podcast with our host Simon Mitton. In this interview, Brian Jackson, a NASA Earth and Space Sciences Fellow in the Department of Planetary Sciences at the University of Arizona, explores the importance of tidal heating in determining the habitability of planets. Tidal heating, generated when a planet or moon orbits a massive gravitational body like a star or giant planet, can warm its interior. If there is too much tidal heating, though, the resulting volcanism can create torrid conditions unsuitable for life. Listen to our other podcasts The Evolution of EPOXI: Interview with Tim Livengood and Vikki Meadows
<urn:uuid:23b40703-ef5d-4353-ae5a-7b3a1e976317>
CC-MAIN-2013-20
http://www.astrobio.net/includes/html_to_doc_execute.php?id=3000&component=news
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.818403
136
2.921875
3
We need to limit climate change and start adapting to those changes that are already unavoidable due to past emissions. Climate change is the biggest threat our environment faces. The Intergovernmental Panel for Climate Change (IPCC) tells us that our climate is changing and it is very likely that human activity is causing it. Over the past century, average global temperatures have risen by 0.78ºC and eleven of the warmest years on record have occurred over the last 12 years. We are already experiencing the unwelcome effects of increasing temperatures and more frequent drought and flooding on our economy and people's quality of life. Based on the IPCC's findings, we believe that we should be aiming to limit increases in average global temperature to below 2°C. If temperatures rise above this level, we can expect severe consequences for food and water supplies, human health and ecosystems. If we do nothing, people across the globe are unlikely to be able to cope with the results of climate change and the most vulnerable countries will suffer the worst impacts. The Stern Review provides us with a challenging, but positive message. Global emissions of greenhouse gases need to peak in the next 10-15 years and then fall rapidly to limit future climate change to manageable levels. This is possible using technologies that are already available and would cost an estimated one per cent of global GDP by 2050. This would be a small fraction of the estimated £3.82 trillion in damages from future excessive floods, droughts, storms and heat. Whatever we do now to limit climate change in the future, we need to start adapting to those changes that are already unavoidable due to past emissions. By becoming more resilient to future weather conditions, we will reduce the impacts of climate change on our economy and our quality of life. By factoring climate change into our planning and investment decisions now, we can minimise the costs of adapting to that change. 
For example, the floods of summer 2007 are estimated to have cost £2.7 billion in damages. We are working to make sure that our current spending of £600 million a year on flood risk management takes climate change into account. We are taking a leading role in limiting and preparing for the impacts of climate change. - Our focus is to make sure that England and Wales are able to adapt to the changing climate, and particularly the increasing risks of river and coastal flooding, the growing pressures on water supplies for people and the environment and changing climate space for biodiversity. - We are a prominent advisor to the UK Government. - As an organisation we are in the front line of climate change and are therefore adapting our own policies, strategies and plans to make sure that we factor climate change into our own work, as well as providing reliable advice to others. We also have an extensive monitoring network that is providing evidence of climate change in our environment. - We are helping drive down emissions of greenhouse gases. We regulate industrial sites responsible for approximately 40 per cent of UK greenhouse gas emissions. We also act as the Competent Authority for the EU Emissions Trading Scheme in England and Wales. We are working with the Government to get the Carbon Reduction Commitment - a mandatory trading scheme for large commercial and public sector sites - up and running in 2010. We have a significant role in regulating some existing and future low carbon technologies, such as biomass and carbon capture and storage (CCS). What we would like to see The Climate Change Act - We are pleased to see that the Government has passed the Climate Change Act into law. This has established a legal framework for managing UK emissions of greenhouse gases and preparing for the impacts of climate change. - We support the target of reducing greenhouse gas emissions by 80 per cent by 2050 and introducing statutory five-year carbon budgets. 
The independent Committee for Climate Change is a major step forward. But, we are concerned that, without a greater focus on compliance, the carbon budgets may still lack credibility with long-term investors and the Government departments that will have to change their policies to meet the targets. - We are reassured that the Act means that the Government has to publish a Climate Change Adaptation Programme every five years. This will give the Government, its agencies and other delivery partners such as utilities, transport networks, local authorities and regional bodies more incentive to assess the risks that climate change will bring and to make sure that they are acting in an appropriate and timely way. Introducing regular reporting will be essential for monitoring progress. Adapting to change - Investing in flood defence. We would like to see the Government increase its investment in flood management to £1.1billion a year by the middle of the next decade to keep pace with climate change. Defra's Foresight report, Future Flooding, supports this level of investment. Implementing the Making Space for Water strategy will be the most cost effective way of investing this money. - A revitalised coastal strategy. We need a more strategic approach to managing our coastline. Organisations involved along the coast need to work together to take sensible, long-term decisions about the way we use our coasts. They will need to consider protecting them from future coastal flooding and the risk of storm surge, as well as realignment, and possibly relocating people and homes or abandoning agricultural land. - Using water more efficiently. We want to see greater emphasis on managing demand for water, as well as using water more efficiently to help manage pressures on water resources. Climate change is expected to reduce the amount of water available, particularly in the South East, whilst, at the same time, we continue to use even more water. 
- Protecting conservation and habitat. We need to manage biodiversity in different ways in the face of climate change. Whilst making sure our existing protected sites are resilient to climate change, we need to move to landscape-scale approaches to managing habitats to help encourage the movement of species as the climate changes. - The revised Climate Change Programme (2006) and the Energy Review (2007) will help to meet the Government's target of an 80 per cent reduction in greenhouse gas emissions by the middle of the century. - We welcome the Government's commitment to establishing a strong carbon price, which will provide a financial incentive to invest in energy efficiency and low carbon technologies. In line with this, we want to see the EU Emissions Trading Scheme strengthened by setting an EU centralised cap and 100 per cent auctioning. - We recognise that coal will continue to be a significant source of energy for the next few decades. There needs to be greater emphasis on making coal technology more efficient, encouraging combined heat and power developments and designing and locating new plants so that CCS can be retrofitted when it becomes cost-effective. - We support using larger amounts of renewable energy from a wider variety of sources and we will continue to remove any unnecessary regulation to make low carbon technologies available. But, we need to be convinced that the effect on the environment can be managed. For example, we have concerns that growing biomass crops could, in certain cases, have a damaging effect on land and water quality. Working with Defra, we have produced a Biomass Environmental Assessment Tool (BEAT) to understand the impacts and benefits of biomass energy developments. - We continue to have doubts about nuclear power until radioactive waste can be effectively managed in the long-term. There do not appear to be enough incentives for nuclear sites to invest in a new generation of plants.
- The Government has ambitious targets on renewable energy generation, and harnessing the tidal power of the Severn Estuary could help to achieve them. However, these areas also include important ecological sites and protected species, and are some of the most important fishing rivers in Britain. Government must identify schemes that are environmentally sensitive, but also help us meet renewable energy targets. The Environment Agency will assess the environmental impacts of any proposed options and provide expert advice. - By the 2080s average annual temperatures across the UK may rise between 2 and 3.5ºC, with some areas warming by as much as 5ºC. We expect the heavy winter rain that currently happens every two years to become even more extreme, increasing by between five and 20 per cent by 2080. - Current guidance from Defra suggests that sea levels could rise by up to 1m by 2100. The IPCC is currently projecting an average sea level rise of 20-60cm during this century, but this does not include current rates of ice sheet melt. We expect this projection will increase as scientists understand more about the Greenland ice melt. - 30,000 people across Europe died in the summer heat wave of 2003. The temperatures experienced during this extreme event are predicted to be the equivalent of a cool summer by 2060. - Rises in sea level, increased rainfall and storm frequency mean that London and the Thames Estuary will be at greater risk from flooding in the future once the current barrier expires. We are developing a tidal flood risk management plan for London and the Thames estuary for the next 100 years - Thames Estuary 2100. - We carried out a climate change impacts study of the Wear catchment in Northumbria to assess the future impacts on buildings, infrastructure and vulnerable members of society, and identified possible measures to tackle these impacts. We will be carrying out similar studies in the future.
- In five years we aim to reduce carbon emissions from our own activities by 30 per cent and offset the remainder through credible schemes. Our 2000 sites and facilities are supplied by 100 per cent renewable electricity. In some cases, we have built renewable energy supplies into our new developments.
<urn:uuid:33fac6a8-613d-496e-8031-38b9a4aef907>
CC-MAIN-2013-20
http://www.environment-agency.gov.uk/research/library/position/41209.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94685
1,939
3.59375
4
Burning Picassos for Heat Burning natural gas to extract and process oil from the Canadian tar sands has been likened by one industry insider to burning Picassos for heat. But the bidding at the "Picassos for heat" auction may go even higher as those involved in tar sands and oil shale development push for nuclear power to fuel their projects. |Guernica - Picasso (1937)| It is a truism that one ought to match the tool to the task. Energy is a tool, and we try to match the proper type of energy to the task. No one would try to put coal into an automobile gas tank. Even if it would actually burn in the engine, coal is so bulky one would have to pull a large trailer filled with it behind the car--in the manner of an old steam locomotive--to make a long trip without refueling. In our homes we use electricity for most tasks instead of gasoline-powered engines because electricity is so versatile. It can be used to power vastly different appliances. We also prefer electricity because, at least inside our homes, it gives off no fumes. Perhaps some will remember the all-electric home, an idea out of the 1950s that now finds its place in museums instead of new construction. That's partly because it is wildly inefficient to burn fossil fuels to make electricity and then convert that electricity back to heat. About two-thirds of the energy in fossil fuels is wasted as heat when they are turned into electricity. It is a law of physics that each transformation from one state of energy to another involves loss. We are therefore advised to match carefully each task to the type of energy required. Yet, this basic lesson in physics and energy efficiency seems lost on those pursuing the extraction of oil from the Canadian tar sands and the oil shale fields of the United States. This is in part because so much of our current infrastructure is dependent on liquid fuels from petroleum. In the United States petroleum accounts for 94 percent of all transportation fuel. 
That includes cars, motorcycles, trucks, buses, ships and planes. Some 8.1 million homes use heating oil for space heat. Changing these two components of the infrastructure to run on other fuels would be costly and time-consuming. And yet, this might be worth doing to avoid the folly of using high-quality energy sources to produce lower-quality ones from extremely dirty sources. To see how dirty, one need only take a trip using Google Earth to the section of Alberta where tar sands are being exploited to view the huge wastewater ponds--ponds that can be seen from space--filled with sludge which the industry has yet to figure out how to purify. Extracting and processing tar sands also produces two to three times as much greenhouse gas as extracting and refining conventional crude oil. Then there is the question of burning huge amounts of natural gas to heat the water used to separate the gooey bitumen from the sand. In addition, hydrogen is stripped from natural gas and used to upgrade the bitumen into something that can be sent to a conventional refinery. Since natural gas is now in decline in Canada, plans are afoot to build nuclear power plants to provide process energy for tar sands operations and possibly to produce hydrogen through the electrolysis of water. Oil shale developers are faced with the same challenges. They need heat, and for some extraction processes they need hydrogen. Either they will use natural gas which appears to have peaked in North America--the hype about shale gas notwithstanding--or they will use nuclear power. The question then is this: Why not use these high-quality energy sources to power transportation and heat homes directly? Doing so would produce far fewer greenhouse gas emissions. And, direct use of these energy sources would be far more efficient than using them to transform tar sands and oil shale into useable petroleum products. The response from the oil industry has always been that we'd need a different infrastructure.
But the answer to this objection is as follows: Why not build that infrastructure now? Why wait until the oil flow tops out at the tar sands and oil shale fields to do this? Why accept the many risks and uncertainties associated with further development of these unconventional oil sources including the risk that oil shale may never prove to be economically feasible to exploit? My recommended path would be to electrify transportation as much as possible. Obviously, planes would have to be an exception. Ships might also be an exception; but oceangoing ships could reduce their fuel consumption greatly by adding sails which are increasingly becoming available. Cars, motorcycles, trucks and trains, however, could all be electrified with a few exceptions such as emergency vehicles which must run whether electricity is available or not. We can generate electricity from many sources including renewables such as wind and solar. And, the future supply crunch which we are likely to experience in oil could be averted. In fact, oil could be saved for critical nonfuel uses such as pharmaceuticals, plastics, fabrics, lubricants and myriad other products upon which our society now depends. Given the singular versatility of oil as a feedstock for so many types of materials in modern society, it might even be appropriate to add burning oil for fuel to the list of actions that are the equivalent of burning Picassos for heat. That might turn out to be the best art lesson of all.
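The conversion-loss argument made above can be put in rough numbers. The article states that about two-thirds of fossil-fuel energy is lost when making electricity; the furnace efficiency below is an illustrative round figure we assume for the sketch, not a number from the article:

```python
# Rough comparison: burn natural gas directly for heat, versus convert it
# to electricity first and then turn that electricity back into heat.
# Efficiency values are illustrative assumptions for the sketch.

gas_to_electricity = 1 - 2/3   # article: ~two-thirds of the energy is lost at the plant
furnace_direct = 0.90          # assumed on-site gas furnace efficiency
resistive_heater = 1.0         # electric resistance heat converts nearly all its input

# Useful heat delivered per unit of gas energy, for each path:
via_electricity = gas_to_electricity * resistive_heater
direct = furnace_direct

print(f"gas -> electricity -> heat: {via_electricity:.2f}")
print(f"gas -> heat directly:       {direct:.2f}")
```

Under these assumptions the direct path delivers between two and three times as much useful heat per unit of fuel, which is the physics behind the all-electric-home example and the "match the tool to the task" rule.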
<urn:uuid:a8b6f90f-678f-4670-9d34-fdf5b84bc092>
CC-MAIN-2013-20
http://www.resilience.org/stories/2009-08-31/burning-picassos-heat
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961158
1,086
3.265625
3
Gestational Diabetes (cont.) In this Article - What is gestational diabetes? - What causes gestational diabetes? - What are the complications of gestational diabetes? - Who is at risk for gestational diabetes? - How is gestational diabetes diagnosed? - How is gestational diabetes managed? - Do I need to take insulin? - How do I monitor my blood glucose levels? - How will my diet change? - How much exercise is safe? - How much weight gain is safe during pregnancy? - What happens to my baby after delivery? - Will gestational diabetes cause the baby to have diabetes? - Will I still have diabetes after I deliver my baby? - Find a local Obstetrician-Gynecologist in your town Do I Need to Take Insulin for Gestational Diabetes? Based on your blood sugar monitoring results, your health care provider will tell you if you need to take insulin in the form of injections during pregnancy. Insulin is a hormone that controls blood sugar. If insulin is prescribed for you, you may be taught how to perform the insulin injection procedure. As your pregnancy progresses, the placenta will make more pregnancy hormones and larger doses of insulin may be needed to control your blood sugar. Your health care provider will adjust your insulin dosage based on your blood sugar log. When using insulin, a "low blood glucose reaction," or hypoglycemia, can occur if you do not eat enough food, skip a meal, do not eat at the right time of day, or if you exercise more than usual. Symptoms of hypoglycemia include: Hypoglycemia is a serious problem that needs to be treated right away. If you think you are having a low blood sugar reaction, check your blood sugar. If your blood sugar is less than 60 mg/dL (milligrams per deciliter), eat a sugar-containing food, such as 1/2 cup of orange or apple juice; 1 cup of skim milk; 4-6 pieces of hard candy (not sugar-free); 1/2 cup regular soft drink; or 1 tablespoon of honey, brown sugar, or corn syrup. 
Fifteen minutes after eating one of the foods listed above, check your blood sugar. If it is still less than 60 mg/dL, eat another one of the food choices above. If it is more than 45 minutes until your next meal, eat a bread and protein source to prevent another reaction. Record all low blood sugar reactions in your log book, including the date, time of day the reaction occurred and how you treated it. Parenting and Pregnancy Get tips for baby and you.
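The recheck procedure above is a simple decision rule, restated below as code purely for illustration. The function name and return strings are ours, and this sketch is not medical guidance; thresholds and actions are taken from the article's text:

```python
# Illustrative restatement of the article's low-blood-sugar steps.
# Not medical advice; threshold and actions follow the text above.

LOW_MG_DL = 60  # the article's cutoff for a low blood glucose reading

def next_action(glucose_mg_dl: int, minutes_to_next_meal: int) -> str:
    """Return the article's recommended step for one glucose check."""
    if glucose_mg_dl < LOW_MG_DL:
        # Eat a sugar-containing food, then recheck in 15 minutes.
        return "eat sugar source, recheck in 15 minutes"
    if minutes_to_next_meal > 45:
        # Reading has recovered but the next meal is far off.
        return "eat bread and a protein source"
    return "log the reaction and continue monitoring"

print(next_action(55, 30))   # low reading -> treat and recheck
print(next_action(80, 60))   # recovered, meal more than 45 min away -> snack
print(next_action(80, 20))   # recovered, meal soon -> just log it
```

The loop terminates either when a reading clears 60 mg/dL or at the next meal, and every pass through it is recorded in the log book.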
<urn:uuid:6b0d21c9-35bb-48b1-854f-729ad37528af>
CC-MAIN-2013-20
http://www.rxlist.com/gestational_diabetes/page4.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.908331
557
2.640625
3
During the Medieval Era, there were many forms of vocal music. They were very simplistic in nature. One of the most common vocal forms of the time was called plainchant, the Gregorian chant, or plainsong. This form of vocal music was the main root of church music during both the Medieval era and the Renaissance era. While little secular song has been preserved to date, it was still a very important musical form during the Medieval era. It was very similar to plainsong in that it had single-note notation, had no accompaniment, and was written in the monophonic style. The difference between secular song and plainsong was its meter: it was mostly written in triple meter. Additionally, it also dealt with a wider range of subjects than the very religious plainsong. Furthermore, secular song had clear phrase and sectional structure, was written in most vernacular languages instead of the Latin-only plainsong, and used shorter and more regular rhythms. One of the greatest musical achievements in the history of music occurred during the Medieval era: the coming of polyphony. Polyphony is two or more vocal parts, each with its own individual melodic importance within a work. The earliest known polyphony appeared in the 8th century. However, from the 9th to the 13th centuries, polyphony grew in style and popularity and evolved into church music, which was based on plainsong. Ars Antiqua is the time period from the mid-1100s to the end of the 1200s. This phrase means "The Old Art." This was a time during the Medieval Era when polyphony developed even further. Notre Dame Organum The Notre Dame organum developed shortly after the year 1150. In this form of polyphony, there were two parts sung by solo voices, alternating with sections of plainsong sung by a choir. Appearing for the first time was discant style. This style had sections in which the tenor part contained shorter and measured notes.
The polyphonic conductus was in wide usage during the first half of the 13th century. The tenor part of this musical form was composed, instead of borrowed from plainsong, as it was in organum. Additionally, the parts moved together rhythmically, and the piece was written for two to four parts. The polyphonic conductus was composed in non-liturgical or secular form. Around the year 1250, the motet became the main polyphonic form. It started to replace organum and conductus. A motet consisted of specific musical guidelines. A plainsong was sung by the tenor voice, and above it, two other parts were sung in faster-moving notes. It was written in either sacred or secular style (in Latin or in the vernacular) and usually was played in triple meter with clashes of intervals. Hocket was a form of polyphony that was often found in the music of the late 1200s to the 1300s. It was a technique that interrupted the melody line by frequently placing rests (which alternated between two voice parts) into the piece. Although not many works had this form during the Ars Antiqua stage of the Medieval Era, the rota still was present. It was a round or canon in which two or more parts carried the same melody at different times. The rondellus was a three-part, secular form, in which exchange occurred between the three different melodies. This polyphonic work involved all the parts starting together rather than starting consecutively. Each part then rotated the melody. The Ars Nova, or "The New Art," took place during the end of the Medieval era while foreshadowing some of the Renaissance trends that were to come. Written in two vocal parts, the madrigal was the first polyphonic form to appear in Italy. It had each stanza written in duple time and ended with a ritornello section in triple meter. The caccia was at its musical height from 1345 to 1370. It was the primary musical form that employed the canon within it. The canon was based on a continuous imitation of two or more parts.
The two upper parts were sung in strict imitation with long intervals between the two parts, while the third, lowest part was composed in slow-moving notes and was probably played on an instrument. This form came about after the madrigal and the caccia and originated as a dance song. The ballata had a sectional structure with refrains, called ripresa, sung at the beginning and end of each stanza.
<urn:uuid:ddffaf1f-c8f5-4035-b559-3588a19c6d81>
CC-MAIN-2013-20
http://library.thinkquest.org/15413/history/history-med-voc.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00042-ip-10-60-113-184.ec2.internal.warc.gz
en
0.984534
989
4.125
4
Dogon myth as presented in Marcel Griaule's "Dieu d'Eau - Entretiens avec Ogotemmêli" (1948) describes the origin of textile making as having taken place on the third day of creation, and thus ascribes this craft a deep and unique tradition among the Dogon. However, compared to other textiles of Mali, such as those of the Marka or the Peul, Dogon textiles are not exceptional. In comparison to these textile traditions the Dogon technical vocabulary is limited, implying that textile making was adopted into Dogon culture only recently. Close similarities are visible in technical and typological aspects to the textiles of the Peul (or Fulani) pastoralists. Consequently, while "Dieu d'Eau" describes Dogon myth and the origin, fabrication and understanding of textiles as deeply 'interwoven', there is very little actual evidence to support such claims. Traditional Dogon textiles are made of cotton, which is spun by the women and woven by the men into long narrow strips on double-heddle looms. These strips are sewn together to form cloths. Special to the Dogon are additions of small tassels to the textiles. Dogon textiles are either used in their original white colour, or dyed black, brown, or most commonly, indigo blue. Indigo is extracted from a plant (Indigofera tinctoria). The Dogon term for indigo is gara or gala, a word used throughout Senegambia from Mauretania to Burkina Faso. Indigo dye is particularly closely associated with the Soninké. Dogon textiles comprise blankets or covers and personal dress items. The traditional male dress of the Dogon, as apparent in early photographs, consisted of trousers and of large shirts of varying cuts, also often a head dress. The clothing was of an entirely white, blue or brown colour and was lacking woven ornamentation. The wrapper is a fundamental part of female dress among the Dogon. It is worn together with a simple wide sleeveless shirt.
In most of the Dogon settlement area the wrapper is of an indigo colour without or with only few and discrete patterns. As is known from finds in burial caves, the previous population of the Bandiagara escarpment, the so-called Tellem (11th-16th century AD), did not know the cotton wrapper (Bolland 1991). Tellem women wore skirts of vegetal fibres. Bolland 1991 Tellem Textiles. Archaeological finds from burial caves in Mali's Bandiagara Cliff. With contributions by R.M.A. Bedaux and R. Boser-Sarivaxévanis. Royal Tropical Institute, Amsterdam; Rijksmuseum voor Volkenkunde, Leiden; Institut des Sciences Humaines, Bamako; Musée National, Bamako. 1975 Recherche sur l'histoire des textiles traditionnels tissés et teints de l'Afrique occidentale. Compte rendu de la mission R. Boser et B. Gardi à travers le Nigeria, le Niger, la Haute-Volta, la Côte d'Ivoire, le Mali et le Sénégal (octobre 1973 à février 1975). Verhandlungen der Naturforschenden Gesellschaft in Basel 86, 1 et 2: 301-341. 1990-1991 Empty space: the architecture of Dogon cloth. Res 19-20: 162-177. Engelbrecht, B. & B. Gardi (éds.) 1989 Man does not go naked. Mélanges offerts à Renée Boser-Sarivaxévanis. Basler Beiträge zur Ethnologie 30. Basel, Wepf & Co. 2000 Le Boubou - c'est chic. Les boubous du Mali et d'autres pays de l'Afrique de l'Ouest. Basel, Christoph Merian Verlag. 1951 Le vêtement dogon, confection et usage. Journal de la Société des Africanistes 21: 151-162. Griaule, M. 1948 Dieu d'Eau. Entretiens avec Ogotemmêli. Paris, Les Éditions du Chêne. 1971 Iconologie des poulies des métiers à tisser dogon. Objets et Monde 11: 355-370. Dogon man's cap from the National Museum of Ethnology, Leiden, Netherlands. Dogon man's tunic (National Museum of Ethnology, Leiden, Netherlands). This garment is made of long narrow cotton strips sewn together and dyed with indigo. Dogon woman's wrapper (National Museum of Ethnology, Leiden, Netherlands). Folded narrow cotton band (National Museum of Ethnology, Leiden, Netherlands).
The long cotton strips are woven by Dogon men on shaft looms. They are usually sold folded or rolled up. Formerly, rolls of cotton strips were used as currency. Author(s): Text: Cornelia Kleinitz, abstract of B. Gardi (2004), Textiles Dogon, in R.M.A. Bedaux and J.D. van der Waals, Regards sur les Dogon du Mali. Leiden (Exhibition Catalogue) Date created: 2003-10-16 - Date modified: 2004-03-29
A kidney biopsy is usually done using a long thin needle put through the back (flank) into the kidney. This is called a percutaneous kidney biopsy. A tissue sample is taken and sent to a lab, where it is looked at under a microscope. The sample can help your doctor see how healthy your kidney is and look for any problems. The two kidneys are found on either side of the spine, in the lower back. They help the body balance water, salts, and minerals in the blood. The kidneys also filter waste products from the blood and make urine. A kidney biopsy may be done to check for kidney problems. It may also be done after other tests for kidney disease, such as blood and urine tests, ultrasound, or a computed tomography (CT) scan, show a kidney problem.

By: Healthwise Staff
Last Revised: September 25, 2012
Medical Review: E. Gregory Thompson, MD - Internal Medicine; Christopher G. Wood, MD, FACS - Urology, Oncology
- By 2008, half of all mortgages in the United States--28 million loans--were subprime or otherwise weak. - The QM rule takes the underwriting standards out of the hands of the lender and gives them to the government. - Political pressure to continue lending to borrowers with weak credit standing has trumped common sense underwriting standards. Despite the claim that it is “protecting consumers from irresponsible mortgage lenders,” the new Qualified Mortgage rule finalized in January by the Consumer Financial Protection Bureau turns out to be simply another and more direct way for the government to keep mortgage underwriting standards low. This sets the country up for a repetition of the mortgage meltdown of 2007 and 2008. Simply put, government housing policies, implemented by the Department of Housing and Urban Development (HUD), caused the 2008 financial crisis. Before 1992, the vast majority of mortgages in the United States were prime loans. Yet a 1992 law required the government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac—then the dominant players in the U.S. mortgage market—to purchase an increasing quota of loans that were made to borrowers at or below the median income in their communities. Finding prime loans among borrowers who were below the median income was difficult, especially when, by 2000, HUD had raised the quota from 30 percent to 50 percent. To meet this goal, Fannie and Freddie had to reduce their underwriting standards. In 1995, they were acquiring loans with 3 percent downpayments, and five years later they were advertising mortgages with no downpayment at all. The credit scores required of borrowers were also reduced. As a result, by 2008, half of all mortgages in the United States—28 million loans—were subprime or otherwise weak. Of these low quality loans, 74 percent were on the books of government agencies or government-regulated entities like the GSEs, showing clearly where the demand for these loans originated. 
When the housing bubble deflated in 2007, mortgages that should never have been made went into default in unprecedented numbers, further driving down housing prices, weakening financial institutions and causing the panic that we know as the financial crisis. Now along comes the QM rule, which is based on a wholly different narrative about the financial crisis. This view—repeated innumerable times in the media—is that the crisis was the result of inadequate regulation, private-sector irresponsibility and predatory lending. Founded on this story, the rule turns the usual lending process on its head, making the lender liable for various penalties—not just the loss on the loan—if it is ultimately determined that the borrower could not afford the mortgage. To avoid these penalties, which may include a defense to foreclosure, lenders must observe several rules. The loan must amortize principal in even monthly payments, may not result in a debt-to-income ratio greater than 43 percent, must have adequate income and assets documentation, and may not be priced at more than 1.5 percent above the prime mortgage rate at the time the loan rate is fixed. If all of these tests are met, and the lender concludes that the borrower can afford the loan, the lender gets “safe harbor” protection against the penalties and the right to call the loan a prime mortgage. By themselves, the rule’s draconian provisions would probably have addressed the problem of low underwriting standards. The penalties associated with making a loan that ultimately defaults would have reinforced the natural desire of lenders to make mortgages that pay off. After making the required determination that a borrower has the ability to pay under the rule, lenders would have added the other keys to a sound loan—a substantial downpayment and a good credit history. These are intended to assure that the borrower has skin in the game and is willing to pay. That’s where this alleged reform goes off the rails. 
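The rule's safe-harbor tests enumerated above can be expressed as a simple checklist. The sketch below is purely illustrative: the function name, parameter names, and simplified threshold handling are hypothetical, and this is not a legal or complete rendering of the regulation.

```python
# Illustrative sketch of the QM safe-harbor tests described in the text.
# All names and inputs are hypothetical; thresholds follow the article's
# summary: even amortization, DTI <= 43%, documented income/assets, and
# a rate no more than 1.5 percentage points above prime.

def qualifies_for_safe_harbor(dti_ratio, loan_rate, prime_rate,
                              amortizes_evenly, income_documented):
    """Return True if a loan passes all four tests sketched in the text."""
    return (
        amortizes_evenly                        # even monthly principal payments
        and dti_ratio <= 0.43                   # debt-to-income at most 43%
        and income_documented                   # adequate income/asset documentation
        and (loan_rate - prime_rate) <= 0.015   # priced <= 1.5% above prime
    )

# Example: a 45% DTI fails the debt-to-income test even though the
# rate spread (4.9% - 3.6% = 1.3%) is under the cap.
print(qualifies_for_safe_harbor(0.45, 0.049, 0.036, True, True))  # False
```

As the article goes on to note, this pass/fail structure is exactly what shifts underwriting judgment from the lender to whichever body sets the thresholds.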
Substantial downpayments and good credit histories are unpopular with community “activists,” realtors, homebuilders and other members of what we call the Government Mortgage Complex. They want continued lending to as many potential home buyers as possible, even if these borrowers don’t have the incomes, assets and credit histories to meet common sense underwriting requirements. This coalition has been effective in moving Congress in the past—and the Consumer Financial Protection Bureau in its recent rule—to encourage lending to borrowers whose credit positions are shaky. So the rule, in effect, takes the underwriting standards out of the hands of the lender and gives them to the government. If the automated underwriting systems of the GSEs or the FHA give the loan their stamp of approval—even if it is not ultimately guaranteed by these agencies—the loan is considered a prime loan, no matter what its quality. For example, a mortgage with a 3 percent downpayment, a 580 FICO score and a 50 percent debt-to-income ratio—a loan that would have been considered a subprime loan before the financial crisis—will now be marketed as prime if it is declared eligible for purchase by one of the GSEs or for insurance from FHA. Indeed, FHA approves such loans today. Because of their government backing, FHA and the GSEs have lower cost structures, which make it much easier for them to stay below the 1.5 percent cap on risky loans. Thus, originators will have competitive incentives to sell their loans to the GSEs and FHA rather than through private channels. Once again, this threatens the taxpayers with potential losses when these weak loans default. Thus, neither Dodd-Frank nor the new QM rule has changed anything significant. Political pressure to continue lending to borrowers with weak credit standing has trumped common sense underwriting standards. The only things missing are Chris Dodd and Barney Frank.

Peter J. Wallison is a senior fellow and Edward J. Pinto is a resident fellow at the American Enterprise Institute.