Dataset columns:

| column | type | range / values |
|---|---|---|
| text | string | lengths 174 – 640k |
| id | string | length 47 |
| dump | string | 17 classes |
| url | string | lengths 14 – 1.94k |
| file_path | string | lengths 125 – 142 |
| language | string | 1 value |
| language_score | float64 | 0.65 – 1 |
| token_count | int64 | 43 – 156k |
| score | float64 | 2.52 – 5.34 |
| int_score | int64 | 3 – 5 |
Background: Britain conquered Burma over a period of 62 years (1824-1886) and incorporated it into its Indian Empire. Burma was administered as a province of India until 1937 when it became a separate, self-governing colony; independence from the Commonwealth was attained in 1948. Gen. NE WIN dominated the government from 1962 to 1988, first as military ruler, then as self-appointed president, and later as political kingpin. Despite multiparty legislative elections in 1990 that resulted in the main opposition party - the National League for Democracy (NLD) - winning a landslide victory, the ruling junta refused to hand over power. NLD leader and Nobel Peace Prize recipient AUNG SAN SUU KYI, who was under house arrest from 1989 to 1995 and 2000 to 2002, was imprisoned in May 2003 and is currently under house arrest. In December 2004, the junta announced it was extending her detention for at least an additional year. Her supporters, as well as all those who promote democracy and improved human rights, are routinely harassed or jailed.
Latest Blogs from Myitkyina
- October 2nd 2011: Burmese Food/Myanmar Cuisine: Curry, Rice, Noodles...........Tea? (1,788 words, 9 photos)
- October 7th 2010: Four days on the Irrawaddy (1,468 words, 0 photos)
- June 14th 2007: Visa Run (135 words, 0 photos)
- January 12th 2007: Burmese daze (1,355 words, 35 photos)
id: urn:uuid:a6c878c4-8bbb-419d-8977-ce2b9bd569a6 | dump: CC-MAIN-2013-20 | url: http://www.travelblog.org/Asia/Burma/Northern-Burma/Myitkyina/ | file_path: s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00033-ip-10-60-113-184.ec2.internal.warc.gz | language: en | language_score: 0.957203 | token_count: 341 | score: 3.078125 | int_score: 3
Many solvents are used in organic synthesis, but workers receive higher exposures when these chemicals are used in open processes, e.g., as degreasers and as solvents for paints, dyes, and pesticides. [LaDou, p. 481] The highest exposures in the past occurred in dry cleaning, screen printing, rotogravure printing, industrial painting, the manufacture of glass-reinforced plastic, and tile fixing. [Reference #2]
Most industrial solvents are liquids at room temperature. They are used to disperse other substances into solution, e.g., in cleaning, degreasing, thinning, and extracting. Organic solvents include several classes of chemicals: hydrocarbons (aliphatic, alicyclic, and aromatic), petroleum distillates, alcohols, glycols, phenols, ketones, esters, ethers, glycol ethers, chlorinated hydrocarbons, and chlorofluorocarbons. [LaDou, p. 481-5] See the disease "Solvents, acute toxic effect."
CNS Solvent Syndrome
id: urn:uuid:91d2135d-3bbb-48c1-b9d5-691af7eeab01 | dump: CC-MAIN-2013-20 | url: http://hazmap.nlm.nih.gov/category-details?id=768&table=copytblagents | file_path: s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00034-ip-10-60-113-184.ec2.internal.warc.gz | language: en | language_score: 0.852012 | token_count: 281 | score: 3.21875 | int_score: 3
Progress in agriculture and rural development is defined as sustained increases in output and productivity that contribute to improved food security and poverty reduction.
Progress in health is defined as equitable, substantial and sustainable improvements in access to, participation in and quality of health services, leading to improved physical, mental and social wellbeing of the population.
Progress in governance is defined as improved functioning of rule-governed arrangements, providing incentives for the state to act in ways that promote the wellbeing of the population.
Progress in education is defined as significant improvements in access to and quality of education at primary and/or secondary levels.
Progress in economic conditions is defined as sustained periods of inclusive growth and reductions in income poverty that allow poor and non-poor people to contribute to, and share in, the benefits of economic growth and development.
Progress in environmental conditions is defined as improved enabling conditions for environmental management and governance and more equitable and sustainable access to ecosystem services across four domains: atmosphere, land, water and biodiversity.
Progress in water, sanitation and hygiene is defined as sustainable and equitable improvements in the coverage, access and quality of water, sanitation and hygiene services.
Progress in social protection is defined as reduced vulnerability to shocks and stresses (such as production failure, hunger, chronic illness and age) through social protection delivery.
id: urn:uuid:7e0a8f44-ee42-4293-9515-9da195bcef32 | dump: CC-MAIN-2013-20 | url: http://www.developmentprogress.org/progress-stories/mauritius%E2%80%99-sustained-progress-economic-conditions | file_path: s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00034-ip-10-60-113-184.ec2.internal.warc.gz | language: en | language_score: 0.943473 | token_count: 261 | score: 3.1875 | int_score: 3
A considerable literature has emerged recently on experiences with technologies, practices, and products that increase resource productivity and ecological efficiency, and thereby reduce the volume of resource input per unit of economic output. The ultimate hope is to shed light on ways in which economic growth and social security can be sustained while resource flows decline in developed countries and/or grow more slowly in developing countries. This literature cites macroeconomic trends with relative reductions in the intensity of resource use coupled with slight increases in absolute levels in the developed economies (Adriaanse et al., 1997). It deals with issues that are central to alternative development paths that are also discussed in the SRES (IPCC, 2000a) and Chapter 2. It also notes leapfrogging phases of technological development for developing economies (UNDP, 1998, p. 83). On the micro level, it identifies experiences with cleaner, more economical energy systems, and the potential for information technology to increase resource efficiency. In either case, authors uncover policy options that pertain mainly to supporting the proliferation of these trends. These options emerge from a broader conception of climate mitigation than has typically been captured in the energy supply and demand technologies represented in existing energy-economic models. Each option has the potential to reduce GHG emissions, but each needs to be carefully evaluated in terms of its impacts on economic, social, and biological systems. Moreover, each of these options needs to be evaluated alongside conventional energy supply and demand alternatives in terms of their impacts. Expanding the analysis of the set of available options in this way should make us better off, as some of the new options will be attractive upon further analysis, although others will not.
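The distinction between relative and absolute decoupling described above can be made concrete with a small numerical sketch. The growth rates below are illustrative assumptions, not figures from Adriaanse et al. (1997): when output grows faster than resource input, resource intensity (input per unit of output) falls even while absolute resource use still creeps upward.

```python
# Sketch of relative vs. absolute decoupling. Growth rates are
# hypothetical, chosen only to illustrate the pattern cited in the text:
# intensity declines while absolute resource use rises slightly.

def trajectory(years, gdp0=100.0, res0=50.0, gdp_growth=0.03, res_growth=0.01):
    """Yield (year, resource_use, intensity) with GDP growing faster
    than resource input -- relative but not absolute decoupling."""
    gdp, res = gdp0, res0
    for t in range(years):
        yield t, res, res / gdp
        gdp *= 1 + gdp_growth
        res *= 1 + res_growth

points = list(trajectory(25))
assert points[-1][2] < points[0][2]  # intensity falls (relative decoupling)
assert points[-1][1] > points[0][1]  # absolute use still rises
```

Absolute decoupling, the harder goal the literature points toward, would require the resource growth rate to turn negative while output growth stays positive.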
Many authors argue that progress in developed countries has been driven largely by the technologically based substitution of natural resources for labour. As a result, labour productivity has generally grown faster than resource productivity. Against the background of environmental scarcities, though, this pattern has and will continue to change so that innovation may increasingly be shifted away from labour-saving advances towards resource-saving technologies.
In a complementary strand of literature attention has focused on the greater scope for a transition in developing countries by decoupling investment from resource depletion and the destruction of ecological processes. More specifically, since the physical infrastructure in developing countries is still being designed and installed, they have a better opportunity to avoid the resource-intensive trajectories of infrastructural evolution adopted by developed countries (Shukla et al., 1998, p. 53; Goldemberg, 1998a). Specific examples cited in this context are efficient rail systems, decentralized energy production, public transport, grey-water sewage systems, surface irrigation systems, regionalized food systems, and dense urban settlement clusters. These can set a country on the road towards cleaner, less costly, more equitable, and less emission-intensive development patterns. The costs of such a transition are probably higher in places where considerable capital investments in infrastructures have already been made and where turnover is rather slow. For this reason, the timing of such choices is vital, as decisions about systemic technological solutions tend to lock economies onto a path with a specific resource and emission intensity.
In the context of climate policies, innovations in energy systems are of particular importance. Possible strategies advanced in the literature include a shift from expanding conventional energy supply towards emphasizing energy services through a combination of end-use efficiency, increased use of renewables, and new-generation fossil-fuel technologies (Reddy et al., 1997, p. 131). Developing countries that take advantage of these sorts of innovations could follow a path that leads directly to less energy-intensive development patterns in the long run and thereby avoid large increases in energy and/or GDP intensities in the short and medium term.
Box 1.4. The Brazilian Ethanol Programme
In 1974, Brazil launched a programme to shift to sugarcane alcohol (ethanol) as an automotive fuel, initially as an additive to gasoline in a proportion of about 20%. After 1979, pure alcohol-fuelled cars were produced, with the necessary technological adaptation of engines, through an agreement between the government and multinational car companies in Brazil. The conversion was driven primarily by tax policy and the regulation of fuel and vehicles. The relative prices of alcohol and gasoline were adjusted through Petrobras, the state-owned oil company. In 1981 the price of alcohol was set 26% below that of gasoline, although gasoline's production cost was lower than that of alcohol (Pinguelli Rosa et al., 1998).
The alcohol programme created more than 500,000 jobs in rural areas and allowed Brazil to reduce oil imports. Sales of new alcohol-powered cars grew to 30% of total car sales in 1980 and exceeded 90% from 1983 until 1987. Alcohol accounted for about 50% of car fuel consumption at that time. However, the sharp decline in world oil prices, along with deregulation in the energy sector, led to the abandonment of alcohol-fuelled cars. Even in 1995, though, avoided emissions through alcohol fuel use in Brazil were 24.3 MtCO2. The cumulative avoided emissions from 1975 to 1998 can be calculated as 385 MtCO2 (Pinguelli Rosa and Ribiero, 1998).
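The avoided-emissions figures in Box 1.4 come from a substitution calculation: ethanol displaces an energy-equivalent amount of gasoline, whose combustion CO2 is then counted as avoided. The sketch below shows the shape of that calculation; every input (ethanol volume, energy-equivalence ratio, lifecycle credit) is an illustrative assumption of ours, not a parameter from Pinguelli Rosa and Ribiero (1998).

```python
# Back-of-envelope estimate of CO2 avoided by substituting ethanol for
# gasoline. All parameter values are illustrative assumptions.

GASOLINE_KGCO2_PER_L = 2.3    # typical gasoline combustion factor (kg CO2/L)
ETHANOL_PER_GASOLINE_L = 1.5  # ethanol litres per gasoline litre replaced
                              # (ethanol has roughly 2/3 the energy density)

def avoided_mtco2(ethanol_litres, lifecycle_credit=0.9):
    """CO2 avoided (Mt) by burning `ethanol_litres` of ethanol instead of
    energy-equivalent gasoline. `lifecycle_credit` discounts the fossil
    inputs to cane farming and distillation (assumed value)."""
    gasoline_displaced = ethanol_litres / ETHANOL_PER_GASOLINE_L
    kg_avoided = gasoline_displaced * GASOLINE_KGCO2_PER_L * lifecycle_credit
    return kg_avoided / 1e9  # kg -> Mt

# A hypothetical 12 billion litres of ethanol in one year:
print(round(avoided_mtco2(12e9), 1))  # -> 16.6
```

With these assumed inputs the result lands in the tens of Mt CO2 per year, the same order of magnitude as the 24.3 Mt reported for 1995; summing such annual figures over 1975-1998 is how a cumulative total like 385 Mt is obtained.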
In many places, renewable energy technologies seem to offer some of the best prospects for providing needed energy services while addressing the multiple challenges of sustainable development, including air pollution, mining, transport, and energy security. For instance, 76% of Africa's population relies on wood for its basic fuel needs, but research and policy design targeted at improving sustainability have been largely absent. Solar energy has a significant potential in Sahelian Africa, but slow technological progress, high unit costs, and the absence of technology transfer have retarded its installation. The Brazilian ethanol programme to provide automotive fuel from renewable resources (see Box 1.4) is another example. Throughout the developing world the exploitation of hydro potential also remains constrained because of high capital requirements and environmental and social concerns generated by inappropriate dam building.
Development of so-called appropriate technologies could lead to environmental protection and economic security in developing countries. The label 'appropriate technologies' is used because they build upon the indigenous knowledge and capabilities of local communities, produce locally needed materials, use natural resources in a sustainable fashion, and help to regenerate the natural resource base. They may enable developing countries to keep an acceptable environmental quality within a controlled cost (Hou, 1988). Low-cost but resource-efficient technologies are of particular importance for the rural and urban poor (see Box 1.5). There is a latent demand for low-cost housing, small hydropower units, low-input organic agriculture, local non-grid power stations, and biomass-based small industries. Sustainable agriculture can benefit both the environment and food production. Biomass-based energy plants could produce electricity from local waste materials in an efficient, low-cost, and carbon-free manner. Each of these options needs to be evaluated alongside conventional energy supply and demand alternatives (see Chapter 3) in terms of their impacts and contribution to sustainable development. Expanding the analysis of the set of available options in this way should make us better off, as some of the new options will be attractive upon further analysis, although others will not.
It is important, in light of these examples, to realize that the results of greater resource efficiency differ according to the performance level of the technology under consideration. Technologies devised for high eco-efficiency and intermediate performance levels consume, for example, lower absolute amounts of resources than comparable technologies designed for high eco-efficiency and high performance levels. By design, performance levels can vary in such dimensions as level of power, speed, availability of service, yield, and labour intensity. Indeed, intermediate performance levels are often desirable because of their higher employment impact, lower investment costs, local adaptability, and potential for decentralization. For this reason, technologies that combine high eco-efficiency with appropriate performance levels hold an enormous potential for improving people's living conditions while containing the use of natural resources and GHG emissions.
Changing macroeconomic frameworks is often considered indispensable, in both developed and developing countries (Stavins and Whitehead, 1997), to bringing economic rationality progressively in line with ecological rationality. Economic restructuring and energy-pricing reforms both complement and are a prerequisite for the success of many environmental policies (Bates et al., 1994; TERI, 1995). As long as natural resources, including energy, are undervalued relative to labour, the tendency will be to substitute the cheaper factor for the more expensive one. Giving a boost to efficiency markets requires, first of all, the elimination of environmentally counterproductive subsidies (at least over the medium-to-long term), such as those on fossil fuels, motorized transport, and pesticides, as well as concessions for logging and water extraction (Roodman, 1996; Larraín et al., 1999). Reform of environmentally destructive incentives would remove a major source of price distortions. Finally, shifting the tax base gradually from labour to natural resources in a revenue-neutral manner could begin to rectify the imbalance in market prices (European Environment Agency, 1996; Hammond et al., 1997). A more extensive discussion of eco-taxation, reporting a wide-ranging debate, is given in Chapter 6 of this report.
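The revenue-neutral tax shift described above reduces to a simple balancing condition: the resource tax must raise exactly the revenue forgone by cutting the labour tax. A minimal sketch, with all magnitudes hypothetical and tax bases held static (i.e., ignoring behavioural responses):

```python
# Revenue-neutral shift of the tax base from labour to natural resources.
# All magnitudes below are hypothetical; static tax bases are assumed,
# so behavioural responses to the new prices are ignored.

def revenue_neutral_resource_rate(labour_base, old_rate, new_rate, resource_base):
    """Resource tax rate that replaces the revenue lost by cutting the
    labour tax from old_rate to new_rate."""
    lost_revenue = labour_base * (old_rate - new_rate)
    return lost_revenue / resource_base

# Hypothetical economy: labour tax base 1000, rate cut from 40% to 35%,
# taxable resource use of 250.
rate = revenue_neutral_resource_rate(1000, 0.40, 0.35, 250)
print(round(rate, 4))  # -> 0.2
```

In practice the bases are not static: the point of the shift is precisely that the new relative prices reduce resource use over time, so rates would need periodic rebalancing to stay revenue-neutral.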
Box 1.5. Resource-efficient Construction in India
Recent analysis shows construction-sector activities to be major drivers of Indian GHG emissions. In addition, conventional building costs place traditional construction beyond the means of an increasing fraction of rural families. A new building technology developed by an Indian non-profit organization, Development Alternatives, reverses this trend. This technology uses hand-powered rams to shape compressed earth into strong, durable, weather-resistant but unbaked bricks. The ingredients for the bricks include only locally available materials, mostly soil and water.
Building new residential and commercial structures with these rammed-earth bricks creates rural jobs and delivers structurally sound buildings with high thermal integrity and few embodied emissions of GHGs. As a result of their inherently high thermal mass, these new buildings easily incorporate passive solar design for heating and cooling. Since they use little purchased input besides human labour, their cost is well within the range of poor families.
id: urn:uuid:6ae84c15-bf3e-44ee-b47a-280c161fb4c8 | dump: CC-MAIN-2013-20 | url: http://www.grida.no/climate/ipcc_tar/wg3/062.htm | file_path: s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00034-ip-10-60-113-184.ec2.internal.warc.gz | language: en | language_score: 0.931028 | token_count: 2,065 | score: 2.65625 | int_score: 3
Source: Millennium Project | January, 2005
This report outlines the role that science, technology and innovation can play in implementing the Millennium Development Goals (MDGs). It draws from lessons learned over the past five decades, and describes actions needed to help achieve the MDGs through technological innovation, including building scientific infrastructure, investing in education and promoting business activities in science and technology.
The report acknowledges three main actors in technological innovation: governments, academic institutions and private enterprise. It argues that they must work together to improve the policy environment, technological infrastructure and capacity-building in developing nations. It suggests that global partnerships, advising policymakers and good governance should be encouraged, and points out that the diversity of political environments and resources means that countries should not have a one-size-fits-all approach to policy development.
Source: UNU-MERIT | June, 2011
This paper describes two case studies of smallholder farms in South Africa to assess the processes involved in agricultural innovation carried out jointly with farmers. It highlights the importance of experimentation and cooperation for cash crop and subsistence farmers, and reviews current policies to evaluate how grassroots innovation is being supported in South Africa.
The paper points to inadequate policy support for grassroots innovation. It outlines the characteristics of innovation systems including social contexts, learning cycles and self-reflection, and discusses intellectual property rights. The authors identify triggers for innovation, including the potential to cut down on labour, and suggest that policymakers and local communities need to engage in cooperative activities to create an enabling environment for grassroots innovation. Policy suggestions include creating links between formal and informal research and viewing collaboration as a key indicator of success.
id: urn:uuid:628e1936-9952-41ef-abcd-86ee39ef10f9 | dump: CC-MAIN-2013-20 | url: http://www.scidev.net/en/supporting-grassroots-innovation/key-documents/policy-papers-and-regulatory-issues/ | file_path: s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00034-ip-10-60-113-184.ec2.internal.warc.gz | language: en | language_score: 0.923228 | token_count: 337 | score: 2.703125 | int_score: 3
In every major aspect of human variation we see race, and always the same races. Races are not arbitrary categories; the more we study human variation, the more consistent races become.
Very, very nice talk. E/PO is such a difficult thing to do.
"the more we study human variation, the more consistent races become". I suppose to avoid the conotations of the word 'race' we could use 'subspecies'. But that introduces more problems because many associate the prefix 'sub' with 'inferior'. Perhaps we should just refer to 'geographically distinct populations'.
I suppose to avoid the connotations of the word 'race' we could use 'subspecies'. But that introduces more problems because many associate the prefix 'sub' with 'inferior'. Perhaps we should just refer to 'geographically distinct populations'. Actually, race is an unnecessary term, as subspecies already denotes the same meaning in a more unambiguous way and, unlike race, is used not just for humans but also for other animal species. The term 'geographically distinct population' is too long and thus necessitates abbreviations like GDP. Also, we do not need to create a new category just for humans. The subspecies category we apply to other animal species applies equally to humans. So subspecies is the best term to use for human macro groupings.
But that introduces more problems because many associate the prefix 'sub' with 'inferior'. Irrelevant, as everyone either belongs to a subspecies or is a hybrid of two or more subspecies.
"The subspecies category we apply to other animal species equally applies to humans". I tend to agree. However the fact that human subspecies have become more mobile than are most other subspecies we have more clinal variation in our species than occurrs in most other species. That clinal variation is far from unknown in other species of course, but it does mean many people are able to claim that human subspecies are somehow different from other subspecies.
I tend to agree. However, because human subspecies have become more mobile than most other subspecies, we have more clinal variation in our species than occurs in most other species. That clinal variation is far from unknown in other species, of course, but it does mean many people are able to claim that human subspecies are somehow different from other subspecies. Subspecies aren't pure categories in whatever species they are found. Almost every subspecies (though not necessarily every individual member) has some admixture from other subspecies. In contact zones between different subspecies, admixture rises to high levels, and we classify such individuals and groups as hybrids of different subspecies. To give examples from modern humans: there is a transition zone in the Sahara Desert where Caucasoid-Negroid hybrids dwell; the Ural/West Siberia area of Russia (I am only referring to the indigenous non-Russian people there) and Central Asia are transition zones where Caucasoid-Mongoloid hybrids dwell; another transition zone lies between Greater Iran and the Indian Subcontinent where Caucasoid-Dravidoid hybrids dwell (I am using Dravidoid as a subspecies category for the vast majority of Subcontinentals); and there are many people in South and Central America today who are hybrids of two or more subspecies as a result of European colonialism and African slavery.
I agree 100%. But persuading others is almost impossible. Maju, for one, is very reluctant to see things in such a manner.
Mr Dienekes, you have authorized my last two messages about this issue; please publish them. I thank you in advance.
horacioh, you have been told repeatedly that you are banned, so no more messages by yourself will be accepted on the blog. All your blog posts and e-mails to my address immediately go to the spam folder.
North Americans are a variably mixed people, carrying haplotypes from Europe (exclusively or not), Africa, indigenous America, and Asia as a "result of the European colonialism and African slavery", consolidated progressively and more extensively day by day across the whole population. The southern end of America, mainly Argentina (with more than 97.5% European Y markers, the rest from indigenous males) and, to a lesser extent, Uruguay, is populated almost entirely by people of European origin, mixed to varying degrees with mestizos (European and indigenous, the latter mainly through maternal mtDNA), and resembling Europeans alone. Races are genetically and scientifically false and nonexistent, all the more so given the interbreeding history of the world across all times from prehistory to now. There are no races, but there are ethnic groups (populations established in a place for a while), as the increasingly well-supported Out-of-Africa theory shows. We speak of haplotype frequencies and the haplotype profiles of populations, which derive from haplogroups and familial pedigrees bound together by geographical, historical, linguistic, cultural, and religious ties. Therefore, in humans the terms "subspecies" and "races" have no scientific basis. Dr. Claudia Manzini, Geneticist, PhD
Claudia, what has happened in the New World in the last 500 years is irrelevant for the rest of the world and for the status of the extant subspecies (=races) of humans. The human variation cannot be explained just by using the ethnicity category; that is like focusing just on a piece of a puzzle and neglecting the whole picture. In reality, many human geneticists already use subspecies categories for modern humans, even if unwittingly, by grouping modern humans based on geographical categories such as West Eurasians, East Eurasians, South Asians, Amerindians, Sub-Saharans that correspond to human subspecies. Genetics has empirically shown more than other disciplines that modern human subspecies are real categories that have real correspondences in human variation.
"We say haplotypes frequencies and the haplotype profiles of populations that come from haplogroups and familiar pedigrees related within geographical, historical, Linguistical and cultural –also creeds- binds. Therefore, in Humans the term 'Sub- Species' or 'races' have not scientific basis". That same argument could be used to claim there is no such thing as 'subspecies' in many other species as well. For example American bison have different mtDNA from European bison. European bison mtdna is much closer to cattle mtDNA than is American. Similarly with mallard ducks. European mallards share mtDNA with Asian spot-billed ducks whereas American mallards share mtDNA with American black ducks. No-one would claim anything other than that the three were separate species, but also few would argue than the two mallard populations are separate subspecies. Some populations of wolves and coyates share mtDNA but few would regard them as anything other than distinct species. There are numerous other examples. So haplotypes can be shared by species or subspecies. Consequently haplogroups are insufficient for distinguishing species or subspecies.
Yes, Terry. Extant human subspecies are in the same degree of variation as many other animal subspecies. The only reason that comes to my mind for why extant human subspecies are denied by the biological consensus is ideological unwillingness to make biological divisions between modern humans. Making such a division is considered something "evil". Sure, there are some issues about extant human subspecies that are open to dispute, such as the status of South Asians and whether East Eurasians and Amerindians are related but separate subspecies or parts of one and the same subspecies (the same dispute can be had for Palaeo-Africans and Neo-Africans, although they are more likely to be separate subspecies), but the main divisions are unchallengeable. Modern humans must have at least 4 subspecies (the optimal number is more than that).
"Modern humans must have at least 4 subspecies (the optimal number is more than that)". I think we could get away with five. Two African types, 'Khoisanoid' and West African. Other Africans are basically a hybrid between these two with some additions from outside Africa. The other three are found at the geographic margins of the ancient human geographic range: 'Caucasian', Northeast Asian and 'Australoid. Again all other populations are basically a hybrid between one or more of these. "The only reason that comes to my mind about why extant human subspecies are denied by the biological consensus is ideological unwillingness to make biological divisions between modern humans". And ther fact that the divisions can be only very ill-defined. The hybrid zones or clines between the five basic groups are very gradual, usually. There is a fairly abrupt division between the Papuans and the SE Asian hybrid zone for example. "there are some issues about extant human subspecies that are open to dispute, such as the status of South Asians" I would guess that South Asians have a basis of 'Australoid overlain by later 'Caucasian' and Northeast Asian immigrants. "whether East Eurasians and Amerindians are related but separate subspecies or parts of the one and the same subspecies" The Amerinds seem to have some 'Caucasian' element, probably represented by Y-hap Q. The mtDNAs are mainly Northeast Asian, so they are yet another hybrid between two of the basic human 'subspecies'.
Terry, you seem to be too purist when it comes to defining subspecies. As I wrote above, subspecies aren't necessarily pure categories. This is true for other animal subspecies and also for human subspecies. But of course there is a limit to the allowable admixture from other subspecies in a subspecies; beyond that limit, individuals and populations are no longer defined as members of a subspecies but as hybrids of two or more subspecies. In short, the hybrid zones between human subspecies are much more limited in size and scope than you suggest. The origin of Y-hap Q is open to dispute. In Eurasia some clades of it seem to be Caucasoid (especially those in West Asia and Europe) while most of its clades seem to be Mongoloid, so Amerindian clades are more likely to be Mongoloid. Autosomally, also, Amerindians are WAY closer to East Eurasians than to West Eurasians, so they can be defined as Mongoloid. I think the most contentious area is South Asia. It can be defined as a hybrid zone between Caucasoids and an ancient subspecies (not Australoid, according to genetics) that is somewhat closer to Mongoloids than to Caucasoids. The 4-subspecies system isn't far from optimal, actually. I name the 4 subspecies so: Caucasoid, Mongoloid, Negroid (including all Sub-Saharans) and Australoid. For the sake of practicality, I favor categorizing the overwhelming majority of South Asians as a single subspecies of their own, which I call Dravidoid. So we come up with 5 extant subspecies of humans.
Actually, much (not necessarily most) of the Horn of Africa is a hybrid (transition) zone.
"Autosomally also Amerinidans are WAY closer to East Eurasians than to West Eurasians, so they can be defined as Mongoloid". I agree that Amerinds look a bit like a branch of the Mongoloid subspecies, but by no means entirely. As 'splitters' we could nominate them as yet another subspecies but to me it seems we can easily see them as a hybrid between the Mongoloid and Caucasian subspecies. "The origin of the Y-hap Q is open to dispute. In Eurasia some clades of it seem to be Caucasoid (especially those in West Asia and Europe) while most of its clades seem to be Mongoloid, so Amerindian clades are more likely to be Mongoloid". I think it unlikely that Q was originally associated with the Mongoloid subspecies. Its closest relation is R, almost certainly and India haplogroup that expanded into Central Asia via Iran and Afghanistan. To me it looks most likely that Q formed somewhere in that region from the accompanying ancestral haplogroup: P. With the possible exception of X it seems that none of the American mtDNA lines accompanied Q at that early stage. The American mtDNA lines are all East Asian: C, D, B and A. Those haplogroups must have been picked up along the way as the wave advanced. These East Asian femele lines provided the genetic drift towards the Mongolian subspecies as Y-hap Q moved northwest to America.
A few more thoughts, with apologies to Dienekes. This may take a couple of posts. I suggest the American mtDNAs were picked up in the above order. It's possible that A didn't even come in with Q. It may have been the last in, by land, accompanied by the Mongoloid subspecies' haplogroup C3b. The two haplogroups may form the basis of the most Mongoloid-looking Americans, the Na-Dene-speaking people (although the haplogroups, especially mtDNA A, spread further than the languages). And some Americans along the Northwest coast look to be almost part of the Polynesian subspecies. The haplogroup most definitely circum-Pacific is mtDNA B: one subclade of B4b (B2) in America and one subclade of B4a (B4a1a1a) in Polynesia, opposite ends of a coastal expansion around the Pacific from somewhere between the South China Sea and the Sea of Japan. Possibly the B4a subspecies in the south and the B4b subspecies in the north. The other two haplogroups, C and D, seem to have originally had a home in China as far north as Mongolia. And I'm sure they were the first mtDNA haplogroups to reach America. The N-derived haplogroups arrived later. So I suggest the Americans are basically a hybrid between two subspecies. That takes care of the two North Eurasian subspecies. Any comments?
"I think the most contentious area is South Asia. It can be defined as a hybrid zone between Caucasoids and an ancient subspecies (not Australoid according to genetics) that is somewhat closer to Mongoloids than to Caucasoids". We could quite validly claim two subspecies to the southeast beyond Eurasia, the region of the Australoids. It is generally easy to tell an Australian Aborigine apart from a New Guinea Melanesian. Australians have wavy hair and Melanesians have tightly curled hair. It is not the same as African 'peppercorn' hair though. It doesn't break off, and so makes perfect 'Afros'. And just as Y-hap Q may not have originally been associated with the Mongoloid subspecies, so Y-hap C may not have been either. Its related haplogroups C2 and C4 are present in southern Wallacea and in Australia respectively. But apart from (probably Austronesian) C2a, Y-hap C is not present in New Guinea. There we find mainly F-derived Y-haps: M, S and K (although K2 is also present in Northwest Australia). In further support of the two-subspecies idea we even have reasonably separate mtDNAs. Australia has N-derived haplogroups S, N13 and N14 as well as M-derived haplogroups M14, M15 and M42'74. New Guinea has M-derived haplogroups M27, M28 and M29'Q, but its only N-derived haplogroup (apart from apparently Austronesian B4) is R-derived P. Northwest Australia even has representatives of this haplogroup. So it looks as though we may have two subspecies in the region. One, or more probably both, of the Australian and Melanesian subspecies must have come from mainland SE Asia. But the Mongoloid subspecies has subsequently spread over and hybridised with these subspecies on the mainland. And eventually this mixture took off into the Pacific, where it formed the Polynesian subspecies. But this last subspecies is also easily explained as being a hybrid between at least two other subspecies.
So the eastern element of the Dravidoid subspecies could easily have been brought in by either of the two (or three) eastern subspecies, but mainly by elements of the Mongoloid. The Dravidoid subspecies is most easily explained as being a hybrid between eastern and western subspecies, Caucasian and Mongoloid/Australoid. So the Americans, the Polynesians and the Dravidoids are all hybrids between other subspecies. I'd bet a lot of money that the 'original' subspecies they were formed from were themselves formed from hybrids between previous subspecies of H. erectus. Any thoughts on the matter?
Terry, even if Y-hap Q was originally Caucasoid, I think its varieties in Siberia and East Asia were already Mongoloidized before reaching the Americas. Also, I tend to see the divergence of Asian Mongoloids and Amerindians more as Asian Mongoloids deviating from the original Mongoloid type than the other way around. So I see Amerindians closer to the original Mongoloid type than Asian Mongoloids are (this is the position of many biological anthropologists). Genetics supports neither an Australoid nor a Mongoloid nor an Australoid-Mongoloid origin scenario for the ASI part of South Asians. Any connection of ASI with Australoids and Mongoloids should be distant enough to separate it from both as a distinct subspecies. I think a part of Southeast Asia, before being swarmed by Mongoloids, was a transition zone between ASI and Australoids.
"So I see Amerindians closer to the original Mongoloid type than Asian Mongoloids are (this is the position of many biological anthropologists)". I've often seen that claimed. However it doesn't make sense. Presumably by the time people were able to enter America the population in northeast Asia was already fairly substantial. What on earth could have led to a widespread change in just that northeast Asian population towards a single particular characteristic? It implies a very strong selection pressure in the northeast Asian population that had not existed earlier, and had no effect on human populations anywhere else. "Genetics supports neither an Australoid nor a Mongoloid nor an Australoid-Mongoloid origin scenario for the ASI part of South Asians". There is a reasonable amount of Y-hap O in South Asia, presumably representing a Mongoloid input. And some mtDNA M haplogroups are almost certainly back-migrations to India from further east: D, M8/C, M9/E, M10, M12'G. And others could be: M13'46'61, M49, M50, although they may simply have originated in Northeast India and subsequently spread into East Asia. "I think a part of Southeast Asia, before being swarmed by Mongoloids, was a transition zone between ASI and Australoids". Almost certainly so. And Y-hap MNOPS is almost certainly SE Asian in origin, so P/R/Q must have carried some SE Asian/Australoid genes with it as it moved back west through India. And mtDNA R looks very likely to have originated in SE Asia too, so that provides more than enough SE Asian/Australoid genetic influence in South Asia. So there is actually no shortage of genetic support for movement into South Asia from further east, both Mongoloid and Australoid. "Any connection of ASI with Australoids and Mongoloids should be distant enough to separate it from both as a distinct subspecies". And that introduces the old problem of 'splitters' versus 'lumpers'. How different does a population have to be before we classify it as a 'subspecies'?
Terry, I don't have a strong opinion regarding the origin of the Mongoloid subspecies. But I don't see a strong sign of a Caucasoid element in unadmixed Amerindians; they seem to be just a variety of the Mongoloid subspecies. Asian Mongoloids only gained high population densities with the Neolithic expansions, so they could have evolved in significant ways prior to them. Genetically, ASI are distinct enough from Mongoloids and Australoids to be classified as a different subspecies. On the other hand, Andaman Islanders seem to be genetically close enough to ASI to be classified as belonging to the same subspecies as ASI.
The purpose of this project is to establish an ecosite evaluation of the study area as a baseline for future monitoring of the effects of military training in the area. The project was initiated by the Department of Defense and is headed by the USDA from the Jornada Experimental Range office at New Mexico State University, in conjunction with Utah State University. The project area has historically been used for grazing, with little or no development and no impact on the landscape other than from livestock and wildlife. The area and its livestock have been managed by the BLM for the last few decades. The military is now expanding training operations into the area to include field exercises by troops on the ground. They will conduct operations by foot and vehicle in open ground areas. Training will also include the construction of temporary encampments that will result in high-traffic areas. Future monitoring of the site will track possible changes to vegetation and soil conditions.
The location of this site is Otero Mesa, New Mexico. The mesa is situated at the northern extreme of Fort Bliss, 22 miles east-northeast of Orogrande, New Mexico. The mesa lies at an elevation of ≈5,100 ft, approximately 900 ft above the surrounding basin floor. The area is accessible by route 506 from state highway 54. The generalized landform of the project area is alluvial fans/remnant fans in the upland, which give way to lowlands of basin floor with limited drainage. The soil temperature regime is mesic, and the soil moisture regime is aridic bordering on ustic. The parent material for this area is limestone-derived alluvium.
One hundred sample locations were chosen with conditioned Latin hypercube sampling based on environmental covariates, including but not limited to topographic, geologic, and remotely sensed spectral data. At each sample location the genetic horizons were sampled and described to a depth of one meter unless lithic contact or a limiting layer was encountered. The pedons were classified to the family level using field sampling techniques. The ecosite was evaluated to state and phase from the soil type and vegetative cover recorded on site.
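Conditioned Latin hypercube sampling is normally run as a full optimization over covariate strata (often by simulated annealing); the sketch below is only a simplified, hypothetical stand-in — a greedy random search over candidate subsets of synthetic covariates — and not the actual procedure or data used in this project.

```python
import numpy as np

def clhs_sketch(covariates, n_samples, n_iter=2000, seed=0):
    """Greedy random-search approximation of conditioned Latin hypercube
    sampling: keep the candidate subset whose covariate values best fill
    one quantile stratum per sample for every covariate."""
    rng = np.random.default_rng(seed)
    n_points, n_cov = covariates.shape
    # Quantile edges defining n_samples strata per covariate.
    edges = np.quantile(covariates, np.linspace(0, 1, n_samples + 1), axis=0)

    def fill_error(idx):
        err = 0
        for j in range(n_cov):
            counts, _ = np.histogram(covariates[idx, j], bins=edges[:, j])
            err += np.abs(counts - 1).sum()  # ideal: one sample per stratum
        return err

    best_idx, best_err = None, np.inf
    for _ in range(n_iter):
        idx = rng.choice(n_points, n_samples, replace=False)
        err = fill_error(idx)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err

# Toy grid of two synthetic covariates (e.g., elevation, a spectral index).
rng = np.random.default_rng(1)
cov = np.column_stack([rng.normal(5100, 50, 500), rng.uniform(0, 1, 500)])
idx, err = clhs_sketch(cov, n_samples=10)
print(len(idx), err)
```

A real run would draw the covariates from raster layers and use many more iterations or a proper annealing schedule in place of the random search.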
At the completion of this phase of the project we will be able to predict soil distribution using random forest classification and produce a preliminary predictive map that encompasses soil and ecosite type. The preliminary prediction is subject to change with results from ongoing laboratory analysis. Currently, the majority of soils sampled fall into a mixed mineralogy class, but laboratory analysis could easily place some soils into a carbonatic classification.
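The prediction step can be sketched as follows, assuming scikit-learn is available; the soil classes ("mixed", "carbonatic"), covariates, and data here are invented for illustration and are not the project's actual model or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 100 hypothetical sampled pedons: covariate values plus a
# field-assigned mineralogy class (invented rule, for illustration only).
X_train = rng.random((100, 3))  # e.g., slope, curvature, a band ratio
y_train = np.where(X_train[:, 0] > 0.5, "carbonatic", "mixed")

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Predict over the full covariate grid to build the preliminary map.
X_grid = rng.random((1000, 3))
soil_map = clf.predict(X_grid)
print(np.unique(soil_map))
```

In practice the grid rows would come from the same covariate rasters used for sampling, and the predicted classes would be written back out as the preliminary soil/ecosite map.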
The big question with the Solar Impulse, an aircraft designed to fly wholly using solar power, is whether or not it'd be able to continue to operate at night. Well, turns out it can, as the Solar Impulse landed after successfully completing a 26-hour test flight.
The plane itself is something of an oddity, so don't expect it to replace the jets you see in the sky every day just yet. Its lengthy 207-foot wingspan flops around a bit, so helpers had to hurry out as it landed, making sure that the wings didn't scrape the ground. It also only seats one, and needs pretty much the rest of its body for the 12,000 solar cells that allowed it to get through the night.
It wasn't a cakewalk for test pilot Andre Borschberg, either. He was stuck in the craft for 26 long hours in a cockpit the size of a bathtub, and had to put up with freezing temperatures during the night.
Still, it's an important milestone for the Solar Impulse. What's next is even more ambitious: flying around the entire world, showing that the craft can recharge during the day and last all night.
Solar passenger planes may not be landing at airports any time soon, but, for the crew of the Solar Impulse — and for the rest of us — it's an impressive display of how far you can stretch today's greener technologies.
By Bill Robinson
Senior News Writer
In 1775, Daniel Boone and 30 axmen marked a trail from the Watauga Settlements on the Holston River in East Tennessee through the Cumberland Gap to what would be the site of Fort Boonesborough on the Kentucky River.
Called Boone’s Trace because it was not a road, the trail generally followed the migratory path used by buffaloes and American Indians.
When Kentucky was admitted to the Union as the 15th state 17 years later, an estimated 200,000 settlers had taken the trail to the Bluegrass region.
The Wilderness Road, over which wagons could pass, was cut in 1796. When the automobile became the principal mode of transportation in America, the federal highways known as US 25 and 25E in Kentucky still generally followed the route of Boone and the settlers who followed him.
By the 1970s, however, even those highways became mostly local roads as Interstate 75 became the principal north-south route from the Great Lakes to Florida and the Gulf of Mexico.
Riding on rails and then concrete and asphalt, the modern age has all but forgotten Boone’s Trace, and soon no trace of it may be left.
Two physicians, one from Lexington and the other from Nashville, who belong to the Boone Society are working to change that.
Dr. Sam Compton of Nashville, national president of the Boone Society, and Dr. John Fox of Lexington conducted a meeting in Richmond on Wednesday night to share their interest in preserving Boone’s Trace and turning it into an avenue of economic development, education and recreation along the corridor.
They earlier conducted a similar meeting in London.
The Richmond gathering was attended by 28 people, with most of them coming from outside Madison County.
In 1915, Kentucky members of the Daughters of the American Revolution erected 14 markers at strategic points along the trace, Compton said.
One can be found in front of Boone Tavern in Berea and another sits on the courthouse lawn in Richmond.
In 1942, members of the DAR in Laurel County erected another seven markers there.
Properly marked and promoted, Boone’s Trace and the associated history could become “an economic engine” for the 120-mile corridor stretching from Martin’s Station, Va., just east of Cumberland Gap, to Fort Boonesborough State Park in Madison County, said author K. Randell Jones.
His book, “In the Footsteps of Daniel Boone,” lists every highway marker in the nation that documents Boone’s journeys.
An example of how a historical trail can be preserved and promoted for cultural and economic gain is more than 30 years old, Jones said.
The Overmountain Victory National Historic Trail, which marks the route settlers took from Virginia, Tennessee, North Carolina and South Carolina to the Battle of Kings Mountain, fought during the American Revolution, was completed in 1980 on the battle’s 200th anniversary.
It involves more than 215 miles of trails recognized by Congress with portions maintained by the National Park Service.
Boone’s Trace passes through the Cumberland Gap National Historic Park and three Kentucky state parks, Pine Mountain in Bell County, Levi Jackson in Laurel County and Fort Boonesborough in Madison County, its northern terminus.
The directors of the Cumberland Gap and Levi Jackson parks are interested in the trace project, Fox said.
Scott New, who has portrayed Daniel Boone in several documentaries and re-enactments, including several events at Fort Boonesborough, also attended Wednesday night’s meeting and expressed his support.
A grant to resume an archeological dig at the fort’s site recently was awarded, New said.
A representative of the Kentucky tourism department also attended the meeting as did eight members of the Rockcastle County DAR chapter, four members of the Society of Boonesborough and the Madison County Historical Society.
Richmond City Commissioner Donna Baird, who has been a volunteer for the Richmond Tourism Commission since before she was elected to the city commission, also attended.
Getting state and local governments, chambers of commerce and other local groups involved, will be key to preserving and promoting Boone’s Trace, Fox said.
For more details about the Boone’s Trace project, e-mail Compton at firstname.lastname@example.org.
Tuesday, March 13, 2012
Happy Pi Day! Home school resources
Happy Pi Day! It's not a day to eat pie....although you can. It's a day to remember that wonderful little number we all learned (or tried to learn) in school. Pi day is celebrated on March 14 every year (get it? 3.14)
How many digits of pi do you remember?
3.1415926535......and so on
Do you remember the first time you heard a teacher say "Pie are square" in math class and were totally confused yet craving a slice of pie at the same time. I'm going to take a break from food blogging for today to share some of my favorite resources for home school Pi day activities.
When we were homeschooling Roma, Pi Day was a day devoted to lessons about Pi and was her first exposure to kitchen math.
A typical Pi day started with making pancakes for breakfast and then a quick lesson on radius versus diameter and how to figure out the circumference. As she got older we moved onto area of circles and volume of cylinders. Two pancakes overlapping led to a discussion of the Venn diagram.
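If your student likes computers as much as cooking, the same formulas drop straight into a few lines of Python (the 6-inch pancake here is just a made-up example, not from any recipe above):

```python
# Kitchen math for Pi Day: a hypothetical 6-inch pancake.
import math

radius = 3.0                        # inches (half of a 6-inch pancake)
diameter = 2 * radius
circumference = math.pi * diameter  # C = pi * d
area = math.pi * radius ** 2        # A = pi * r^2
volume = area * 0.25                # V = pi * r^2 * h, for a 1/4-inch-thick pancake

print(round(circumference, 2))  # → 18.85
print(round(area, 2))           # → 28.27
print(round(volume, 2))         # → 7.07
```

It's an easy way to double-check the answers you work out by hand at the breakfast table.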
An afternoon cooking lesson on how to make an apple pie taught basic food chemistry, kitchen sanitation, how to work with fractions (especially when we doubled a recipe), and gave us time to talk and share which is the best part of homeschooling. As she got older, I did less and hung out more. Her Pi day training was put to the test last Thanksgiving when she made her first solo pumpkin pie from scratch.
For Pi day dinner we'd have pizza.....naturally! Pi day is all about round food.
Here are some of my favorite links for Pi day resources and fun ways to spend it. Come back and comment and let me know if you tried any of them.
Exploratorium Pi Day - has a Brief History of Pi, activities, and more links
Pi and the Fibonacci Numbers - for advanced students - this site looks at how Pi is calculated
Pi Day Song - uses the numbers in Pi as musical notes (violin music)
Khan Academy - Circles: Radius, Diameter, Circumference
Teach Pi - teacher site for ideas for teaching Pi
megalithic monument (mĕgəlĭthˈĭk) [Gr., = large stone], in archaeology, a construction involving one or several roughly hewn stone slabs of great size; it is usually of prehistoric antiquity. These monuments are found in various parts of the world, but the best known and most numerous are concentrated in Western Europe, including Brittany, the British Isles, Iberia, S France, S Scandinavia, and N Germany. Aside from the standing stones and stone heaps that are still raised occasionally as boundary marks or memorials of personal and public events, most megalithic monuments seem to have been erected for funerary and religious purposes. The Western European megaliths were constructed during the Neolithic and the Bronze Age and are believed to range in date from c.4000 B.C. to 1100 B.C. Most chamber tombs were probably built during the 4th millennium B.C., and the stone circles generally date somewhat later. Megalithic monuments may be divided into four categories: the chamber tomb, or dolmen; the single standing stone, or menhir; the stone row; and the stone circle. Chamber tombs were usually covered with earth mounds, forming a barrow. Menhirs sometimes stood alone near the entrance of a tomb or on top of the mound. Sometimes they were set in long rows called alignments, as at Carnac in Brittany; in other places they were arranged in a circle, the most elaborate of which is Stonehenge in England (these are known as cromlechs outside Britain). The individual stone slabs may reach 65 ft (20 m) in length and 100 tons (90 metric tons) in weight. Such massive structures testify to the engineering feats possible with the concerted efforts of relatively ill-equipped peoples.
See G. Daniel, The Megalith Builders of Western Europe (1958); A. Thom, Megalithic Sites in Britain (1967) and Megalithic Lunar Observations (1973); C. Renfrew, Before Civilization (1973); J. Mitchell, Megalithomania (1982); R. Joussaume, Dolmens for the Dead (tr. by A. and C. Chippendale, 1988).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
July 22, 2009
This year marks the 100th anniversary of the discovery of the fossil-rich Burgess Shale in British Columbia by Charles Doolittle Walcott, the fourth secretary of the Smithsonian Institution. The centennial is being celebrated many ways, from articles to conferences, but one tribute has caught more media attention than others.
The Burgess Shale Geoscience Foundation, a nonprofit educational organization, has partnered with Big Rock Brewery, in Alberta, Canada, to create Shale Ale. As Randle Robertson, executive director of BSGF, said in a press release:
This is the champagne of beers to celebrate the contribution geologists have made to science. Shale Ale kicks off our 1909-2009 centennial celebrations, which are designed to engage the public in geology, climate change and the history of exploration and discovery in the Rocky Mountains.
Combining beer and science, Shale Ale’s label features Walcott and recreations of animals whose fossils he found. The vast majority of fossils that Walcott recovered were of soft-bodied creatures that are normally not preserved, making the Burgess Shale discovery one of the most significant in paleontology. The time period in which the Burgess creatures lived also adds to their importance. The fossils date to 505 million years ago and give a glimpse into life in the Cambrian Period—a time described by some as evolution’s big bang.
Unfortunately, Shale Ale is available only through the Burgess Shale Geoscience Foundation because of provincial liquor laws.
The Politics of Lunch
If you search the Internet for “school lunch” these days, two types of sites will come up. The vast majority of references lead to cheery government articles about “team nutrition,” brightly decorated menus from school lunchrooms, and manuals about managing cafeteria budgets. Sprinkled here and there among the search results, however, will be another type of article entirely. Celebrity chefs have lately entered school lunchrooms. They have come to prove that school lunches can be healthy. Their aim is to rescue children from greasy food and teach students to prefer zucchini over French fries. The task is daunting. The chefs are forced to use U.S. Department of Agriculture surplus commodities that hardly make for health-food menus. The chefs must also follow federal nutrition guidelines and meal subsidies, which generally allow for a maximum of about $2.40 per lunch for free meals. But these chefs soldier on, we are told, valiantly bucking the system in order to transform school lunches. Somewhere, buried in the articles, we inevitably find that private foundations are underwriting these experiments. In some cases, the food is subsidized, in others the chefs’ salaries are covered—usually at rates considerably higher than those of ordinary school lunch employees.1
This book, in its own way, explains why celebrity chefs and private foundations alone cannot save the National School Lunch Program. Readers will become acquainted with the history of one of America’s most remarkable and popular social programs. But they will also learn how the politics of school lunch created structural barriers that limited which children received nutritious meals and that shaped lunchroom menus. The history of school lunch politics encompasses a combination of ideals and frustrations, reflecting, at base, America’s deep ambivalence about social welfare and racial equality. It also reflects the tension in American politics about whether public policy should address individual behavior—in this case, whether food policy should focus on convincing people to eat right—or whether policy should address public structures and institutions—for example, fully funding free lunch programs or establishing a universal child nutrition program.2 The task faced by celebrity chefs in select school lunchrooms is daunting not simply because fast food is seductive and children are conservative eaters. Un-selfconsciously, the chefs are entering an institution only partly governed by concerns for children’s nutrition. Historically, concerns about national agricultural policies and poverty policy have regularly competed with dietary issues in the creation of school lunch programs. School lunch is, surely, rooted in the science of nutrition and ideas about healthy diets, but those ideas have never been sufficient on their own to shape public policy (or to change people’s eating behavior, for that matter). School lunch, like other aspects of public policy, has been shaped by the larger forces of politics and power in American history.
Since its founding in 1946, the National School Lunch Program has been the target of critics from the right as well as from the left. It is clear that even after more than half a century of operation, the National School Lunch Program is deeply flawed. School meals are often unattractive, unappetizing, and not entirely nutritious. The menu has always depended more heavily on surplus commodities than on children’s nutrition needs. Until the 1970s, the program reached only a small percentage of American children and served very few free lunches. All the while, however, the National School Lunch Program stood as one of the nation’s most popular social welfare programs. Politicians as savvy as Ronald Reagan discovered that the American public is intensely committed to the idea of a school lunch program, particularly one that offers free meals to poor children. In fact, the National School Lunch Program, to this day, is the only comprehensive food program aimed at school-aged children.3 Almost thirty million children in 98,000 schools eat school lunches each day. What is more, in most American cities, the National School Lunch Program is the single most important source of nutrition for children from low-income families. Almost 60 percent of all school children nationwide get free school lunches each day: 80 percent of Chicago’s public school children qualify for free school lunches; 79 percent of the children in Atlanta’s public schools receive free meals; New York City schools regularly feed almost 72 percent of their children for free; and in the state of Texas, over 70 percent of the children eat free or reduced price school lunches.4 The National School Lunch Program, for all its nutritional flaws, provides a crucial public welfare support for our nation’s youth. Without school lunches, many children in this country would go hungry; many more would be undernourished. 
Indeed, the National School Lunch Program has outlasted almost every other twentieth-century federal welfare initiative and holds a uniquely prominent place in the popular imagination. It suggests the central role food policy plays in shaping American health, welfare, and equality. A history of the National School Lunch Program is thus a crucial mirror into the variety of interests that continually vie for power and authority in American public life.5
School lunch politics have been marked by a shifting and not always predictable set of alliances over the course of the twentieth century. At first glance, the program’s trajectory appears to be the typical story of American liberalism, thwarted by southern Democrats who held social welfare hostage to racial segregation and states’ rights. Indeed, initiated by liberal reformers in the early part of the century, school lunch programs became institutionalized only when southern Democrats agreed to support federal appropriations in exchange for agricultural subsidies and under the condition that there would be limited federal oversight and unlimited local control. The result was a system that perpetuated the nation’s deep racial, regional, and class inequalities. But the fact that school lunches involve both children and food, two subjects fraught with powerful cultural and symbolic significance, renders the story more complicated and the players’ motives less transparent. It was conservative southern Democrats who, at the end of the New Deal, proposed a permanently funded federal school lunch program. Indeed, the 1946 bill creating a National School Lunch Program was named after Georgia senator Richard Russell, a staunch segregationist and opponent of civil rights. While Russell’s first priority was to protect a program he believed would benefit American agriculture, he was also motivated by a lasting concern about poverty in his region and a deep post-war anxiety about national defense, which linked healthy children to the future of American prosperity and strength. Despite his defense of states’ rights, Russell nonetheless crafted one of the most enduring and popular federal welfare programs of the twentieth century. 
Children’s welfare confounded predictable political lines again during the late 1960s and early 1970s, when powerful images of hungry children propelled Republican president Richard Nixon to announce that he would, within a year’s time, provide every poor child a free school lunch. Nixon vastly increased funds for free meals and, ultimately, turned the National School Lunch Program into the nation’s premier poverty program. Once the school lunch program became a poverty program, however, the political alliances again proved surprising. To protect the program’s ability to serve poor children in the face of an effective decrease in funds (the new federal monies only paid for free food, not for equipment, labor, or operating expenses), liberal senators like George McGovern, along with anti-poverty activists, found themselves—over the protests of nutritionists who had long opposed commercializing children’s meals—advocating privatization. Hoping that fast-food corporations and giant food service companies would be able to bring down the cost of lunchroom operations, these reformers saw privatization as a way to allow lunchrooms to continue to serve both free and paying children. Thus, by the time Ronald Reagan suggested that ketchup be considered a vegetable on the school lunch tray, private commercial interests already had two feet in the door of the school cafeteria.
School lunch politics suggest that children’s meals have always served up more than nutrition. Indeed, the National School Lunch Program, from the start, linked children’s nutrition to the priorities of agricultural and commercial food interests, both of which carried more weight in the halls of Congress than did advocates for children’s health. Most particularly, school lunches have been tied to the agenda of one of the federal government’s most powerful agencies, the Department of Agriculture, and, more recently, to the corporate food and food-service industries as well. Nutrition in each of these arenas takes a back seat to markets and prices. During its early years, the National School Lunch Program provided substantial welfare for commercial farmers as an outlet for surplus commodities, but actually fed a relatively small number of schoolchildren and provided few free meals to those who were poor. Since the 1960s school lunches have been a vital part of the American welfare system, characterized by means testing, insufficient appropriations, weak enforcement, and often blatant racial discrimination.
But even as a welfare program, children’s nutrition took a back seat to other interests. Most notably, in order to enable school lunchrooms to serve more free meals, the Department of Agriculture eased the restrictions banning commercial operations from school cafeterias. As poor children entered school lunchrooms in large numbers, so did processed meals and fast-food companies. Political compromises, first with agricultural interests and then with the food industry, have no doubt ensured the existence and expansion of a National School Lunch Program and today ensure the availability of free meals for poor children. What those compromises do not ensure is that those meals will provide a healthy cushion for children’s growth and development. Ultimately, the answers to the questions of which foods children should eat, which children deserve a free lunch, and who should pay for school meals have bedeviled even the most well intended of policy makers.
If school lunch politics hinge on priorities other than children’s health, school lunchrooms nonetheless reveal fundamental American attitudes about food and nutrition. As anthropologists have long observed, hierarchies of power and culture are embedded within the decisions about which foods are deemed suitable to eat, which foods constitute a meal, and which people are appropriate eating companions.6 Nowhere, perhaps, is the link between food and culture more relevant than in school meals where scientific ideas about nutrition continually vie with individual food choices and the enormous variety in American ethnic food traditions. The very idea of crafting a National School Lunch Program with nutrition requirements and standard menus suggests an optimistic faith in science, education, and reason. But when it comes to nutrition, scientific advice continually changes and Americans tend to ignore expert proscriptions about what to eat. When the National School Lunch program began, for example, nutritionists recommended that children needed a high-calorie diet based on whole milk, cream-based sauces, rich puddings, and butter on every slice of bread. Rooted in the belief that poor, malnourished children were “underweight” and basically needed more calories in order to grow and thrive, the prescription for a high-calorie diet made sense. Today, experts warn about an epidemic of obesity among poor children and excoriate school menus for their high calorie and high fat content. But the current obesity debate reveals more than new nutrition insight. Neither underweight children in the past nor obese children today became that way solely as a result of individual eating habits, lack of nutrition education, or bad food choices. Rather, nutrition is tied directly to social and economic circumstance—for example, family income and access to fresh foods—as much as to individual behavior. 
How nutrition science is translated into children’s health, therefore, has always rested on a larger context than food habits and individual choice.
This book traces the politics of school lunch from its origins in early twentieth-century science and reform to the marriage of children’s lunches and agricultural surpluses during the 1930s and the establishment of a permanent federally funded National School Lunch Program in 1946 to the transformation of school meals into a major poverty program during the 1970s and 1980s. One set of major players includes nutrition reformers—education, health, and key welfare professionals, mainly women—who struggled mightily to translate nutrition science into public policy. Another set of players includes farm-bloc legislators and Department of Agriculture officials who created the institutional infrastructure for a national school lunch program. These groups, together with political leaders responding to the demands and interests of their constituents as well as to the popular appeal of children’s health, shaped national food and nutrition policies. While the National School Lunch Program, like the American welfare system in general, is administered at the state level, the creation and fundamental outlines of the program—the development of national nutrition standards, eligibility requirements for free and reduced price meals, and the basic supply of donated foods available for lunch menus—emanate from Washington. This book thus views the nature of the school lunch and who pays for it as national policy concerns.
Chapter 1 argues that school lunch programs in the United States originated as part of the modernizing efforts of early twentieth-century social reformers. Using the new science of nutrition, professional women— home economists, teachers, and social workers—attempted to rationalize American eating habits and, in the process, bring new immigrants (and rural migrants) into a mainstream Anglo-American culture. Home economics, a new profession that attracted women who were excluded from scientific and academic careers, used the science of nutrition first to convince low-paid workers that they could “eat better for less,” then to assimilate immigrants into American culture, and, finally, to rationalize American diets more generally. School lunchrooms appeared to be the perfect setting in which to feed poor children but, more importantly, to teach both immigrant and middle-class children the principles of nutrition and healthy eating. In this way, nutrition became part of a basic civics training for future citizens. While most school lunch programs before the 1930s were volunteer efforts on the part of teachers or mother’s clubs, they drew on the expertise of professional home economists for balanced menus and scientifically formulated recipes. By the 1920s, home economists found an institutional home in the USDA’s Bureau of Home Economics, thus linking school meals to agricultural research and, ultimately, to a national network of professionals committed to school lunchrooms both ideologically and occupationally.
Chapter 2 traces the transformation of school lunch programs from local volunteer efforts into state-sponsored operations. During the Great Depression, existing lunchrooms were overwhelmed by the numbers of children coming to school hungry. Teachers and community groups tried to expand school meal offerings by raising donations but ultimately began to look to municipal, county, and state governments for resources. At the same time, a group of agricultural economists in the USDA began to formulate policies to address the severe depression in farm prices. Committed to market-based strategies that ultimately favored commercial farm interests, these policy makers proposed that the federal government monitor supplies by purchasing surplus commodities. School lunchrooms appeared as the perfect outlet for federal commodity donations. With one stroke, the Department of Agriculture could claim to help both farmers and children. By the eve of World War II, schools in every state depended on surplus commodities for their lunchrooms.
As federal involvement in school lunches became increasingly institutionalized, nutritionists and child welfare advocates began to press for standards in nutrition and service. Chapter 3 traces the increasing federal oversight of school lunch programs through the development of operating contracts and meal guidelines. Nutrition standards for the nation’s youth became increasingly significant as the United States prepared to enter World War II. Recalling the large number of young men declared unfit for service in World War I, both military and civilian policy makers began a campaign for “nutrition in the national defense.” The Roosevelt administration enlisted the aid of prominent social scientists, such as Margaret Mead, and internationally known nutritionists, such as Lydia Roberts, to develop strategies that would prepare the civilian population for expected wartime food shortages. These women proposed a universal school lunch program and “Recommended Daily Allowances” (RDAs) that would provide healthy diets for all children. While the idea of a universal child nutrition program never gained much traction, the RDAs formed the basis for all future government-subsidized school meals. As significant as national nutrition guidelines was the development of standard contracts governing the operation of school lunchrooms. Schools receiving federal assistance had to maintain sanitary conditions for food storage, handling, and service. The federal contracts also, for the first time, contained an anti-discrimination clause and required schools to provide lunches for free to children whose families could not afford to pay. While the only enforcement mechanism was to withhold food supplies—and no public official was interested in being accused of depriving children of food—the contracts represented a significant step in the institutionalization of the federal school lunch program.
Chapters 4 and 5 focus on the congressional debate surrounding the establishment of a permanently funded National School Lunch Program in 1946. These chapters argue that the compromises that were necessary in order to mount sufficient congressional support for the bill had serious consequences regarding which children received federally subsidized meals and which schools participated in the program. Like much of the American welfare system, the National School Lunch Program was characterized by weak federal oversight and a high degree of local control. After a brief attempt by child welfare advocates to place school lunches under the auspices of the commissioner of Education, the Department of Agriculture succeeded in holding on to the program. Thus, for the first fifteen years of its existence, the National School Lunch Program served primarily as an outlet for surplus commodities and only secondarily as a nutrition program for children. The congressional debate over the school lunch program raised issues of racial and regional equity, including the first attempt by New York representative Adam Clayton Powell to introduce non-discrimination language in federal legislation, but the Democratic party still relied heavily on its conservative southern wing for legislative success. While southern Democrats happily supported the idea of creating a National School Lunch Program, they vehemently opposed any direct federal role in how that program would be administered. Most particularly, they resisted any effort to establish federal oversight, nutrition standards, or eligibility requirements. The results were predictable: during its first fifteen years, few poor children received free meals and even fewer African American children participated in the program. The lack of federal oversight was particularly problematic when it came to the bill's requirement that states match the federal contribution.
With no directives out of Washington, most states counted children’s fees as their matching contribution. While Department of Agriculture officials gave lip-service to children’s nutrition—developing healthy menus and testing recipes for surplus commodities—during the 1950s the National School Lunch Program reached only about one-third of America’s schoolchildren. What is more, the program utterly failed to provide free meals for poor children who arguably were in most need of federal nutrition assistance.
Despite the National School Lunch Program’s shortcomings, it gained widespread popular support during the 1950s. While few Americans probably knew how the program actually operated, legislators, policy makers, and the public at large touted America’s school lunch program as a symbol of prosperity, equality, and democracy in the Cold War world. Only in the early 1960s, as the nation “discovered” poverty, did the limitations of the National School Lunch Program become embarrassingly clear. Chapters 6 and 7 trace the political movement to transform the National School Lunch Program from a popular, if misunderstood, agricultural subsidy into a poverty program. Galvanized by civil rights activism, and in the context of Lyndon Johnson’s War on Poverty, a group of mainstream national women’s organizations focused attention on the shortcomings of the National School Lunch Program. The women’s report became crucial evidence in both Senate and House debates on race and poverty at the end of the 1960s and ultimately forced the Nixon administration to expand access to free lunches for poor children. Demands for a “right to lunch” insisted on access to free lunches for all poor children and national eligibility standards for free and reduced price meals.
Chapter 8 discusses both the expected and the unintended consequences of turning school lunches into a poverty program. Neither the program’s congressional advocates nor liberal anti-poverty groups were willing to demand sufficient federal funding to allow school districts to serve large numbers of poor children free meals. Nor were the program’s advocates—whether in Washington or in the states—willing to demand substantial local contributions. As a result, federal funds earmarked for free meals actually threatened to bankrupt school lunchrooms across the country. State subsidies rarely were sufficient to pay for the expansions necessary to meet the new federal free lunch mandate. The only course of action for local school lunch administrators appeared to be to raise the fees on full-price meals. As a result, paying children began to drop out of the program and school cafeterias became identified with poor children. There was, in effect, a great failure on the part of liberal antipoverty activists and conservative legislators alike to craft a public child nutrition program that could effectively protect children’s nutrition. By the end of the 1970s, many school lunch advocates saw privatization as the only way to keep lunchrooms afloat. While some nutritionists held out against the commercialization of children’s meals, they had few suggestions for lunchroom operators who saw their deficits rising. The now-familiar fast food in school cafeterias appeared to be the only solution for school districts unable to sustain their mandated free-lunch program on public funds. Still, the National School Lunch Program continued to garner a tremendous amount of public support—far more than other programs for the poor. Indeed, when President Reagan tried to cut school lunch budgets by suggesting that ketchup could be counted as a vegetable, the public outcry revealed a depth of loyalty to the program that no one anticipated.
School lunch politics suggests that fixing lunch is more complicated than convincing children to eat right. Today’s critiques of school meals have a long history in which children’s welfare advocates have vied with the nation’s food and agricultural interests for control over school menus. Still, the politics of school meals makes it clear that detaching the National School Lunch Program from those other interests would leave a lot of children hungry. The celebrity chefs now working in school lunchrooms are finding, as generations of nutritionists and food reformers before them did, that there is more to a national school lunch program than a nutritious menu. To truly fix lunch, they will need to build a political coalition committed to an agenda that links child nutrition to agriculture, food policy, and social welfare. Such a coalition, however, will need to fix lunch for all children, not just those lucky enough to attend schools with private benefactors. Fixing lunch will require a public commitment to health, welfare, and opportunity for all children.
File created: 1/30/2008
Princeton University Press
by Clarisse Olivieri de Lima
Part of being a global citizen is being able to articulate and take positions regarding one's role and responsibilities in the world. Global citizens need to be aware of and concerned with what is happening not only in their nation and geographic region but also throughout the world. Global citizens need to develop a voice to promote social and economic justice for themselves and their fellows by demonstrating care and respect for others' welfare.
Promoting a meaningful and socially valued use of information and communication technologies (ICTs) is a crucial task for 21st-century teachers seeking to support their students' education. The set of basic skills needed to fully operate and participate in a globalized society includes the new literacies needed for using Internet-based information.
The Travel Buddies Project is an intercultural exchange project where students from different countries select mascots to go on a journey as a visitor in a foreign culture. In an edition of this project, students in the United States and Brazil participated by sending their buddies to each other's location. As guests, the mascots were involved in activities with the children both inside and outside of school. Students kept in touch throughout the exchange by recording events and activities using photographs, blog posts, email exchanges, and diary/journal entries.
Many subjects from the curriculum can be reinforced in a project such as this one. Connections to reading, writing, the Arts (e.g. music, dance, artistic expression) and Humanities are inherent in all the learning activities that were developed as part of this exchange. Students engaged regularly in shared reading and writing activities using the blogs to register their visitor's activities. They also developed their own individual writing and technical skills through journal entries and the use of software products to create graphic images. Many of the lessons were interdisciplinary in nature and provided opportunities for collaboration between classroom teachers.
Blog posts were used as the central mode of communication between the classes and often initiated spontaneous lessons based on the content that was posted by the partner class. All the activities done by the classes and the mascots that were posted on the blog were done so according to safety and ethical rules established by each school in order to preserve the students’ identities.
Some additional skills that are essential for children to develop for success in today’s world were also emphasized during this project. First, students learned the nuances of acceptable technology etiquette essential to forging respectful social interactions and good citizenship. While the Brazilian and American children interacted through the blog postings, they also practiced examining how individuals interpret messages differently, how values and points of view are included or excluded, and how media can influence their beliefs and behaviors. Additionally, the students learned how to effectively apply more appropriate expressions and interpretations in diverse and multicultural environments.
Telecollaborative projects such as this one provide an opportunity for participants to develop global citizenship skills that are indispensable for their living in a globalized, diverse, and flattened world.
This project was coordinated by Dr. Clarisse Lima (EdTech Consultant, Rio de Janeiro, Brazil) and Dr. Laurie Henry (University of Kentucky, USA) and was held during 2009.
For complete information:
Henry, L. & Lima, C. (2012). Promoting global citizenship through intercultural exchange using technology: The Travel Buddies Project. In Kelsey, S. and Amant, K. (ed.) Computer-Mediated Communication across Cultures: International Interactions in Online Environments. Hershey, PA: IGI Global. (pp. 100-119).
Clarisse Olivieri de Lima is an educational technology consultant in Rio de Janeiro, Brazil.
This article is part of a series from the International Reading Association Technology in Literacy Education Special Interest Group (TILE-SIG).
Venture into the territory of the lion! No visit to the Africa region of the NC Zoo is complete without seeing these majestic cats. In the Bushlands area, also see red river hogs.
These large, highly social cats live in savannas, grasslands and scrubby bushlands. Lions are not “Kings of the Jungle,” as they do not live in densely forested areas. Unique among felines, a quick glance will tell you if you are looking at a male or a female. The males are boldly marked by the shaggy mane that sprouts around their neck and shoulders. Lions emit a loud roar in the early morning hours and dusk that can be heard over 5 miles away. Roaring is used to locate pride members that have been temporarily separated and to signal their strength to adversaries.
Red River Hogs
These wild pigs are widely distributed through the rainforests, thickets and savannas of western and central Africa. The name comes from their reddish color and because they are usually found near sources of water. Red river hogs prefer areas with soft ground allowing them to dig for food. Short, strong tusks and blunt snouts help these animals dig for plant roots and tubers that make up the bulk of their diet. These hogs will eat just about anything and are often attracted to fallen fruit by listening for the sounds of feeding monkeys and birds.
Siah Hicks (Creek) Interview, 1937
“The Indians will vanish” has been the talk of the older Indians ever since the white people first came to mingle among them. They seemed to prophecy that the coming of the white man would not be for their good and when the step towards their removal to a country to the west was just beginning, it was the older Indians that remarked and talked about themselves by saying, “Now, the Indian is now on the road to disappearance.” This had reference to their leaving of their ways, their familiar surroundings where their customs were performed, their medicine, their hunting grounds and their friends.
When they had reached their new homes in the Indian Territory, their conversations were about their old homes and they said, “We have started on the road that leads to our disappearance and we are facing the evening of our existence and are nearly at the end of the trail that we trod when we were forced to leave our homes in Alabama and Georgia. In time, perhaps our own language will not be used but that will be after our days.”
Source: Interview with Siah Hicks (Creek), November 17, 1937, Indian-Pioneer History (Oklahoma Historical Society), 29:80
VO or Voltage Optimisation, to do or not to do?
Basically VO (Voltage Optimisation) is a term given to the systematic controlled reduction in the voltages received by an energy consumer to reduce energy use, power demand and reactive power demand. While some voltage ‘optimisation’ devices merely reduce the voltage using a transformer, others give the end-user the ability to control and optimise their electricity supply locally, correcting voltage and power quality problems from the grid, and are designed to do so very efficiently. Voltage optimisation systems are typically installed in series with the mains electrical supply to a building, allowing all its electrical equipment to benefit from an optimised supply.
In the UK, where there is a particular problem with over-voltage, voltage optimisation as an energy conservation measure is growing rapidly in popularity.
So what does all that mean? Well, basically, if you have 3-phase AC motors, lighting (fluorescent), or electrical heaters, you can get savings of 8%+, but for a quite considerable investment (£40-50K+).
Is this any good for us? I have now had two decent consultants (whom I know personally, so I trust their opinions) look at this, and they are in agreement with me: VO will not benefit a property with mainly LED lighting and no electrical heaters.
Localised VO units fitted to a supply which directly feeds, for example, a fluorescent lighting circuit in the gym (soon to be replaced with SMD strip lighting) will be most beneficial, and can be acquired much more cheaply than a unit fitted at the main supply input; these smaller units can be bought for £2-3K each.
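To put those numbers in perspective, here is a rough simple-payback sketch in Python. Every figure below (the annual electricity spend for the whole building, the spend on one lighting circuit, and the installed costs) is an illustrative assumption, not a measurement or quote for this property; only the 8% saving and the £45K / £2.5K price bands come from the figures above.

```python
# Rough simple-payback comparison for voltage optimisation (VO).
# All input figures are illustrative assumptions, not quotes.

def payback_years(capital_cost, annual_electricity_cost, saving_fraction):
    """Years to recoup the capital outlay from the annual energy saving."""
    annual_saving = annual_electricity_cost * saving_fraction
    return capital_cost / annual_saving

# Whole-building unit: £45,000 installed, assumed £60,000/year electricity, 8% saving.
whole_building = payback_years(45_000, 60_000, 0.08)

# Localised unit on one fluorescent lighting circuit: £2,500 installed,
# assumed £8,000/year spend on that circuit, 8% saving.
localised = payback_years(2_500, 8_000, 0.08)

print(f"whole building: {whole_building:.1f} years, localised: {localised:.1f} years")
```

On these assumed figures the localised unit pays back in roughly 4 years against over 9 for the whole-building unit, which is the gist of the argument above: target the circuits where VO actually bites.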
In Greek, Tenth century
This is the oldest and best manuscript of a collection of early Greek astronomical works, mostly elementary, by Autolycus, Euclid, Aristarchus, Hypsicles, and Theodosius, as well as mathematical works. The most interesting, really curious, of these is Aristarchus's "On the Distances and Sizes of the Sun and Moon," in which he shows that the sun is between 18 and 20 times the distance of the moon. Shown here is Proposition 13, with many scholia, concerned with the ratio to the diameters of the moon and sun of the line subtending the arc dividing the light and dark portions of the moon in a lunar eclipse.
Vat. gr. 204 fol. 116 recto math06 NS.02
In Greek, 1536
Apollonius's "Conics," written about 200 B.C., on conic sections, the ellipse, parabola, and hyperbola, is the most complex and difficult single work of all Greek mathematics and was all but unknown in the west until the fifteenth century. This magnificent copy, probably the most elegant of all Greek mathematical manuscripts, was made in 1536 for Pope Paul III. The pages on display show the particularly elaborate figures illustrating Propositions 2-4 of Book III on the equality of areas of triangles and quadrilaterals formed by tangents and diameters of conics, and by tangents and lines parallel to the tangents.
Vat. gr. 205 pp. 78-79 math07a NS.03
In Greek, Tenth century
Pappus's "Collection," consisting of supplements to earlier treatises on geometry, astronomy, and mechanics, dates from the late third century A.D. and is the last important work of Greek mathematics. This manuscript reached the papal library in the thirteenth century, and is the archetype of all later copies, of which none is earlier than the sixteenth century.
Vat. gr. 218 fols. 39 verso-40 recto math08a NS.05
In Greek, Ninth century
Claudius Ptolemy, who lived in the second century A.D., did work of enormous importance in astronomy and geography in which the Vatican Library has particularly rich holdings. The "Almagest," written about A.D. 150, is a comprehensive treatise on all aspects of mathematical astronomy--spherical astronomy, solar, lunar, and planetary theory, eclipses, and the fixed stars. It made all of its predecessors obsolete and remained the definitive treatise on its subject for nearly fifteen hundred years. This, the most elegant of all manuscripts of the "Almagest," is one of the oldest and best witnesses to the text, and is very rich in notes.
Vat. gr. 1594 fols. 73 verso-74 recto math09a NS.07
In Latin, Salernitan Translation, Late thirteenth or early fourteenth century
In about 1160 a very literal translation of the "Almagest" was made directly from the Greek by an unknown translator in Sicily. The version had little circulation, but in the early fifteenth century this manuscript, the only known complete copy, came into the hands of the great Florentine book collector Coluccio Salutati. Shown here is Book XII Chapters 8-9, the table of stations of the planets (the place on the epicycle where the planet appears stationary) written entirely in Roman numerals, and the method of computing a table of the greatest elongations of Mercury and Venus from the sun.
Vat. lat. 2056 fols. 45 verso-46 recto, fols. 87 verso-88 recto math10a NS.09
In Latin, Translated by Gerard of Cremona, Thirteenth century
The most important medieval Latin translation of the "Almagest," which is found in many manuscripts, was made from the Arabic in Spain in 1175 by Gerard of Cremona, the most prolific of all medieval translators from Arabic into Latin. These pages show Book X Chapters 6-7, Ptolemy's description of his kinematic model for the motion of the superior planets--Mars, Jupiter, and Saturn. The separation of the center of uniform motion from the center of uniform distance of the center of the epicycle is explained, as well as the beginning of the derivation of the elements of the model for Mars, through a lengthy iterative computation. The earth is at rest at (e) and the planets move uniformly with respect to a point (r) which is separated from the center of their spheres, (d). This device closely approximated the elliptical orbit in which planets actually move.
Vat. lat. 2057 fols. 70 verso-71 recto, fols. 146 verso - 147 recto math11a NS.10
Education in Chile
Chile’s science bus: inspiring remote and underprivileged communities
The project takes the wonder of science to rural communities, supporting teachers and opening up a whole new world to children.
Monday, January 30, 2012
Chile has its very own Magic School Bus that is touring far-flung towns across the country, bringing the wonder of learning to communities that lack the access to resources and technology.
The Bus ConCiencia - a play on words meaning both “bus with science” and “conscience bus” - is a mobile laboratory that aims to encourage learning through experimentation, using recycled and easy to obtain materials.
It is the brainchild of Maria Cuéllar, the program director of education at Desafío Levantemos Chile (“Challenge: Lift Up Chile”), a foundation created to rebuild damaged infrastructure in the wake of the 8.8-magnitude earthquake in 2010.
Cuéllar is also a Teachers Without Borders member, whose background in teaching mathematics and physics motivated her to try to improve the quality of scientific education in developing countries.
In 2011, Bus ConCiencia reached 10,000 students and engaged 50 professionals.
“For me, the most important aspect of this project is to show the children, our future generations, that science isn’t something to be feared, but that it’s a way to confront problems and answer questions,” said Dr. Sebastián Bernales, a cellular biologist who has been involved in the Bus ConCiencia.
Children are not “taught” in the workshops, but are instead guided through experiments so they can experience the joy of discovering something for themselves, using scientific methods.
The project is not just aimed at inspiring children, but rather teachers and whole communities as well, through workshops and even movie screenings. The idea is to create a “domino effect” in which children are surrounded by a supportive and stimulating environment.
“What we want to do in these rural schools is to inspire teachers, so they can help children learn about the scientific method, experimentation, different branches of science,” said the project’s executive director, Marcela Colombres, “and to inspire curiosity and open a new world of possibilities.”
The project needs support to continue providing materials, maintain and fuel the bus, pay professionals, and also help fund teachers in rural communities so that they can continue with the experiments into the future.
To contribute see the website.
|
<urn:uuid:fb2abc31-cfcc-4a97-8bb5-2002882739c9>
|
CC-MAIN-2013-20
|
http://www.thisischile.cl/7459/2/chiles-science-bus-inspiring-poor-and-remote-communities/News.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936923
| 531
| 3.09375
| 3
|
ACCOUNTING TERMS - ACCOUNTING DICTIONARY - ACCOUNTING GLOSSARY
From the web's #1 provider of financial analysis / ratio analysis
VALUE ADDED TAX Definition
VALUE ADDED TAX is a consumption tax levied at each step of the manufacturing process where value is added to the product, as well as at the point where the consumer purchases the end product.
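The stage-by-stage mechanics of the definition above can be sketched in a few lines of Python. The 10% rate and the stage prices are purely illustrative, and the function name is my own; this is the credit-invoice idea in which each stage remits tax only on the value it adds:

```python
# Sketch of how a value added tax accumulates along a supply chain.
# Rate and stage prices are illustrative, not from any real tax code.

VAT_RATE = 0.10  # hypothetical 10% rate


def vat_along_chain(stage_prices):
    """Given the sale price at each stage, return the VAT remitted per stage.

    Each stage is taxed only on its value added: its sale price minus
    the price it paid for its inputs.
    """
    remitted = []
    prior_price = 0.0
    for price in stage_prices:
        value_added = price - prior_price
        remitted.append(round(VAT_RATE * value_added, 2))
        prior_price = price
    return remitted


# Raw materials sold for $100 -> manufacturer sells for $250 -> retailer sells for $400
stages = [100.0, 250.0, 400.0]
per_stage = vat_along_chain(stages)
print(per_stage)       # tax remitted at each stage
print(sum(per_stage))  # total equals VAT_RATE * final consumer price
```

Note that the total collected across all stages equals the rate applied to the final consumer price, which is what makes VAT a consumption tax rather than a tax on each business.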
Learn new Accounting Terms
FULL COSTING see ABSORPTION COSTING.
MARKET ANOMALY is a persistent and systematic differential of returns that cannot be accounted for by systematic risk factors, i.e. an inexplicable price distortion in a market.
|
<urn:uuid:1c1f1813-bda9-4939-bdec-9e542b270e94>
|
CC-MAIN-2013-20
|
http://www.ventureline.com/accounting-glossary/V/value-added-tax-definition/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.919539
| 151
| 2.984375
| 3
|
During his entire sixteen years of life, Tamali Oceng, a student at Creamland High School in Amuru district did not know that he had a heart problem until he was diagnosed in 2010.
“General body weakness, difficulty in breathing, chest pain and being sickly all the time prompted thorough check-ups, and the results were traumatizing,” said Tamali, adding, “The fact that I come from a poor family means that I cannot afford that treatment. How are we going to raise this much money?”
Tamali says that the sickness has affected his studies, which has in turn affected his ambition of becoming a doctor so that he can help the helpless in his community.
Tamali lost both his parents five years ago and currently lives with his brother who is a peasant farmer in Lamogi Sub County, Amuru district.
Gulu Regional Referral Hospital non-communicable diseases specialist, Dr Alice Lamwaka says that Tamali was born with a defective heart valve also referred to as congenital heart disease that needs an operation to fit an artificial one.
She says the disease is mostly genetic, resulting from a defect in fertilisation or organ development of the child before birth.
“Even though the disease is rare in the region, it’s difficult to detect at birth unless relevant tests are done as a follow up after birth, if it is suspected,” said Dr Lamwaka.
Dr Lamwaka says that once the operation is done, Tamali should be able to lead a very normal life and continue with his education uninterrupted.
By A Web design Company
|
<urn:uuid:f1c016ef-1262-4776-9181-c28ce843d038>
|
CC-MAIN-2013-20
|
http://acholitimes.com/index.php/health-matters/215-heart-patient-seeks-funds-for-operation-in-india
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.983606
| 343
| 2.625
| 3
|
Looking at this contribution margin example, we'll develop a contribution margin income statement to demonstrate the effect of the contribution margin and break-even point on the income statement:
Here is the example:
Break-even point in units = Fixed Expenses / (Price - Variable Expenses per Unit)
Break-even point = $60,000 / ($2.00 - $0.80) = 50,000 units
In order to develop the contribution margin income statement, we have to convert break-even in units to break-even in dollars, using the contribution margin ratio. In this example, variable expenses of $0.80 are 40% of the $2.00 sales price per unit ($0.80/$2.00), so the contribution margin ratio is 60%. To calculate break-even in dollars, divide fixed expenses by the contribution margin ratio ($60,000/0.60 = $100,000). This means that the company has to make $100,000 in sales to break even.
If you want to check your work, calculate the company's variable costs this way: variable expenses are 40% of sales ($100,000 × 0.4 = $40,000). Sales of $100,000 minus variable expenses of $40,000 leave a contribution margin of $60,000, which at break-even exactly covers the company's stated fixed expenses of $60,000, so the calculations are correct.
The table below shows you the firm's income statement in contribution margin format. It shows you that if one more unit of the product is sold, to total 50,001 units, then the Net Operating Profit will rise above zero and the firm will make a profit. However, if one unit less than 50,000 is sold, the firm will incur a loss.
Contribution Margin Income Statement for the Month of May, 2011
|||Total||Per Unit|
|Sales (50,000 units)||$100,000||$2.00|
|Less: Variable Expenses||40,000||0.80|
|Contribution Margin||60,000||1.20|
|Less: Fixed Expenses||$60,000|
|Net Operating Profit||$0|
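The arithmetic above can be sketched in a short Python script. The figures come from the example; the variable names are my own:

```python
# Break-even and contribution-margin calculations for the example above.
price_per_unit = 2.00
variable_cost_per_unit = 0.80
fixed_expenses = 60_000.00

# Contribution margin per unit, and as a ratio of the sales price
cm_per_unit = price_per_unit - variable_cost_per_unit  # $1.20
cm_ratio = cm_per_unit / price_per_unit                # 0.60

# Break-even in units and in sales dollars
breakeven_units = fixed_expenses / cm_per_unit  # 50,000 units
breakeven_sales = fixed_expenses / cm_ratio     # $100,000

# Contribution margin income statement at the break-even volume
sales = breakeven_units * price_per_unit
variable_expenses = breakeven_units * variable_cost_per_unit
contribution_margin = sales - variable_expenses
net_operating_profit = contribution_margin - fixed_expenses

print(f"Sales                 {sales:>12,.2f}")
print(f"Less: Variable Exp.   {variable_expenses:>12,.2f}")
print(f"Contribution Margin   {contribution_margin:>12,.2f}")
print(f"Less: Fixed Expenses  {fixed_expenses:>12,.2f}")
print(f"Net Operating Profit  {net_operating_profit:>12,.2f}")
```

Selling one unit beyond break-even adds exactly one contribution margin ($1.20) of profit; selling one unit fewer produces a $1.20 loss, which is what the income statement's zero Net Operating Profit illustrates.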
|
<urn:uuid:e0854fb2-e6cb-4040-9725-96434d6933db>
|
CC-MAIN-2013-20
|
http://bizfinance.about.com/od/pricingyourproduct/a/contribution-margin-income-statement.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.852404
| 408
| 3.28125
| 3
|
Apr 16, 2009
(Editor’s Note: we are pleased to bring you this article thanks to our collaboration with Greater Good Magazine.)
At a time when educators are preoccupied with standards, testing, and the bottom line, some researchers suggest the arts can boost students’ test scores; others aren’t convinced. Karin Evans asks, What are the arts good for?
When poet and national endowment for the Arts Chairman Dana Gioia gave the 2007 Commencement Address at Stanford University, he used the occasion to deliver an impassioned argument for the value of the arts and arts education.
“Art is an irreplaceable way of understanding and expressing the world,” said Gioia. “There are some truths about life that can be expressed only as stories, or songs, or images. Art delights, instructs, consoles. It educates our emotions.”
For years, arts advocates like Gioia have been making similar pleas, stressing the intangible benefits of the arts at a time when many Americans are preoccupied with a market–driven culture of entertainment, and schools are consumed with meeting federal standards. Art brings joy, these advocates say, or it evokes our humanity, or, in the words of my 10–year–old daughter, “It cools kids down after all the other hard stuff they have to think about.”
Bolstering the case for the arts has become increasingly necessary in recent years, as school budget cuts and the move toward standardized testing have profoundly threatened the role of the arts in schools. Under the No Child Left Behind Act, passed in 2002, the federal government started assessing school districts by their students’ scores on reading and mathematics tests.
As a result, according to a study by the Center on Education Policy, school districts across the United States increased the time they devoted to tested subjects—reading/language arts and math—while cutting spending on non–tested subjects such as the visual arts and music. The more a school fell behind, by NCLB standards, the more time and money was devoted to those tested subjects, with less going to the arts. The National Education Association has reported that the cuts fall hardest on schools with high numbers of minority children.
And the situation is likely to worsen as state budgets get even tighter. Already, in a round of federal education cuts for 2006 and 2007, arts education nationally was slashed by $35 million. In 2008, the New York City Department of Education’s annual study of arts education showed that only eight percent of the city’s elementary schools met the state’s relatively rigorous standards for arts education—and the city’s schools are now facing a $185 million budget cut this year.
For 2009, the nonprofit Center for Budget and Policy Priorities forecasts budget shortfalls in 41 states. California, ranked last among the states in per capita support for the arts, is considering $2 billion of additional cuts to K–12 education. Josef Norris, a grant–supported artist who creates murals with kids in San Francisco’s public schools, says he has worked with classes where fifth graders have never picked up a paintbrush or handled a lump of clay.
Given such stiff fiscal and political challenges, some arts advocates have felt pressured to bolster their arguments. Afraid that art won’t be able to stand on its own merits, such advocates have sought whatever evidence they can find to argue that art contributes to measurable gains in learning—which, in the No Child Left Behind world, means boosting a school’s academic test scores in literacy and mathematics.
And in fact, advocates have gotten a recent lift from new research in several scientific fields. For the first time ever, for example, scientists have used sophisticated brain imaging techniques to examine how music, dance, drama, and the visual arts might positively affect cognition and intelligence. Such work, the researchers claim, is a crucial first step toward understanding whether art can actually make people smarter in ways that can be measured.
But other arts advocates say that’s the wrong way to go. Skeptical of some claims of the art–boosts–smarts camp, they instead support a line of research that explores the benefits that are unique to the arts. Let art do what art can do best, they say, and let the mathematics class take care of itself. And so the debate goes on, focused on a question that has long concerned parents, educators, and policy makers alike: What are the arts good for?
The Mozart controversy
The focus on art’s contribution to academics came to wide attention in the 1990s, after researchers from the University of California, Irvine, reported in the journal Nature that college students who listened to 10 minutes of Mozart before taking certain parts of an intelligence test improved their scores—a finding that came to be known as the “Mozart Effect.”
Before long, parents who heard about the research were playing Mozart to their babies, the governor of Georgia was handing out classical music tapes to parents of newborns, and companies were springing up to package music for parents eager to bolster their children’s brain power.
The Mozart Effect research had some clear limitations: It involved only college–age students, and the improved test scores held up only for 15 minutes following the musical experience. After witnessing the strong reaction to their results, the researchers themselves were compelled to write a rejoinder in 1999, pointing out that they had never claimed that “Mozart enhances intelligence.”
Still, whether the hard evidence was there or not, the popular assumption took hold that there was a connection. According to a 2006 Gallup poll, 85 percent of Americans believed participation in school music was linked to better grades and higher test scores.
After the study on the Mozart Effect was published, other researchers tried to substantiate a connection between arts participation and improved cognitive and academic skills. For instance, James S. Catterall, a professor at UCLA’s Graduate School of Education and Information Studies, reported in a 1999 paper that middle and high school students with strong involvement in theater or music scored an average of 16 to 18 percentage points higher on standardized tests than those with low arts involvement.
“It’s true that students involved in the arts do better in school and on their SATs than those who are not involved,” write researchers Lois Hetland and Ellen Winner of the Harvard Graduate School of Education, in an article that appeared in the Boston Globe in 2007. However, they point out, correlation doesn’t add up to causation: It’s quite possible that kids involved in the arts are the ones getting good grades in the first place.
In a landmark survey called REAP—Reviewing Education and the Arts Project—Hetland and Winner examined the research supporting arts education. Their findings, released in 2000, were controversial. They revealed that in most cases there was no demonstrated causal relationship between studying one or more art forms and improved cognitive skills in areas beyond the arts.
“We found inconclusive evidence that music improves mathematical learning and that dance improves spatial learning,” reported the researchers. “We found no evidence that studying visual arts, dance, or music improves reading.” They continued:
That leaves our most controversial finding. We amassed no evidence that studying the arts, either as separate disciplines or infused into the academic curriculum, raises grades in academic subjects or improves performance on standardized verbal and mathematics tests. … Our analysis showed that children who studied the arts did no better on achievement tests and earned no higher grades than those who did not study the arts.
Their findings, the researchers said, were greeted with anger. “One scholar told us that we should never have asked the question, but having done so, we should have buried our findings,” Hetland and Winner later wrote. “We were shaken.” Some critics claimed that their report had shortchanged the effects of art on academics. But the researchers stuck to their conclusions. Furthermore, they cautioned, justifying the arts on the basis of unreliable claims would ultimately do more harm than good.
Arts and the brain
In 2004, in an attempt to sort out the facts, the Dana Foundation, a private philanthropic organization, took on the question: Are smart people drawn to the arts or does arts training make people smarter? Under the leadership of neuroscientist Michael S. Gazzaniga, the Dana Arts and Cognition Consortium assembled neuroscientists and cognitive scientists from seven universities to study whether dance, music, theater, and visual arts might affect other areas of learning—and how.
After more than three years of research, the results of the $2.1 million project were published in March of 2008 in a report titled “Learning, Arts, and the Brain.” Several studies in the report suggested that training in the arts might be related to improvements in math or reading skills. In one of these studies, a University of Oregon team, headed by psychologist Michael Posner, observed the brain activity of children four to seven years old while they worked on computerized exercises intended to mimic the attention–focusing qualities of engaging in art. The researchers concluded that the arts can train children’s attention, which in turn improves cognition.
In another Dana consortium study, Elizabeth Spelke, a neuropsychologist at Harvard University, looked at the effects of music training in children and adolescents and found a “clear benefit”: Children who had intensive music training did better on some geometry tasks and on map reading. Stanford University psychologist Brian Wandell and colleagues used brain–imaging techniques to study how a certain part of the brain might be influenced by musical activities. He found that students ages 7 to 12 who received more musical training in the first year of the study showed greater improvements in reading fluency over the next two years. Wandell reports that phonological awareness—or the ability to distinguish between speech sounds, which is a predictor of early literacy—was correlated with music training and could be tracked with the development of a specific brain pathway.
Overall, the Dana report didn’t go so far as to prove that arts training directly boosts cognitive and academic skills; it offered no concrete evidence that art makes kids smarter. But the project did tighten up the correlations that had been noted before, laying the groundwork for future research into causal explanations. In his introduction to “Learning, Arts, and the Brain,” Gazzaniga frames the report as an important first step. “A life–affirming dimension is opening up in neuroscience,” he writes. “To discover how the performance and appreciation of the arts enlarge cognitive capacities will be a long step forward in learning how better to learn.”
Though Gazzaniga and his Dana Consortium colleagues were quite measured in their assessment, many advocates interpreted the report’s results as support for their cause. “Arts Education Linked to Better Brain Activity,” read a headline on the website of the Arizona Commission on the Arts after the report was released. A California State PTA newsletter directed parents and teachers to the report, telling them to “find out about the strong links between arts education and cognitive development.”
Around the same time in 2008, the advocacy group Americans for the Arts launched a series of public service announcements aimed at encouraging parents to “feed their children the arts” with images of bowls of “Raisin Brahms” or “Van Goghurt” for breakfast, linked to promises that the arts lead to “increased test scores, better creative thinking, patience, and determination.” Even Barack Obama’s presidential platform, which promised a reinvestment in arts education and professed a broad belief in art’s value, fell back, at least partly, on the academic benefits rationale: “Studies show that arts education raises test scores.”
But many arts researchers and advocates have reacted strongly against efforts—in research, among advocacy groups, or in schools—that overemphasize the link between the arts and academic proficiency.
Jessica Hoffmann Davis, a cognitive developmental psychologist and founder of the Arts in Education Program at the Harvard Graduate School of Education, has long been one of these voices. “It is not by arguing that the arts can do what other subjects already do (or do better) that a secure place can be found for the arts in education,” she writes in her recent book, Why Our Schools Need the Arts. “We have been so driven to measure the impact of the arts in education that we began to forget that their strength lies beyond the measurable.”
In an interview, she adds, “No Child Left Behind has sapped the energy and passion out of our classrooms. It’s a malaise. Standardized testing is leaving everyone behind—teachers and kids—with this heavy preoccupation on what we can measure.”
Another leading expert on the arts, Howard Gardner, a professor at the Harvard Graduate School of Education, went so far in an interview as to call it an “American disease” to try to justify the arts in terms of benefits for other disciplines. No one, says Gardner, argues that students should take math because it will make them perform better in music.
Education of vision
So what are the arts good for?
In 2007, Hetland and Winner published a book, Studio Thinking: The Real Benefits of Visual Art Education, that is so far one of the most rigorous studies of what the arts teach. “Before we can make the case for the importance of arts education, we need to find out what the arts actually teach and what art students actually learn,” they write.
Working in high school art classes, they found that arts programs teach a specific set of thinking skills rarely addressed elsewhere in the school curriculum—what they call “studio habits of mind.” One key habit was “learning to engage and persist,” meaning that the arts teach students how to learn from mistakes and press ahead, how to commit and follow through. “Students need to find problems of interest and work with them deeply over sustained periods of time,” write Hetland and Winner.
The researchers also found that the arts help students learn to “envision”—that is, how to think about that which they can’t see. That’s a skill that offers payoffs in other subjects, they note. The ability to envision can help a student generate a hypothesis in science, for instance, or imagine past events in history class.
Other researchers have identified additional benefits that are particular to the arts. In Why Our Schools Need the Arts, Davis outlines many of these benefits, including the quality of empathy. “We need the arts because they remind children that their emotions are equally worthy of respect and expression,” she said in an interview. “The arts introduce children to connectivity, engagement, and allow a sense of identification with, and responsibility for, others.” As a young researcher, Davis once asked adults, children of varying ages, and professional artists to draw emotions such as happiness, sadness, and anger. She found that even very young children could communicate those emotions through drawing. In fact, she observes, “The arts, like no other subject, give children the media and the opportunity to shape and communicate their feelings.”
Elliot Eisner, an emeritus professor of art and education at Stanford University and a longtime leader in the field, has emphasized the subtle but important ways the arts can enhance thinking—the ability to use metaphor, for example, or the role of imagination. “These are outcomes that are useful,” says Eisner, “not only in the arts, but in business and other activities where good thinking is employed.”
At last year’s annual convention for the National Art Education Association, Eisner told the crowd, “In the arts, imagination is a primary virtue. So it should be in the teaching of mathematics, in all of the sciences, in history, and indeed, in virtually all that humans create.”
“To help students treat their work as a work of art is no small achievement,” he added. “Given this conception, we can ask how much time should be devoted to the arts in school? The answer is clear: all of it.”
An “education of vision” is also high on Eisner’s list of benefits. “You want to help youngsters really see a tree or urban landscape or an apple. It’s one of the things they can do the rest of their lives.”
Such elusive, immeasurable benefits of the arts may, in fact, be among the most valuable. “At this time when we are facing the threat of the reduction of learning to testable right and wrong answers,” says Davis, “we might say the most important thing about arts learning is that it features ambiguity and respect for the viability of different perspectives and judgments.”
But perhaps most significantly, Davis argues that the arts can engage children who might not otherwise be reached by academics. In fact, an increasing amount of attention is being focused on the benefits of the arts for at–risk youth.
For instance, when a program called the YouthARTS Development Project, a partnership involving the National Endowment for the Arts and the U.S. Justice Department, engaged at–risk youth in art programs, it found that the participants showed an increased ability to work with others and finish tasks, and showed better attitudes toward school, fewer court referrals, and improved self–esteem.
“Folks are responding to the deficits in schools by saying, ‘Bring in the arts,’” says Davis. “Ironically that’s what we’ve always done with individual kids, always turned to the arts as a kid was about to drop out of school. We have always known that arts will save the day, but now the day is so bleak that we have a national charge to do what arts do best—to provide energy and spirit and excitement and community.”
In San Francisco, artist Josef Norris has seen evidence of this claim first–hand. When he worked with children to create a mural at an inner–city school, the project was integrated into a unit on California history and immigration. Every single child in the class had a parent or grandparent who’d been born in another country, says Norris, and each child made a tile depicting some aspect of his or her family’s history.
“Kids who are struggling academically can get hooked,” he says. “You live for the moments when the kids shine—when a pathologically shy girl shows up for mural making on a Saturday morning and stays all day long. Or when a child paints a tile about his family, then brings his grandmother to the unveiling of the mural and says proudly, ‘I made that.’”
– Karin Evans is the author of The Lost Daughters of China: Adopted Girls, Their Journey to America, and the Search for a Missing Past, just released in a new edition by Tarcher/Penguin Putnam. She recently earned an MFA in poetry. Copyright Greater Good. Greater Good Magazine, based at UC-Berkeley, is a quarterly magazine that highlights ground breaking scientific research into the roots of compassion and altruism.
Related articles by Greater Good Magazine:
|
<urn:uuid:728c794e-c933-4245-9e46-88b77e42f837>
|
CC-MAIN-2013-20
|
http://sharpbrains.com/blog/2009/04/16/arts-and-smarts-test-scores-and-cognitive-development/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.963698
| 4,055
| 2.890625
| 3
|
De Quervain's Disease
De Quervain's disease is swelling and inflammation of the tendons and the tendon sheath on the thumb side of the wrist.
The exact cause of de Quervain's disease is not known. It may occur from injury to the wrist or tendon. Or it may occur as a result of activities that require repeated wrist and thumb movements, such as knitting, wringing clothes, or lifting heavy objects.
Symptoms may include swelling, a grating feeling in the wrist, and pain and weakness along the thumb, wrist, and forearm. Pain increases with activities such as lifting or pouring.
Initial treatment consists of rest, splinting, stretching, and medicines to reduce inflammation. Physical therapy, a steroid injection, or surgery may sometimes be needed.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
© 1995-2012 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
|
<urn:uuid:3a8a4bff-f0dc-491f-95ca-66689352d6f3>
|
CC-MAIN-2013-20
|
http://www.emedicinehealth.com/script/main/art.asp?articlekey=127450&ref=127930
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.915432
| 254
| 2.84375
| 3
|
Preparing Students for STEM Careers
By Angela Traurig and Rich Feller
"The growth paradigm that has driven our economy for the past generation is exhausted" (Palley, 2008, p. B10). Yet the demand for skilled workers in science, technology, engineering, and math (STEM) is closely linked to global competitiveness. How can counselors inspire students to solve problems at the frontiers of alternative energy, climate change, nanotechnology, and space exploration while also promoting STEM careers? That is a key question in career development.
Friedman (2008) suggests that energy technologies (ET) can solve worldwide environmental issues and create the economic stimulus needed to rebuild America. Yet, the lack of gender and ethnic diversity of students entering STEM educational programs and career fields present additional challenges. Using creativity and innovation to address these challenges is critical to providing career development.
What's the Fuss about STEM?
The National Academies (the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine, 2007) noted the rapid erosion of U.S. competitiveness in science and technology, and thus of the U.S. position as a global economic leader. They cautioned that this position may be abruptly lost without a greatly expanded commitment to achieving success in advanced education in science, technology, math, and engineering. The National Science Foundation states:
In the 21st century, scientific and technological innovations have become increasingly important as we face the benefits and challenges of both globalization and a knowledge-based economy. To succeed in this new information-based and highly technological society, all students need to develop their capabilities in science, technology, engineering, and mathematics (STEM) to levels much beyond what was considered acceptable in the past. (p.1)
Not enough young people are being educated in, or inspired to take an interest in, advanced math, science, and engineering. "The education in American junior high schools, in particular, seems to be a black hole that is sapping the interest of young people, particularly young women, when it comes to the sciences" (Friedman, 2005, p. 351).
Technology is pervasive in almost every aspect of daily life, and as the workplace changes, STEM knowledge and skills are important for a variety of workers (not just for mathematicians and scientists) (The Center for Education Policy Analysis, 2008). In addition to STEM knowledge, the ways in which problems are approached and solved in these subjects are increasingly necessary for workers (The Center for Education Policy Analysis, 2008).
"Rising Above the Gathering Storm" (2007), the seminal report about STEM, is of great value to career practitioners and policy makers. It recommends the need to (1) increase America's talent pool by vastly improving K-12 mathematics and science education; (2) sustain and strengthen the nation's commitment to long-term basic research; (3) develop, recruit, and retain top students, scientists, and engineers from both the U.S. and abroad; and (4) ensure that the U.S. is the premier place in the world for innovation. Historically, the U.S. has been a leader in these areas. Now only 15% of U.S. graduates are attaining degrees in the natural sciences and engineering, compared to 50% in China (Freeman, 2008). It is estimated that the U.S. will need 1.75 million more engineers, a 20% increase, by the year 2010 (Gasbarra & Johnson, 2008). Demand for engineers is increasing at three times the rate of other professions (Gasbarra & Johnson, 2008).
Helping under-represented populations pursue STEM careers is an additional challenge. Women, although traditionally under-represented, are in high demand in these fields (Gasbarra & Johnson, 2008). Stereotypes about women's abilities and their role in the family often keep women from pursuing math and science careers. Furthermore, the atmosphere in these male-dominated fields is often challenging, if not inhospitable, to women (Gasbarra & Johnson, 2008).
Hispanics, who are the largest and fastest growing minority group in the United States, are largely under-represented in STEM fields (Gasbarra & Johnson, 2008), and face hurdles in trying to achieve academically. Hispanic students are disproportionately represented in poor, urban schools with lower quality of education and poor bilingual programs (Gasbarra & Johnson, 2008). Poverty, language barriers, and family commitments are often obstacles to success. Because few Hispanic parents have attended college, they may have little familial support for attending college, much less for studying science or engineering. With the growing need for more engineers, American businesses and Hispanic communities could both benefit from more Hispanic students being encouraged and supported in pursuing STEM careers (Gasbarra & Johnson, 2008).
The reasons for limited diversity in the STEM fields are broad and cannot be addressed overnight. However, career practitioners can better encourage and support students, especially those in under-represented populations, to enter high-demand STEM fields.
- Connect students with role models in STEM fields, especially women and ethnic minorities in non-traditional programs and careers. If there are few professionals available in these fields, consider inviting college students working towards STEM degrees.
- Promote STEM in tangible and real-life oriented ways. Connect academic courses with career and technical education (CTE) programs, such as teaching geometry through construction (The Center for Education Policy Analysis, 2008). Students are often motivated to learn if they understand the real world applications of what they are learning.
- Visit http://www.stemcareer.com/, a clearinghouse site for those seeking and promoting STEM careers.
- Promote fun ways to explore STEM interests through Space Camp (http://www.spacecamp.com/), Camp Kennedy's Space Center (http://www.kennedyspacecenter.com/educatorsParents/camp.asp), NASA's Kid's Club (http://www.nasa.gov/audience/forstudents/index.html) and local STEM career fairs within educational settings.
- Explore materials that offer insights about STEM, such as NASA's http://www.nasa.gov/audience/foreducators/index.html and http://education.nasa.gov/edprograms/core/home/index.html, the Gender Chip Project http://www.genderchip.org/, the Sloan Career Cornerstone Center http://www.careercornerstone.org/diversity.htm, and the Real Game 2.0 at http://www.realgameonline.ca/
Freeman, C.W. (2008). China's real three challenges to the U.S. Retrieved October 29, 2008 from http://www.theglobalist.com/StoryId.aspx?StoryId=5770
Friedman, T. (2005). The world is flat. New York: Picador.
Friedman, T. (2008). Hot, flat, and crowded. New York: Farrar, Straus and Giroux.
Gasbarra, P. & Johnson, J. (2008). Out before the game begins: Hispanic leaders talk about what's needed to bring more Hispanic youngsters into science, technology, and math professions. Retrieved September 25, 2008 from http://www.publicagenda.org/files/pdf/outbefore.PDF
National Academies of Science. (2007). Rising above the gathering storm. Report from the Committee on Prospering in the Global Economy of the 21st Century. Washington, DC: National Academies Press.
Palley, T. (2008). America's exhausted growth paradigm. The Chronicle Review, April 11.
Angela Traurig is a graduate student in Counseling and Career Development at Colorado State University in Fort Collins, CO. Previously, she was the Violence Prevention Coordinator for three school districts in southwestern Colorado. email@example.com.
Rich Feller is Professor of Counseling and Career Development at Colorado State University in Fort Collins, CO. Previously, he was a school, career, and admissions counselor. firstname.lastname@example.org
• vatic •
Part of Speech: Adjective
Meaning: Prophetic, oracular, capable of foreseeing and predicting the future.
Notes: Although we use both prophetic and oracular in today's definition, they are not the same. An oracle in ancient Greece and Rome, of course, was a prophecy that came from the gods and was conveyed by someone powerful enough to be in direct contact with the gods. Vatic carries a tinge of this sense. The adjective for this Good Word is vatical and the adverb, vatically. No one seems to have ventured a noun thus far. Today's word is unrelated to Vatican. That word came from Mons Vaticanus "Vatican Hill", named by the Etruscans before the Romans arrived.
In Play: Remember that today's word means "prophetic" with overtones of an infallible oracle: "The company president addressed the board of directors about the future profits of the company in such vatic tones that most members believed him." Of course, you don't have to be an oracle to have vatic powers: "When she shops, mom uses her vatic powers to predict which items will cost more and which will cost less next week."
Word History: English obtained today's Good Word from Latin vates "seer", a word Latin apparently borrowed from a Celtic language. The Celtic language inherited it from a Proto-Indo-European root wet-/wot- "blow, inspire". This same root underlies Wednesday, which was originally Woden's Day, named for the Anglo-Saxon god of wisdom, war, and death, Woden. This root also entered Old English as wod "insane", a word that did not survive the passage of time. It may have lost its W and become the root of Greek atmos "steam", a word that was combined with Latin sphaera to produce atmosphere.
Come visit our website at <http://www.alphadictionary.com> for more Good Words and other language resources!
Presented by the National Geographic Society, and made possible by Lucasfilm Ltd.,
Indiana Jones and the Adventure of Archaeology is an innovative exhibition
that immerses visitors in the science behind field archaeology.
As it leads viewers on a museum adventure inspired by their beloved film hero, Indiana
Jones, the exhibition connects his fictional world to facts that are
true to the science of archaeology – whether in the search for treasure, the
discovery of artefacts, or their analysis and interpretation.
The Indiana Jones brand's humour and adventurous spirit are infused throughout the
exhibit to help engage viewers and stimulate their imaginations. Consequently, the
exhibition holds broad appeal for all audiences: neophytes and experts, families,
film fans, tourists and the general public alike.
Included in the exhibition is a vast and exclusive collection of Indy film props,
models, concept art and set designs on loan from the Lucasfilm Archives.
As viewers learn the factual and historical inspiration behind Indy’s adventures,
they will also be exposed to some of the world’s most impressive material remains
and cultural artefacts from ancient societies. Genuine archaeological artefacts
and educational material will be provided by the University of Pennsylvania Museum
of Archaeology and Anthropology (commonly known as the Penn Museum). The National
Geographic Society, the exhibition’s global presenting partner, will also share
some of its revered artefacts, photos, videos and articles.
In an environment spanning over 1,000 square metres and featuring state-of-the-art
technology, this first-of-its-kind touring museum exhibition transforms the museum
experience into an interactive, multimedia adventure.
Equipped with an intelligent hand-held video guidebook, visitors are immersed into
the legendary world of Indiana Jones TM, as they embark on a quest to
uncover the true origins of archaeological mysteries. Loaded with comprehensive
educational content, photos and videos, the multimedia companion guides visitors
along the Indy Trail (where they’ll learn about the facts behind
the content in the Indy movies) and the various Archaeological Zones
of the exhibition (which establish links between the films and the interpretation of real archaeological practice).
Lucasfilm Ltd. is one of the world's leading film and entertainment companies. Founded
by George Lucas in 1971, it is a privately held, fully integrated entertainment
company. In addition to motion picture and television production, the company’s
global businesses include visual effects, sound, video games, licensing and online
activity. For more information, visit
National Geographic Society
The National Geographic Society is one of the world’s largest non-profit scientific
and educational organizations. Since 1888, National Geographic has shared unforgettable
stories and groundbreaking discoveries with each new generation. National Geographic
supports critical expeditions and scientific fieldwork, advances geography education,
promotes natural and cultural conservation, and inspires audiences through vibrant
exhibits and live events. For more information, visit nationalgeographic.com.
The University of Pennsylvania Museum of Archaeology and Anthropology, through its
research, collections, exhibitions and educational programming, advances understanding
of the world's cultural heritage. Founded in 1887, Penn Museum has conducted more
than 400 archaeological and anthropological expeditions around the world. For more
information, visit penn.museum.
The archaeology program at Laval University, unique for its type in Canada, distinguishes
itself by its placement with the History Department and for its constitutive principle
of interdisciplinarity. The university offers a program involving earth sciences,
biological sciences, and theoretical sciences, as well as the social and human sciences,
all of which constitute the kernel of the interdisciplinarity we aim for. For more
information, visit www.laboarcheologie.ulaval.ca.
A TEAM OF EXPERTS
X3 Productions has gathered a team of world-renowned specialists to ensure the exhibition
presents a factual interpretation of the principles and methodologies of field archaeology.
With their recognized expertise in academic and applied archaeology, Dr. Michel
Fortin and Dr. Fredrik Hiebert have helped to create and develop the exhibition's content.
With a specialty in Near Eastern archaeology, Dr. Michel Fortin is a Full Professor
of Archaeology who has been teaching in the Department of History at Université
Laval in Quebec City for nearly three decades. He has led numerous excavation teams
in the Middle East and is a true ambassador to his profession.
Archaeologist and National Geographic fellow Dr. Fredrik Hiebert is a veritable
field expert who has searched for human history in some of the world’s most remote
and exotic places. An expert on the ancient Silk Road, his discoveries have made
Additionally, X3 Productions is pleased to have on its team Shirley Reiff Howarth,
an expert in travelling exhibitions and exhibition curation, and the director of
the Humanities Exchange in Montreal.
Lucasfilm Ltd., Indiana Jones and all related indicia are trademarks of & ©2010
Lucasfilm Ltd. All Rights Reserved. Used Under Authorization.
Notice that all Y-chromosomes belong to either E-subclades or to J1. I would definitely not equate J1 with the Neolithic in this case; it is more likely to be due to historical movements of Semitic/Arabic populations into Egypt. The complete absence of J2 is noteworthy and resembles Arabian peninsula populations where the J2/J1 ratio reaches its minimum.
American Journal of Physical Anthropology doi:10.1002/ajpa.21078
Near Eastern Neolithic genetic input in a small oasis of the Egyptian Western Desert
Martina Kujanová et al.
The Egyptian Western Desert lies on an important geographic intersection between Africa and Asia. Genetic diversity of this region has been shaped, in part, by climatic changes in the Late Pleistocene and Holocene epochs marked by oscillating humid and arid periods. We present here a whole genome analysis of mitochondrial DNA (mtDNA) and high-resolution molecular analysis of nonrecombining Y-chromosomal (NRY) gene pools of a demographically small but autochthonous population from the Egyptian Western Desert oasis el-Hayez. Notwithstanding signs of expected genetic drift, we still found clear genetic evidence of a strong Near Eastern input that can be dated into the Neolithic. This is revealed by high frequencies and high internal variability of several mtDNA lineages from haplogroup T. The whole genome sequencing strategy and molecular dating allowed us to detect the accumulation of local mtDNA diversity to 5,138 ± 3,633 YBP. Similarly, the Y-chromosome gene pool reveals high frequencies of the Near Eastern J1 and the North African E1b1b1b lineages, both generally known to have expanded within North Africa during the Neolithic. These results provide another piece of evidence of the relatively young population history of North Africa.
Baltimore with a Dusting of Snow
While traveling to the International Space Station aboard the Space
Shuttle Endeavour, astronauts photographed the northeastern United States
blanketed in fresh snow. This image, taken in early December 2002,
shows the city of Baltimore, and the surrounding area. An inset shows
the center of the city (rotated so north points up).
Astronauts routinely track weather phenomena on Earth, and use their
onboard cameras to document their observations. Ground support in the
Earth Observations Lab at the Johnson Space Center also track weather
events world wide as part of the image planning activities, and alert
the crews to significant events such as winter storm systems.
This image originally appeared on the Earth Observatory.
The solar system is known to have four giant planets: Jupiter, Saturn, Neptune and Uranus. Now, astronomers claim to have found evidence suggesting that it once had a fifth giant planet, which was mysteriously knocked out into deep space.
Computer simulations by researchers at the Southwest Research Institute in San Antonio, Texas, showed that it is statistically extremely unlikely that the solar system began with four giants.
By their calculations, it only had a 2.5 per cent chance of reaching its current population and orbital layout with four giants, but was 10 times more likely to have developed to its present state if there was a fifth monster body in the mix, The Daily Mail reported. To reach this conclusion, the researchers ran 6,000 simulations of the solar system's birth.
How to find the slope of f(x) times g(x)? Use the Product Rule.
The slope of f(x)g(x) has two terms:
f(x) times (slope of g(x)) PLUS g(x) times (slope of f(x))
The Quotient Rule gives the slope of f(x) / g(x). That slope is
[[ g(x) times (slope of f(x)) MINUS f(x) times (slope of g(x)) ]] / g squared
These rules plus the CHAIN RULE will take you a long way.
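Both rules can be sanity-checked numerically with a difference quotient. This sketch is not part of the lecture; the sample functions x squared and sin x are arbitrary choices.

```python
import math

def num_deriv(fn, x, h=1e-6):
    # Symmetric difference quotient: a close approximation of the slope at x.
    return (fn(x + h) - fn(x - h)) / (2 * h)

# Sample pair with known derivatives: f(x) = x**2, g(x) = sin x.
f, df = lambda x: x ** 2, lambda x: 2 * x
g, dg = math.sin, math.cos

x = 1.3
# Product Rule: f times (slope of g) PLUS g times (slope of f).
product_rule = f(x) * dg(x) + g(x) * df(x)
# Quotient Rule: [g times (slope of f) MINUS f times (slope of g)] / g squared.
quotient_rule = (g(x) * df(x) - f(x) * dg(x)) / g(x) ** 2

print(abs(product_rule - num_deriv(lambda t: f(t) * g(t), x)) < 1e-6)   # True
print(abs(quotient_rule - num_deriv(lambda t: f(t) / g(t), x)) < 1e-6)  # True
```

The tolerance is generous; the symmetric difference quotient agrees with the rules to many more digits than 1e-6 here.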
Professor Strang's Calculus textbook (1st edition, 1991) is freely available here.
Subtitles are provided through the generous assistance of Jimmy Ren.
Lecture summary and Practice problems (PDF)
PROFESSOR: OK. This video is about derivatives. Two rules for finding new derivatives. If we know the derivative of a function f-- say we've found that-- and we know the derivative of g-- we've found that-- then there are functions that we can build out of those. And two important and straightforward ones are the product, f of x times g of x, and the quotient, the ratio f of x over g of x. So those are the two rules we need.
If we know df dx and we know dg dx, what's the derivative of the product? Well, it is not df dx times dg dx. And let me reduce the suspense by writing down what it is. It's the first one times the derivative of the second, we know that, plus another term, the second one times the derivative of the first. OK. So that's the rule to learn. Two terms, you see the pattern. And maybe I ought to use it, give you some examples, see what it's good for, and also some idea of where it comes from. And then go on to the quotient rule, which is a little messier.
OK. So let me just start by using this in some examples. Right underneath, here. OK. So let me take, as a first example, f of x equals x squared and g of x equals x. So then what is p of x? It's x squared times x. I'm multiplying the functions. So I've got x cubed, and I want to know its derivative. And I know the derivatives of these guys.
OK, so what does the rule tell me? It tells me that the derivative of p, dp dx-- so p is x cubed. So I'm looking for the derivative of x cubed. And if you know that, it's OK. Let's just see it come out here. So the derivative of x cubed, by my formula there, is the first one, x squared, times the derivative of the second, which is 1, plus the second one, x, times the derivative of the first, which is 2x. So what do we get? x squared, two more x squared, 3x squared. The derivative of x cubed is 3x squared. x cubed goes up faster than x squared, and this is a steeper slope.
Oh, let's do x to the fourth. So x to the fourth-- now I'll take f to be x cubed, times x. Because x cubed, I just found. x, its derivative is 1, so I can do the derivative of x fourth the same way. It'll be f. So practicing that formula again with x cubed and x, it's x cubed times 1 plus this guy times the derivative of f. Right? I'm always going back to that formula. So the derivative of f, x cubed, we just found-- 3x squared-- so I'll put it in. And what do we have? x cubed here, three more x cubeds here. That's a total of 4x cubed.
OK. We got another one. Big deal. What is important is-- and it's really what math is about-- is the pattern, which we can probably guess from those two examples and the one we already knew, that the derivative of x squared was 2x. So everybody sees a 2 here and a 3 here and a 4 here, coming from 2, 3, and 4 there. And everybody also sees that the power dropped by one. The derivative of x squared was an x. The derivative of x cubed involved an x squared.
Well, let's express this pattern in algebra. It's looking like the derivative of x to the n-- we hope for any n. We've got it for n equals 2, 3, 4, probably 0 and 1. And if the pattern continues, what do we think? This 4, this n shows up there, and the power drops by 1. So that'll be x to the n minus 1, the same power minus 1, one power below. So that's a highly important formula.
And actually it's important to know it, not-- right now, well, we've done two or three examples. I guess the right way for me to get this for n equals-- so we really could check 1, 2, 3, and so on. All the positive integers. We could complete the proof. We could establish the pattern. Actually, induction would be one way to do it. If we know it for, as we did here, for n equals 3, then we've got it for 4. If we know it for 4, the same product formula would get it for 5 and onwards, and would give us that answer. Good.
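The induction step described above, where knowing the rule for one power gets it for the next via the product rule, can be written out directly. This sketch is not from the lecture, and `power_coeff` is a made-up helper name.

```python
def power_coeff(n):
    # Coefficient c in (x**n)' = c * x**(n-1), built by induction with the
    # product rule: (x**n)' = x**(n-1) * 1 + x * (x**(n-1))'.
    if n == 1:
        return 1                      # base case: (x)' = 1
    return 1 + power_coeff(n - 1)     # one more x**(n-1) joins (n-1) of them

print([power_coeff(n) for n in range(1, 6)])  # [1, 2, 3, 4, 5]
```

The coefficients come out as 1, 2, 3, 4, 5: exactly the pattern n in the formula n times x to the n minus 1.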
Even better is the fact that this formula is also true if n is a fraction. If we're doing the square root of x, you recognize the square root of x is x to the-- what's the exponent there for square root? 1/2. So I would like to know for 1/2. OK, let me take a couple of steps to get to that one.
All right. The steps I'm going to take are going to look just like this, but this was powers of x, and it'll be very handy if I can do powers of f of x. I'd like to know-- I want to find-- So here's what I'm headed for. I'd like to know the derivative of f of x to the n-th power equals what? That's what I'd like to know.
So let me do f of x. Let me do it just as I did before. Take n equals 2, f of x squared. So what's the derivative of f of x squared, like sine squared or whatever we're squaring. Cosine squared. Well, for f of x squared, all I'm doing is I'm taking f to be the same as g. I'll use the product rule. If g and f are the same, then I've got something squared. And my product rule says that the derivative-- and I just copy this rule.
Now I'm taking p is going to be f squared, right? Can I just write f squared equals-- so it's f times-- f is the same as g. Are you with me? I'm just using the rule in a very special case when the two functions are the same. The derivative of f squared is f. What do I have? f times the derivative of f, df dx. That's the first term. And then what's the second term? Notice I wrote f instead of g, because they're the same. And the second term is, again, a copy of that. So I have 2 of these. Times 2, just the way I had a 2 up there. This was the case of x squared. This is the case of f of x squared.
Let me go one more step to f cubed. What am I going to do for f cubed? The derivative of-- hold on. I have to show you what to pay attention to here. To pay attention to is-- the 2 we're familiar with. This would have been the x, that's not a big deal. But there's something new. A df dx factor is coming in. It's going to stay with us. Let me see it here. The derivative of f of x cubed. Now let's practice with this one.
OK. So now what am I going to take? How do I get f of x cubed? Well, I've got f, so I'd better take g to be f squared. Then when I multiply, I've got cubed. So g is now going to be f squared for this case. Can I take my product rule with f times f squared? My product rule of f times f squared is-- I'm doing this now with g equals f squared, just the way I did it over there at some point with one of them as a square. OK. I'm near the end of this calculation.
OK. So what do I have. If this thing is cubed, I have f times f squared. That's f cubed. And I take its derivative by the rule. So I take f times the derivative of f squared, which I just figured out as 2f df dx. That's the f dg dx. And now I have g, which is f squared, times df dx.
What are you seeing there? You're seeing-- well, again, these combine. That's what's nice about this example. Here I have one f squared df dx, and here I have two more. That's, all together, three. So the total was 3 times f squared times df dx. And let me write down what that pattern is saying. Here it will be n. Because here it was a 2. Here it's going to be 2 plus 1-- that's 3. And now if I have the n-th power, I'm expecting an n times the next lower power of f, f to the n minus 1, times what? Times this guy that's hanging around, df dx. That's my-- you could call that the power rule. The derivative of a power. This would be the power rule for just x to the n-th, and this is the derivative of a function of x to the n-th.
There's something special here that we're going to see more of. This will be, also, an example of what's coming as maybe the most important rule, the chain rule. And typical of it is that when I take this derivative, I follow that same pattern-- n, this thing, to one lower power, but then the derivative of what's inside. Can I use those words? Because I'll use it again for the chain rule. n times one lower power, times the derivative of what's inside.
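The pattern n times f to the n minus 1, times the derivative of what's inside, can also be checked numerically. This is a sketch, not from the lecture; sin is just a convenient function to cube.

```python
import math

def num_deriv(fn, x, h=1e-6):
    # Symmetric difference quotient approximating the slope at x.
    return (fn(x + h) - fn(x - h)) / (2 * h)

# d/dx [f(x)]**n = n * f(x)**(n-1) * df/dx, here with f = sin and n = 3.
n, x = 3, 0.7
rule = n * math.sin(x) ** (n - 1) * math.cos(x)  # cos is "the derivative of what's inside"
numeric = num_deriv(lambda t: math.sin(t) ** n, x)
print(abs(rule - numeric) < 1e-6)  # True
```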
And why do I want to do such a thing? Because I'd like to find out the derivative of the square root of x. OK. Can we do that? I want to use this, now. So I want to use this to find the derivative of the square root of x. OK. So that will be my function. f of x will be the square root of x. So this is a good example. That's x to the 1/2 power. What would I love to have happen? I would like this formula to continue with n equals 1/2, but no change in the formula. And that does happen.
How can I do that? OK, well, square root of x is what I'm tackling. The easy thing would be, if I square that, I'll get x, right? The square of the square root. Well, square root of x squared-- so there's f of x. I'm just going to use the fact that the square root of x squared is x. Such is mathematics. You can write down really straightforward ideas, but it had to come from somewhere.
And now what am I going to do? I'm going to take the derivative. Well, the derivative on the right side is a 1. The derivative of x is 1. What is the derivative of that left-hand side? Well, that fits my pattern. You see, here is my f of x, squared. And I had a little formula for the derivative of f of x squared. So the derivative of this is 2 times the thing to one lower power-- square root of x just to the first power-- times the derivative of what's inside, if you allow me to use those words. It's this, df dx. And that's of course what I actually wanted, the square root of x, dx.
This lecture is not going to have too many more calculations, but this is a good one to see. That's clear. I take the derivative of both sides. That's clear. This is the 2 square root of x. And now I've got what I want, as soon as I move these over to the other side. So I divide by that. Can I now just do that with an eraser, or maybe just X it out, and put it here. 1 over 2 square root of x. Am I seeing what I want for the derivative of square root of x? I hope so. I'm certainly seeing the 1/2. So the 1/2-- that's the n. It's supposed to show up here. And then what do I look for here? One lower power than 1/2, which will be x to the minus 1/2.
And is that what I have? Yes. You see the 1/2. And that square root of x, that's x to the 1/2, but it's down in the denominator. And things in the denominator-- the exponent for those, there's a minus sign. We'll come back to that. That's a crucial fact, going back to algebra. But, you know, calculus is now using all that-- I won't say stuff. All those good things that we learned in algebra, like exponents. So that was a good example.
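The square-root result can be confirmed the same way. A sketch, not part of the lecture:

```python
import math

def num_deriv(fn, x, h=1e-6):
    # Symmetric difference quotient approximating the slope at x.
    return (fn(x + h) - fn(x - h)) / (2 * h)

# d/dx sqrt(x) = (1/2) * x**(-1/2) = 1 / (2 * sqrt(x)).
x = 4.0
formula = 1 / (2 * math.sqrt(x))   # 0.25 at x = 4
print(abs(formula - num_deriv(math.sqrt, x)) < 1e-8)  # True
```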
OK. So my pattern held for n equals 1/2. And maybe I'll just say that it also would hold for cube roots, and any root, and other powers. In other words, I get this formula. This is the handy formula that we're trying to get. We got it very directly for positive whole numbers. Now I'm getting it for n equals 1 over any-- now I'm getting it for capital Nth roots, like 1/2. Then I could go on to get it for-- I could take then the n-th power of the n-th root. I could even stretch this to get up to m over n. Any fraction, I can get to. But I can't get to negative exponents yet, because those are divisions. Negative exponent is a division, and I'm going to need the quotient rule, which is right now still a big blank.
OK. Pause for a moment. We've used the product rule. I haven't explained it, though. Let me, so, explain the product rule. Where did it come from? I'm going back before the examples, and before that board full of chalk, back to that formula and just think, where did it come from? How did we find the derivative of f times g, of the product p? So we needed delta p, right? And then I'm going to divide by delta x. OK. So let me try to make-- what's the delta p when p is-- remember, p is f times g.
Thinking about f times g, maybe let's make it visual. Let's make it like a rectangle, where this side is f of x and this side is g of x. Then this area is f times g, right? The area of a rectangle. And that's our p. OK, that's sitting there at x. Now move it a little. Move x a little bit. Move x a little and figure out, how much does p change? That's our goal. We need the change in p.
If I move x by a little bit, then f changes a little, by a little amount, delta f, right? And g changes a little, by a little amount, delta g. You remember those deltas? So it's the change in f. There's a delta x in here. x is the starting point. It's the thing we move a little. When we move x a little, by delta x, f will move a little, g will move a little, and their product will move a little. And now, can you see, in the picture, where is the product? Well, this is where f moved to. This is where g moved to. The product is this, that bigger area.
So where is delta p? Where is the change between the bigger area and the smaller area? It's this. I have to figure out, what's that new area? The delta p is in here. OK, can you see what that area-- well, look, here's the way to do it. Cut it up into little three pieces. Because now they're little rectangles, and we know the area of rectangles. Right?
So help me out here. What is the area of that rectangle? Well, its base is f, and its height is delta g. So that is f times delta g. What about this one? That has height g and base delta f. So here I'm seeing a g times delta f, for that area. And what about this little corner piece? Well, its height is just delta g, its width is delta f. This is delta g times delta f. And it's going to disappear. This is like a perfect place to recognize that an expression-- that's sort of like second order. Let me use words without trying to pin them down perfectly.
Here is a zero-order, an f, a real number, times a small delta g. So that's first order. That's going to show up-- you'll see it disappear. These three pieces, remember, were the delta p. So what have I got here? I've got this piece, f delta g, and I'm always dividing by delta x. And then I have this piece, which is the g times the delta f, and I divide by the delta x. And then this piece that I'm claiming I don't have to worry much about, because I divide that by delta x. So that was the third piece.
This is it, now. The picture has led to the algebra, the formula for delta p, the change in the product divided by delta x. That's what calculus says-- OK, look at that, and then take the tricky step, the calculus step, which is let delta x get smaller and smaller and smaller, approaching 0. So what do those three terms do as delta x gets smaller?
Well, all the deltas get smaller. So what happens to this term as delta x goes to 0? As the change in x is just tiny, tiny, tiny? That term is the one that gives the delta g over delta x, in the limit when delta x goes to 0, is that one, right? And this guy is giving my g. That ratio is familiar, df dx. You see, the cool thing about splitting it into these pieces was that we got this piece by itself, which was just the f delta g. And we know what that does. It goes here. And this piece-- we know what that does.
And now, what about this dumb piece? Well, as delta x goes to 0, this would go to df dx, all right. But what would delta g do? It'll go to 0. You see, we have two little things divided by only one little thing. This ratio is sensible, it gives df dx, but this ratio is going to 0. So forget it. And now the two pieces that we have are the two pieces of the product rule. OK. Product rule sort of visually makes sense.
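The claim that the corner piece, delta f times delta g, can be forgotten is easy to see numerically: even after dividing by delta x, it still shrinks to zero. This sketch is not from the lecture; f = x squared and g = sin x are arbitrary sample functions.

```python
import math

f = lambda x: x ** 2
g = math.sin

x = 1.0
corners = []
for dx in (1e-1, 1e-3, 1e-5):
    delta_f = f(x + dx) - f(x)
    delta_g = g(x + dx) - g(x)
    # The corner term of delta p, divided by delta x as in the argument above.
    corners.append(abs(delta_f * delta_g / dx))

# Two small factors divided by only one: the ratio itself shrinks toward zero.
print(corners[0] > corners[1] > corners[2])  # True
```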
OK. I'm ready to go to the quotient rule. OK, so how am I going to deal, now, with a ratio of f divided by g? OK. Let's put that on a fourth board. How to deal then with the ratio of f over g.
Well, what I know is the product rule, right? So let me multiply both sides by g of x and get a product. There, that looks better. Of course the part that I don't know is in here, but just fire away. Take the derivative of both sides. OK. The derivative of the left side is df dx, of course. Now I can use the product rule. It's g of x, dq dx. That's the very, very thing I'm wanting. dq dx-- that's my big empty space. That's going to be the quotient rule.
And then the second one is q of x times dg dx. That's the product rule applied to this. Now I have it. I've got dq dx. Well, I've got to get it by itself. I want to get dq dx by itself. So I'm going to move this part over there. Let me, even, multiply both sides-- this q, of course, I recognize as f times g. This is f of x times g of x. That's what q was. Now I'm going to-- oh, was not. It was f of x over g of x. Good Lord. You would never have allowed me to go on.
OK. Good. This came from the product rule, and now my final job is just to isolate dq dx and see what I've got. What I'll have will be the quotient rule. One good way is if I multiply both sides by g. So I multiply everything by g, so here's a g, df dx. And now this guy I'm going to bring over to the other side. When I multiply that by g, that just knocks that out. When I bring it over, it comes over with a minus sign, f dg dx. And this one got multiplied by g, so right now I'm looking at g squared, dq dx. The guy I want.
Again, just algebra. Moving stuff from one side to the other produced the minus sign. Multiplying by g, you see what happened. So what do I now finally do? I'm ready to write this formula in. I've got it there. I've got dq dx, just as soon as I divide both sides by g squared. So let me write that left-hand side. g df dx minus f dg dx, and I have to divide everything-- this g squared has got to come down here. It's a little bit messier formula but you get used to it. g squared. That's the quotient rule.
Can I say it in words? Because I actually say those words to myself every time I use it. So here are the words I say, because that's a kind of messy-looking expression. But if you just think about words-- so for me, remember we're dealing with f over g. f is the top, g at the bottom. So I say to myself, the bottom times the derivative of the top minus the top times the derivative of the bottom, divided by the bottom squared. That wasn't brilliant, but anyway, I remember it that way.
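The words version can be checked with a small numerical sketch (not part of the lecture); f and g below are arbitrary illustrative choices, with g nonzero at the evaluation point.

```python
# Numerical sanity check of the quotient rule:
#   d/dx (f/g) = (g*f' - f*g') / g**2
# "Bottom times derivative of the top minus top times derivative of
# the bottom, divided by the bottom squared."

def f(x):  return x**2 + 1
def df(x): return 2*x

def g(x):  return x**3 + 2
def dg(x): return 3*x**2

def quotient_rule(x):
    return (g(x)*df(x) - f(x)*dg(x)) / g(x)**2

def finite_difference(x, h=1e-6):
    q = lambda t: f(t) / g(t)
    return (q(x + h) - q(x - h)) / (2*h)

x = 1.7
print(abs(quotient_rule(x) - finite_difference(x)) < 1e-6)  # True
```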
OK. so now, finally, I'm ready to go further with this pattern. I still like that pattern. We've got the quotient rule, so the two rules are now set, and I want to do one last example before stopping. And that example is going to be a quotient, of course. And it might as well be a negative power of x. So now my example-- last example for today-- my quotient is going to be 1. The f of x will be 1 and the g of x-- so this is my f. This is my g. I have a ratio of two things.
And as I've said, this is x to the minus n. Right? That's what we mean. We can think again about exponents. A negative exponent becomes positive when it's in the denominator. And we want it in the denominator so we can use this crazy quotient rule.
All right. So let me think through the quotient rule. So the derivative of this ratio, which is x to the minus n-- that's the q, 1 over x to the n. The derivative is-- OK, ready for the quotient rule? Bottom times the derivative of the top-- ah, but the top's just a constant, so its derivative is 0-- minus-- remembering that minus-- the top times the derivative of the bottom.
Ha. Now we have a chance to use our pattern with a plus exponent. The derivative of the bottom is nx to the n minus 1. So it's two terms, again, but with a minus sign. And then the other thing I must remember is, divide by g squared-- x to the n, squared.
OK. That's it. Of course, I'm going to simplify it, and then I'm done. So this is 0. Gone. This is minus n, which I like. I like to see minus n come down. That's my pattern, that this exponent should come down. Minus n, and then I want to see-- oh, what else do I have here? What's the power of x? Well, here I have an x to the n-th. And here I have, twice, so can I cancel this one and just keep this one?
So I still have an x to the minus 1. I don't let him go. Actually the pattern's here. The answer is minus n-- minus capital N, which was the exponent-- times x to one smaller power. This is x to the minus n, and then there's another x to the minus 1. The final result was that the derivative is minus n, x to the minus n minus 1. And that's the good pattern that matches here. When little n matches minus big N, that pattern is the same as that. So we now have the derivatives of powers of x as an example from the quotient rule and the product rule.
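The negative-power pattern, the derivative of x to the minus n being minus n times x to the minus n minus 1, can likewise be checked against a finite difference (a sketch; the values of n and x are arbitrary):

```python
# Check the pattern d/dx x**(-n) = -n * x**(-n-1) against a centered
# finite difference. The values of n and x are arbitrary choices.

def deriv_formula(x, n):
    return -n * x**(-n - 1)

def deriv_numeric(x, n, h=1e-6):
    p = lambda t: t**(-n)
    return (p(x + h) - p(x - h)) / (2*h)

for n in (1, 2, 5):
    assert abs(deriv_formula(2.0, n) - deriv_numeric(2.0, n)) < 1e-6
print("pattern holds for n = 1, 2, 5")
```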
Well, I just have to say one thing. We've got fractions, we've got negative numbers, but we don't have a whole lot of other numbers, like pi. We don't know what is, for example, the derivative of x to the pi. Because pi is positive, so we're OK there, but it's not a fraction and we haven't got it yet. What do you think it is? You're right-- it is pi x to the pi minus 1. Well, actually I never met x to the pi in my life, until just there, but I've certainly met all kinds of powers of x and this is just one more example.
OK. So that's quotient rule-- first came product rule, power rule, and then quotient rule, leading to this calculation. Now, the quotient rule I can use for other things, like sine x over cosine x. We're far along, and one more big rule will be the chain rule. OK, that's for another time. Thank you.
[NARRATOR:] This has been a production of MIT OpenCourseWare and Gilbert Strang. Funding for this video was provided by the Lord Foundation. To help OCW continue to provide free and open access to MIT courses, please make a donation at ocw.mit.edu/donate.
A new atlas and catalog of the entire infrared sky with more than a half-billion stars, galaxies and other objects captured by NASA's Wide-field Infrared Survey Explorer (WISE) mission was unveiled by NASA Wednesday.
"Today WISE delivers the fruit of 14 years of effort to the astronomical community," said Edward L. (Ned) Wright, a UCLA professor of physics and astronomy and the mission's principal investigator, who began working on the mission in 1998.
A 10-foot unmanned satellite weighing 1,400 pounds, WISE was launched into space on Dec. 14, 2009, and mapped the sky in 2010. Like a powerful set of night-vision goggles, WISE surveyed the cosmos with infrared detectors about 300 times more sensitive than those used in previous survey missions, said Wright, who holds UCLA's David Saxon Presidential Chair in Physics. WISE collected 15 trillion bytes of data and more than 2.7 million images taken at four infrared wavelengths of light — invisible to the unaided human eye — capturing everything from nearby asteroids to distant galaxies.
The individual WISE exposures have been combined into an atlas of more than 18,000 images and a catalog listing the infrared properties of more than 560 million objects found in the images. Most of the objects are stars and galaxies, with roughly equal numbers of each; many of them have never been seen before.
WISE observations have already led to many discoveries, including elusive failed stars, or Y-dwarfs. Astronomers had been hunting for Y-dwarfs for more than a decade. Because they have been cooling since their formation, they do not shine in visible light and could not be spotted until WISE mapped the sky with its infrared vision. WISE has also found that there are significantly fewer mid-size near-Earth asteroids than astronomers had previously feared. With this data, now more than 90 percent of the largest of the asteroids have been identified.
One image released today (see the online version of this news release) shows a surprising view of an "echo" of infrared light surrounding an exploded star. The echo was etched in the clouds when a flash of light from the supernova explosion heated surrounding clouds. More discoveries are expected now that astronomers have access to the WISE images.
In another image (also posted with the online version of this news release), moving objects such as asteroids and comets were removed, but residuals of the planets Saturn, Jupiter, and Mars are visible in this image as bright red spots off the plane of the galaxy at the 1 o'clock, 2 o'clock and 7 o'clock positions, respectively.
"With the release of the all-sky catalog and atlas, WISE joins the pantheon of great sky surveys that have led to so many remarkable discoveries about the universe," said Roc Cutri, who leads the WISE data processing and archiving effort at the Infrared Processing and Analysis Center at the California Institute of Technology.
The entire collection of WISE images released so far can be seen at http://wise.ssl.berkeley.edu/gallery_images.html.
An introduction and quick guide to accessing the WISE all-sky archive for astronomers is online at http://wise2.ipac.caltech.edu/docs/release/allsky/.
Instructions for technically-minded people who want to explore the archive are at http://wise.ssl.berkeley.edu/wise_image_service.html.
The Jet Propulsion Laboratory (JPL) manages and operates the Wide-field Infrared Survey Explorer for NASA's Science Mission Directorate, Washington, D.C. The mission was competitively selected under NASA's Explorers Program managed by the Goddard Space Flight Center, Greenbelt, Maryland. The science instrument was built by the Space Dynamics Laboratory, Logan, Utah, and the spacecraft was built by Ball Aerospace & Technologies Corp., Boulder, Colo. Science operations and data processing and archiving take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena. Caltech manages JPL for NASA.
More information is online at http://www.nasa.gov/wise.
UCLA is California's largest university, with an enrollment of more than 38,000 undergraduate and graduate students. The UCLA College of Letters and Science and the university's 11 professional schools feature renowned faculty and offer 328 degree programs and majors. UCLA is a national and international leader in the breadth and quality of its academic, research, health care, cultural, continuing education and athletic programs. Six alumni and five faculty have been awarded the Nobel Prize.
What's black and white--and colored--all over? A lot more than you may realize. If you went to see "Weird Science" last summer, you saw a segment from the original 1931 black and white "Frankenstein"--except it wasn't quite the original. It was in color. If you watched the network television miniseries "Kane and Abel," you saw some old stock footage, shot in black and white, of an ocean liner cruise--except you saw it in color.
When NBC started airing Alfred Hitchcock in color for the introductions to its "Alfred Hitchcock Presents" series, there was much hoopla. But this was not the first example of black and white turned color. In fact, back in '78, NBC's "King" docudrama about Martin Luther King Jr. included eight minutes of colorized black and white footage of the Washington peace march.
These examples are merely a prelude to the vast number of productions we will be seeing in color in the next couple of years. There are about 16,000 movies, or, as one company put it, 2,790,000 minutes of black and white movies and TV shows, available to be colorized. And the prospect has many movie buffs up in arms.
Two companies--Color Systems Technology Inc. (CST) and Colorization Inc.--are behind virtually all the colored black and white productions we are seeing. Using similar, though not identical, technologies, these companies have taken advantage of one of video's special qualities--that it can be manipulated electronically. (Film, by contrast, is a chemical medium, so it can be altered only chemically, with irreversible results.)
Colorization Inc., founded by Wilson Markle and owned by Hal Roach Studios (HRS), is steadily working its way through the Hal Roach library, coloring the 1937 production of "Topper" with Cary Grant and Constance Bennett and the 1936 Laurel and Hardy movie "Way Out West." The company also has colored the public domain classic, "It's a Wonderful Life," with Jimmy Stewart. Colorization Inc. uses a system developed by Markle in the early '70s. Image Transform, Markle's company at the time, colored all the pictures from the Apollo space program to make a full-color television presentation for NASA. Since then, Markle has refined the system, making it easy to color full-length feature movies.
Markle and his partner, Brian Holmes, transfer the original film to video. Then, using the Dubner graphics computer (which is able to track color data at real time), along with microcomputer workstations and custom software, they pick key frames at the beginning of each scene of the movie. An art director determines generally what colors should be used for the overall look and technicians then color each part of the picture. The computer takes this general information and assigns a color to each pixel of the picture. The computer tracks the movement of each pixel on the screen and moves the colors accordingly. There is only 4 percent change from one frame to the next, so the computer colors in 96 percent of the next frame based on the information in the previous frame and the technician fixes the remaining 4 percent.
CST's process is somewhat different. Developed by electronics engineer Ralph Weinger in his garage in Philadelphia in the early '70s, the CST process is more hardware (rather than software) dependent than Colorization's. Rather than color in the picture, the technician, or "colorist," takes one segment of the frame--someone's dress, for example--and chooses a range of colors for the area. A computer then produces a gray scale for the area and assigns a particular color to each shade of gray in the area. This way, if the dress was yellow, the gray scale might include a brownish yellow for the darker, pleated areas all the way to a bright yellow where the light falls directly on the dress. The technician does this for each part of the first and last frames of a scene. Then the computer interpolates all the in-between frames. If it runs into a gray for which no color has been assigned (a new character enters the scene, for example), that area remains gray and the technician must go back and color that spot.
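The gray-scale-to-color idea described for CST's process can be sketched in a few lines. This is purely an illustrative sketch, not CST's actual system; the region mask, the specific color values, and the 8-bit gray levels are all assumptions.

```python
# Illustrative sketch of gray-scale-to-color assignment: within a masked
# region, each gray level is looked up in a table that interpolates
# between a chosen dark color and a chosen bright color.

def build_lookup(dark_rgb, bright_rgb, levels=256):
    """Map each gray level 0..levels-1 to a color on the dark->bright ramp."""
    table = []
    for gray in range(levels):
        t = gray / (levels - 1)
        table.append(tuple(
            round(d + t * (b - d)) for d, b in zip(dark_rgb, bright_rgb)
        ))
    return table

# A "yellow dress" region: dark pleats get a brownish yellow,
# highlights a bright yellow (values chosen arbitrarily).
lut = build_lookup(dark_rgb=(92, 72, 16), bright_rgb=(255, 230, 70))

def colorize_region(gray_pixels, lut):
    """Replace each gray value in the region with its assigned color.
    (In the real process, grays with no assigned color would stay gray
    and be flagged for the colorist; this table happens to be complete.)"""
    return [lut[g] for g in gray_pixels]

print(colorize_region([0, 128, 255], lut))
```

A real system would also track the region mask from frame to frame and interpolate it, which is the part the article credits to the interpolating computer.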
For a few days since 13 Aug. 2011, observers in Germany have noticed colourful twilight phenomena such as intense crepuscular rays, reminiscent of the volcanic twilights that followed the Kasatochi and Sarychev eruptions. Indeed, an aerosol layer is presently detectable across the entire northern hemisphere. At the moment, measurements from the Meteorological Observatory Hohenpeissenberg (Germany), Evora (Portugal), Mauna Loa (Hawaii), Ukraine and Russia all record this layer at heights between 12 and 19 km.
Most probably, these volcanic aerosols can be traced back to the Nabro Volcano in Eritrea. Despite having undergone no historically reported eruptions, the Nabro Volcano erupted shortly after local midnight on 13 June 2011, after a series of earthquakes ranging up to magnitude 5.7 in the Eritrea-Ethiopia border region. Its ash plume was observed on satellite images and drifted to the west-northwest along the said border, spanning a width of about 50 km and extending for several hundred kilometers westward in the immediate hours following the onset of the eruption, while reportedly reaching a ceiling near 15 km of altitude. The ash cloud also disrupted air traffic, as United Arab Emirates based flights were cancelled along with Saudi Arabian Airlines flights. Egypt’s Luxor International Airport was placed in a state of emergency for a while.
This aerosol layer seems to have been present since 15 July 2011 as shown by the Lidar measurements from Hohenpeissenberg.
More pictures and plots of the measurements are summarized here (PDF download):
Link to the NASA-Website with further measurements.
Support for this documentation on behalf of the Meteorological Observatory Hohenpeissenberg is gratefully acknowledged.
Author: Claudia Hinz, Brannenburg, Germany
After the twilights had returned to normal over the past three weeks, during which hardly any volcanic aerosols from the Sarychev volcano had been measured, I was very astonished to see an intense purple light with crepuscular rays about half an hour before sunrise (sun elevation at -6°) on the morning of November 17. The crepuscular rays crossed the whole sky near the horizon, converging at the antisolar point.
Of course I immediately asked my colleagues from the Hohenpeissenberg observatory about the phenomenon. And I got a very surprising answer:
At that moment there were two different layers of dust from the Sahara desert above us, a lower one at an altitude of about 8.5 kms with dust from the western parts of the Sahara, and a higher one at about 11 kms, which contained dust from the eastern part of the Sahara. There were two different currents of air at higher levels which overlapped each other above the Alps.
It is new for me to learn that such twilights are also possible in desert dust, just as this dust up to now only caused a kind of certain dimness in the air. But at that moment there was no desert dust directly above us; I only looked into the layers of dust.
However, there was an extra bonus on the next morning. Unfortunately could only watch it from the valley:
Author: Claudia Hinz, Brannenburg, Germany
On June 12, 2009,one of the most active volcanoes of the Kuril islands near Kamtschatka, which is situated near the northwestern end of the island of Matua, Sarychev Peak, erupted.
A NASA picture taken from the ISS gives an impressive view of the eruption. Ash was ejected up to 20 km into the atmosphere. Only a few hours after the eruption, the volcano's sulfur dioxide cloud covered an area 2,407 km wide and 926 km long above the island. During the following weeks, the aerosols spread over the whole northern hemisphere.
Since the end of June, unusual twilights have also been observed in Central Europe. The latest lidar measurement from the Hohenpeissenberg observatory in Bavaria shows three aerosol layers at altitudes of 15, 18 and 22 km, in comparison with the eruption of Mt. Pinatubo. It is very interesting that the layers at 15 and 18 km arrived with westerly winds passing over Alaska, Canada and the Atlantic Ocean, while the layer at 22 km was transported to us by stratospheric easterly winds passing over Asia (Russia/China). So the volcanic aerosols have travelled around half of the planet in two different directions (the lower layers eastward and the upper one westward), meeting again here over Europe. I think this is worth mentioning.
On July 4, Peter Krämer observed the characteristic crepuscular rays (picture above). On July 13, Reinhard Nitze photographed the most spectacular volcanic twilight in Barsinghausen near Hanover (Fig. 3). In his picture, the high aerosol clouds can easily be recognized. These clouds still receive sunlight while normal cirrus clouds are already within the shadow of the earth.
During the past few days, there were also noctilucent clouds visible, which passed over to the reddish aerosol clouds in lower layers. There should be unusual twilights visible also during the following weeks.
Posted by Claudia Hinz
On August 7, the Kasatochi Volcano, situated in the Aleutian Islands near Alaska, erupted. Clouds of ash and sulfur dioxide were ejected up to 15 km into the stratosphere.
During the following 3 weeks, the volcanic clouds spread over the whole northern hemisphere, causing widespread intense twilight colours and often also crepuscular rays. These were first reported from Northern America during mid August, but at the end of the month, these “volcanic twilights” were also observed in Europe.
In the evening of August 29, several observers reported a strange and intense yellow light around sunset, followed by a purple light. Some of them were reminded of the unusual twilights between February 17 and 20, which were caused by PSC.
On August 30, skies were clear over Germany, and so many observers could see silvery cloud stripes a few minutes before sunset. These stripes were oriented north-south and at first glance looked like cirrus or cirrostratus clouds. But during the day these clouds had not been visible at all, and on careful inspection one could see that they were higher than normal high clouds. The contrails of some airplanes were obviously below these clouds, and as the contrails turned reddish in the light of the setting sun, the clouds still remained bright. So they must have been floating higher up in the air, somewhere in the stratosphere.
After sunset, the clouds got a more brownish-yellow hue, but turned pink only about 20 minutes after sunset. Some observers also reported intense crepuscular rays. The purple light faded about half an hour after sunset.
In the morning of August 31, the colours and cloud stripes could also been observed. In the evening, a cold front with thunderstorms approached the western parts of Germany. While even the tops of the cumulonimbus clouds were already dark, the stratospheric clouds still lay in plain sunlight. That evening, instead of the regular stripes of the day before, they looked more like irregular waves.
During the first days of September, the strange twilight colours could still be observed over southern Germany, while for the rest of the country morning and evening skies looked quite normal again.
But as there are still volcanic ashes in the stratosphere, the colours may return. So keep watching the skies before sunrise and after sunset.
Author: Peter Krämer, Bochum, Germany
Between February 17 and February 20, 2008, large parts of Western Europe witnessed a series of unusually bright morning and evening twilights.
A few minutes after a quite normal sunset, the western skies began to burn in a strange yellow light which was bright enough to illuminate the landscape, giving a quite unreal touch to houses and trees.
Some minutes later, the yellow light in the west became surrounded by a brownish rim, turning into purple within some minutes. The yellow part of the sky slowly shrank towards the horizon, turning into orange and later into red and crimson. Some observers also reported of a dark, brownish-red light in the east which surrounded the whole horizon reaching up to 10° high.
The strange lights and colours in the sky were visible for up to about one hour after sunset. A similar “light-show” also appeared in the morning, starting with a crimson light over the eastern horizon and ending with the bright yellow light short before sunrise. The yellow illumination of the landscape could even be perceived through layers of low clouds (stratus) in some areas.
These in some cases weird-looking twilights were probably caused by an outbreak of polar stratospheric clouds (PSC). These form at temperatures below -78°C in the stratosphere, at an altitude of about 20 – 25 km above the ground.
Soundings made at several stations showed that temperatures in the stratosphere really were unusually low over western Europe; up to -87°C (De Bilt) were measured, the lowest since measurements began in the 1980s. This makes the formation of PSC over a large area possible. Some photographs also show faint structures in the light, giving hints that they actually were caused by PSC.
Polar stratospheric clouds have never before been observed so far south. Normally, they can only be seen from Scandinavia, Canada and Alaska. Only in 1999 there was a confirmed observation of PSC from northern Germany.
Authors: Peter Krämer, Bochum & Claudia Hinz, Brannenburg
A Reference Resource
Richard Rush (1817): Secretary of State
Richard Rush was born in Philadelphia, Pennsylvania, on August 29, 1780, and graduated from Princeton University. Rush began his political career as attorney general for the state of Pennsylvania in 1811, a post earned largely through public recognition of his sustained excellence as a practicing lawyer (1800-1811).
He soon moved into the employ of President James Madison, becoming comptroller of the treasury in 1811, and, with the onset of the War of 1812, the administration's speaker on war policy. By 1814, he had become attorney general of the United States, serving in that capacity until 1817. With the arrival of James Monroe's presidential administration in 1817, Rush accepted the position of interim secretary of state; he served for a year. He gained additional cabinet experience from 1825 to 1829 as secretary of the treasury during the presidency of John Quincy Adams.
Rush served abroad for several years as well, acting as U.S. minister to Great Britain (1817-1825), and President James K. Polk's minister to France (1847-1849). Beyond his political involvement, Rush was also largely responsible for the construction of the Smithsonian Institution in Washington, D.C. He died in Philadelphia on July 30, 1859.
An auction in a rural community is a complex social, economic and even political event. It is also an emotional event. A farm auction usually means that the farmer is leaving — either by choice or because he or she can no longer make it financially. Neighbors gather to look through and bid on household items and equipment. In one moment, they're looking for bargains. In another moment, they're celebrating the life of their neighbor. They catch up on community news. They eat together. They bid. They usually buy something. But when the auction has been forced by a foreclosure, it can become a political event. Activists may try to stop it or, at least, make a point.
During the 1930s — one of the other times of major stress on the farm — activists in Nebraska came up with a way to try and halt foreclosures. A bank would announce that they had to foreclose on a farmer who couldn't pay his or her loan. The sheriff would serve the papers and an auction would be scheduled to sell off the land and equipment that had been pledged against the loan. The bank would hope the proceeds from the sale would amount to most of the money they were losing on the loan.
But many farm activists felt that the bankers were being too greedy, and that the farmers deserved a break in tough times. So, hundreds of farmers would show up at the auction and bid ridiculously low amounts for the equipment and land on the sale. Serious bidders were discouraged, sometimes by the threat of violence. Then, the activists would turn around and give the material back to the farm family who were in trouble. The proceeds of the first of these sales were $5.35 for equipment that should have brought hundreds or thousands of dollars. In the 30s, these were known as "Penny Auctions."
In the 1980s, they were known as "Nickel Auctions."
In 1984, farmers from Nebraska and surrounding states stopped a farm equipment bankruptcy auction in West Point, Nebraska. Reuben Leimer owed almost $1.8 million on his operation. He had been trying to find new financing and had even filed for bankruptcy protection. A trustee had been appointed by the bankruptcy judge, and the trustee had been gradually selling off assets to pay some of the creditors. Leimer was desperate. He tried to withdraw his filing for bankruptcy. He filed legal "liens," or claims, against the property being sold by the trustee. Such common-law liens are a tactic advocated by the National Agricultural Press Association and other farm organizations. He filed suit against the judge and filed "patents" against his land, trying to cloud the true ownership of the land. Leimer, his wife, and 12 children were evicted from the farm in October 1984. The court ordered a foreclosure, and the auction was scheduled for December 1984.
On the day of the sale, Leimer's supporters crowded the auction site. Many wore black armbands in memory of Cairo farmer Arthur Kirk. When asked about the people wearing black armbands, Leimer replied they were "just people who felt they knew Art Kirk well enough to wear the bands to mourn his death." More than 50 farmers from Verdigre, Hartington, and Bloomfield in northeast Nebraska and surrounding states attended.
The auction started. The first bid was 5 cents. The second bid was 5 cents. It only lasted about five minutes, and only one or two items were sold before the auction was called off.
Gene Chamberlain, the bankruptcy trustee appointed by the federal bankruptcy court to handle the Leimer auction, said "the sale was canceled because of the conflicts (at the sale) which could have led to personal injury."
Cuming County Sheriff Harold Welding also saw a potential for violence. "I think there was an intent to disrupt the sale." He said one of the farmers wearing a black armband had a handgun in a holster.
After the auction was canceled, Leimer commented that "You have more friends than you realize." Some observers reported that some of the farmers who gathered were members of the Posse Comitatus, the survivalist group. But Leimer said, "They can give a name to anything. We don't belong to no group. They're just trying to discredit this. We're law-abiding people."
Leimer, like many farmers caught up in the farm crisis of the 1980s, was convinced that there was a conspiracy of banking officials, especially federal loan institutions like the Production Credit Association, to destroy the family farm. Many farmers also thought that high-ranking federal government officials were determined to maintain cheap food prices and were willing to deny farmers a fair market price for their products to achieve that end.
The incident was reminiscent of scenes from the 80s movie "Country." In the film, neighbors of a farmer in trouble try to disrupt the machinery auction ordered by the Farmers Home Administration (FmHA). An FmHA official in the movie told the farmers the agency would simply hold the machinery sale in another location.
The Nickel Auctions were a dramatic attempt to help farmers in trouble. But they had little impact across the plains. When members of the American Agriculture Movement tried similar tactics to protest a sale near Imperial, Nebraska, in 1986, The State Patrol and Sheriff's office were there in force. All the protestors could do was stand on the outside and bang metal garbage can lids to drown out the auctioneer. The sale went on.
Washington, September 9 (ANI): In a new study, scientists have determined that the technique of laser cooling could be used to create “exotic” states of matter.
According to a report in National Geographic News, in a new technique, Martin Weitz and Ulrich Vogl of the University of Bonn in Germany used a laser to bring the temperature of dense rubidium gas far below the normal point at which the gas becomes a solid.
Previous research had been able to use lasers to quickly “supercool” only very diluted gases.
But, “here’s a case where you shine a laser on something and it actually cools down, and not just a handful of atoms, but a macroscopic object,” said Trey Porto, a physicist with the National Institute of Standards and Technology’s laser-cooling group.
The process could be used to create fascinating new states of matter, according to the study authors.
“For example, if you can very quickly cool water much lower than zero Celsius (32 degrees Fahrenheit), where it would normally turn to ice, exotic crystalline and glassy states of matter would be predicted,” Weitz said.
The new technique could also be used in cooling mechanisms to boost the efficiency of some stargazing equipment, he added.
“If you could cool thermal cameras that look at the stars, they may have less noise and be more sensitive,” he said.
Since a laser’s color is linked to its intensity, the new technique is based on using a red laser in which the frequency has been adjusted so that the beam affects the atoms only when they collide with each other.
Weitz and Vogl shone this laser beam into gaseous rubidium atoms in a high-pressure “atmosphere” of argon.
In the experiment, the rubidium gas fell from 662 degrees Fahrenheit (350 degrees Celsius) to almost 536 degrees Fahrenheit (280 degrees Celsius) within mere seconds.
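The paired temperature figures in the paragraph above can be checked with the standard Celsius-to-Fahrenheit conversion; a trivial sketch (the function name is ours):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The article's figures: 350 C -> 662 F, 280 C -> 536 F
print(c_to_f(350))  # 662.0
print(c_to_f(280))  # 536.0
```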
Much more research needs to be done before the laser-cooling process can be used in real-world applications, study co-author Weitz cautioned.
But, NIST’s Porto said the work already represents a major departure from traditional cooling of diluted gases, which are currently used for studying quantum effects or preparing gas samples for atomic clocks.
“I think the really amazing thing is that you can even get cooling in this regime, because it’s a really dense gas and a very different mechanism,” Porto said.
“Traditional cooling powers are so tiny. To cool a physical object by a measurable degree with a laser is amazing,” he added. (ANI)
|
<urn:uuid:bb7f93c6-7a92-4d0a-bdba-5185a7acbf06>
|
CC-MAIN-2013-20
|
http://silverscorpio.com/tag/glassy/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.931804
| 566
| 3.578125
| 4
|
Building Language for Literacy: Early Reading Activities
In these online activities, three language-loving characters inspire young children and help prep them for reading success.
- Grades: PreK–K
The three activities in the “Building Language for Literacy” program help to prepare children today for reading success tomorrow. These online activities (PreK–K) build upon young children’s home and community experiences to create meaningful connections with language.
- Naming With Nina helps children name different objects in some familiar (and maybe some new) places in and around town.
- Rhyming With Reggie encourages children to develop an awareness of patterns in language by picking out words that sound alike.
- Children match letters in Leo the Letter-Matching Lobster, so they become familiar with not only the shape of letters, but also the connection between letters and the sounds they make.
By participating in the Building Language for Literacy Language online learning activities, children will:
- Develop vocabulary skills
- Distinguish different word sounds
- Recognize letter shapes and sounds
- Enhance their understanding of the community around them
- Learn to follow oral directions
- Learn to categorize familiar objects
“Building Language for Literacy” is based on several pieces of seminal research.
|
<urn:uuid:574c1d0f-7fb5-426f-9cf7-6d8fd76a4a03>
|
CC-MAIN-2013-20
|
http://www.scholastic.com/teachers/activity/building-language-literacy
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.903944
| 274
| 4.21875
| 4
|
Stage One: Light Reflection Models
Cornell University Program of Computer Graphics
Note: This page describes stage one of a research framework for global illumination which was first presented at a special SIGGRAPH session in August of 1997. The full text of the paper is available from the Program of Computer Graphics on-line publications.
Components of a light reflection model, showing incoming light and outgoing diffuse, directional diffuse, and specular reflections
Models
Light reflectance models have always been of great interest to the computer graphics community. The most commonly used model was derived approximately twenty-five years ago at the University of Utah [PHON75]. The Phong direct lighting model is a clever scheme using a simple representation, but it is neither accurate in the sense that it represents the true reflection behavior of surfaces, nor is it energy consistent.
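The Phong lobe described above can be sketched in a few lines; this is an illustrative reimplementation of the classic empirical model, not Cornell's code, and the coefficient values are placeholders:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(n, l, v, kd=0.7, ks=0.3, shininess=32.0):
    """Classic Phong direct lighting: Lambertian diffuse term plus a
    specular lobe raised to a shininess exponent.

    n, l, v: surface normal, light direction, view direction.
    Note: as the text points out, this empirical model is neither
    physically accurate nor energy consistent.
    """
    n, l, v = normalize(n), normalize(l), normalize(v)
    r = 2.0 * np.dot(n, l) * n - l          # mirror reflection of l about n
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular
```

The model's appeal is its simplicity: one dot product per term and a single exponent, which is precisely why it cannot capture directional diffuse effects or guarantee energy conservation.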
Despite notable improvements in the Phong model over the years [BLIN77][COOK81], a comprehensive model of how light reflects or transmits when it hits a surface, including its subsurface interactions, needs to be developed. The resulting bidirectional reflectance distribution function (BRDF) is a function of the wavelength, surface roughness properties, and the incoming and outgoing directions. The BRDF should correctly predict the diffuse, directional diffuse, and specular components of the reflected light.
In 1991, He [HE91] presented a sophisticated model based on physical optics and incorporating the specular, directional diffuse, and uniform diffuse reflections by a surface. Related work [POUL90][OREN94] also provides models applicable to a wide range of materials and surface finishes, but for more complex surfaces, such as layered surfaces or thin films, analytical derivations are often too complicated. In some cases, Monte Carlo methods have been applied for simulating local reflectance properties on a micro scale [KAJI85][CABR87][HANR93].
Representations
Ultimately, what is necessary is a compact representational scheme which can accurately describe the dominant behavior of a BRDF. The representation method should be suitable for progressive algorithms, monotonically converging to a correct solution. This past year, we introduced a new class of primitive functions with nonlinear parameters for representing reflectance functions. The functions are reciprocal, energy-conserving, and expressive, and capture important phenomena such as off-specular reflection, increasing reflectance with angle of incidence, and retroreflection [LAFO97]. Most importantly, the representation is simple, compact, and uniform and has been verified by comparisons to our physically-based model and actual measurements.
The sphere on the left uses a diffuse reflection representation, while the one on the right and the metal panel illustrate rendering with a more sophisticated representation [LAFO97]
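The cosine-lobe representation introduced in [LAFO97] can be sketched as a sum of generalized Phong lobes evaluated on the incoming and outgoing directions; the lobe parameters below are made-up placeholders, not values fitted to any measured BRDF:

```python
import numpy as np

def lafortune_brdf(u, v, lobes):
    """Evaluate a Lafortune-style BRDF:
    sum over lobes of max(Cx*ux*vx + Cy*uy*vy + Cz*uz*vz, 0)^n.

    u, v: unit incoming/outgoing directions in the local surface frame.
    lobes: list of (Cx, Cy, Cz, n) tuples (illustrative values only).
    """
    total = 0.0
    for cx, cy, cz, n in lobes:
        dot = cx * u[0] * v[0] + cy * u[1] * v[1] + cz * u[2] * v[2]
        if dot > 0.0:
            total += dot ** n
    return total

# Two hypothetical lobes: positive Cx, Cy bias toward retroreflection;
# Cx = Cy = -1, Cz = 1 peaks in the mirror direction.
lobes = [(1.0, 1.0, 1.0, 10.0),
         (-1.0, -1.0, 1.0, 50.0)]
u = np.array([0.0, 0.0, 1.0])
print(lafortune_brdf(u, u, lobes))
```

The nonlinear parameters per lobe are what make the representation both compact and expressive enough to capture off-specular reflection and retroreflection.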
Measurement
For verification of our light reflection models we rely on physical measurement of light sources, surface reflections from physical samples of materials, and the input geometry for our test scenes.
Though significant progress has been made in modeling the surface BRDF, the model is far from complete. Properties such as polarization and anisotropy need to be well accounted for. Subsurface scattering, which contributes to the diffuse component of the BRDF, is not well understood and is being handled empirically. Surface properties other than the BRDF which affect light interaction, such as transmission, fluorescence, and phosphorescence, are either completely ignored or are being modeled empirically. These need to be correctly accounted for.
The most accurate available scene geometry, light source emission data and surface reflection functions (BRDF's) serve as input data for simulating the light transport in stage two of our research framework.
Goals
In summary, our specific long-term goals for light reflection models are:
Lead Researchers and Collaborators
|
<urn:uuid:1e8dfa55-1323-4afc-a69d-a8860c7b9ab0>
|
CC-MAIN-2013-20
|
http://www.graphics.cornell.edu/research/globillum/reflmodel.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.905875
| 769
| 2.796875
| 3
|
THE BORDER WALL DOES NOT WORK
All of the imagined benefits of the border wall flow from the assumption that if walls are built they will stop undocumented traffic from coming across. Politicians claim that building 700 miles of wall along our 1,933-mile-long southern border, while ignoring the 3,987-mile-long northern border and 12,479 miles of coastline, will somehow allow the Department of Homeland Security to achieve the Secure Fence Act's goal, to "achieve and maintain operational control over the entire international land and maritime borders of the United States."
In fact, the Border Patrol's own statistics show that the border walls have not brought about a decrease in illegal entries. The border patrol uses the number of border crossers apprehended in a given sector to gauge the overall number of attempted crossings. Apprehensions dropped dramatically between 2005, the year before the Secure Fence Act was passed, and 2007, the year after. But the decrease did not occur in areas where border walls had been built. On the contrary, the greatest reductions in apprehensions, which according to the Border Patrol would indicate a successful strategy for stopping undocumented immigration, were seen in sectors that did not have walls. Texas' Rio Grande Valley sector saw a 45.3% decrease in apprehensions, bringing them to a 15 year low. The Del Rio, Texas, sector saw a 66.5% decrease. Neither sector had an inch of border wall before 2008. In sectors such as Tucson, which saw walls built shortly after passage of the Secure Fence Act, the reduction in apprehensions began before any wall posts were erected. The areas that saw an increase in crossings were California's San Diego and El Centro sectors, both of which have had border walls for over a decade. At the same time that the unwalled border witnessed dramatic decreases in crossings, heavily fortified San Diego saw a 20.1% increase.
Even before the passage of the Secure Fence Act, it was clear that border walls did not reduce the number of people entering the United States. The Congressional Research Service found that the number of border crossers apprehended nationally in 1992 was the same as the number apprehended in 2004, after walls in San Diego had been erected. They concluded that migrant traffic had simply shifted to more remote areas in Arizona and that "increased enforcement in San Diego sector has had little impact on overall apprehensions." Migrants were not stopped by border walls; they simply went around them.
Other researchers have studied the effectiveness of the border wall and border enforcement by analyzing how successful migrants are at getting through it. The Migration Policy Institute found that 97% of undocumented immigrants eventually succeed in entering the United States, a number that has been unchanged since the first border walls went up in 1995. Wayne Cornelius, Director of the Center for Comparative Immigration Studies at the University of California-San Diego, told the House Judiciary Committee that, according to his research:
Tightened border enforcement since 1993 has not stopped nor even discouraged unauthorized migrants from entering the United States. Even if apprehended, the vast majority (92-97%) keep trying until they succeed. Neither the higher probability of being apprehended by the Border Patrol, nor the sharply increased danger of clandestine entry through deserts and mountainous terrain, has discouraged potential migrants from leaving home.
Assertions by pundits and politicians that walls will allow the U.S. to "secure" its southern border are patently false. Spokespersons for the Border Patrol tend to describe it much more modestly. Del Rio, Texas, Border Patrol Chief Randy Hill said, "We're going to see steel barriers erected on the borders where U.S. and Mexican cities adjoin. These will slow down illegal crossers by minutes." Not stop crossers, or allow the Border Patrol to "achieve and maintain operational control" of the border, but slow them down by "minutes." As Border Patrol spokesperson Mike Scioli said, "The border fence is a speed bump in the desert."
Even Bush administration Secretary of Homeland Security Michael Chertoff said in 2007, "I think the fence has come to assume a certain kind of symbolic significance which should not obscure the fact that it is a much more complicated problem than putting up a fence which someone can climb over with a ladder or tunnel under with a shovel."
Mile upon mile of border wall have been built, with no apparent thought given to efficacy, because the Secure Fence Act only mandated a mile count. There is no requirement that border walls have any measurable impact on immigration or smuggling, and in 2009 the Government Accountability Office found that the Department of Homeland Security had made no effort to determine whether or not walls were having any effect. Even the Border Patrol has questioned whether walls are being built in some locations for political, rather than operational, reasons. In a 2007 email obtained by the Center for Responsibility and Ethics in Washington (CREW) through a Freedom of Information Act request, the Assistant Chief Patrol Agent for the Yuma sector asks, "will we be getting fence where we don't need it in our sector for the sake of putting up the required mileage?" The miles of unnecessary border wall that he referred to have since been built through the Imperial Sand Dunes of Southern California.
Despite its "symbolic significance" and its possibly arbitrary placement, the border wall comes with a real price tag. In 2007 the Congressional Research Service estimated that the border wall could cost as much as $49 billion to build and maintain. Since then the costs of construction have risen dramatically. The Army Corps of Engineers reported that the cost of building "pedestrian fences" has increased from an average of $3.5 million per mile to $7.5 million per mile. The cost of building vehicle barriers on the border is now $2.8 million per mile. Some sections of border wall are particularly expensive: the walls that have been inserted into the levees in south Texas averaged $12 million per mile; in California, a 3.5 mile section that involved filling in canyons cost taxpayers $57 million. In 2008, the Department of Homeland Security asked Congress to allocate an additional $400 million for border wall construction, because the $2.7 billion already spent was not enough to finish out the year.
Why would members of Congress vote to spend billions of taxpayer dollars on border walls that do not work?
Simply put, for members of Congress who do not live beside the border, and do not count on the votes of those who do, the border wall is an abstraction. The reality that the border wall has little or no impact on border crossings is irrelevant. The reality that more than 400 property owners have had their property condemned is irrelevant. The reality that federally designated wilderness areas and wildlife refuges have been severely impacted is irrelevant. The politicians who voted for border walls were voting for a symbol, something that could be used to give voters a false sense of security during election cycles, and nothing more.
|
<urn:uuid:dc19d281-2f6d-4b47-853a-9c6ce507d9e2>
|
CC-MAIN-2013-20
|
http://www.no-border-wall.com/walls-do-not-work.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964331
| 1,408
| 2.6875
| 3
|
PRONOUNCED: HEN-ree
Meaning & History
From the Germanic name Heimirich, which meant "home ruler", composed of the elements heim "home" and ric "power, ruler". It was later commonly spelled Heinrich, with the spelling altered due to the influence of other Germanic names like Haganrich, in which the first element is hagan "enclosure".
Heinrich was popular among continental royalty, being the name of seven German kings, starting with the 10th-century Henry I the Fowler, and four French kings. In France it was rendered Henri from the Latin form Henricus.
The Normans introduced this name to England, and it was subsequently used by eight kings, ending with the infamous Henry VIII in the 16th century. During the Middle Ages it was generally rendered as Harry or Herry in English pronunciation. Notable bearers include arctic naval explorer Henry Hudson (1570-1611), British novelist Henry James (1843-1916), and American automobile manufacturer Henry Ford (1863-1947).
OTHER LANGUAGES: Emmerich, Heimirich, Heinrich, Henricus (Ancient Germanic), Henrik (Armenian), Endika (Basque), Enric (Catalan), Henrik (Croatian), Jindřich (Czech), Henrik, Henning (Danish), Hendrik, Heike, Heiko, Hein, Henk, Hennie, Henny, Rik (Dutch), Hendrik (Estonian), Harri, Henri, Henrikki, Heikki (Finnish), Émeric, Henri (French), Emmerich, Heinrich, Hendrik, Henrik, Hinrich, Heike, Heiko, Heiner, Heinz, Henning (German), Henrik, Imre (Hungarian), Hinrik (Icelandic), Anraí, Einrí (Irish), Amerigo, Enrico, Arrigo, Enzo, Rico (Italian), Henrikas, Herkus (Lithuanian), Herry (Medieval English), Henrik, Henning (Norwegian), Henryk (Polish), Américo, Henrique (Portuguese), Eanraig, Hendry (Scottish), Henrich, Imrich (Slovak), Henrik (Slovene), Américo, Enrique, Quique (Spanish), Henrik, Henning (Swedish), Harri (Welsh)
United States: ranked #57
England/Wales: ranked #34
Canada (BC): ranked #49
Australia (NSW): ranked #39
|
<urn:uuid:0c185e21-7504-42ef-b9b8-e562cbb051bd>
|
CC-MAIN-2013-20
|
http://www.behindthename.com/name/henry
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.860804
| 577
| 3.140625
| 3
|
Longleaf Pine Restoration - Bibb County
Longleaf Pine Restoration
The longleaf pine ecosystem will be restored on approximately 132 acres of property which lies at the confluence of the Cahaba River and the Little Cahaba River. The property is also adjacent to Cahaba River National Wildlife Refuge and Cahaba River Wildlife Management Area. Longleaf pine was once Alabama's most abundant tree, but it has been greatly reduced in its extent, with much of its range now occupied by agriculture and/or forestry operations. The Alabama Comprehensive Wildlife Conservation Strategy and the U.S. Fish and Wildlife Service have identified the longleaf pine ecosystem as critical habitat. Longleaf pine communities now exist in just 3% of their previous range throughout the Southeast. Longleaf pine forest and savanna is considered one of the most endangered habitats in the country today. The longleaf pine ecosystem benefits four amphibians, 13 reptiles, five birds, and nine mammals in greatest conservation need. A prescribed fire regime will also be implemented for glades that exist on the property. This habitat enhancement will benefit five reptiles, eight birds, and two mammals in greatest conservation need, as well as a variety of rare vascular plants (approximately 76), including the federally listed Mohr's Barbara's-buttons and Tennessee yellow-eyed-grass.
|
<urn:uuid:72e95e9b-f64e-4c06-b616-b3c2e320258e>
|
CC-MAIN-2013-20
|
http://www.dcnr.alabama.gov/research-mgmt/Landowner/Partners/longleafpine.cfm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937938
| 258
| 3.09375
| 3
|
Housed in a soundproof wood-and-steel box, the EMT 140's plate was a metal sheet only half a millimeter thick.
With the advent of convolution reverb, some would say that nobody needs a real plate reverb anymore. Not surprisingly, many purists disagree. EMT's 140 was the first, indisputably changing the sound of recorded music the moment it appeared in 1957. The 140's smooth, complex reverberation is still very much in demand, but you no longer need a wood-and-steel box weighing hundreds of pounds and measuring 8 x 4 x 1 feet to get it. Arguably, convolution reverb duplicates plate reverb quite precisely and makes it possible to tailor the effect in ways you never could with the real thing.
Before Walter Kuhl designed the 140 for Elektromesstechnik (EMT), authentic-sounding reverb required a lot more space. Typically, a studio's so-called reverb chamber was an acoustically reflective room with a speaker at one end and a microphone at the other. The walls were often layered with plaster to increase reflectivity and reduce standing waves. Sound from the speaker bounced off the walls and was picked up by the mic, then mixed with the original signal. If a studio couldn't afford the space to dedicate a room for reverb, an empty stairwell or tiled bathroom often sufficed.
WHAT IS PLATE REVERB?
Putting reverb in a soundproof box like the 140 not only saved space, but it also gave recording engineers greater control over its sound. Inside the box was a big metal sheet—the plate in plate reverb—only half a millimeter thick, suspended by clips attached to a rigid frame. In the original 140, a tube-amplified driver resembling a loudspeaker coil vibrated the plate, and a piezoelectric pickup captured the vibrations from the plate's edge.
Compared with traditional reverb chambers, the 140 offered better low-frequency response and used very little electric power. It also let you attenuate high frequencies separately from low frequencies for a wider range of natural-sounding effects. The 140 used a pad made of porous materials to damp the plate by absorbing its reflections. Just as a piano's soft pedal activates a felt strip to damp the strings and thus shorten decay time, the 140's damping pad governed its decay via a remote-controlled servomotor, which changed the pad's proximity to the plate.
The first 140s were mono, which makes perfect sense when you consider that stereo records weren't available until 1958 and stereo radio didn't exist until three years later. EMT later manufactured stereo versions and eventually replaced the tube amp with a quieter, more dependable solid-state circuit.
Today, you have many virtual alternatives to owning a real EMT 140. Sampled 140s are available for practically every convolution platform. Audio Ease Altiverb users can download free impulse responses from units used by Elvis Presley and Wendy Carlos. Perhaps the best emulation yet is for Universal Audio's DSP platform; the UAD EMT 140 plug-in delivers superior control and versatility while preserving the sound of the original.
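Convolution reverb, mentioned throughout, amounts to convolving the dry signal with a recorded impulse response of the plate. A minimal sketch, assuming the impulse response has already been loaded as a NumPy array (the mixing and normalization choices here are illustrative, not how any particular plug-in works):

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.3):
    """Apply reverb by FFT convolution with a sampled impulse response."""
    n = len(dry) + len(impulse_response) - 1   # full convolution length
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(impulse_response, n), n)
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet /= peak                            # normalize to avoid clipping
    out = np.zeros(n)
    out[:len(dry)] += (1.0 - wet_mix) * dry    # dry portion
    out += wet_mix * wet                       # reverberant tail
    return out
```

FFT convolution is what makes long plate impulse responses (several seconds at audio sample rates) practical to apply in real products.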
EM senior editor Geary Yelton lives in Asheville, N.C., surrounded by beautiful mountains and wonderful toys.
|
<urn:uuid:16d0c426-debe-41a4-91d3-921bf1840c24>
|
CC-MAIN-2013-20
|
http://www.emusician.com/prntarticle.aspx?articleid=144561
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947052
| 694
| 2.953125
| 3
|
Cambodia is a land of paddies and forests dominated by the Mekong River and the Tonle Sap, Cambodia's huge freshwater lake.
Its major environmental issues include:
- illegal logging activities throughout the country and strip mining for gems in the western region along the border with Thailand which have resulted in habitat loss and declining biodiversity (in particular, destruction of mangrove swamps threatens natural fisheries);
- soil erosion;
- in rural areas, most of the population does not have access to potable water; and,
- declining fish stocks because of illegal fishing and overfishing
Cambodia is susceptible to monsoonal rains (June to November), flooding, and occasional droughts.
Most Cambodians consider themselves to be Khmers, descendants of the Angkor Empire that extended over much of Southeast Asia and reached its zenith between the 10th and 13th centuries. Attacks by the Thai and Cham (from present-day Vietnam) weakened the empire, ushering in a long period of decline.
The king placed the country under French protection in 1863 and it became part of French Indochina in 1887. Following Japanese occupation in World War II, Cambodia gained full independence from France in 1953.
In April 1975, after a five-year struggle, Communist Khmer Rouge forces captured Phnom Penh and evacuated all cities and towns. At least 1.5 million Cambodians died from execution, forced hardships, or starvation during the Khmer Rouge regime under Pol Pot. A December 1978 Vietnamese invasion drove the Khmer Rouge into the countryside, began a 10-year Vietnamese occupation, and touched off almost 13 years of civil war.
The 1991 Paris Peace Accords mandated democratic elections and a ceasefire, which was not fully respected by the Khmer Rouge. UN-sponsored elections in 1993 helped restore some semblance of normalcy under a coalition government. Factional fighting in 1997 ended the first coalition government, but a second round of national elections in 1998 led to the formation of another coalition government and renewed political stability.
The remaining elements of the Khmer Rouge surrendered in early 1999. Some of the surviving Khmer Rouge leaders have been tried or are awaiting trial for crimes against humanity by a hybrid UN-Cambodian tribunal supported by international assistance.
Elections in July 2003 were relatively peaceful, but it took one year of negotiations between contending political parties before a coalition government was formed. In October 2004, King Norodom Sihanouk abdicated the throne and his son, Prince Norodom Sihamoni, was selected to succeed him.
Local elections were held in Cambodia in April 2007, with little of the pre-election violence that preceded prior elections. National elections in July 2008 were relatively peaceful.
Cambodia has a number of international disputes:
- Cambodia is concerned about Laos' extensive upstream dam construction;
- Cambodia and Thailand dispute sections of boundary; in 2011 Thailand and Cambodia resorted to arms in the dispute over the location of the boundary on the precipice surmounted by Preah Vihear temple ruins, awarded to Cambodia by ICJ decision in 1962 and part of a planned UN World Heritage site;
- Cambodia accuses Vietnam of a wide variety of illicit cross-border activities; progress on a joint development area with Vietnam is hampered by an unresolved dispute over sovereignty of offshore islands
Cambodia is located on mainland Southeast Asia between Thailand to the west and north and Vietnam to the east and southeast. It shares a land border with Laos in the northeast. Cambodia has a sea coast on the Gulf of Thailand. The Dangrek mountain range in the north and Cardamom Mountains in the southwest form natural boundaries. Principal physical features include the Tonle Sap lake and the Mekong and Bassac Rivers. Cambodia remains one of the most heavily forested countries in the region, although deforestation continues at an alarming rate.
Location: Southeastern Asia, bordering the Gulf of Thailand, between Thailand, Vietnam, and Laos
Geographic Coordinates: 13 00 N, 105 00 E
Area: 181,035 sq km (land: 176,515 sq km; water: 4,520 sq km)
Land Boundaries: 2,572 km (Laos 541 km, Thailand 803 km, Vietnam 1,228 km)
Coastline: 443 km
territorial sea: 12 nm
contiguous zone: 24 nm
exclusive economic zone: 200 nm
continental shelf: 200 nm
Natural Hazards: monsoonal rains (June to November); flooding; occasional droughts
Terrain: mostly low, flat plains; mountains in southwest and north. The highest point is Phnum Aoral (1,810 m) and the lowest point is the Gulf of Thailand (0 m).
Climate: tropical; rainy, monsoon season (May to November); dry season (December to April); little seasonal temperature variation
Topography of Cambodia. Source: Sadalmelik/Wikipedia.
Ecology and Biodiversity
Ecoregions of Cambodia. Source: World Wildlife Fund
Tonle Sap freshwater swamp forests - The swamp shrublands and forest of the Tonle Sap Freshwater Swamp Forests ecoregion include two forest associations that have been described for the extensive floodplain area of Tonle Sap, a short tree shrubland covering the majority of the area and a stunted swamp forest around the lake itself. Similar swamp forests are also present along floodplains of the Mekong and other major rivers in Cambodia. Although most of the ecoregion, including the lake, was declared a protected area recently, it was too little too late. The protected area is a paper park with no protection or management, and it was declared protected after most of the habitat had been cleared for agriculture. This is prime rice-growing habitat.
Tonle Sap-Mekong peat swamp forests - extend over areas permanently inundated with shallow freshwater, although the region as mapped includes mosaics of swamp forest and herbaceous wetland interposed with upland areas of dry forest. However, care must be given in separating permanently flooded swamp forests of southeast Asia from seasonal swamp forests that characterize extensive areas of the Tonle Sap Basin and the floodplain of major Cambodian rivers. The Tonle Sap-Mekong Peat Swamp Forests are only a small vestige of their former range and function. More than 90 percent of this ecoregion has been converted to scrub or degraded forests. Intensive agriculture and the alteration of the hydrodynamics of the river systems in the region have altered the natural river fluctuations, adversely affecting the remaining native vegetation.
Central Indochina dry forests - covers most of central Indochina and harbors an outstanding assemblage of threatened large vertebrates that characterize the mammal fauna of the Indo-Pacific region. Just half a century ago large populations of megaherbivores such as Asian elephants, banteng, kouprey, gaur, wild water buffalo, and Eld's deer roamed and grazed in these dry woodlands. Where human densities were still low, the landscapes were dominated by large herds of wildlife reminiscent of the savannas of east Africa. Large carnivores such as tigers, Clouded Leopards, leopards, and packs of wild dogs hunted these herbivores. Unfortunately, throughout the ensuing years habitat loss and hunting for trade have exacted a devastating toll on these species. Some species have even become extinct. The two rhinoceros species, the Javan and the Sumatran are now extinct in this ecoregion, as is Schomburgk's deer. The kouprey probably is globally extinct, although intermittent reports from remote areas of northern and eastern Cambodia keep hopes alive. Among the other species, the tiger, Asian elephant, Eld's deer, banteng, and gaur are endangered.
Cardamom Mountains rain forests - sit astride the Cardamom Mountains (locally known as Kravanh) and the Elephant Range (locally known as Dom rei) in southwestern Cambodia and extends slightly across the border into southeastern Thailand. It is separated from the nearest other rain forest by the vast, dry Khorat Plateau in central Thailand to the north and east and by the Gulf of Thailand in the west. The Cardamom Mountain rain forests are considered by some to be one of the most species-rich and intact natural habitats in the region, but they are also one of the least explored.
Indochina mangroves - Among the most diverse and extensive mangrove ecosystems in the world, this ecoregion provides extremely important habitat for some of the world's rarest waterbirds. The largest block of Indochina Mangroves in the Mekong River delta suffered large-scale habitat loss from defoliants sprayed during the Vietnam War.
Southeastern Indochina dry evergreen forests - are globally outstanding for the large vertebrate fauna they harbor within large intact landscapes. Among the impressive large vertebrates are the Indo-Pacific region's largest herbivore, the Asian elephant (Elephas maximus), and largest carnivore, the tiger (Panthera tigris). The list includes the second known population of the critically endangered Javan rhinoceros (Rhinoceros sondaicus), comprising a handful of animals in Vietnam's Cat Loc reserve, as well as Eld's deer (Cervus eldi), banteng (Bos javanicus), gaur (Bos gaurus), clouded leopard (Pardofelis nebulosa), common leopard (Panthera pardus), Malayan sun bear (Ursus malayanus), and the enigmatic khting-vor (Pseudonovibos spiralis), known to science only from a few horns. But the ecoregion's conservation priority does not rest merely on its charismatic biodiversity. Importantly, it also represents a rare instance of a nonmontane ecoregion with large expanses of intact habitat that can allow viable populations of these species to survive over the long term. Unfortunately, all is not well in this haven: plans to log Cambodia's forests, where most of the large habitat blocks lie, will result in large-scale habitat loss and fragmentation. The ecoregion has therefore been placed on the critical list.
Southern Annamites montane rain forests - centered on the remote montane forests of Kontuey Neak, or "the dragon's tail," in the extreme northeast of Cambodia, where the boundaries of Cambodia, Laos, and Vietnam meet, this ecoregion is globally outstanding for its biodiversity. Its intact forests are little explored; reaching some remote areas takes two weeks of intense walking and braving hazards such as the mines and bombs scattered throughout the landscape. But the known flora and fauna attest to the region's biological diversity, which includes some of Asia's charismatic fauna. Among the better-known larger vertebrates are the tiger (Panthera tigris), Asian elephant (Elephas maximus), douc langur (Pygathrix nemaeus), gibbon (Hylobates gabriellae), wild dog (Cuon alpinus), sun bear (Ursus malayanus), clouded leopard (Pardofelis nebulosa), gaur (Bos gaurus), banteng (Bos javanicus), and Eld's deer (Cervus eldii).
- Biological diversity in Indo-Burma
- Gulf of Thailand large marine ecosystem
- ASEAN Wildlife Enforcement Network
People and Society
Population: 14,952,665 (July 2012 est.)
Ninety percent of Cambodia's population is ethnically Cambodian. Other ethnic groups include Chinese, Vietnamese, hill tribes, Cham, and Lao. Theravada Buddhism is the religion of 95% of the population; Islam, animism, and Christianity also are practiced. Khmer is the official language and is spoken by more than 95% of the population. Some French is still spoken in urban areas, and English is increasingly popular as a second language.
Ethnic Groups: Khmer 90%, Vietnamese 5%, Chinese 1%, other 4%
Age Structure:
0-14 years: 32.2% (male 2,375,155/female 2,356,305)
15-64 years: 64.1% (male 4,523,030/female 4,893,761)
65 years and over: 3.8% (male 208,473/female 344,993) (2011 est.)
Population Growth Rate: 1.687% (2012 est.)
Boats on Tonle Sap, Cambodia's huge freshwater lake. During the dry season from November to May, the lake is fairly shallow - only about one meter deep - allowing boats to be poled. During the monsoon season, waters from the flooding Mekong River back up, raising the lake depth to about nine meters. This annual pulsing over a large floodplain brings in high sediment and nutrient fluxes, allowing for rich aquatic diversity. Tonle Sap is one of the most productive inland fisheries in the world.
Birthrate: 25.17 births/1,000 population (2012 est.)
Death Rate: 7.97 deaths/1,000 population (July 2012 est.)
Net Migration Rate: -0.33 migrant(s)/1,000 population (2012 est.)
Life Expectancy at Birth: 63.04 years
male: 60.66 years
female: 65.53 years (2012 est.)
Total Fertility Rate: 2.78 children born/woman (2012 est.)
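The vital statistics above are internally consistent: the population growth rate equals the crude birth rate, minus the crude death rate, plus the net migration rate, with the per-1,000 result converted to a percentage. A quick arithmetic check using the 2012 estimates listed above:

```python
# Population growth rate from crude vital rates (2012 estimates above).
birth_rate = 25.17      # births per 1,000 population
death_rate = 7.97       # deaths per 1,000 population
net_migration = -0.33   # net migrants per 1,000 population

# (per-1,000 rate) / 10 converts to percent.
growth_pct = (birth_rate - death_rate + net_migration) / 10
print(f"{growth_pct:.3f}%")  # 1.687%, matching the listed growth rate
```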
Languages: Khmer (official) 95%, French, English
Literacy (age 15 and over can read and write): 73.6%
Urbanization: 20% of total population (2010), growing at an annual rate of 3.2% (2010-15 est.)
Over a period of 300 years, between 900 and 1200 AD, the Khmer Kingdom of Angkor produced some of the world's most magnificent architectural masterpieces on the northern shore of the Tonle Sap, near the present town of Siem Reap. The Angkor area stretches 15 miles east to west and 5 miles north to south. Some 72 major temples or other buildings dot the area. Suryavarman II built the principal temple, Angkor Wat, between 1112 and 1150. With walls nearly one-half mile on each side, Angkor Wat portrays the Hindu cosmology with the central towers representing Mount Meru, home of the gods; the outer walls, the mountains enclosing the world; and the moat, the oceans beyond. Angkor Thom, the capital city built after the Cham sack of 1177, is surrounded by a 300-foot wide moat. Construction of Angkor Thom coincided with a change from Hinduism to Buddhism. Temples were altered to display images of the Buddha, and Angkor Wat became a major Buddhist shrine.
During the 15th century, nearly all of Angkor was abandoned after Siamese attacks. The exception was Angkor Wat, which remained a shrine for Buddhist pilgrims. The great city and temples remained largely cloaked by the forest until the late 19th century when French archaeologists began a long restoration process. Concerned about further destruction and dilapidation of the Angkor complex and cultural heritage, the Cambodian Government in 1995 established the Authority for the Protection and Management of Angkor and the Region of Siem Reap (APSARA) to protect, maintain, conserve, and improve the value of the archaeological park. In December 1995 the World Heritage Committee confirmed Angkor's permanent inscription on the World Heritage List. Tourism is now the second-largest foreign currency earner in Cambodia's economy.
Although Cambodia had a rich and powerful past under the Hindu state of Funan and the Kingdom of Angkor, by the mid-19th century the country was on the verge of dissolution. After repeated requests for French assistance, a protectorate was established in 1863. By 1884, Cambodia was a virtual colony; soon after it was made part of the Indochina Union with Annam, Tonkin, Cochin-China, and Laos. France continued to control the country even after the start of World War II through its Vichy government. In 1945, the Japanese dissolved the colonial administration, and King Norodom Sihanouk declared an independent, anti-colonial government under Prime Minister Son Ngoc Thanh in March 1945. The Allies deposed this government in October. In January 1953, Sihanouk named his father as regent and went into self-imposed exile, refusing to return until Cambodia gained genuine independence.
Sihanouk's actions hastened the French Government's July 4, 1953 announcement of its readiness to grant independence, which came on November 9, 1953. The situation remained uncertain until a 1954 conference was held in Geneva to settle the French-Indochina war. All participants, except the United States and the State of Vietnam, associated themselves (by voice) with the final declaration. The Cambodian delegation agreed to the neutrality of the three Indochinese states but insisted on a provision in the cease-fire agreement that left the Cambodian Government free to call for outside military assistance should the Viet Minh or others threaten its territory.
Neutrality was the central element of Cambodian foreign policy during the 1950s and 1960s. By the mid-1960s, parts of Cambodia's eastern provinces were serving as bases for North Vietnamese Army and Viet Cong (NVA/VC) forces operating against South Vietnam, and the port of Sihanoukville was being used to supply them. As NVA/VC activity grew, the United States and South Vietnam became concerned, and in 1969, the United States began a series of air raids against NVA/VC base areas inside Cambodia.
Throughout the 1960s, domestic politics polarized. Opposition grew within the middle class and among leftists, including Paris-educated leaders such as Son Sen, Ieng Sary, and Saloth Sar (later known as Pol Pot), who led an insurgency under the clandestine Communist Party of Kampuchea (CPK).
The Khmer Republic and the War
In March 1970, Gen. Lon Nol deposed Prince Sihanouk and assumed power. On October 9, the Cambodian monarchy was abolished, and the country was renamed the Khmer Republic. Hanoi rejected the new republic's request for the withdrawal of NVA/VC troops and began to re-infiltrate some of the 2,000-4,000 Cambodians who had gone to North Vietnam in 1954. They became a cadre in the insurgency. The United States moved to provide material assistance to the new government's armed forces, which were engaged against both the Khmer Rouge insurgents and NVA/VC forces. In April 1970, U.S. and South Vietnamese forces entered Cambodia in a campaign aimed at destroying NVA/VC base areas. Although a considerable quantity of equipment was seized or destroyed, NVA/VC forces proved elusive and moved deeper into Cambodia. NVA/VC units overran many Cambodian Army positions while the Khmer Rouge expanded their small scale attacks on lines of communication.
The Khmer Republic's leadership was plagued by disunity among its members, the problems of transforming a 30,000-man army into a national combat force of more than 200,000 men, and spreading corruption. The insurgency continued to grow, with supplies and military support provided by North Vietnam. But inside Cambodia, Pol Pot and Ieng Sary asserted their dominance over the Vietnamese-trained communists, many of whom were purged. At the same time, the Khmer Rouge forces became stronger and more independent of their Vietnamese patrons. By 1974, Lon Nol's control was reduced to small enclaves around the cities and main transportation routes. More than 2 million refugees from the war lived in Phnom Penh and other cities.
On New Year's Day 1975, communist troops launched an offensive that, in 117 days of the hardest fighting of the war, destroyed the Khmer Republic. Simultaneous attacks around the perimeter of Phnom Penh pinned down Republican forces, while other Khmer Rouge units overran fire bases controlling the vital lower Mekong resupply route. A U.S.-funded airlift of ammunition and rice ended when Congress refused additional aid for Cambodia. Phnom Penh surrendered on April 17, 1975--5 days after the U.S. mission evacuated Cambodia.
Many Cambodians welcomed the arrival of peace, but the Khmer Rouge soon turned Cambodia--which it called Democratic Kampuchea (DK)--into a land of horror. Immediately after its victory, the new regime ordered the evacuation of all cities and towns, sending the entire urban population out into the countryside to till the land. Thousands starved or died of disease during the evacuation. Many of those forced to evacuate the cities were resettled in new villages, which lacked food, agricultural implements, and medical care. Many starved before the first harvest, and hunger and malnutrition--bordering on starvation--were constant during those years. Those who resisted or who questioned orders were immediately executed, as were most military and civilian leaders of the former regime who failed to disguise their pasts.
Within the CPK, the Paris-educated leadership--Pol Pot, Ieng Sary, Nuon Chea, and Son Sen--was in control, and Pol Pot was made Prime Minister. Prince Sihanouk was put under virtual house arrest. The new government sought to restructure Cambodian society completely. Remnants of the old society were abolished, and Buddhism suppressed.
Agriculture was collectivized, and the surviving part of the industrial base was abandoned or placed under state control. Cambodia had neither a currency nor a banking system. The regime controlled every aspect of life and reduced everyone to the level of abject obedience through terror. Torture centers were established, and detailed records were kept of the thousands murdered there. Public executions of those considered unreliable or with links to the previous government were common. Few succeeded in escaping the military patrols and fleeing the country. Solid estimates of the numbers who died between 1975 and 1979 are not available, but it is likely that hundreds of thousands were brutally executed by the regime. Hundreds of thousands more died from forced labor, starvation, and disease--both under the Khmer Rouge and during the Vietnamese invasion in 1978. Estimates of the dead range from 1.7 million to 3 million, out of a 1975 population estimated at 7.3 million.
Democratic Kampuchea's relations with Vietnam and Thailand worsened rapidly as a result of border clashes and ideological differences. While communist, the CPK was fiercely anti-Vietnamese, and most of its members who had lived in Vietnam were purged. Democratic Kampuchea established close ties with China, and the Cambodian-Vietnamese conflict became part of the Sino-Soviet rivalry, with Moscow backing Vietnam. Border clashes worsened when Democratic Kampuchea's military attacked villages in Vietnam.
In mid-1978, Vietnamese forces invaded Cambodia, advancing about 30 miles before the arrival of the rainy season. In December 1978, Vietnam announced formation of the Kampuchean United Front for National Salvation (KUFNS) under Heng Samrin, a former DK division commander. It was composed of Khmer communists who had remained in Vietnam after 1975 and officials from the eastern sector--like Heng Samrin and Hun Sen--who had fled to Vietnam from Cambodia in 1978. In late December 1978, Vietnamese forces launched a full invasion of Cambodia, capturing Phnom Penh on January 7, 1979 and driving the remnants of Democratic Kampuchea's army westward toward Thailand.
The Vietnamese Occupation
On January 10, 1979, the Vietnamese installed Heng Samrin as head of state in the new People's Republic of Kampuchea (PRK). The Vietnamese Army continued to pursue Khmer Rouge forces. An estimated 600,000 Cambodians, displaced during the Pol Pot era and the Vietnamese invasion, streamed to the Thai border in search of refuge between 1979 and 1981.
The international community responded with a massive relief effort coordinated by the United States through the UN Children's Fund (UNICEF) and the World Food Program. More than $400 million was provided between 1979 and 1982, of which the United States contributed nearly $100 million.
Vietnam's occupation army of an estimated 180,000 troops was posted throughout the country from 1979 to September 1989. The Heng Samrin regime's 30,000 troops were plagued by poor morale and widespread desertion. Resistance to Vietnam's occupation was extensive. A remnant of the Khmer Rouge's military forces eluded Vietnamese troops and established themselves in remote regions. A non-communist resistance movement consisting of groups that had been fighting the Khmer Rouge after 1975--including Lon Nol-era soldiers--coalesced in 1979-80 to form the Khmer People's National Liberation Armed Forces (KPNLAF), which pledged loyalty to former Prime Minister Son Sann, and Moulinaka (Mouvement pour la Liberation Nationale du Kampuchea), loyal to Prince Sihanouk. In 1979, Son Sann formed the Khmer People's National Liberation Front (KPNLF) to lead a political struggle for Cambodia's independence. Prince Sihanouk formed his own organization, the National United Front for an Independent, Neutral, Peaceful, and Cooperative Cambodia (FUNCINPEC), and its military arm, the Armee Nationale Sihanoukienne (ANS), in 1981.
Within Cambodia, Vietnam had only limited success in establishing its client Heng Samrin regime, which was dependent on Vietnamese advisers at all levels. Security in some rural areas was tenuous, and major transportation routes were subject to interdiction by resistance forces. The presence of Vietnamese throughout the country and their intrusion into nearly all aspects of Cambodian life alienated much of the populace. The settlement of Vietnamese nationals, both former residents and new immigrants, further exacerbated anti-Vietnamese sentiment. Reports of the numbers involved vary widely, with some estimates as high as 1 million. By the end of the decade, Khmer nationalism began to reassert itself against the traditional Vietnamese enemy. In 1986, Hanoi claimed to have begun withdrawing part of its occupation forces. At the same time, Vietnam continued efforts to strengthen its client regime, the PRK, and its military arm, the Kampuchean People's Revolutionary Armed Forces (KPRAF). These withdrawals continued over the next 2 years, and the last Vietnamese troops left Cambodia in September 1989.
From July 30 to August 30, 1989, representatives of 18 countries, the four Cambodian parties, and the UN Secretary General met in Paris in an effort to negotiate a comprehensive settlement. They hoped to achieve those objectives seen as crucial to the future of post-occupation Cambodia--a verified withdrawal of the remaining Vietnamese occupation troops, the prevention of the return to power of the Khmer Rouge, and genuine self-determination for the Cambodian people. A comprehensive settlement was agreed upon on August 28, 1990.
On October 23, 1991, the Paris Conference reconvened to sign a comprehensive settlement giving the UN full authority to supervise a cease-fire, repatriate the displaced Khmer along the border with Thailand, disarm and demobilize the factional armies, and prepare the country for free and fair elections. Prince Sihanouk, President of the Supreme National Council of Cambodia (SNC), and other members of the SNC returned to Phnom Penh in November 1991, to begin the resettlement process in Cambodia. The UN Advance Mission for Cambodia (UNAMIC) was deployed at the same time to maintain liaison among the factions and begin demining operations to expedite the repatriation of approximately 370,000 Cambodians from Thailand.
On March 16, 1992, the UN Transitional Authority in Cambodia (UNTAC) arrived in Cambodia to begin implementation of the UN Settlement Plan. The UN High Commissioner for Refugees began full scale repatriation in March 1992. UNTAC grew into a 22,000-strong civilian and military peacekeeping force to conduct free and fair elections for a constituent assembly.
Over 4 million Cambodians (about 90% of eligible voters) participated in the May 1993 elections, although the Khmer Rouge or Party of Democratic Kampuchea (PDK), whose forces were never actually disarmed or demobilized, barred some people from participating. Prince Ranariddh's FUNCINPEC Party was the top vote recipient with a 45.5% vote, followed by Hun Sen's Cambodian People's Party and the Buddhist Liberal Democratic Party, respectively. FUNCINPEC then entered into a coalition with the other parties that had participated in the election. The parties represented in the 120-member assembly proceeded to draft and approve a new constitution, which was promulgated September 24, 1993. It established a multiparty liberal democracy in the framework of a constitutional monarchy, with the former Prince Sihanouk elevated to King. Prince Ranariddh and Hun Sen became First and Second Prime Ministers, respectively, in the Royal Cambodian Government (RGC). The constitution provides for a wide range of internationally recognized human rights.
In 1997, most of the remaining Khmer Rouge fighters accepted a government amnesty and laid down their arms, putting an end to nearly 3 decades of war. On October 4, 2004, the Cambodian National Assembly ratified an agreement with the United Nations on the establishment of a tribunal to try senior leaders responsible for the atrocities committed by the Khmer Rouge. The tribunal held its first trial, against former S-21 prison chief Kaing Guek Eav (aka Duch), in 2009, resulting in a guilty verdict and a 35-year sentence in July 2010. Duch will serve 19 years after his sentence was reduced by five years for being illegally detained by a Cambodian military court, and by 11 years for time served since his 1999 arrest. Four more former Khmer Rouge leaders are currently being tried, and two additional investigations are in progress that may result in additional indictments. Donor countries have provided over $100 million to date in support of the tribunal, including $6.8 million from the United States.
While the post-1993 period was relatively stable in comparison to the previous decades, political violence continued to be a problem through the 1990s. In 1997, factional fighting between supporters of Prince Norodom Ranariddh and Hun Sen broke out, resulting in more than 100 FUNCINPEC deaths and a few Cambodian People's Party (CPP) casualties. Some FUNCINPEC leaders were forced to flee the country, and Hun Sen took over as Prime Minister. FUNCINPEC leaders returned to Cambodia shortly before the 1998 National Assembly elections. In those elections, the CPP received 41% of the vote, FUNCINPEC 32%, and the Sam Rainsy Party (SRP) 13%. Due to political violence, intimidation, and lack of media access, many international observers judged the elections to have been seriously flawed. The CPP and FUNCINPEC formed another coalition government, with CPP the senior partner. Cambodia's first commune elections, held in February 2002 to select chiefs and members of 1,621 commune (municipality) councils, also were marred by political violence and fell short of being free and fair by international standards.
National Assembly elections in July 2003 failed to give any one party the two-thirds majority of seats required under the constitution to form a government. A political stalemate ensued which was not resolved until July 2004, when the National Assembly approved a controversial addendum to the constitution in order to require a vote on a new government. The National Assembly then approved a new coalition government comprised of the CPP and FUNCINPEC, with Hun Sen as Prime Minister and Prince Norodom Ranariddh as President of the National Assembly. The SRP, with support from various non-governmental organizations (NGOs), asserted the addendum was unconstitutional and boycotted the vote.
On October 7, 2004, King Sihanouk abdicated the throne due to illness. On October 14, the Cambodian Throne Council selected Prince Norodom Sihamoni to succeed Sihanouk as King. King Norodom Sihamoni officially ascended the throne in a coronation ceremony on October 29, 2004.
In February 2005, the National Assembly voted to lift the parliamentary immunity of three opposition parliamentarians, including SRP leader Sam Rainsy, in connection with lawsuits filed against them by members of the ruling parties. One of the parliamentarians, Cheam Channy, was arrested and later tried, while Sam Rainsy went into self-imposed exile. In October 2005, the government arrested critics of Cambodia's border treaties with Vietnam and later detained four human rights activists following International Human Rights Day in December. In January 2006, the political climate improved with the Prime Minister's decision to release all political detainees and permit Sam Rainsy's return to Cambodia.
Following public criticism by Hun Sen, Prince Ranariddh resigned as President of the National Assembly in March 2006. He later broke with FUNCINPEC and founded a new party, the Norodom Ranariddh Party (NRP). In 2007, Ranariddh was convicted of corruption by a Cambodian court and fled to Malaysia to avoid imprisonment. In October 2008, he received a royal pardon and returned to Cambodia. Shortly afterward, he announced that he was withdrawing from politics. However, in December 2010 Ranariddh announced plans to re-enter politics, and the Nationalist Party reverted to its former name, the Norodom Ranariddh Party (NRP), with Ranariddh as its leader.
Cambodia's second commune elections were held in April 2007, followed by National Assembly elections in July 2008. In both cases, there was little of the pre-election violence that preceded the 2002 and 2003 elections. Both polls resulted in victories for the Cambodian People's Party, with the Sam Rainsy Party emerging as the main opposition party and the royalist parties showing weakening support. The Assembly inaugurated in September 2008 is led by a coalition of the CPP (90 seats) and FUNCINPEC (2 seats). The SRP (26 seats) and the Human Rights Party led by Kem Sokha (3 seats) are in opposition. The NRP (2 seats) has announced its intention to merge with FUNCINPEC by 2012. The CPP-led coalition retained Hun Sen as Prime Minister, as well as most of the key leaders from the previous government, and all ministers are from the CPP. In May 2009, non-universal elections were held when commune council members chose representatives to district councils, city councils, and provincial councils, which would have administrative and budgetary powers at the local level.
In 2009, the CPP-dominated parliament voted again to lift the parliamentary immunity of three members of the opposition, including Sam Rainsy, in order to allow civil or criminal charges to be pursued. Sam Rainsy was convicted in absentia and sentenced to 10 years in prison in January 2010 for his role in the removal of several temporary border markers on the Cambodia-Vietnam border, along with making statements deemed racially incendiary. He remains outside the country. A second SRP member was convicted of defaming the Prime Minister; after refusing to pay the court-ordered fine and exhausting all appeals, the court ordered the lawmaker's salary garnished to pay the fine, a process which concluded in December 2010. The member began advocating for restoration of parliamentary immunity in January 2011. A third SRP member was ultimately acquitted on all charges.
Cambodia is a constitutional monarchy, and its constitution provides for a multiparty democracy. The Royal Government of Cambodia, formed on the basis of elections internationally recognized as free and fair, was established on September 24, 1993.
The executive branch comprises the king, who is head of state; an appointed prime minister; 10 deputy prime ministers, 16 senior ministers, 26 ministers, 206 secretaries of state, and 205 undersecretaries of state. The bicameral legislature consists of a 123-member elected National Assembly and a 61-member Senate.
Government Type: multiparty democracy under a constitutional monarchy
Capital: Phnom Penh (1.519 million est. 2009)
Administrative Divisions: 23 provinces (khett, singular and plural) and 1 municipality (krong, singular and plural)
Independence Date: Independence Day, 9 November (1953)
Legal System: civil law system (influenced by the UN Transitional Authority in Cambodia), customary law, Communist legal theory, and common law. Cambodia accepts compulsory International Court of Justice (ICJ) jurisdiction with reservations and accepts International Criminal Court (ICCt) jurisdiction. The judiciary includes a Supreme Court, lower courts, and an internationalized court with jurisdiction over the serious crimes of the Khmer Rouge era.
The 1993 constitution provides for a wide range of internationally recognized human rights, including freedom of the press. While freedom of the press has improved markedly in Cambodia since the adoption of the constitution, limitations still exist on mass media. Much of the written press, while considered largely free, has ties to individual political parties or factions and does not seek to provide objective reporting or analysis. Cambodia has an estimated 25 Khmer-language newspapers that are published regularly. Of these, eight are published daily; three opposition papers are published regularly, and two of these are daily publications. There are two major English-language newspapers, both of which are dailies. Broadcast media, in contrast to print, is more closely controlled. It tends to be politically affiliated with the CPP, and access for opposition parties is extremely limited.
International Environmental Agreements
Cambodia is party to international agreements on: Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Desertification, Endangered Species, Hazardous Wastes, Marine Life Conservation, Ozone Layer Protection, Ship Pollution, Tropical Timber 94, Wetlands, and Whaling. It has signed, but not ratified the Law of the Sea.
Total Renewable Water Resources: 476.1 cu km (1999)
Freshwater Withdrawal: 4.08 cu km/yr (1% domestic, 0% industrial, 98% agricultural)
Per capita freshwater withdrawal: 290 cu m/yr (2000)
Access to improved water sources: 61% of population
Access to improved sanitation facilities: 29% of population
Agricultural products: rice, rubber, corn, vegetables, cashews, tapioca, silk
Irrigated Land: 2,850 sq km (2008)
Natural Resources: oil and gas, timber, gemstones, iron ore, manganese, phosphates, hydropower potential
From 2001 to 2010, the Cambodian economy expanded by an average of 8% per year, with the garment sector and the tourism industry driving growth and inflation remaining relatively low. The onset of the global recession led to a 0.1% contraction in 2009, but growth resumed in 2010 at 5.95%. The economy is heavily dollarized; the dollar and the riel can be used interchangeably. Cambodia remains heavily reliant on foreign assistance--about half of the central government budget depends on donor assistance. Foreign direct investment (FDI) has increased 12-fold since 2004 as sound macroeconomic policies, political stability, regional economic growth, and government openness toward investment attract growing numbers of investors.
Manufacturing output is concentrated in the garment sector, and garments dominate Cambodia's exports, especially to the U.S. and the EU. The industry expanded rapidly from the mid-1990s until 2008, employing 350,000 workers and generating $3 billion in annual revenue at its peak. With the January 2005 expiration of a WTO Agreement on Textiles and Clothing, Cambodian textile producers were forced to compete directly with lower-priced countries such as China, India, Vietnam, and Bangladesh. The global economic slowdown caused a drop in demand, resulting in a more than 20% decline in garment exports and an estimated 60,000 unemployed workers from late 2008 through 2009. In 2010, the garment sector grew 15%. Employment in the sector and garment exports are expected to reach pre-crisis levels in 2011. The garment industry currently employs more than 300,000 people - about 5% of the work force - and contributes more than 70% of Cambodia's exports.
In 2005, exploitable oil deposits were found beneath Cambodia's territorial waters, representing a new revenue stream for the government if commercial extraction begins. Commercial production is expected to commence in late 2012, but it is not yet clear if commercial extraction is viable long-term or how large Cambodia's reserves are.
Mining also is attracting significant investor interest, particularly in the northern parts of the country. The government has said opportunities exist for mining bauxite, gold, iron and gems.
In 2006, a US-Cambodia bilateral Trade and Investment Framework Agreement (TIFA) was signed, and several rounds of discussions have been held since 2007.
Rubber exports increased about 25% in 2009 due to rising global demand.
Tourism levels, which increased to approximately two million arrivals in 2008 but were also hurt by the global downturn, rebounded to 2.15 million arrivals in 2010.
The service sector is heavily concentrated in trading activities and catering-related services. The real estate sector contracted by 15.8% and land prices declined 10%-15% in 2010. Both commercial bank credits and deposits grew between 20%-25% in 2010.
The global financial crisis is weakening demand for Cambodian exports, and construction is declining due to a shortage of credit.
The long-term development of the economy remains a daunting challenge. The Cambodian government is working with bilateral and multilateral donors, including the World Bank and IMF, to address the country's many pressing needs. The major economic challenge for Cambodia over the next decade will be fashioning an economic environment in which the private sector can create enough jobs to handle Cambodia's demographic imbalance. More than 50% of the population is less than 25 years old. The population lacks education and productive skills, particularly in the poverty-ridden countryside, which suffers from an almost total lack of basic infrastructure.
In spite of recent progress, the Cambodian economy continues to suffer from the legacy of decades of war and internal strife. Per capita income and education levels are lower than in most neighboring countries. Infrastructure remains inadequate, although road networks are improving rapidly. Most rural households depend on agriculture and its related subsectors. Corruption and lack of legal protections for investors continue to hamper economic opportunity and competitiveness. The economy also has a poor track record in creating jobs in the formal sector, and the challenge will only become more daunting in the future since 50% of the population is under 20 years of age and large numbers of job seekers will begin to enter the work force over the next 10 years.
GDP: (Purchasing Power Parity): $32.95 billion (2011 est.)
GDP: (Official Exchange Rate): $13.2 billion (2011 est.)
GDP- per capita (PPP): $2,300 (2011 est.)
GDP- composition by sector:
services: 40% (2011 est.)
Industries: tourism, garments, construction, rice milling, fishing, wood and wood products, rubber, cement, gem mining, textiles.
Currency: Riels (KHR)
Matthew Herper, Forbes Staff
I cover science and medicine, and believe this is biology's century.
It’s been the catch-phrase of science geeks hoping to drive DNA sequencing to the next level: The $1,000 genome. There is a National Institutes of Health project to get us there, an X Prize to reward whoever gets there first, and a book (a great read) named after the idea. The $1,000 genome – it promises a day when we all carry around our genetic code on thumb drives and use it to decide what medicines to take, what to eat, and what diseases to watch out for.
Great buzzword, but it may never happen, especially not any time soon and especially not at a cost of $1,000. Research costs for sequencing a human genome may drop that low very soon, but that doesn’t include paying the doctors or the cost of information technology to process the data. Research genomes are not accurate enough for medical use. Getting better accuracy requires sequencing the DNA more times, which drives the cost back up. I’d think if we’re talking about actual medical use, $10,000 is a more accurate number. Certainly, it is not going to drop below the roughly $2,000 cost of a magnetic resonance imaging scan. And once the technology is in use, I think it is possible that the costs will go back up.
Even in consumer electronics, costs don’t always go to zero. Buying a decent computer (not the chintzy netbook I use for everything) costs as much now as it did ten years ago – the power behind the device you get has simply increased. But medicine is not like consumer electronics. That’s why we often pay astronomical prices for drugs that have real benefits – $93,000 per patient for Dendreon’s prostate cancer drug Provenge, or $200,000 per patient per year for one of Genzyme’s rare disease drugs. Sequencing isn’t going to mirror the drug business. It might be more like the PET scan and MRI business, with select hospitals buying huge, expensive machines. Or it might be that people don’t get their whole genomes scanned except when they have a hard-to-diagnose disease – patients with cancer might have a few hundred or a thousand of their tumor genes sequenced in order to pick the right drugs, for instance. All of this comes with the hurdle that neither doctors nor regulators really understand sequencing yet, and that’s bound to come with all sorts of hiccups. On the other hand, the first cases of using sequencing in medicine are arriving now.
That said, one of the arguments that this could be quite big is that you can get to pretty gigantic market sizes whether the cost comes down a lot or not. As Jay Flatley, chief executive of DNA sequencing leader Illumina, puts it: “If you look at the potential it verges on being insatiable through the next ten years. If you look at sequencing entire countries the potential volumes are really staggering, even at $1,000 a genome.”
Tonsillectomy Guidelines: New
Chances are, if you’re over 35, you either had your tonsils out as a child, or know someone who did. For decades, removing kids’ tonsils and adenoids was standard operating procedure in the medical community. Tonsillectomy was considered the most effective weapon against sore throats and streptococcal infections and all of their serious consequences.
A new era has dawned – one with antibiotics – and just this year, the American Academy of Otolaryngology issued new guidelines for the surgery today’s parents remember from childhood. Based on studies of throat infections and tonsillectomies, the guidelines recommend tonsillectomy for frequent or severe sore throats only. The Academy now also recommends that the surgery be considered for children who have trouble breathing while they sleep.
Reflecting changes in clinical practice, the guidelines were developed to apply scientific evidence to the need for an operation that was almost universal in the 1950s and 1960s. While the arrival of antibiotics made tonsillectomies much less necessary, many if not most members of the Baby Boomer and Generation X generations had their tonsils removed. The procedure that Ellen Wald, MD, a specialist in pediatric infectious disease and chair of the pediatrics department at the University of Wisconsin School of Medicine and Public Health, told the New York Times “was the single most common operation in the United States” now has its necessity regularly questioned by doctors.
Jack L. Paradise, MD, professor emeritus of pediatrics at the University of Pittsburgh School of Medicine, and his colleagues published a 1984 study that looked at children with many well-documented episodes of throat infection (seven or more in the preceding year, for example). Those who got tonsillectomies had fewer infections in the first couple of years after surgery than those who didn’t, the researchers found. But the children who didn’t have surgery also had fewer and fewer infections as they got older. Dr. Paradise concluded tonsillectomies were a reasonable option for children with severe, recurrent throat infections, but so was watchful waiting.
Later, Dr. Paradise studied children with fewer infections and concluded that the benefit of tonsillectomy was too “modest” to justify the risk, the pain, and the cost of surgery in those children.
These days, many doctors are less likely to move to tonsillectomy for a smaller series of run-of-the-mill sore throats, but doctors are more willing to consider that children might need the operation if their tonsils obstruct the throat enough to affect breathing while they sleep.
Parents should also know that in a significant number of children, the breathing problems—and everything that follows from disordered sleep—might persist even after the operation and necessitate further treatment.
Bottom line: The once ubiquitous tonsillectomy requires a careful overview of each child’s condition. While tonsillectomy might improve quality of life for some children, there are limits to what it can accomplish—with sleep issues and behavior problems, and with recurrent infections.
Albany's New Netherland Families
This list of eighty-two distinct family groups roughly represents the settler population of the village/town of Beverwyck at the end of the so-called New Netherland period. Already, these pre-urban dwellers were beginning to separate themselves from the farmers and husbandmen of the surrounding countryside. This list also represents the largest number of New Netherland family names in the city during its first two centuries of life.

From this core group, a number of families left the Albany community - establishing new settlements at Schenectady, Catskill, Schaghticoke, Hoosic, Saratoga, and beyond. Some became tenants of the Van Rensselaers. Others left the region entirely. Still others literally "died out" in the Albany setting. Those who remained formed the core population of what became the city of Albany.

Beginning during the 1670s and 80s, the children of the New Netherland Dutch found marriage partners and raised American-born families of their own. The Albany community continued to grow and to feed the growth of the entire Hudson-Mohawk region based on the natural increase of its New Netherland-ancestry settler stock. Although many people came and went, Albany's New Netherland Dutch roots remained strong for another 200 years. Throughout the eighteenth century, most of the successful Albany families from all backgrounds could call on their direct connection to the settlers of New Netherland - particularly through the families of the city's wives and mothers.

At the same time, it is important to note that many of those popularly known as the "New Netherland Dutch" traced their roots to the German, French, and Scandinavian states surrounding the Netherlands, and even across the channel to what became Great Britain in 1707. However, another major part of the early Albany story must be attributed to the contributions of its newcomers.
SmartDraw includes thousands of professional-looking diagrams like this that you can easily edit and make your own.
Text in this example:
The Essentials of Fire

Air: Any surface trying to burn needs oxygen to ignite.
Fuel: The most common type of fuel is wood. Combined with heat and oxygen, fuel should ignite.
Heat: Heat provided by the smaller fuel should allow the item to ignite.

FIRE
CO and Oxygen Exchange
Country: United States
Date: November 2008
Hello, I was told that if you administer 100% O2 to a
person with carbon monoxide poisoning, the O2 will replace the CO
molecules on the hemoglobin. I know that hemoglobin has a much
greater affinity for CO than for O2. What would cause the hemoglobin
to release the CO molecule for an O2 molecule?
When oxygen or carbon monoxide binds to hemoglobin, they aren't locked there
forever. They eventually will unbind. Random fluctuations in the molecules
(thermal energy) eventually will cause the molecules to unbind. "Greater
affinity" means that the fit is tighter, but it's still not absolutely
permanent. If you surround the hemoglobin with oxygen molecules, when the CO
eventually unbinds, it's more likely that an oxygen will take its space
rather than a CO molecule.
Hope this helps,
When 2 molecules bind together non-covalently, they are not bonded together
permanently. Because of thermal motion, they establish an equilibrium between
bound and unbound states. This applies to CO and Hb: the more oxygen there is
around, the more likely it is that, every time a CO molecule dissociates from
Hb, it will be replaced by an oxygen molecule. It's known as competitive binding.
Ron Baker, Ph.D.
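The competitive-binding picture above can be made quantitative with a toy single-site equilibrium model. This is only a sketch: real hemoglobin binding is cooperative, and the ~220-fold CO affinity advantage and the concentrations used here are illustrative assumptions, not clinical values.

```python
def occupancy(o2, co, K_o2=1.0, K_co=1.0 / 220):
    """Fraction of hemoglobin sites bound by O2 vs CO at equilibrium,
    in a single-site competitive-binding model (no cooperativity).
    K values are dissociation constants in arbitrary concentration
    units; K_co << K_o2 reflects CO's ~220-fold greater affinity
    (an illustrative figure)."""
    denom = 1 + o2 / K_o2 + co / K_co
    return (o2 / K_o2) / denom, (co / K_co) / denom

# Room air (21% O2) with a trace of CO vs 100% O2 with the same CO
for label, o2 in [("21% O2", 0.21), ("100% O2", 1.0)]:
    f_o2, f_co = occupancy(o2, co=0.001)
    print(f"{label}: O2-bound {f_o2:.3f}, CO-bound {f_co:.3f}")
```

Raising the surrounding O2 from 21% to 100% both increases the O2-bound fraction and cuts the CO-bound fraction, matching the qualitative argument above.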
Update: June 2012
Missions beyond Mars
Exploration of the outer planets requires extreme patience. Our launch vehicles are not powerful enough to send massive spacecraft directly to the giant planets, so the spacecraft must take circuitous paths, returning to Earth or even traveling inward to Venus for gravity assists that boost their momentum enough to send them beyond the asteroid belt. Rendezvousing with asteroids and comets can be even more challenging: they lack sufficient mass to brake fast-moving spacecraft into orbit, so the ships must perform years of orbit adjustment to match position and velocity with these tiniest of worlds.
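Why a flyby can add speed at all can be sketched with a simple 2D patched-conic model: in the planet's own frame the encounter only rotates the spacecraft's velocity vector, but adding the planet's orbital velocity back in can leave the spacecraft faster in the Sun's frame. The velocities and turn angle below are illustrative, not taken from any real mission.

```python
import math

def gravity_assist(v_in, v_planet, turn_deg):
    """2D patched-conic flyby: in the planet's frame the speed is
    unchanged and the velocity vector is rotated by the turn angle.
    Vectors are (x, y) tuples in km/s; turn_deg is the deflection."""
    # Spacecraft velocity relative to the planet on approach
    rel = (v_in[0] - v_planet[0], v_in[1] - v_planet[1])
    a = math.radians(turn_deg)
    # Rotate the planet-relative velocity by the turn angle
    rot = (rel[0] * math.cos(a) - rel[1] * math.sin(a),
           rel[0] * math.sin(a) + rel[1] * math.cos(a))
    # Transform back to the heliocentric frame
    return (rot[0] + v_planet[0], rot[1] + v_planet[1])

speed = lambda v: math.hypot(*v)

# Illustrative numbers: a slow spacecraft deflected by Jupiter
v_sc = (10.0, 0.0)    # heliocentric spacecraft velocity, km/s
v_jup = (0.0, 13.1)   # Jupiter's orbital velocity, km/s
v_out = gravity_assist(v_sc, v_jup, 90.0)
print(f"{speed(v_sc):.1f} km/s -> {speed(v_out):.1f} km/s")
# prints: 10.0 km/s -> 26.6 km/s
```

Note the heliocentric speed more than doubles even though the planet-frame speed is conserved; the "free" energy comes from the planet's orbital motion.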
Power is also a problem. Located very far from the Sun, comets, outer planets, and most asteroids receive very little solar energy. Solar arrays must be very large to gather what little sunlight there is, like those of Rosetta, Dawn, and Juno; or else spacecraft must carry radioisotope thermoelectric generators. The successes of missions like Voyager, Pioneer, Galileo, Cassini-Huygens, and New Horizons required these nuclear power supplies, but Earth has run short of refined plutonium-238, preventing us from planning future missions.
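The inverse-square falloff behind that constraint is easy to quantify: solar flux drops with the square of distance from the Sun, so array area for a fixed power budget grows with it. A rough sketch, assuming an illustrative 500 W budget and 25% conversion efficiency (both hypothetical figures):

```python
# Solar flux falls off as 1/d^2, so the collecting area needed for a
# fixed power budget grows as d^2.  Numbers are illustrative only.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU

def array_area(power_w, dist_au, efficiency=0.25):
    """Array area (m^2) needed for power_w at dist_au from the Sun."""
    flux = SOLAR_CONSTANT / dist_au ** 2
    return power_w / (flux * efficiency)

for body, au in [("Earth", 1.0), ("Mars", 1.52),
                 ("Jupiter", 5.2), ("Saturn", 9.5)]:
    print(f"{body:8s} {au:4.1f} AU -> {array_area(500, au):7.1f} m^2 for 500 W")
```

At Jupiter the required area is 5.2 squared, about 27 times, larger than at Earth, which is why outer-planet missions have historically turned to RTGs.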
The exceptions to these rules are the missions to near-Earth asteroids, which, by definition, have orbits very similar to our own planet's, permitting them to be explored relatively cheaply and quickly by solar-powered spacecraft like Hayabusa and NEAR. And sometimes, small craft like Deep Impact and Stardust can catch comets as they speed across the inner solar system.
Future Jupiter polar orbiter
Launch: August 5, 2011
Jupiter arrival: Planned for July 4, 2016
Following a lengthy cruise and October 2013 Earth flyby, Juno will survey Jupiter from a polar orbit, carrying a suite of instruments designed to study the planet's interior. It will investigate the existence of an ice-rock core; determine the amount of global water and ammonia present in the atmosphere; study convection and deep wind profiles in the atmosphere; investigate the origin of the jovian magnetic field; and explore the polar magnetosphere. Its science mission does not require a camera, but it does carry one, specifically designed to capture unusual and beautiful views of Jupiter from its unusual polar perspective for public pleasure.
Asteroid 4 Vesta and dwarf planet 1 Ceres orbiter
Launch: September 27, 2007
Vesta arrival: July 16, 2011
Ceres arrival: February 2015
After flying past Mars on February 4, 2009, Dawn crept up on asteroid 4 Vesta, becoming the first orbiter of a main-belt asteroid. After surveying the asteroid from many altitudes, Dawn will depart Vesta in the summer of 2012, embarking on a journey to end with orbit insertion at 1 Ceres in February 2015.
Jupiter and Pluto / Kuiper Belt flyby
Launch: January 19, 2006
Jupiter flyby: January-May, 2007
Pluto encounter: January-August 2015, close approach July 14
New Horizons is the result of a long battle to take advantage of a once-in-a-lifetime opportunity for a Jupiter gravity assist trajectory to Pluto. It observed Jupiter over five months around the flyby in early 2007, with its closest approach on February 27. It was the first spacecraft to observe the newly formed Little Red Spot, and also caught Io's north polar volcano Tvashtar in the middle of a spectacular eruption. It will travel within 10,000 kilometers of Pluto before traveling onward to a second (and probably much more distant) encounter with a much smaller Kuiper belt object.
Comet 67P/Churyumov-Gerasimenko orbiter and lander
Launch: March 2, 2004
Churyumov-Gerasimenko arrival: May 2014
Rosetta's original goal was comet 46P/Wirtanen, but launch delays required a rerouting to 67P/Churyumov-Gerasimenko. The route is long, involving three Earth flybys (in 2005, 2007, and 2009) and one Mars flyby (February 25, 2007). It flew by asteroid 2867 Steins on September 5, 2008 and 21 Lutetia on July 10, 2010. Its long cruise will take it nearly to Jupiter's orbit before it travels inward again to rendezvous with the comet. Since Rosetta is solar-powered, ESA had to place it into a state of deep hibernation for this most distant period of its cruise. Rosetta went to sleep on June 8, 2011 and will not wake up again until January 20, 2014, five months before its arrival at Churyumov-Gerasimenko. After entering orbit, it will drop a small lander, Philae, to the surface of the comet.
Flyby and impact into comet 9P/Tempel 1
Launch: January 12, 2005
Tempel 1 impact and flyby: July 4, 2005
Hartley 2 flyby: November 4, 2010
One day prior to its flyby of Tempel 1, Deep Impact released a 364-kilogram copper impactor onto a collision course with the comet. The impactor captured images all the way down to its 10.2 kilometer-per-second impact with Tempel 1. The flyby spacecraft captured amazing views of the impact from a safe distance as every large telescope on Earth was also pointed at the comet. Since the end of its primary mission, Deep Impact's blurred camera has been employed to study exoplanets (a project called the EPOXI mission), and the spacecraft has encountered a second comet, 103P/Hartley 2. The spacecraft is low on fuel but otherwise still functional and now being tested for future use as a deep-space astronomical observatory. It is currently targeted for a 2020 flyby of asteroid (163249) 2002GT.
Successful Saturn orbiter & Titan probe
NASA / ESA / ASI
Launch: October 15, 1997
Jupiter flyby: December 30, 2000
Orbit insertion: July 1, 2004
Huygens probe descent: January 14, 2005
Cassini-Huygens' path to Saturn required two flybys of Venus (on April 26, 1998, and June 24, 1999), one of Earth (on August 18, 1999), and one of Jupiter (on December 30, 2000). During the Jupiter encounter, Cassini conducted coordinated observations with Galileo. The Huygens probe descent was wildly successful, revealing a strange new world of channels and basins on Titan. Cassini shapes its orbit around Saturn with numerous gravity-assist flybys of Titan, occasionally surveying Saturn from above or below (with lovely perspectives on the rings) and occasionally from within the ring plane (affording frequent encounters with Saturn's other, smaller moons). Cassini's mission has been extended twice, but it will end on September 15, 2017, after 293 complete orbits of Saturn, with the spacecraft's plunge into the atmosphere.
Comet Giacobini-Zinner flyby and distant Halley observer
(Formerly known as International Sun-Earth Explorer or ISEE-3)
Launch: August 12, 1978
Flyby: September 11, 1985
Return to Earth: expected August 10, 2014
Originally launched to explore Earth's magnetosphere and its interaction with the solar wind, the International Sun-Earth Explorer was renamed the International Cometary Explorer on December 22, 1983. On that date, a lunar gravity assist flyby launched the spacecraft onto a heliocentric orbit ahead of Earth to intercept comet Giacobini-Zinner. It flew through the tail of Giacobini-Zinner on September 11, 1985, and went on to transit between the Sun and Halley's comet in March 1986, becoming the first spacecraft to investigate two comets. There was no contact with ICE after the end of its mission in 1999 until September 18, 2008, when it was successfully re-contacted. ICE will return to fly by the Moon on August 10, 2014, when it could be re-captured into a halo orbit and possibly sent out again to explore another comet.
Successful Flybys of Jupiter and Saturn
Launch: September 5, 1977
Jupiter encounter: January 4 to April 13, 1979
Saturn encounter: August 23 to December 15, 1980
Launched 16 days after Voyager 2, Voyager 1 was on the fast track to Jupiter and actually arrived four months ahead of the other spacecraft. Voyager 1 flew by Jupiter on March 5, 1979, taking more than 18,000 images of the planet and its moons. The spacecraft flew by Saturn on November 12, 1980, coming within 64,200 kilometers of the planet's cloud tops. During the flyby, the spacecraft took almost 16,000 images of Saturn, its moons, and ring system. Voyager 1's path past Saturn and Titan directed it up and out of the plane of the ecliptic, allowing scientists to get an overhead view of the planet and rings. Voyager 1 is currently on an Interstellar Mission and is the most distant man-made object ever launched, taking that title from Pioneer 10 on February 17, 1998. It is now probing the boundaries of the heliosphere, where the solar system gives way to the interstellar medium.
Successful “Grand Tour” flybys of Jupiter, Saturn, Uranus, and Neptune
Launch: August 20, 1977
Jupiter encounter: April 25 to August 5, 1979
Saturn encounter: June 5 to September 5, 1981
Uranus encounter: November 4, 1985 to February 25, 1986
Neptune encounter: June 5 through October 2, 1989
Even though Voyager 2 launched 16 days before Voyager 1, it took its time getting to Jupiter and arrived four months after Voyager 1. Ten months into the flight, well before the spacecraft reached the planet, Voyager 2's primary radio receiver failed. The backup receiver kicked in, but it proved to be somewhat unreliable. Controllers tried to revive the primary receiver, without any luck. They were forced to continue with the backup. Despite its irregularities, the backup receiver worked admirably during the Jupiter flyby. Voyager 2 flew by Jupiter on July 9, 1979, taking about the same number of images as Voyager 1 (18,000 images of Jupiter and its moons). Between the two spacecraft, three new moons were discovered as well as a thin, dark ring around Jupiter. Voyager images of Jupiter's moon Io revealed active volcanoes, the first ever discovered on another body besides Earth. Voyager 2 passed by the ringed world on August 26, 1981. It flew within 41,000 kilometers of the planet's cloud tops and provided scientists with almost 16,000 images of the planet, its moons and rings. While at Saturn, the two Voyager spacecraft discovered three new moons of Saturn, the intricate structure and spoke-like features of the ring system, and information about the planet's atmosphere and magnetic field. Voyager 2 flew by Uranus on January 24, 1986, coming within 81,500 kilometers of the planet's cloud tops. The spacecraft took almost 8,000 images of the planet, its moons and its dark ring system. The planet itself appeared as a vague, nearly featureless ball covered by a greenish blue methane haze. Although Voyager 2 performed a survey of Uranus’ moons, it passed by when tilted Uranus was at the height of southern summer, meaning that only the moons’ southern hemispheres were visible. Voyager 2 had to pass very close by Uranus to get the gravity assist necessary to send it on to Neptune.
The close flyby altitude, combined with the vertical, “bull’s-eye” pattern of Uranus’ tilted system of rings and moons, meant that Voyager 2 saw only Miranda close-up; the rest of the moons were seen only distantly. The Voyager 2 images yielded the discoveries of 10 new moons. Voyager 2 flew by Neptune on August 25, 1989. Since Neptune was the final target for the spacecraft, scientists decided they could take risks they had avoided during previous planetary encounters. They programmed Voyager 2 to fly within 5,000 kilometers of the planet's cloud tops, closer than it had come to Jupiter, Saturn, or Uranus. The results were impressive. Despite the great distance from the Sun, a 4-hour one-way communications lag, and low lighting conditions, the spacecraft returned 10,000 images of Neptune, its moons, and ring system. Voyager 2 discovered interesting cloud features on the planet and recorded some of the fastest winds in the solar system. The spacecraft also discovered the clumpiness of Neptune’s rings, as well as six new moons. The close approach to Neptune actually slowed Voyager 2’s speed with respect to the Sun, and sent the spacecraft on a trajectory diving below the plane of the solar system. Like Voyager 1, it is now probing the boundaries of the heliosphere, where the solar system gives way to the interstellar medium.
Orbiter and sample return from asteroid Itokawa (1998 SF36)
Launch: May 9, 2003
Itokawa arrival: September 2005
Earth sample return: June 13, 2010
Hayabusa's mission to and from asteroid Itokawa was one of the most thrilling adventures in modern space exploration, marked by numerous near-mission-ending disasters saved by the ingenuity of mission engineers, and culminating in the fiery death of the parent spacecraft on the night of the return of its sample capsule -- a story much too long for this space (and dramatic enough to be the subject of three feature-length films in Japan). Hayabusa rendezvoused with and touched down on a very small asteroid. It deployed a hopper named "Minerva" on November 12, 2005, but the hopper missed the asteroid. It did successfully drop a target marker containing 880,000 names to the surface, and then followed the marker down for two landing attempts. Upon the successful return of the sample capsule, a very small amount of asteroid dust was found inside, plenty for analysis by labs trained on the Stardust samples.
Failed multi-comet flyby
Launch: July 3, 2002
CONTOUR was lost August 15, 2002, when the spacecraft failed to contact Earth shortly after a scheduled firing of its main rocket motor. Investigation revealed that the spacecraft broke apart toward the end of the rocket motor firing. The spacecraft had been scheduled to fly by at least three comets: comet 2P/Encke in 2003, continuing with 29P/Schwassmann-Wachmann in 2006, and 6P/d'Arrest in 2008.
Flyby and coma sample return from comet P/Wild 2
Launch: February 7, 1999
Annefrank flyby: October 31, 2002
Wild 2 flyby: January 2, 2004
Sample return: January 15, 2006
Tempel 1 flyby: February 15, 2011
Propellant exhausted: March 24, 2011
Stardust flew past Earth on November 14, 2000, and then asteroid 5535 Annefrank. When Stardust flew by Wild 2, it collected samples of dust and volatiles from the comet's coma as well as images and other data. Other objectives of the mission included collecting samples of interstellar dust grains, imaging the comet nucleus, and conducting preliminary analysis of the composition of the cometary dust particles. It returned the samples to Earth on January 15, 2006. The aerogel collector plates proved to be full of cometary material, surpassing the science team's expectations. Following another Earth flyby on January 14, 2009, Stardust was sent onward to comet 9P/Tempel 1, which had been the target of the Deep Impact mission. With nearly no fuel left onboard, Stardust was commanded to burn the rest of it to depletion before powering down for good.
Flybys of asteroid 9969 Braille and comet 19P/Borrelly
Launch: October 24, 1998
Braille flyby: July 28, 1999
Borrelly flyby: September 22, 2001
Engine shut down: December 18, 2001
Deep Space 1 was a demonstration probe designed to test new technologies such as ion propulsion. The spacecraft flew within 15 kilometers of asteroid 9969 Braille, which was named through a Planetary Society-run contest. With all systems still operating at the end of its primary mission in September 1999, engineers decided to extend the mission and attempt a flyby of comet 19P/Borrelly. By the time Deep Space 1 reached Borrelly, it had lasted three times longer than expected. It flew within 2,200 kilometers of the comet, providing the most detailed images of a comet's nucleus yet seen. With its fuel almost gone and its instruments in varying states of disrepair, communication with the spacecraft was terminated in December 2001. However, the spacecraft could, in theory, be re-contacted and returned to service, as ICE was.
Asteroid 433 Eros orbiter (eventually used as a lander!)
Launch: February 17, 1996
Eros arrival: February 14, 2000
Eros landing: February 12, 2001
End of mission: February 28, 2001
During its yearlong mission, NEAR gathered 10 times more data than originally planned. On February 12, 2001, with its fuel and funding nearly depleted, mission planners tried the unprecedented maneuver of landing the orbiter on Eros. With fragile solar panels and protruding antennae, NEAR was never intended to be a lander. However, controllers successfully brought the spacecraft to a gentle 1.9 meter-per-second touchdown onto the rocky surface, taking 69 images during the final descent. The spacecraft continued to function even after it landed. NEAR was officially shut down on February 28, 2001.
Successful Solar polar orbiter
Launch: October 6, 1990
Jupiter flyby: February 8, 1992
Mission end: June 30, 2009
The primary mission of Ulysses was to study the north and south poles of the Sun. However, getting to those solar poles required the spacecraft to perform some interplanetary gymnastics. The spacecraft first went to Jupiter, where the strong Jovian gravity helped redirect the spacecraft, placing it on its proper course. As Ulysses flew by the planet, instruments onboard the spacecraft studied Jupiter's strong magnetic field and radiation levels. The mission was long and productive, ending only in 2009 after the X-band transmitter had failed and the fuel had nearly frozen.
Jupiter orbiter, with flybys of asteroids 951 Gaspra and 243 Ida/Dactyl
Launch: October 18, 1989
Gaspra flyby: October 29, 1991
Ida/Dactyl flyby: August 28, 1993
Witnessed Shoemaker-Levy crash: July 1994
Jupiter probe descent: December 7, 1995
Jupiter orbit insertion: December 8, 1995
Plunge into Jupiter: September 22, 2003
After a long and troubled development process culminating with a launch from Space Shuttle Atlantis, Galileo traveled past Venus once (on February 10, 1990) and Earth twice (on December 8, 1990 and December 8, 1992). But it had suffered a crippling malfunction early in its mission when its high-gain antenna failed to open. Still, Galileo accomplished the first ever asteroid flybys as it traveled through the main belt on its way to Jupiter. It passed within 1,600 kilometers of Gaspra and 2,400 kilometers of Ida. Galileo made the surprising discovery that Ida has a tiny satellite, which was later named Dactyl. As Galileo approached its insertion into Jupiter orbit, it happened to be in the right place at the right time to observe comet Shoemaker-Levy 9 break up and crash into Jupiter. Galileo was the only observatory that had a direct view of the impact, which happened on Jupiter’s night side; Earth-based telescopes had to wait until Jupiter’s rotation brought the impact zone into view hours later. Galileo was the first spacecraft to deploy a probe into an outer planet’s atmosphere. When the Jupiter Probe plunged into the Jovian clouds, it sent back information about the temperature, wind speeds, and pressure as it descended. It finally succumbed to the incredible pressure (24 times Earth's pressure at sea level) one hour after it began its descent. Galileo was also the first spacecraft to dwell in a giant planet's magnetosphere long enough to identify its global structure and investigate the dynamics of Jupiter's magnetic field. It revealed that Jupiter's ring system is formed by dust kicked up as interplanetary meteoroids smash into the planet's four small inner moons and that the planet's outermost ring is actually two rings, one embedded within the other. The spacecraft’s mission was extended three times in order to study the Galilean satellites Io, Europa, Ganymede, and Callisto.
Galileo made many discoveries about these moons: Io's extensive volcanic activity is 100 times greater than that found on Earth; Europa harbors a salty ocean up to 100 kilometers (62 miles) underneath its frozen surface, containing about twice as much water as all the Earth's oceans; Callisto and Ganymede may also feature a liquid-saltwater layer; and Ganymede has an iron core, like Earth, and a magnetic field, making this moon the first satellite known to possess a magnetic field. In order to avoid any possibility of the spacecraft contaminating Europa’s salty ocean with material brought from Earth, the spacecraft was deliberately destroyed by sending it onto a collision course with Jupiter.
Comets 1P/Halley and 26P/Grigg-Skjellerup flyby
Launch: July 2, 1985
Halley flyby: March 13, 1986
Grigg-Skjellerup flyby: July 10, 1992
End of mission: July 23, 1992
Giotto flew by Halley at a distance of 596 kilometers. All experiments performed well and returned a wealth of new scientific results, of which perhaps the most important was the clear identification of the cometary nucleus. During an extended mission, the spacecraft successfully encountered comet Grigg-Skjellerup at a distance of 200 kilometers.
Comet 1P/Halley flyby
Institute of Space and Aeronautical Science (ISAS)
Launch: March 18, 1985
Flyby: March 8, 1986
Fuel depleted: February 22, 1991
Suisei (which translates to 'Comet') was identical to Sakigake apart from its payload: an ultraviolet (UV) imaging system and a solar wind instrument. Suisei began UV observations in November 1985, generating up to 6 images per day. The spacecraft encountered Comet 1P/Halley at a distance of 151,000 kilometers. During 1987, ISAS had decided to guide Suisei to a November 24, 1998 encounter with 21P/Giacobini-Zinner, but due to depletion of the hydrazine, this plan, along with plans to fly within several million kilometers of comet 55P/Tempel-Tuttle on February 28, 1998, was cancelled.
Comet 1P/Halley flyby
Institute of Space and Aeronautical Science (ISAS)
Launch: January 8, 1985
Flyby: March 11, 1986
Contact lost: November 15, 1995
Sakigake (which translates to 'Pioneer') was a prototype spacecraft launched by the Japanese space agency ISAS. It successfully flew within 7 million kilometers of Halley's comet. The spacecraft was equipped with 3 instruments to measure plasma wave spectra, solar wind ions, and interplanetary magnetic fields. An extended mission was planned, including flybys of comet 45P/Honda-Mrkos-Pajdusakova in 1996 and comet 21P/Giacobini-Zinner in 1998. Unfortunately, controllers lost contact with the spacecraft.
Comet 1P/Halley flybys
Soviet Academy of Sciences
Launch: December 15 and 21, 1984
Flyby: March 6 and 9, 1986
The identical Vega 1 and Vega 2 combined Venus swingbys with flybys of comet 1P/Halley. It is estimated that Vega 1 flew by at a distance of 10,000 kilometers (6,000 miles), and Vega 2 at 3,000 kilometers (1,800 miles).
Successful Jupiter and Saturn flyby
Launch: April 5, 1973
Jupiter flyby: December 2, 1974
Pioneer 11 was the second spacecraft to explore the outer solar system (the first being Pioneer 10). Pioneer 11 flew within 34,000 kilometers (21,100 miles) of the Jovian cloud tops. The spacecraft studied the planet's magnetic field and atmosphere and took pictures of the planet and some of its moons. It then flew by Saturn on September 1, 1979 and continued on out of the solar system. Instruments were finally shut down on September 9, 1995, when there was no longer enough power.
Successful Jupiter flyby
Launch: March 2, 1972
Jupiter flyby: December 3, 1973
Pioneer 10 was the first spacecraft to pass through the Asteroid Belt and explore the outer solar system. It flew within 200,000 kilometers of the Jovian cloud tops. Scientists were surprised at the tremendous radiation levels experienced by the spacecraft as it passed the gas giant planet. Once past Jupiter, the spacecraft headed out of the solar system. Routine communication with Pioneer 10 ended on March 31, 1997, but controllers occasionally checked in with it until contact was lost on April 28, 2001. It is now heading in the general direction of Aldebaran, the red giant star in the constellation of Taurus. At its current speed, it would take about 2 million years to get to Aldebaran.
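The "about 2 million years" figure is easy to sanity-check. A rough back-of-the-envelope sketch in Python (the ~65-light-year distance to Aldebaran and the ~12 km/s recession speed are assumed round numbers for illustration, not values from the article):

```python
# Rough travel-time check for the "about 2 million years to Aldebaran" figure.
# Assumed inputs: Aldebaran at ~65 light-years, Pioneer 10 receding at ~12 km/s.
LY_KM = 9.4607e12            # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

distance_km = 65 * LY_KM
speed_km_s = 12.0

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"~{years / 1e6:.1f} million years")
```

With these round inputs the estimate comes out around 1.6 million years — the same order of magnitude as the article's "about 2 million," which varies with the exact distance and speed assumed.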
|
<urn:uuid:41109e85-08ca-4e39-a064-ba530d47b895>
|
CC-MAIN-2013-20
|
http://www.planetary.org/explore/space-topics/space-missions/missions-beyond-mars.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.947846
| 5,360
| 3.9375
| 4
|
The future for stem cell research outlined at the Facial Surgery Research Foundation's meeting Stem cell research: hope or hype, held last week in London, was heralded with a sober prediction: no safe and effective stem cell therapy will be widely available for at least a decade, and possibly longer. And the hype? Take a look at any one of the numerous sensationalist headlines that greeted the report 2 weeks ago by South Korean scientists who had created embryonic clones and you will get the general idea.
The message from the meeting was that hype, and its stark contrast with reality, should not disguise the technological strides that are being made in stem cell research. The difficulties are unique, involving major practical and ethical issues. Adult stem cells are hard to grow in large enough numbers; embryonic stem cells are more amenable to culture but come from supernumerary (after in-vitro fertilisation) or experimentally created embryos. Harvesting, culture, purification, and administration need much more research before clinical trials become widespread. The use of embryos as a source of stem cells creates immediate ethical dilemmas, as does the creation of clones by somatic-cell nuclear transfer. Evidence of the strong feelings aroused by such debates came last Friday, when the US House of Representatives voted to raise federal funding for embryonic stem cell research to allow the use of supernumerary embryos, which is currently banned. President Bush immediately threatened to veto the bill.
Collaborations in cardiovascular medicine, as John Martin outlines in an online Comment, may provide more hope for the future. Stem cell research holds much promise: in-vitro work will provide insights into disease mechanisms, and one day there will be new treatments for intractable congenital and chronic diseases. 4 years ago, we said that cloning to obtain stem cells was a step too far (because there were sufficient supernumerary embryos available), but the research community is now beyond that step. Consensus about the use of embryonic stem cells will probably remain fluid, as the science evolves and as the public and patients join the debate with scientists, ethicists, and politicians in response.
|
<urn:uuid:b11647ce-c450-41fa-85f2-bf97f04ff402>
|
CC-MAIN-2013-20
|
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(05)66634-2/fulltext
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948597
| 438
| 2.640625
| 3
|
Communicable diseases continue to account for an unduly high proportion of the health budgets of developing countries. According to The world health report, acute diarrhoea is responsible for as many as 2.2 million deaths annually. Acute respiratory infections (primarily pneumonia) are another important cause of death, resulting in an estimated 4 million deaths each year.
Analysis of data on lung aspirates appears to indicate that, in developing countries, bacteria such as Haemophilus influenzae and Streptococcus pneumoniae, rather than viruses, are the predominant pathogens in childhood pneumonia. β-Lactamase-producing H. influenzae and S. pneumoniae with decreased sensitivity to benzylpenicillin have appeared in different parts of the world, making the surveillance of these pathogens increasingly important.
Sexually transmitted diseases are on the increase. There are still threats of epidemics and pandemics of viral or bacterial origin, made more likely by inadequate epidemiological surveillance and deficient preventive measures. To prevent and control the main bacterial diseases, there is a need to develop simple tools for use in epidemiological surveillance and disease monitoring, as well as simplified and reliable diagnostic techniques.
To meet the challenge that this situation represents, the health laboratory services must be based on a network of laboratories carrying out microbiological diagnostic work for health centres, hospital doctors, and epidemiologists.
The complexity of the work will increase from the peripheral to the intermediate and central laboratories. Only in this way will it be possible to gather, quickly enough, sufficient relevant information to improve surveillance, and permit the early recognition of epidemics or unusual infections and the development, application, and evaluation of specific intervention measures.
This book is an encompassing guide to basic lab procedures in clinical bacteriology. Included is everything you ever needed to know about bacteriology.
- Internal quality control
- External quality assessment
- Bacteriological investigations
- When and where bacteraemia may occur
- Blood collection
- Blood-culture media
- Processing of blood cultures
- Cerebrospinal fluid
|
<urn:uuid:355ebb9b-834f-4952-a0db-8beaed7b24e7>
|
CC-MAIN-2013-20
|
http://www.ultimatesurvivalskills.com/diy/chemistry-laboratory/basic-lab-procedures-in-clinical-bacteriology.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.906795
| 450
| 3.109375
| 3
|
Cultural heritage is generally associated with archives, works of art, places of worship and monuments. But it also exists in less tangible forms: language, music and dance, festivities, rituals and traditional craftsmanship. Cultural heritage is important to the identity of a society. In times of need, songs, writings and works of art can be a beacon of hope and comfort. Cultural heritage reinforces cultural and historical self-awareness. Monuments and art treasures make a shared past visible and thus strengthen inter-cultural ties.
In this section you will find news stories and articles on cultural heritage that were published on the Power of Culture website between 2003 and 2010. More recent articles from our network can be found on our Facebook page.
|
<urn:uuid:53bea304-1aa2-4cc7-9d8b-35e59f69ada9>
|
CC-MAIN-2013-20
|
http://kvc.minbuza.nl/en/theme/heritage?page=8
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937576
| 145
| 3.0625
| 3
|
RALEIGH — Sometimes, despite good intentions, we just get things wrong. That’s what happened in 2007, when the North Carolina legislature enacted a bill to force electric utilities to buy “renewable” power from wind, solar, and other expensive and unreliable sources.
When I say “we” got it wrong, I mean it in the broadest possible sense. The legislature and then-Gov. Mike Easley got it wrong by ignoring the effects of higher electric rates on North Carolina’s economy. At the time, they stated the primary goal was to combat global warming.
But I must also admit that my John Locke Foundation colleagues and I also got it wrong back in 2007. Arguing against the bill, we pointed out that even if North Carolina’s carbon-dioxide emissions suddenly dropped to zero — which would require, among other things, the mass extinction of North Carolinians — the effects on climate change would be too small to measure.
We also commissioned a study of the economic impact from economists at Suffolk University’s Beacon Hill Institute in Boston. They found the extra electricity costs would slash the state’s GDP by $140 million and cost thousands of jobs.
That’s where things started to go wrong for us. You see, we put our faith in smarty-pants Ph.D.s from Massachusetts. Their fancy econometric models assumed that if you force consumers to pay more for power, and artificially induce investors to finance the construction of those expensive power facilities rather than other capital assets, that would reduce jobs and incomes elsewhere in the economy — while the higher rates could deter some electricity-dependent companies from locating in North Carolina.
It turns out, however, that when it comes to electrical power, households and businesses don't pay their bills in the normal way. According to a recent report commissioned for the N.C. Sustainable Energy Association, which represents the companies subsidized by North Carolina's renewable-portfolio standard, the legislation has created more than 21,000 jobs since its passage in 2007. That's because installing solar panels and the like requires spending money on employees and contractors.
Actually, I should be more specific: The Sustainable Energy Association says the mandate to buy expensive electricity has created or retained 21,000 job years. If someone has the same job for four years, that is considered four “job years.” In reality, the study estimated an average of 4,233 jobs since 2007. Still, what an impressive way to exaggerate economic impacts. In our defense, we were unable to foresee such statistical wizardry in 2007, since the “job years” and “jobs retained” tricks came to prominence in 2009 when employed by the Obama administration to sell their fabulously successful stimulus program.
Anyway, the point is that the John Locke Foundation hired the wrong consultants, who then used the wrong model. When it comes to electricity, it is apparently safe to assume that higher costs are really higher benefits. All you need to do is produce a lengthy study proving the obvious point that if a government regulation compels higher spending, somebody receives the money. Only in highfalutin economics departments is it necessary to account for where the money would otherwise have been spent.
Having made this discovery, however, supporters of North Carolina’s 2007 renewable-portfolio mandate — that is, opponents of Rep. Mike Hager’s 2013 bill to revisit it — failed to see the logical conclusion. Even windmills and solar panels fail to maximize the potential creation of job years from high-cost electricity.
So the John Locke Foundation is proposing a new approach. Let’s amend the 2007 law to require that at least 25 percent of North Carolina’s electricity come from human generation. Approved technologies would include stationary bicycles, hand cranks, even new devices to produce current from activities such as human breathing. We conservatively estimate this would create at least a million job years through 2016. In fact, all unemployed North Carolinians could find work producing electricity, as long as they are capable of independent respiration.
Remember, human beings run on food, which is a renewable resource — as is the resulting waste product.
John Hood is president of the John Locke Foundation, which has just published “First In Freedom: Transforming Ideas into Consequences for North Carolina.” It is available at JohnLockeStore.com.
|
<urn:uuid:b8bedb95-8775-41a7-accb-cea7982aa3c7>
|
CC-MAIN-2013-20
|
http://www.robesonian.com/view/full_story_myown/22021336/article-The-real-power-to-create-jobs
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954332
| 917
| 2.53125
| 3
|
Researchers at Uppsala University can now show that what is good for one sex is not always good for the other sex. In fact, evolutionary conflicts between the two sexes cause characteristics and behaviors that are downright injurious to the opposite sex. The findings are being published in the scientific journal Current Biology.
In both males and females in the animal world it is common – much more common than one might like to think – for one sex to evince characteristics and properties that are injurious to individuals of the other sex, according to Professor Göran Arnqvist at the Department of Ecology and Evolution, who adds:
"One especially tricky case involves species where the males have mating organs that are supplied with hooks, barbs, and flukes that cause internal injuries in females during mating. This is extremely common among insects, but it also occurs in many other animal groups."
The Uppsala scientists have studied seed beetles and their mating behavior. Göran relates that the males' mating organ is rather similar to a medieval spiked club and that it causes severe wounds in females during mating. But since it is never a good idea for a male merely to injure a female, the researchers have assumed that these structures serve another purpose and that the injury is an unfortunate side effect.
"Females' injuries as such do not benefit the male she mated with. It has been suggested rather that the injuries are a side effect of other benefits the males reap from the barbs. Now, for the first time, we are able to show that this is the case," says Göran Arnqvist.
Despite these costs, females mate with multiple males.
"We also show that males with long barbs cause more severe injuries to females, but also that these males have a greater rate of fertilization success," says Göran Arnqvist.
The barbs are thus extremely important to males in their competition to be able to fertilize an egg. When females mate with two males, it is more often the male with the longer barbs that fertilizes her eggs.
|
<urn:uuid:1dc33039-7f04-4d29-9616-4a9ce751989d>
|
CC-MAIN-2013-20
|
http://www.sciencecodex.com/mating_that_causes_injuries
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.973429
| 418
| 3.46875
| 3
|
The Antennae Galaxies Found To Be Closer To Us
Paris, France (ESA) May 12, 2008
New research on the Antennae Galaxies using the Advanced Camera for Surveys onboard the NASA/ESA Hubble Space Telescope shows that this proto-typical pair of interacting galaxies is in fact much closer to us than previously thought - at 45 million light-years instead of 65 million light-years.
The Antennae Galaxies are among the closest known merging galaxies. The merging pair of galaxies, NGC 4038 and NGC 4039, began interacting a few hundred million years ago, creating one of the most impressive sights in the night sky. They are considered by scientists as the archetypal merging galaxy system and are used as a standard against which to validate theories about galaxy evolution.
An international group of scientists led by Ivo Saviane from the European Southern Observatory has used Hubble's Advanced Camera for Surveys and Wide Field Planetary Camera 2 to observe individual stars spawned by the colossal cosmic collision in the Antennae Galaxies.
They reached an interesting and surprising conclusion. By measuring the colours and brightnesses of red giant stars in the system, the scientists found that the Antennae Galaxies are much closer to us than previously thought: residing at a distance of 45 million light-years instead of the previous best estimate of 65 million light-years.
The team targeted a region in the relatively quiescent outer regions in the southern tidal tail, away from the active central regions. This tail consists of material thrown from the main galaxies as they collided. The scientists needed to observe regions with older red giant stars to derive an accurate distance.
Red giants are known to reach a standard brightness, which can then be used to infer their distance from the difference between the intrinsic and observed brightness. The method is known as the tip of the red giant branch (TRGB).
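The TRGB method boils down to the distance-modulus relation m − M = 5 log₁₀(d / 10 pc). A minimal sketch in Python (the magnitude values below are illustrative stand-ins, not figures from the study; the absolute I-band TRGB magnitude of about −4.05 is a commonly used calibration, assumed here):

```python
def trgb_distance_mpc(m_trgb, M_trgb=-4.05):
    """Distance from the tip of the red giant branch (TRGB).

    m_trgb : observed (apparent) I-band magnitude of the TRGB
    M_trgb : absolute TRGB magnitude (~ -4.05 in I; nearly standard)
    """
    mu = m_trgb - M_trgb            # distance modulus m - M
    d_pc = 10 ** ((mu + 5) / 5)     # mu = 5 * log10(d / 10 pc)
    return d_pc / 1e6               # convert parsecs to megaparsecs

# Illustrative only: an apparent TRGB magnitude near 26.65 would give
# ~13.8 Mpc, i.e. about 45 million light-years (1 Mpc ≈ 3.26 Mly).
d = trgb_distance_mpc(26.65)
print(f"{d:.1f} Mpc ≈ {d * 3.26:.0f} million light-years")
```

The key point is that a small shift in the measured apparent magnitude translates directly into a revised distance, which is how the 65-million-light-year estimate could be brought down to 45.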
The proximity of the Antennae system means it is the best-studied galaxy merger in the sky, with a wealth of observational data to be compared to the predictions of theoretical models.
Saviane says: "All aspiring models for galaxy evolution must be able to account for the observed features of the Antennae Galaxies, just as respectable stellar models must be able to match the observed properties of the Sun. Accurate models require the correct merger parameters, and of these, the distance is the most essential".
The previous canonical distance to the Antennae Galaxies was about 65 million light-years, although values as high as 100 million light-years have been used. Our Sun is only eight light-minutes away from us, so the Antennae Galaxies may seem rather distant, but if we consider that we already know of galaxies that are more than ten thousand million light-years away, the two Antennae Galaxies are really our neighbours.
The previous larger distance required astronomers to invoke some quite exceptional physical characteristics to account for the spectacular system: very high star-formation rates, supermassive star clusters, ultraluminous X-ray sources etc. The new smaller distance makes the Antennae Galaxies less extreme in terms of the physics needed to explain the observed phenomena.
For instance, with the smaller distance its infrared radiation is now that expected of a "standard" early merging event rather than that of an ultraluminous infrared galaxy. The size of the star clusters formed as a consequence of the Antennae merger now agree with those of clusters created in other mergers instead of being 1.5 times as large.
The Antennae Galaxies are named for the two long tails of stars, gas and dust that resemble the antennae of an insect. These "antennae" are a physical result of the collision between the two galaxies. Studying their properties gives us a preview of what may happen when our Milky Way galaxy collides with the neighbouring Andromeda galaxy in several thousand million years.
Although galaxy mergers today are not common, it is believed that in the past they were an important channel of galaxy evolution. Therefore understanding the physics of galaxy mergers is a very important task for astrophysicists.
The Antennae are located in the constellation of Corvus, the Crow.
|
<urn:uuid:02214082-50ee-4349-bc58-cfe61ad7534a>
|
CC-MAIN-2013-20
|
http://www.spacedaily.com/reports/The_Antennae_Galaxies_Found_To_Be_Closer_To_Us_999.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.922915
| 1,083
| 3.390625
| 3
|
HARRIS CREEK (GRAYSON COUNTY)
HARRIS CREEK (Grayson County). Harris Creek rises three miles west of Sherman in central Grayson County (at 33°38' N, 96°41' W) and runs northwest for ten miles, passing into the Hagerman National Wildlife Refuge before reaching its mouth on the Big Mineral Arm of Lake Texoma, twelve miles northeast of Whitesboro (at 33°44' N, 96°45' W). The stream, which is intermittent in its upper reaches, traverses flat to rolling prairie, surfaced by soils ranging from dark, commonly calcareous clays to clay and sandy loams. Local vegetation includes mesquite, water-tolerant hardwoods, conifers, and various grasses. The area has been used for range and crop land.
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article."HARRIS CREEK (GRAYSON COUNTY)," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/rbh34), accessed May 24, 2013. Published by the Texas State Historical Association.
|
<urn:uuid:3490a31d-1a83-4267-af76-8b4c851dd83d>
|
CC-MAIN-2013-20
|
http://www.tshaonline.org/handbook/online/articles/rbh34
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.904699
| 248
| 2.984375
| 3
|
Quiz for Lessons 116- 120
Parts of the Sentence - Transitive and Intransitive Verbs and Voice
Instructions: Tell whether the verbs in the following sentences are transitive active, transitive passive, intransitive linking, or intransitive complete.
1. We started our new lessons today.
2. The game started at noon.
3. Mr. Paul is our math teacher.
4. The dog slept in the sun.
5. The cat chased our dog around the barn.
6. Ann prepared the fruit for the salad.
7. The relish tray was done by the two sisters.
8. The meal is now complete.
9. The man opened the car door for his wife.
10. There were many guests at the party.
--For answers scroll down.
1. started - transitive active (lessons = direct object)
2. started - intransitive complete (no receiver of the action)
3. is - intransitive linking (teacher = predicate nominative)
4. slept - intransitive complete (no receiver of the action)
5. chased - transitive active (dog = direct object)
6. prepared - transitive active (fruit = direct object)
7. was done - transitive passive (tray = receiver of the action and is the subject)
8. is - intransitive linking (complete = predicate adjective)
9. opened - transitive active (door = direct object)
10. were - intransitive complete (no action or predicate nominative or predicate adjective)
DAILY GRAMMAR - - - - by Mr. Johanson
Copyright 2012 Word Place, Inc - - All Rights Reserved.
For your convenience, all of our lessons are available on our website in our lesson archive at http://www.dailygrammar.com/archive.shtml.
|
<urn:uuid:76322fec-bfda-4c7f-b1bc-5466709bca3d>
|
CC-MAIN-2013-20
|
http://dailygrammar.com/Quiz-116-120-Transitive-and-Intransitive-Verbs.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902736
| 411
| 2.859375
| 3
|
I've heard it was 33% during the Qing dynasty and 4% during the Cultural Revolution. Now it's climbing back up steadily, to 11%.
Where can I see the table of those actually tracking those numbers?
What were the causes?
Han dynasty (206 BC–220 AD)
Tang Dynasty (618–907 AD): GDP per capita $480; 58% of world GDP
Song Dynasty (960–1279 AD): GDP per capita US$2,280; 80% of the world's GDP
Yuan Dynasty (1271–1368 AD): estimated to account for about 30–35% of world GDP
Ming Dynasty (1368–1644 AD): GDP per capita US$600; 55% of world GDP
Qing Dynasty (1644–1912 AD): GDP per capita US$600; fell from about 35% to 10% of world GDP
People's Republic of China (1949–): GDP per capita US$5,414; 9.48 percent of the world economy. See also Wikipedia for this period.
There is also another article at Wikipedia: Economic history of China before 1912
Maybe you can find hints to other sources there.
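If you just want those quoted figures in one place, you can collect them into a small table and eyeball the trend as a text bar chart. A minimal Python sketch (the labels and numbers simply restate the per-dynasty figures quoted above, using a single representative value where a range is given; this is not an authoritative dataset):

```python
# Share-of-world-GDP figures quoted above, gathered into one table.
# Where a range is quoted, a single representative value is used.
shares = [
    ("Tang (618-907)",   58),
    ("Song (960-1279)",  80),
    ("Yuan (1271-1368)", 33),    # midpoint of the quoted 30-35% range
    ("Ming (1368-1644)", 55),
    ("Qing (1644-1912)", 35),    # start of the quoted 35%-10% decline
    ("PRC (2012)",        9.5),
]

for era, pct in shares:
    bar = "#" * round(pct / 2)   # one '#' per two percentage points
    print(f"{era:18s} {pct:5.1f}% {bar}")
```

For properly sourced numbers, the Maddison Project tables (which underlie most of these estimates) are the place to look.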
Well, at least the drop during the "Cultural Revolution" is easily explained. Setting hordes of young, largely semi-literate, thugs to torture and maim the productive and educated members of society will do wonders for the economy. Here is a long quote from wikipedia:
What is China has changed over time, as have the number of people in China, as has China's level of per capita output (GDP prior to capitalism is an anachronistic imposition if measured in a current value expression), as has the number of people not-in-China, as have the level of per capita output in places not-China.
The assumption that China's proportion of world output would remain static is a less tenable hypothesis than that all proportions of long lasting cultural nexuses of output would vary.
What you're really asking is "Was Needham right?" http://en.wikipedia.org/wiki/Joseph_Needham#The_Needham_Question
And there's a great deal of literature about prefigurative forms of capitalism in China out there. I'd suggest you start by reading the review articles on the Needham Question, because scholarly discontent with the failure of China, a large high productivity advanced feudal society*1 to cement the prefigurative forms of capitalism into actual capitalism continues.
*1 Marxist use in relation to circulation of prestige, status, and direct extraction techniques; not a claim of infeudation.
|
<urn:uuid:464dafa0-a43e-4b4c-9f5c-94de3f074f87>
|
CC-MAIN-2013-20
|
http://history.stackexchange.com/questions/7518/how-has-chinese-gdp-as-a-percentage-of-world-gdp-changed-over-time-and-why/7520
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.907534
| 541
| 2.546875
| 3
|
I keep hearing about people on distillery tours being told that a "Corn Writ" gave away land in Kentucky and helped populate the state with distillers. As a person who knows Kentucky history, I have to say I am appalled by hearing this story. First of all, there was no need for Virginia to go to such lengths to give away land in Kentucky. The truth is the Corn Writs were enacted because of the opposite problem - too many people settled in Kentucky without a clear claim.
Kentucky land was filled by two different land grant companies, and only one had a claim recognized by the government in Richmond. People had moved to Kentucky after purchasing land from both companies, so the government in Virginia had to straighten out the mess as to who had legal claim to what. This was especially important since they wanted to pay their veterans of the Revolutionary War with land grants in Kentucky. The "Corn Writs" were laws that allowed a person ALREADY in Kentucky who had a grant from the Transylvania company (Daniel Boone, for one) to make a portion of his claim legal if he could prove he had built a cabin and raised a crop of corn on the land. They did not give away land to people who were not already in the state.
"Our people live almost exclusively on whiskey" - E H Taylor, Jr. 25 April 1873
|
<urn:uuid:b978ba85-dd9a-46a0-bc90-f35b026c9509>
|
CC-MAIN-2013-20
|
http://www.bourbonenthusiast.com/forum/viewtopic.php?p=46241
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.989698
| 282
| 2.984375
| 3
|
It may not be news that regular exercise keeps you fit and healthy – physically as well as psychologically. But, surprisingly, exercise is also an important step towards healthy skin. Many of us generally focus on the cardiovascular advantages of exercise, but anything that improves circulation makes your skin look and feel healthy. The increased blood flow from exercise helps nourish the cells and maintain their vitality. Blood is the primary carrier of oxygen to the cells. Besides supplying oxygen, blood flow also performs the function of carrying waste products, like free radicals, away from the cells. This means that your skin undergoes an internal cleansing process while you exercise.
Adding further, regular work-out also provides an anti-wrinkle advantage by creating the conditions essential for production of elastin and collagen. Therefore, when these proteins are present in your skin in optimum amounts, the skin is capable of retaining the moisture in a better way thus getting smoother and firmer.
Furthermore, exercise is also helpful in treating dark circles. The basic reason for dark circles is that the skin beneath your eyes is quite thin and gets thinner with age as the supply of elastin and collagen is depleted. The blood vessels become more prominent as the skin thins, and they appear as dark shadows under the eyes. With age and poor blood circulation, these blood vessels may also leak, forming the purplish-black spots commonly referred to as "raccoon eyes" or dark circles.
With regular exercise, you can treat and prevent the dark circles with improved blood circulation. Moreover, exercise makes the skin below the eyes less translucent and firmer due to increased production of essential proteins required for a healthy skin.
How does exercise benefit your skin?
In addition to being great for physical and emotional health, regular exercise also enhances the appearance of your skin. Working out regularly to sweat out toxins helps improve skin health and adds a glow to it. Exercise provides numerous benefits for the skin and should therefore be counted as an important part of any skin care regime. These benefits are illustrated below:
Exercise helps in sweating
Sweat flushes out the toxins from your body which are responsible for clogging of pores leading to blemishes and pimples. Exercise improves the blood circulation in addition to increasing neuronal stimulation and enabling the sweat glands to improve their functions thus doing away with toxins.
Exercise helps in muscle toning
Your skin will appear and feel fresher and healthier if the muscles beneath it are well toned. According to studies, firmer and stronger muscles provide better support to the skin, making it appear firmer and more elastic, and this toning is possible only through exercise. Muscle toning also helps curtail the appearance of cellulite; exercise cannot remove cellulite, but it can make it look better.
Exercise increases blood circulation and oxygen supply to the skin
Studies have shown that regular workouts enhance the blood flow that diabetics need to reduce the risk of skin problems that can lead to amputation. Physical exercise improves blood circulation, which supplies the required amount of oxygen to the skin. This improved circulation of blood and oxygen also delivers nutrients that boost skin health.
Exercise relieves stress
Exercise has long been known for its stress-relieving qualities, and these mind-body advantages may show in your complexion. Some studies consider stress a contributing factor in acne and psoriasis: stress influences hormone production and suppresses the body's healing ability. With regular exercise, it is possible to control stress as well as skin conditions that are otherwise hard to manage.
Exercise adds a natural glow to your complexion
While exercising, the skin produces its natural oils in a higher quantity. These natural oils are responsible for healthy and supple appearance of the skin. Even as these moisturize the skin naturally, it is important to gently cleanse the face as a part of the skin care regime to avoid any breakouts.
So, for a younger, healthier, and smooth glowing complexion, include exercise as a secret ingredient in your daily skin care regime. Here are some exercises that can help you get a healthy skin:
Natural Skin Tightening Exercises
A sagging face makes you look old and depressed even when you are not. To treat sagging skin, many people opt for various anti-aging treatments, including creams and lotions and even cosmetic surgery, laser treatments, and Botox injections. Unfortunately, most of these solutions turn out to be a waste of money and time.
Just as exercise shows its effect on the whole body, skin tightening exercises for the face can improve muscle tone and give you a sleeker look. There are numerous exercises designed especially to make the facial muscles firmer and tighter, and they can work wonders. Below are some of the most effective exercises to tighten the facial skin naturally:
- Position the index fingers onto the external edges of the eyes. Close your eyes, then pull the skin gently by pushing the fingers away from the eyes towards the hairline. Then open your eyes as wide as possible. Hold this position for a few seconds and relax.
- Position two fingers onto the external ends of your lips. At a snail’s pace, slide the fingers outwards in opposite directions (only for an inch). Hold this position for some seconds. Next, release and then relax.
- Give a wide smile without showing your teeth. Make it wide to its maximum to give a stretch to the muscles on your cheeks. Maintain this position for a few seconds and relax.
- Gently, lean your head backwards. Feel the stretch in the muscles of the neck and maintain the position for some seconds. Then, relax.
Repeat these exercises as many times during a single session as is comfortable for you. Start slowly and build up repetitions. Doing these exercises regularly for some time can show noticeable results in the form of a firmer, younger-looking face.
Yoga exercises for a healthy skin
For ages, yoga has been considered as one of the best ways towards physical fitness. Yoga, in addition to keeping your body in shape, also shows effective results on your overall health.
Yoga can work wonders to give you healthy and naturally glowing skin. With busy schedules and work routines, the skin usually gets ignored, and an insufficient, improper diet takes a further toll on it. Yoga for the skin focuses on improving the body's digestion and blood circulation. Exercises aimed at boosting the body's metabolism are considered best for the skin. Other yoga exercises helpful for healthy skin include stretching poses and postures aimed at toning and relaxing the muscles. Yoga also reduces stress, further improving skin health.
Furthermore, yoga helps maintain the health of collagen, the substance responsible for the skin's elasticity. There are numerous yoga poses to help you rejuvenate your body over time. Begin with basic postures such as the dead corpse pose, move on to the lotus pose, followed by pranayama, and then proceed to the various inverted poses.
Well, some useful yoga exercises that can be performed to get a healthy skin include:
- The muscles and the mind can be relaxed by performing the simplest pose of yoga – the dead corpse pose. This pose involves lying down straight on the ground and relaxing completely. Follow the breathing procedure of deep inhales and slow exhales, with the hands positioned beside the trunk. Practicing this pose before and after the yoga session relieves anxiety and stress.
- The dead corpse pose should be followed by the lotus pose. This pose involves sitting with the legs folded and locked, with the hands placed over the knees. Concentrate deeply with closed eyes and relax. Breathe deeply, inhaling deeply and exhaling swiftly. This pose helps relax and rejuvenate the energy, and its deep breathing supplies ample oxygen to your facial muscles.
- Pranayama, the breathing exercise, increases lung capacity, thereby increasing the amount of oxygen supplied to your skin. Pranayama also improves blood circulation.
- Blood circulation can also be improved through inverted poses like the wheel pose, hand stand, lion pose, fish pose, and headstand. All these poses can work wonders for the skin by helping prevent wrinkles, and they can be performed quite easily once you have mastered the aforesaid postures.
For fish pose, extend the lotus pose by lying down with folded legs and try to touch the ground with your head. This pose facilitates better oxygen supply to the facial skin muscles. Release the posture slowly.
Wheel pose, a little different from the above poses, involves lying flat on the ground. Gradually, lift the trunk upwards with support of your legs as well as hands. Hold this pose for a while and then release gradually.
If followed in proper sequence, these yoga exercises can show noticeable results in improving the skin health in addition to making it smooth and soft.
Yoga postures facilitate oxygen supply to all body organs, vitalizing your skin and making it glow naturally. These yoga exercises, combined with a nutritious yogic diet, help flush toxins from the body, allowing your skin to breathe more easily and look younger, brighter, and smoother.
|
<urn:uuid:14e51393-011e-4845-ba2f-23643b94c44b>
|
CC-MAIN-2013-20
|
http://www.glamcheck.com/health/2012/04/16/exercise-for-healthy-skin/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.953394
| 1,994
| 2.8125
| 3
|
Acromegaly occurs in about 6 of every 100,000 adults. It is caused by abnormal production of growth hormone after the skeleton and other organs finish growing.
Excessive production of growth hormone in children causes gigantism rather than acromegaly.
The cause of the increased growth hormone release is usually a noncancerous (benign) tumor of the pituitary gland. The pituitary gland, which is located just below the brain, controls the production and release of several different hormones, including growth hormone.
Surgery to remove the pituitary tumor that is causing this condition usually corrects the abnormal growth hormone release in most patients. Sometimes the tumor is too large to remove completely. People who do not respond to surgery will have radiation of the pituitary gland. However, the reduction in growth hormone levels after radiation is very slow.
The following medications may be used to treat acromegaly:
Octreotide (Sandostatin) or bromocriptine (Parlodel) may control growth hormone release in some people.
Pegvisomant (Somavert) directly blocks the effects of growth hormone, and has been shown to improve symptoms of acromegaly.
These medications may be used before surgery, after surgery, or when surgery is not possible.
After treatment, you will need to see your health care provider regularly to make sure that the pituitary gland is working normally. Yearly evaluations are recommended.
Pituitary surgery is successful in most patients, depending on the size of the tumor and the experience of the surgeon.
Without treatment the symptoms will get worse, and the risk of high blood pressure, diabetes (high blood sugar), and cardiovascular disease increases.
Other health problems may include:
Arthritis in most joints, which along with excess bone growth may put pressure on the nerves of the spine or the spinal cord
There are no methods to prevent the condition, but early treatment may prevent complications of the disease from getting worse.
Melmed S, Kleinberg D. Pituitary masses and tumors. In: Kronenberg HM, Melmed S, Polonsky KS, Larsen PR, eds. Williams Textbook of Endocrinology. 12th ed. Philadelphia, PA: Saunders Elsevier; 2011:chap 9.
Nancy J. Rennert, MD, Chief of Endocrinology & Diabetes, Norwalk Hospital, Associate Clinical Professor of Medicine, Yale University School of Medicine, New Haven, CT. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
|
<urn:uuid:e4344e82-e2c0-4528-861c-b31e541cbc5e>
|
CC-MAIN-2013-20
|
http://www.samc.com/body.cfm?id=285&action=detail&AEArticleID=000321&AEProductID=Adam2004_5117&AEProjectTypeIDURL=APT_1
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.91205
| 553
| 2.578125
| 3
|
January 29, 2010 | To mark the 50th anniversary of UCAR and NCAR, Staff Notes will feature occasional articles this year on the organization's rich history. Here, Margaret "Peggy" LeMone outlines the progress of women scientists since the 1970s. Recently retired from NESL/MMM, Peggy was the first woman to become an NCAR senior scientist. This year she is president of the American Meteorological Society. She is a regular contributor to NCAR & UCAR Currents on topics connecting research with "backyard science."
When I arrived at NCAR to join ASP in April 1972, there were few other female Ph.D. scientists out of the roughly 100 Ph.D.s on staff. Joan Feynman (Richard Feynman's younger sister) was in HAO, and Sue Anne Bowling was in ASP with me. Cicely Ridley, a Ph.D. in mathematics, worked on model codes in the computing division. There were also about a half-dozen female scientists with master's and bachelor's degrees.
In fact, women in meteorology were a rarity nationally (see graph). When I met Joanne Simpson, she greeted me like a long-lost sister—it was so exciting to meet another woman in the field! As we talked, each telling the other how many women meteorologists we knew, we decided it was time to really find out. So over the next year and a half, we contacted every woman meteorologist we could find. The many letters we received told a rich story, which was published in the Bulletin of the American Meteorological Society (BAMS) in 1974.
I arrived at NCAR more interested in doing science than furthering the cause of women. That quickly changed. NCAR, in a move to enforce its anti-nepotism rule, terminated Nancy Knight's appointment after she had worked 11 years as a casual, studying hail with her husband, Charlie Knight. In addition, the situation for women scientists was being affected by recently enacted equal opportunity and affirmative action regulations. These events galvanized a group of women across the institution to form the Council for NCAR Women (C4NW). We polled the UCAR universities to learn about their anti-nepotism rules. It turned out that many were more progressive than NCAR. UCAR responded positively, ending the anti-nepotism rule entirely, and Nancy had her job back. But it wasn't until Alex Dessler came to NCAR a few years later to head the cloud physics group that Nancy was given a regular M.S/B.S. scientist job. C4NW faded away not long after that.
Logo for C4NW, designed by Ed Danielson. The screw shape reflects the edgy humor of the early 1970s.
Aside from the sometimes onerous institutional hurdles, women had to face some more mundane ones as well. Every woman scientist over a certain age has at least one bathroom story. My favorite involved the lack of women's restrooms at an old airbase in Michigan that we used as a base of operations during a field program in 1970. There was a men's room that seemed to be the size of an auditorium, and a locked women's room used only by the base secretary. When she was out, I had to have someone yell into the bathroom/auditorium to announce my arrival, and hope that no one came in before my departure.
Also, pin-ups were abundant well into the 1970s. In the GATE operations center in Dakar, Senegal, the radio room was literally wallpapered with Penthouse and Playboy centerfolds. This must have been bewildering to the Senegalese women who worked there, particularly when they saw the reaction after they posted a male centerfold from an issue of Playgirl that showed up one day in the data analysis room!
Percentage of female atmospheric science graduate students versus percentage in ASP. Source: Curriculum for the Atmospheric Sciences and NCAR.
The number and fraction of women scientists at NCAR began to increase during the 1980s and 1990s, especially within ASP, as a result of new affirmative action laws, outside reviews, and more women entering the field. Many who showed up during these years brought a new dimension to the institution, and several of them continue to serve as NCAR leaders and senior scientists. Linda Mearns (CISL/IMAGe) worked with people from many disciplines on the impact of variability in future climates on crops. Beth Holland (NESL/ACD) looked at the nitrogen and carbon cycles and their role in the Earth system, and Bette Otto-Bliesner (NESL/CGD) studied paleoclimates.
Around 1990, women scientists organized again. Linda Mearns, Kathy Miller (RAL), and Barbara Brown (RAL) formed the Boulder-wide Women in Atmospheric Science (WIAS). This group facilitated the development of a clear maternity policy based on a survey of the UCAR universities (this time by HR) and became the first of several groups to recommend a UCAR day care center, which materialized a number of years later. WIAS also provided a friendly audience for young women scientists to hone their public-speaking skills.
After this group faded, part of its role was taken over by division equity committees, which looked for ways to increase fair practices for all employees across the institution. In the mid-1990s, Susan Solomon, who was acting director of ACD at the time, led an effort to modify the NCAR promotion policy to allow stopping the tenure clock to mitigate the impact of family responsibilities or unusual work responsibilities on scientists who had not yet achieved a tenured position.
A view from outside
The women scientists organized for a third time in 1999, in response to a small meeting called by Bob Serafin, NCAR director at the time, to examine ways to better the situation for women at NCAR. The means fell into our laps. The Committee for the Status of Women in Physics (CSWP) was having a site visit at CU at about that time, and one of the panel members stayed with my husband and me. The CSWP evaluates the climate for women in physics departments using a panel of prominent female physicists who understand the culture, know the lessons learned by previous panels, and have sufficient clout to be listened to. The department only has to pay the panel expenses.
This process seemed ideal both to us and the NCAR administration. After some negotiation (the CSWP had never visited an institution like NCAR before), the committee agreed to come. The process involved a written survey and on-site interviews with groups of scientists, both men and women. The findings were similar to earlier CSWP evaluations. Many problems perceived as unique to women scientists were common to all junior scientists, and once again, a day care center was recommended. Out of this grew the Early Career Scientist Assembly and a Standing Committee on Women in Science. Thanks to the efforts of Katy Schmoll, UCAR vice president for finance and administration, employees gained access to a day care center, the Children's Creative Learning Center, in 2004.
Today, women make up 28% of NCAR's scientific staff and 10 out of 78 senior scientists. We have had two women (Susan Solomon and Anne Smith) as interim directors of ACD, and Anne was also the NCAR Scientists' Assembly representative to the Director's Committee. Maura Hagan (HAO) is the first female deputy director of NCAR. Some of the divisions and labs have active women's groups. While there is a sincere effort to hire qualified women, the emphasis has shifted to making the workplace a place in which all employees—men and women—can contribute to the best of their abilities. And there is no shortage of bathrooms.
For more about Peggy, read "Recollections from a pioneering woman scientist," Staff Notes, December 2004.
|
<urn:uuid:1c340661-949e-461f-a205-2cd37a5bbda9>
|
CC-MAIN-2013-20
|
http://www2.ucar.edu/for-staff/updates/women-scientists-ncar-we-ve-come-long-way
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.977367
| 1,625
| 2.640625
| 3
|
This is an excerpt from EERE Network News, a weekly electronic newsletter.
Peace Corps to Tackle Grassroots Energy Issues
The Peace Corps announced on August 19 that the U.S. Department of State will provide $1 million to fund a new program in support of the Energy and Climate Partnership of the Americas (ECPA). The money will help Peace Corps efforts that increase rural access to energy, mitigate the effects of climate change, and support the use of renewable energy and energy-efficient technologies in Central and South American communities.
Under the partnership, Peace Corps volunteers will work with members of local communities to build infrastructure to support environmentally friendly energy and to educate communities on climate change and energy conservation. Volunteers will train host-country citizens to install, operate, and maintain energy-efficient technologies, including alternative fuels, biodigesters, solar water heaters, photovoltaic devices, solar and fuel-efficient stoves, and wind or mini hydroelectric power generation. These efforts will make clean energy more accessible to rural communities, reduce carbon emissions, and provide opportunities for individuals to generate income.
This is the most recent initiative for ECPA, which has expanded since President Obama invited all Western Hemisphere countries to join during the Fifth Summit of the Americas in April 2009. In the spring, DOE announced a series of ECPA partnerships to address clean energy and energy security in the Western Hemisphere, including launching an Energy Innovation Center for Latin America and the Caribbean, developing biomass resources in Colombia, and cooperating with Argentina on clean energy technologies. The Peace Corps' initial ECPA-related efforts will be implemented in Costa Rica, the Dominican Republic, Guyana, Honduras, Nicaragua, Panama, Peru, and Suriname. See the Peace Corps press release, the ECPA Web site, and the April 21 edition of EERE Network News.
|
<urn:uuid:14848746-cf2f-441c-96ad-e80c154f2add>
|
CC-MAIN-2013-20
|
http://apps1.eere.energy.gov/news/news_detail.cfm/news_id=16278
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.907312
| 365
| 2.609375
| 3
|
Updated 01/18/2013 05:00 PM
WIC kids are eating healthier, less obese
Low income mothers and children are eating healthier. That's according to a study of the New York State's Supplemental Nutrition Program for Women, Infants and Children, or WIC. As our Katie Gibas reports, since healthier food packages were introduced in 2009, obesity rates of children in the program have dropped.
NEW YORK -- About one out of every six children is obese. That often carries over to adulthood, which can increase the likelihood of developing heart disease and type 2 diabetes.
"The New York State Department of Health was the first Department of Health to recognize the obesity epidemic beginning in the mid-1990s, and was then the first department to start implementing efforts to address childhood obesity," said Jackson Sekhobo, Ph.D, of the NYS Department of Health Nutrition Division.
That's part of the reason for changes in the WIC program that provides supplemental nutrition for expectant and new moms and their young children. Women meet with a nutritionist and get a list of approved foods for the month. In 2009, WIC added several initiatives to encourage better food choices and healthier behaviors.
"Promoting consumption of low-fat milk, child-appropriate physical activity, breast-feeding, reduction of screen time as well as consumption of fruits and vegetables," said Sekhobo.
A four year, $2.2 million study was conducted to find out how the efforts were working.
Child obesity rates for kids in the program dropped from 14.7 percent in 2009 to 14.1 percent in 2011.
"That's a very significant change because going from 14.7 to 14.1 represent a lot of children who are no longer obese," said Sekhobo.
But there's still a long way to go. The goal is to reduce child obesity rates to under ten percent.
The study was funded by the Robert Wood Johnson Foundation and the New York State Health Foundation.
Visit www.health.ny.gov for more information.
|
<urn:uuid:aeea0c74-14cf-4471-bf66-f4378e386651>
|
CC-MAIN-2013-20
|
http://binghamton.ynn.com/content/top_stories/631454/wic-kids-are-eating-healthier--less-obese/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.960043
| 450
| 2.875
| 3
|
Exercise Improves Diabetes and Cardiovascular Control, but Maintenance is Necessary. It is well known that good diet and exercise habits reduce the risk of heart disease and improve blood sugar control in people with type 2 diabetes. Knowing that diet and exercise programs are extremely difficult to follow, researcher Fannie Smith enrolled overweight type 2 patients in an intensive, 16-week program called "Fit N' Healthy."
Smith wanted to find out how the subjects' health would improve during the course of the program. She also wanted to see if the program's benefits continued at three and six-month intervals after it ended.
The chart on page 35 shows that all measurements improved during the program. At the six month follow-up, however, only triglyceride and cholesterol levels remained better than they were at the beginning of the study. All other numbers were approaching pre-program levels.
Workshops Improve Exercise Routines in People With Diabetes. Intensive, one-day, motivational and educational workshops appear to help blood sugar control, exercise routines and attitude in people with diabetes.
Nicole Champagne and Steven Edelman studied the effects of the one-day, patient-oriented, educational and motivational program called Taking Control of Your Diabetes. A total of 250 type 1s were asked to fill out questionnaires prior to the 1997 San Diego program, and at six week and nine month intervals following the program.
Nine months after the program, Champagne and Edelman discovered that participants' exercise increased from 2.87 to 3.46 days per week.
Type 2s Stick With Water Exercise Programs. Researchers at the Maine Medical Center say that water exercise programs are good for people with type 2 diabetes, and that drop-out rates are low.
Mary K. Frohnauer and colleagues, studied 21 men and women enrolled in a low-intensity, aquatic exercise program for one year. Forty-five minute workouts in a heated pool were supervised three times a week by a certified aquatic therapist.
After one year, subjects had lost weight, but there was little improvement in blood glucose levels. Despite this, the researchers suggest that other psychological and physical benefits of the aquatics program may have kept participants from dropping out.
- - - - -
The preceding abstracts on exercise and diabetes were presented at the ADA Scientific Sessions in San Diego, June 19-22. 1999.
Diabetes Health is the essential resource for people living with diabetes- both newly diagnosed and experienced as well as the professionals who care for them. We provide balanced expert news and information on living healthfully with diabetes. Each issue includes cutting-edge editorial coverage of new products, research, treatment options, and meaningful lifestyle issues.
|
<urn:uuid:3dbad186-2bd3-4878-8c2f-85ba1d9f8356>
|
CC-MAIN-2013-20
|
http://diabeteshealth.com/read/1999/08/01/1579/researchers-show-how-exercise-improves-blood-sugars-and-well-being/?isComment=1
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.95463
| 606
| 2.71875
| 3
|
Select educators, researchers, and policy makers are addressing a vital issue: the impact of digital technology on learning - will it merely produce incremental improvement, or could it lead to fundamental change?

Digital technology has impacted different sectors of life in different ways and to different depths. Some, such as communications and many branches of medicine, have been transformed beyond all recognition. A time traveller from even half a century back would not have the faintest understanding of the questions being discussed at a research conference in these areas.

Other sectors have been impacted more superficially. Among these is Education. Nobody in Education would say that ICT does not have some role. But prevailing policies virtually everywhere appear to be based on the assumption that, at least in the foreseeable future, the presence of the technology will not fundamentally change the way schools work. Worse, there is no focused forum for asking why.

We are creating such a forum, beginning with the discussions at Media Lab Europe during the spring of 2004. Professor Emeritus Seymour Papert introduced the themes of Epistemology, Learning, School, Society, Technology, and Change, which the delegates addressed. We hope the proceedings and other records here will become a platform for continuing Europe-wide discussion on this issue and ultimately for catalyzing change.
|
<urn:uuid:1b16c7bf-8451-422c-bfeb-8766668e8481>
|
CC-MAIN-2013-20
|
http://fundamentalchange.carolstrohecker.info/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938592
| 272
| 2.734375
| 3
|
With the completion of the whole genome sequence for many organisms, investigations into genomic structure have revealed that gene distribution is variable, and that genes with similar function or expression are located within clusters. This clustering suggests that there are evolutionary constraints that determine genome architecture. However, as most of the evidence for constraints on genome evolution comes from studies on yeast, it is unclear how much of this prior work can be extrapolated to mammalian genomes. Therefore, in this work we wished to examine the constraints on regions of the mammalian genome containing conserved gene clusters.
We first identified regions of the mouse genome with microsynteny conservation by comparing gene arrangement in the mouse genome to the human, rat, and dog genomes. We then asked if any particular gene types were found preferentially in conserved regions. We found a significant correlation between conserved microsynteny and the density of mouse orthologs of human disease genes, suggesting that disease genes are clustered in genomic regions of increased microsynteny conservation.
The correlation between microsynteny conservation and disease gene locations indicates that regions of the mouse genome with microsynteny conservation may contain undiscovered human disease genes. This study not only demonstrates that gene function constrains mammalian genome organization, but also identifies regions of the mouse genome that can be experimentally examined to produce mouse models of human disease.
|
<urn:uuid:07f14593-f23b-49bb-906e-a086fc5b33fa>
|
CC-MAIN-2013-20
|
http://pubmedcentralcanada.ca/pmcc/articles/PMC2779822/?report=abstract
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.9266
| 273
| 2.515625
| 3
|
Can you imagine California without farms? Secretary of Energy Steven Chu already has.
“We’re looking at a scenario where there’s no more agriculture in California,” he said earlier this year.
He was talking about the threat of climate change—and the prospect that rising temperatures could cause the Sierra snowpack to melt permanently. This would devastate farms in the San Joaquin Valley. “I don’t think the American public has gripped in its gut what could happen,” said Chu.
Maybe not. Yet some public officials are doing their best to give us a taste of this grim future. By hurting the Golden State’s farmers, they’re throwing people out of jobs and jeopardizing one of America’s most important sources of food.
The problem is a drought, brought on by weather patterns outside our control and political malfeasance that is entirely man-made. It’s amazing how some people can take a bad situation and make it worse.
Get used to hearing about water scarcity. Around the world, it’s an emerging problem. More than a billion people live in areas where water is in short supply. If they resort to drinking unclean water, they put their health at risk: cholera, typhoid fever, and dysentery become everyday threats.
If current trends continue, one day we may worry about water the way we now agonize about energy. One important difference is that with oil, at least we have the opportunity to develop alternatives, such as biofuels. Water, however, has no substitute. It’s the ultimate biofuel—an irreplaceable ingredient for life itself.
In California, we aren’t getting nearly enough. This year, I’ve had to let a good portion of my own cropland lie fallow, simply because I can’t deliver enough water. Many other farmers are making similar decisions, out of sheer necessity. In Fresno County alone, the water shortage has idled 262,000 acres. Throughout the state, the figure is 450,000 acres.
For a state that produces about half of America’s fruits, vegetables, and nuts, this is a major problem. Consumers will feel the consequences when they pay more at the grocery store for everything from canned tomatoes to almonds.
More rainfall would help, but that’s not the only problem. Politics is wreaking havoc as well. Radical environmentalists favor fish over farmers. In particular, they’re lobbying on behalf of a minnow-like species called the delta smelt. Their efforts are working, as public officials in both Sacramento and Washington conspire to neglect the needs of agriculture.
Water levels in our area are actually at about 95-percent normal. Farmers, however, are getting only about 10 percent of their fair share, based on agreements we have made with the government. We’re trumped by the delta smelt. This is not a phenomenon of climate, but rather a political choice. That’s why I’ve started referring to our problem as a “legislative drought.”
It’s a strange set of circumstances, given the financial crisis. The University of California at Davis estimated that 35,000 people had lost their agricultural jobs as of May. A few of our towns have some of the highest unemployment rates in the country. In addition, farm revenue was down by $830 million. If lawmakers truly want to stimulate our local economy, they simply should release more water to food producers.
If they don’t, and this problem persists, Steven Chu’s alarming vision of California agriculture could come to pass. More than jobs are at stake. Americans would have to rely upon imports for much of their food supply. This would imperil our national food security.
A few weeks ago, thousands of farmers and farm workers staged a demonstration, calling for better management of our natural resources. “Water makes the difference between the Garden of Eden and Death Valley,” said the comedian Paul Rodriguez, whose parents are farmers in the region.
We aren’t looking to rebuild the Garden of Eden, of course. All we want is the water that will let us grow enough crops to maintain our livelihoods and sell the food that everybody needs.
Unfortunately, a line from “The Rime of the Ancient Mariner” is now coming to describe the plight of the California farmer: “Water, water everywhere / Nor any drop to drink.”
Ted Sheely raises lettuce, cotton, tomatoes, wheat, pistachios, wine grapes and garlic on a family farm in the California San Joaquin Valley. He is a board member of Truth About Trade and Technology www.truthabouttrade.org
|
<urn:uuid:c043f95d-286a-41b8-a100-ffe3d62ad01f>
|
CC-MAIN-2013-20
|
http://www.agweb.com/farmjournal/farm_journal_corn_college/blog/the_truth_about_trade/?Year=2009&Month=7
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955739
| 991
| 2.703125
| 3
|
Diabetic Educational Eating Plan (DEEP)
By improving glycemic control, many of the devastating complications of diabetes are to a large extent preventable. The use of a low GI (glycemic index) diet to improve glycemic control is relatively new and untested. However, a low GI diet may be a cost-effective approach to preventing diabetes-related complications. The aim of this proposed 2-year study is to gather pilot data on the feasibility of implementing a nutritionist-delivered low GI intervention, to reduce dietary GI in patients with type 2 diabetes, and to compare it with a nutritionist-delivered standard American Diabetes Association (ADA) diet intervention. Our outcomes are recruitment and retention rates, as well as physiological measures (HbA1c, blood lipids, blood pressure, and body mass index), dietary GI scores and acceptability of the intervention.
1. Recruitment and retention rates for the low GI intervention will be satisfactory.
2. Participants in the low GI intervention group will show more favorable changes in physiological measures than participants in the ADA diet group.
3. Participants in the low GI group will be successful in lowering the GI of their diet.
4. Participants will find the intervention acceptable.
Low-GI Dietary Education
ADA Dietary Education
Behavioral: Low GI
Behavioral: ADA diet
|Study Design:||Allocation: Randomized; Endpoint Classification: Efficacy Study; Intervention Model: Parallel Assignment; Masking: Single Blind (Outcomes Assessor); Primary Purpose: Treatment|
|Official Title:||Applicability of a Low Glycemic Index Diet in Diabetes|
- Our outcomes are recruitment and retention rates, as well as physiological measures (HbA1c, blood pressure, and body mass index), dietary GI scores and acceptability of the intervention [ Time Frame: one year ] [ Designated as safety issue: No ]
|Study Start Date:||August 2005|
|Study Completion Date:||August 2012|
|Primary Completion Date:||August 2012 (Final data collection date for primary outcome measure)|
Active Comparator: ADA diet
Patients will be encouraged to consume foods consistent with ADA dietary recommendations.
Behavioral: ADA diet (ADA dietary education)
Low GI group
Patients will receive a low GI dietary education.
Behavioral: Low GI
The glycemic index (GI) is a ranking of carbohydrate containing foods according to the rate at which they raise blood glucose levels after eating. A recent meta-analysis of randomized clinical trials (RCT) suggests that choosing low GI foods has a small but clinically useful effect on medium-term glycemic control in patients with type 2 diabetes. However, in most of the reviewed RCTs, patients were fed experimental diets and therefore there is still controversy over the applicability of GI in the clinical setting for management of diabetes. In addition, there is no evidence that long-term consumption of a low GI diet will contribute to improved glycemic control in people with diabetes.
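The trial's "dietary GI score" outcome is conventionally computed as the carbohydrate-weighted average of the GI values of the foods consumed. A minimal sketch of that arithmetic follows; the food items and GI/carbohydrate values are illustrative assumptions, not trial data:

```python
def meal_gi(items):
    """Carbohydrate-weighted average GI of a meal.

    items: list of (gi, grams_available_carbohydrate) tuples.
    Returns 0.0 for an empty or carbohydrate-free meal.
    """
    total_carb = sum(carb for _, carb in items)
    if total_carb == 0:
        return 0.0
    return sum(gi * carb for gi, carb in items) / total_carb

# Hypothetical meal: white rice (GI ~73, 45 g carb), lentils (GI ~28, 15 g),
# mixed vegetables (GI ~15, 5 g).
meal = [(73, 45), (28, 15), (15, 5)]
print(round(meal_gi(meal), 1))  # → 58.2
```

Substituting low GI foods for high GI foods, as the intervention encourages, lowers this weighted average even when total carbohydrate intake is unchanged.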
Our ultimate goal for a future larger RCT is to evaluate the long-term effects of using low GI diet in type 2 diabetics. The primary outcome variable of the future large trial will be glycosylated hemoglobin levels (HbA1c), a measure reflecting average glycemic level during the preceding 2-3 months. The proposed feasibility study will recruit 40 patients with type 2 diabetes and will randomly assign them to one of two groups: a low GI nutrition education group (low GI group) and a standard ADA dietary education group as the control group (ADA group) (20 patients in each group).
For both groups, the intervention phase will last 6 months and consist of an initial group session, an individual session, and then four group counseling sessions. The follow-up phase will be six months and consist of two group booster sessions, one at 8-months and another at 10-months. The low GI nutritional education will be primarily targeted at a low GI diet. The focus is not on decreasing total carbohydrate intake, but rather encouraging patients to substitute low GI foods for high GI foods. The dietary intervention will be based on a patient-centered counseling model which has been demonstrated to facilitate health behavior change. Data collection points coincide with two phases of the intervention. Assessments, including demographics, anthropometric measurements, diet and physical activity recalls, and clinical data, will be conducted at baseline, and at 6 and 12 months after randomization, with blood samples collected at each interval. We will track response to recruitment, adherence, and retention. Quantitative and qualitative methods will be used to assess acceptability of the intervention.
The aim of this proposed 2-year study is to gather pilot data on the feasibility of implementing a nutritionist-delivered low GI intervention to reduce dietary GI in patients with type 2 diabetes. Our outcomes are recruitment and retention rates, as well as physiological measures (HbA1c, blood pressure, and body mass index), dietary GI scores and acceptability of the intervention.
By improving glycemic control, many of the devastating complications of diabetes are to a large extent preventable. The use of a low GI diet to improve glycemic control is relatively new and untested. However, a low GI diet may be a cost-effective approach to preventing diabetes-related complications. Testing the feasibility of such a program and its potential impact would be an important step towards an R01 application to the NIH.
|United States, Massachusetts|
|University of Massachusetts Medical School|
|Worcester, Massachusetts, United States, 01655|
|Principal Investigator:||Yunsheng Ma, MD, Ph.D.||Division of Preventive & Behavioral Medicine, Department of Medicine, University of Massachusetts Medical School|
|
<urn:uuid:364b66db-d9e1-4519-95ca-c43034c31cb9>
|
CC-MAIN-2013-20
|
http://www.clinicaltrials.gov/ct2/show/NCT00473811
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.887742
| 1,178
| 2.6875
| 3
|
Posted at: 11/07/2011 1:53 PM
A Harvest of Root Vegetables
Hy-Vee Registered Dietitian Jen Haugen will show you how to incorporate root vegetables into your diet.
Root Veggie 101 -
- Root vegetables are the underground part of vegetables that are edible, namely the roots of plants.
- Root vegetables have high NuVal scores. They are high in fiber, vitamin C and other antioxidants.
- Root veggies can be prepared in many ways – roasting, mashing, “chips”, soups and more!
Root Vegetable NuVal Scores
Onion – NuVal 93
Potato – NuVal 93
Parsnips – NuVal 94
Garlic – NuVal 96
Sweet Potato – NuVal 96
Carrots – NuVal 99
Radish – NuVal 99
Kohlrabi – NuVal 100
Turnip – NuVal 100
Ways to Use Root Veggies:
Roasting brings out the essential sweetness in root vegetables and creates a crispy, brown exterior. How-To: Scrub vegetables clean. Leave the skin on for more fiber, or peel if you prefer. Cut vegetables into bite-sized pieces. Toss with olive oil to coat lightly and evenly. Put veggies in a shallow roasting pan or on a baking sheet. Sprinkle with salt, ground black pepper, chopped herbs or spices to taste. Roast in a hot oven (375 to 425 degrees) until vegetables are tender and browned, about 30 minutes.
Mashed potatoes are well-known. However, other root vegetables (parsnips, turnips, sweet potatoes) are delicious mashed.
Root Veggie Chips
Plain potato chips will seem boring when you try sweet potato chips or carrot chips. How-To: Cut root veggies into ¼-inch slices. If you have the equipment, the fine-slicing blade of a food processor or a mandoline is a great tool for cutting thin slices. Toss with olive oil and spices. Arrange in a single layer on a baking rack or greased baking sheet. Bake in preheated 375- to 400-degree oven for 40-60 minutes, turning two to three times, or until veggies are crisp and browned.
Root Veggie Soups
Root veggies add flavor, nutrition and bulk to soups and stews without adding many calories. Bite-size pieces take approximately 20 minutes to soften once boiling/simmering. Or try grating them to blend them well into the soup.
Maple-Roasted Sweet Potatoes
Serves 12 (about 1/2 cup each). Active Time: 10 minutes | Total: 1 hour 10 minutes
Roasting sweet potatoes is even easier than boiling and mashing them. Maple syrup glaze transforms this ultra-simple dish into something sublime.
All you need
2 1/2 pounds sweet potatoes, peeled and cut into 1 1/2-inch pieces (about 8 cups)
1/3 cup Grand Selections 100% pure maple syrup
2 tablespoons Hy-Vee butter, melted
1 tablespoon lemon juice
1/2 teaspoon salt
Freshly ground pepper to taste
All you do
1. Preheat oven to 400°F.
2. Arrange sweet potatoes in an even layer in a 9-by-13-inch glass baking dish. Combine maple syrup, butter, lemon juice, salt and pepper in small bowl. Pour the mixture over the sweet potatoes; toss to coat.
3. Cover and bake the sweet potatoes for 15 minutes. Uncover, stir and cook, stirring every 15 minutes, until tender and starting to brown, 45 to 50 minutes more.
To make ahead: Cover and refrigerate for up to 1 day. Just before serving, reheat at 350°F until hot, about 15 minutes.
Source: adapted from Eating Well, Inc.
Nutrition facts per serving: 92 calories; 2g fat (1g sat, 1g mono); 5mg cholesterol; 18g carbohydrate; 5g added sugars; 1g protein; 2g fiber; 119mg sodium; 294mg potassium.
Nutrition bonus: Vitamin A (223% daily value), Vitamin C (20% dv).
Carbohydrate Servings: 1 | Exchanges: 1 1/2 starch, 1/2 fat
Jen Haugen represents Hy-Vee as a nutrition expert promoting healthy eating throughout the community. Jen is a member of the American Dietetic Association.
|
<urn:uuid:8f65bca4-568d-4e39-8673-f34b36fad0ac>
|
CC-MAIN-2013-20
|
http://www.kaaltv.com/article/stories/S2362235.shtml?cat=11985
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.842348
| 925
| 2.828125
| 3
|
Finnish Grammar Sentence Structure
A selection of articles related to Finnish grammar sentence structure.
Original articles from our library related to the Finnish Grammar Sentence Structure. See Table of Contents for further available material (downloadable resources) on Finnish Grammar Sentence Structure.
- Bringing it Down to Earth: A Fractal Approach
- 'Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.' B. Mandelbrot W e want to think about the future - it's our nature. Unlike other creatures, humans possess an...
Mystic Sciences >> Astrology
- The Legend of Stonehenge
- Stonehenge has fascinated thousands of people throughout the ages, even today people are still wondering about the origins of the mysterious Stonehenge. Today's scientists and historians are still unable to come to a solid theory of when, why, by whom, and...
Earth Mysteries >> Mystic Places
- King James Bible: Deuteronomy, Chapter 17
- Chapter 17 17:1 Thou shalt not sacrifice unto the LORD thy God any bullock, or sheep, wherein is blemish, or any evilfavouredness: for that is an abomination unto the LORD thy God. 17:2 If there be found among you, within any of thy gates which the LORD thy...
Old Testament >> Deuteronomy
- Hyperspace Reality
- Despite the fact that the 'new' physics, a godchild of the Einsteinian revolution has taught us that the Universe we perceive is a mere shadow of a vastly more unpredictable one, most of us still view the world in a distinctly materialistic way. A world where...
Modern Science >> New Physics
- Psychic Protection - Barriers
- Introduction: The countermeasures given in these short articles are extremely basic. They only contain the bare bones of what is necessary to use them. To understand the principles behind these countermeasures, please get a copy of 'Practical Psychic Self-Defe...
Psychic Abilities >> Psychic Protection
- The First Congregational Church of Wicca
- Witchcraft is a religion of field and grove, river and stream. In coven or individually, we perform rituals to renew our connections with the Goddess and God. However, what suits one may not answer the needs of another. This need is being satisfied by a new...
Religions >> Paganism & Wicca
Finnish Grammar Sentence Structure is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Finnish Grammar Sentence Structure books and related discussion.
Suggested Pdf Resources
- SUFFIXES AS DEEP STRUCTURE CLUES Transformational
Transformational grammar assigns persuade a lexical structure that prevents it ... for the two sentence structures, in Finnish as well as in Dutch.
- ON DETERMINATION IN ENGLISH AND ITALIAN AS COMPARED
in which articles are used or are absent because that is the task of grammars and ...
- 1 Introduction1 2 Properties of the doubling pronoun
- In colloquial Finnish the subject can be doubled by a pronoun, as in (1a,b): 2 Use of se/ne to refer to humans is traditionally proscribed in Finnish normative grammar. ..
- Lexicase Parsing: A Lexicon-driven Approach to Syntactic Analysis
Grammar, Generalized Phrase Structure Grammar, and Lexicase have begun ... rules: the structural representation of a sentence is any sequence of words connected by ... model of Finnish sentence structure.
Suggested Web Resources
- Finnish grammar - Wikipedia, the free encyclopedia
This article deals with the grammar of the Finnish language ... 7 Numbers; 8 Sentence structure; 8.1 Word order ...
- Ending a Sentence With a Preposition : Grammar Girl :: Quick and
- Mar 31, 2011 Get Grammar Girl's take on ending a sentence with a preposition. of many words (yes, the same thing happened to Finnish nouns and adjectives). ..
- Finnish Grammar Sentence Structure | RM.com ®
- Finnish Grammar Sentence Structure articles, reference materials. Need more on Finnish Grammar Sentence Structure?
- Finnish Tutorial: Basic Phrases, Pronunciation and Grammar
- Finnish is a language that has no grammatical gender. ...
- Amazon.com: Finnish: An Essential Grammar (Essential Grammars
- This second edition of Finnish: An Essential Grammar has undergone profound revisions.
Finnish Grammar Sentence Structure Topics
|
<urn:uuid:e30cb148-f02d-4c37-bee6-075eb63b8967>
|
CC-MAIN-2013-20
|
http://www.realmagick.com/finnish-grammar-sentence-structure/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.867519
| 1,015
| 2.828125
| 3
|
A mass of yellow cocoons attached to the end of a twig.
Hi! It’s been a few weeks, hasn’t it? I have all these pictures and draft posts but no time to finish any of them because I am trying to get my analyses done for the ecology conference this August. I am tragically productive.
Here are some cocoons I found up at the Oklahoma biostation back in spring. Cocoons are awesome because they are basically insects you can rear without actually doing any work. I stuck these guys in a jar for a week or two to see what would emerge. What I got was tons of tiny black and brown wasps. I took some pictures under a scope and threw them up on BugGuide, where they were quickly ID'ed by the excellent Bob Carlson. BugGuide is awesome, because it is basically a network of experts you can access without actually doing any work.
A female braconid wasp (Cotesia), emerged from the cocoons.
The black wasps turned out to be members of the genus Cotesia, in the family Braconidae. These are parasitoid wasps which lay their eggs on (or in) caterpillar hosts. The larvae develop inside the caterpillars Alien-style, slowly eating them alive, before eventually emerging to pupate and seek out new hosts.
A female ichneumonid wasp (Mesochorus) emerged from the cocoons.
The brown wasps turned out to be a species of Mesochorus which are hyperparasitoids of the original black wasps. These are parasitoids of parasitoids which lay their eggs in the egg or early instar larvae of the Cotesia parasitoid wasp as it develops in the caterpillar host. (Read that sentence back to yourself until it makes sense.) If this arrangement seems unnecessarily complex to you, just realize that hyper-hyperparasitoids also exist. Every “hyper” kicks it down another level. It’s basically the plot of “Inception” but with innards-devouring bugs instead of dreams. (“Insection“?)
Eucalyptus leaf galls formed by gall wasps.
This is a green bug for St. Patrick’s day. (I’m reaching; I know. Happy Birthday, Eric!)
I’ve talked a bit about gall-forming insects in the past, but I think it bears repeating how extremely cool this adaptation is. Galls are created by parasites (fungi, bacteria, mites, wasps, aphids, flies, midges, psyllids, etc.) that use chemicals to co-opt the physiology of their host and cause the plant to grow abnormal structures that make a comfy little home for the parasite in question. Opening up these particular leaf galls revealed tiny wasp pupae, developing in the safety and luxury of their own private green room. Chemical warfare at its most refined.
Tiny gall wasp pupae inside a leaf gall.
P.S. Does anyone know if any wasps outside Cynipidae form galls? That’s the only family I’m familiar with.
An ensign wasp (Evaniidae) perched on a wall.
Due to their long legs and antennae, an ensign wasp on a wall may resemble a spider from a distance, and like spiders, they ought to be welcome guests in a home. These little wasps are unable to sting and harmless to humans, but they are deadly to roaches. Like many other small wasps, ensign wasps are parasitoids: the female ensign wasp lays her eggs only in the egg cases of cockroaches, where the larvae hatch and quickly devour the cockroach eggs.
Ensign wasps (also called hatchet wasps) are members of the family Evaniidae, and take their common name from the distinctive shape of their gaster (rear end). It is flattened laterally, and attached high like a flag. Much like a banner waver, they will twitch their gaster rapidly up and down when disturbed. The species I find around here is also notable for the attractive blue eyes that can be seen under a hand lens. The main body is perhaps 1 cm long, with the legs and antennae nearly doubling the size. I found the wasp pictured above hanging around in the hallways of our building on campus, defending us from roaches.
Parasitoid wasps parasitize an insect egg case.
Found these tiny wasps parasitizing a mantis egg case or ootheca in Argentina. You can see the tiny wasps rearing up and inserting their long ovipositors. In higher members of the order Hymenoptera, the female ovipositor is modified into the sting (only female bees, wasps, and ants can sting.)
‘Parasitoids’ are distinguished from ‘parasites’ in several ways. A parasitoid insect lays one or multiple young on or into a single host organism–often a juvenile or egg. The young develop inside the host, which may live on for some time, but almost always ultimately succumbs to the creature devouring it from the inside out. Anyone who has watched the movie Aliens may find this sequence of events familiar. Parasitoids thus are not true parasites as they kill their host, but not quite predators, in that they consume only one prey item during the course of their lifespan.
An ambush bug preys on an unwary wasp.
Exploring an Argentinean roadside I spotted what I thought was a dead wasp on a flower. Wondering how this wasp had come to perish so abruptly in her nectar gathering work, I looked closer. I actually poked at her several times before I noticed the second occupant of the flower—an ambush bug enjoying a tasty wasp meal!
Ambush bugs are a subfamily of Assassin bugs, family Reduviidae. Ambush bugs are “sit-and-wait” predators. These highly cryptic (camouflaged) insects frequently lurk around flowers, where they pick off unwary visitors. They have mantis-like raptorial forelegs to snatch their prey from a safe distance. Like other true bugs (order Hemiptera, suborder Heteroptera) ambush bugs have a segmented tube-like ‘beak’ for feeding. Ambush bugs insert this beak into a weak spot in their prey’s hard exoskeleton and suck out the fluids.
|
<urn:uuid:10f30184-5895-4978-8dda-1c27bb8f4793>
|
CC-MAIN-2013-20
|
http://6legs2many.wordpress.com/tag/wasps/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.949931
| 1,382
| 2.734375
| 3
|
Teresa M. Dominguez,
Table of Contents
- The University's Hearing Conservation Program
- Determining Noise Levels: Sound Survey
- Noise Control Methods
- Training and Education Methods
- Audiometric Testing
- Record Keeping
- Appendix A. Possible Causes of Standard Threshold Shift and Their Solutions
- Appendix B. OSHA Occupational Noise Exposure Standard
A. The Hearing Conservation Program at the University of Connecticut is designed to prevent noise-induced hearing loss through the use of engineering controls, administrative controls, hearing protective devices, annual audiometric testing and employee training. Employees exposed to workplace noise at or above the action level must be included in the Hearing Conservation Program.
B. Key elements of the Hearing Conservation Program:
- Identify hazardous noise environments in the workplace through sound surveys.
- Implement engineering and/or administrative controls to reduce workplace noise levels or worker exposure to noise.
- Provide hearing protective devices to exposed employees whenever such controls are not feasible or fail to reduce noise levels below the action level.
- Provide audiometric testing for exposed employees.
- Provide training for exposed employees.
The amount of potential damage to the ear is related to the intensity of noise and the duration of exposure. Sound surveys will be conducted to identify work environments in which the combination of noise level and exposure time could subject employees to noise at or above the action level. In performing sound surveys, two measuring devices may be used: the sound level meter and the dosimeter. The sound level meter measures a noise level at a given moment. The dosimeter measures the noise level over a period of time. Employees are entitled to observe these monitoring procedures if they so choose.
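Because the policy follows OSHA's 29 CFR 1910.95, the dosimeter readings combine intensity and duration into a percent noise dose, which can be converted to an equivalent 8-hour time-weighted average (TWA) using the standard's Appendix A arithmetic (90 dBA criterion level, 5 dB exchange rate). A sketch with illustrative exposure values:

```python
import math

def permissible_hours(level_dba):
    """Reference duration T (hours) permitted at a given sound level,
    per the 90 dBA / 5 dB exchange-rate table in 29 CFR 1910.95."""
    return 8.0 / (2 ** ((level_dba - 90.0) / 5.0))

def noise_dose(exposures):
    """Percent noise dose for a workday.

    exposures: list of (level_dba, hours_at_that_level) tuples.
    """
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in exposures)

def twa(dose_percent):
    """8-hour time-weighted average (dBA) equivalent to a percent dose."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Illustrative day: 2 h at 95 dBA plus 6 h at 85 dBA.
d = noise_dose([(95, 2), (85, 6)])
print(round(d, 1), round(twa(d), 1))  # a dose of 50% (85 dBA TWA) is the action level
```

This is why dosimeters, not single sound level meter readings, are needed for mobile workers: the dose depends on how long the worker spends at each level, not just the peak level observed.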
A. Basic Sound Survey: Initially, noise levels will be estimated using a sound level meter.
- If measurements show that maximum noise levels fall below the action level, no further steps are required.
- If measurements indicate noise levels are or could potentially be above the action level, a detailed sound survey is necessary.
- If sound level meter monitoring is too difficult due to high worker mobility or fluctuating noise levels, a detailed sound survey is necessary.
B. Detailed Sound Survey: Noise level measurements will be recorded over the course of a typical work day using dosimeters which will be worn by representative employees.
- If measurements indicate noise levels are below the action level, no further steps are required. However, use of engineering controls, administrative controls and/or hearing protective devices is encouraged.
- If measurements indicate noise levels are at or above the action level, identified exposed employees must be included in the Hearing Conservation Program.
C. Engineering Sound Surveys: If measurements in the detailed sound survey also prove to be high, a survey of individual units of equipment, or noise sources, will be conducted in order to determine the problem areas and types of engineering controls.
- Additional surveys should be requested if there is an increase in the use of noisy equipment or if activities or procedures change noise levels in the work environment.
- Employees exposed to noise levels at or above the action level must be notified of the noise survey results.
All employees who are exposed to workplace noise at or above the action level must attend annual hearing conservation training sessions offered by Environmental Health & Safety. These sessions will provide training and education in the following areas:
1. Effects of noise on hearing
2. Hearing protective devices (HPD's)
a) Purpose of HPD's
b) Types of HPD's and their attenuations
c) Advantages/Disadvantages of HPD's
d) Instruction on selection, use and care of HPD's
e) Initial fitting of HPD's
3. Purpose of audiometric testing and an explanation of test procedures.
Employees who are exposed to workplace noise at or above the action level are required to undergo audiometric testing. First, a baseline audiogram must be established against which subsequent annual audiograms may be compared.
Step 1: Obtain a Baseline Audiogram. Prior to annual testing, a baseline audiogram must be established. When establishing a baseline, the employee should not be exposed to workplace noise or high levels of non-occupational noise for at least 14 hours. A baseline audiogram within the first 6 months of employment is required for new employees assigned to an area in which noise exposure is expected to exceed the action level.
Step 2: Annual Audiograms. The purpose of annual testing is to detect threshold shifts so that follow-up action may be taken to prevent further hearing loss. Annual audiograms should be taken during the normal work shift in order to detect temporary threshold shifts resulting from workplace noise exposure that may lead to permanent hearing loss. If audiometric testing reveals that an employee has experienced a standard threshold shift, the use, fit and attenuation of the hearing protective device should be evaluated to ensure adequate protection. In addition, follow-up action, as shown below, must be taken.
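The baseline-to-annual comparison in Step 2 can be sketched as code. Under 29 CFR 1910.95, a standard threshold shift is an average shift of 10 dB or more at 2000, 3000 and 4000 Hz in either ear relative to the baseline audiogram (the standard also permits age correction, omitted here). The audiogram values below are hypothetical:

```python
# Frequencies (Hz) that enter the standard threshold shift (STS) average.
STS_FREQS = (2000, 3000, 4000)

def has_sts(baseline, annual):
    """Return True if the annual audiogram shows an STS for one ear.

    baseline/annual: dicts mapping frequency (Hz) -> hearing threshold (dB HL).
    """
    shifts = [annual[f] - baseline[f] for f in STS_FREQS]
    return sum(shifts) / len(shifts) >= 10.0

# Hypothetical thresholds for one ear (dB HL).
baseline = {500: 5, 1000: 5, 2000: 10, 3000: 10, 4000: 15, 6000: 20}
annual   = {500: 5, 1000: 10, 2000: 20, 3000: 25, 4000: 25, 6000: 25}
print(has_sts(baseline, annual))  # → True (average shift 11.7 dB)
```

Note that shifts at 500, 1000 and 6000 Hz do not enter the STS calculation, even though they are recorded on the audiogram.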
Step 3: Follow-up action, if required, includes:
- Determining the cause of the standard threshold shift (see Appendix A).
- Referral for a clinical audiological evaluation or otological exam when problems of a medical nature are suspected.
- Retesting the employee's hearing level thresholds within 30 days to determine whether the standard shift is a temporary or permanent threshold shift.

Feedback: All monitored employees must be informed of their audiometric test results and be provided with an explanation of these results. Those employees who exhibit a standard threshold shift must be informed in writing within 21 days of the audiometric test. Follow-up recommendations may be included in the written notification form.
An accurate record for each exposed employee, including audiometric test results, must be established and maintained. Access to all regulations and personal monitoring records described below must be granted to each employee monitored. The record must include the following information:
A. Noise Exposure Measurements
Records of all sound surveys of the work environment must be retained for a minimum of two years.
B. Audiometric tests
Audiometric tests of every employee included in the Hearing Conservation Program must be retained for the duration of the employee's employment and must include the following information:
- Employee's name
- Employee's job classification
- Examiner's name
- Test date and time
- Test location
- Date of last equipment calibration
- Audiograms / Threshold values obtained
- Technician's comments
- Professional Recommendations
- Background measurements of sound pressure levels in the audiometric test room.
C. Additional Records
- Hearing protective device fitting dates
- Attendance at annual training sessions
- Employee's name
Exposed employees will:
- Familiarize themselves with the University's Noise Policy.
- Select and wear hearing protective devices in required work environments.
- Attend annual training sessions offered by Environmental Health & Safety.
- Notify supervisors of any significant change in observed workplace noise levels.
The Audiometric Testing Center will:
- Comply with the mandatory appendices C - E of OSHA's 29 CFR 1910.95 standard (see Appendix B of this policy) regarding audiometric measuring instruments, audiometric test rooms, and acoustic calibration of audiometers.
- Obtain baseline and annual audiograms for employees.
- Compare annual audiograms to the baseline audiogram to determine if a standard threshold shift has occurred.
- Explain audiometric test results to individual employees.
- Refer employees for a clinical audiological evaluation or otological exam when problems of a medical nature are suspected.
- Retain records of test results for individual employees as part of their permanent files. Audiometric test results must include the information listed in Section VIII.B. of this policy.
- Forward a copy of the test results to the employer.
The purpose of this policy is to define University requirements regarding noise in the workplace. These requirements, which are based on the Occupational Safety & Health Administration (OSHA) standard Occupational Noise Exposure (29 CFR 1910.95), are designed to protect employees from hearing loss which could result from exposure to high levels of workplace noise.
The OSHA standard requires employers to implement a Hearing Conservation Program whenever employees are exposed to occupational noise levels above the OSHA action level (an 8-hour time-weighted average of 85 dBA). In compliance with the OSHA standard, this policy incorporates a Hearing Conservation Program with a primary objective of maintaining University work environments free from noise hazards that could lead to noise-induced hearing loss.
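The action level above is an 8-hour time-weighted average (TWA), so a workday that mixes different noise levels must be converted to a single dose. Appendix A of the OSHA standard (listed at the end of this policy) publishes the formulas; the Python sketch below illustrates those published formulas and is not part of the policy itself:

```python
import math

def reference_duration_hours(level_dba: float) -> float:
    """Allowable reference duration for a sound level, per 29 CFR
    1910.95 Appendix A: T = 8 / 2**((L - 90) / 5)."""
    return 8.0 / (2.0 ** ((level_dba - 90.0) / 5.0))

def noise_dose(exposures) -> float:
    """Percent noise dose for a list of (level_dBA, hours) pairs:
    D = 100 * sum(C_n / T_n)."""
    return 100.0 * sum(hours / reference_duration_hours(level)
                       for level, hours in exposures)

def twa_from_dose(dose_percent: float) -> float:
    """Equivalent 8-hour TWA: TWA = 16.61 * log10(D / 100) + 90."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Example: 4 hours at 90 dBA plus 4 hours at 80 dBA
dose = noise_dose([(90.0, 4.0), (80.0, 4.0)])
print(f"dose = {dose:.1f}%, TWA = {twa_from_dose(dose):.1f} dBA")
# dose = 62.5%, TWA = 86.6 dBA -- at or above the 85 dBA action level,
# so this employee would be included in the Hearing Conservation Program.
```

A 50% dose corresponds exactly to the 85 dBA action level, which is why dose monitoring and TWA monitoring are interchangeable tests.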
Several measures can be taken to prevent overexposure to workplace noise. The first step is to reduce the noise coming from the source itself. This may be accomplished through engineering or administrative controls, as described in section V of this policy. Whenever such controls are not feasible, or fail to adequately reduce workplace noise to safe levels, exposed employees will be included in the Hearing Conservation Program outlined in this policy.
The University also recognizes that some employees may be at-risk for damage to hearing from exposure to noise that is of lesser intensity/duration than the OSHA action levels. These employees could include those with previous noise exposure, as well as those with some degree of hearing loss from a variety of causes. While the University's policies and procedures are designed to identify and modify work environments that present a potential noise hazard, all employees are urged to self-identify known or possible hearing loss so that appropriate modifications and/or protection can be arranged.
Action Level - Noise exposure limits, as indicated in Table 1, above which exposed employees must be included in the Hearing Conservation Program.
Table 1
|Sound Level (dBA)*||Duration per day (Hours)|
*Levels measured on the A-scale at slow response and without regard to hearing protection worn.
All employees who are exposed to workplace noise at or above these levels will be included in the University's Hearing Conservation Program.
Attenuation - Reduction in the loudness level.
Audiogram - A graph of the results of a hearing test. It shows how loud a sound has to be before an individual can hear it. The graph shows the results for sounds of varying frequencies.
Audiometric testing - A method of evaluating an exposed employee's changes in hearing over time. It consists of a baseline hearing test followed by annual testing.
Decibel (dB) - A unit of measurement of loudness of sound.
dBA - Decibel measurements read on the A-scale of a sound level meter. This scale more closely approximates human perception of sound levels.
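Because decibels are logarithmic, levels from independent sources do not add arithmetically: two machines at 90 dBA each produce about 93 dBA together, not 180. A small illustrative sketch:

```python
import math

def combine_levels(levels_db):
    """Total level of several incoherent noise sources:
    L_total = 10 * log10(sum(10**(L_i / 10)))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

print(round(combine_levels([90.0, 90.0]), 1))  # 93.0
print(round(combine_levels([90.0, 80.0]), 1))  # 90.4
```

The second example shows why quieting the loudest source matters most: a source 10 dB below the dominant one adds less than half a decibel to the total.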
Exposed employees - Employees whose work day routine exposes them to workplace noise at or above the action level.
Hearing Protective Devices (HPD's) - Individually worn devices, such as ear muffs and ear plugs, that attenuate (reduce) noise levels.
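A common screening check of whether a particular HPD attenuates enough, described in Appendix B of the OSHA standard, subtracts 7 dB from the device's labeled Noise Reduction Rating (NRR) when A-weighted exposure data are used. The sketch below is illustrative; the optional 50% derating is a field safety factor sometimes applied, not a requirement of this policy:

```python
def protected_twa(twa_dba: float, nrr: float, derate_50: bool = False) -> float:
    """Estimated exposure under a hearing protector, starting from an
    A-weighted TWA: subtract (NRR - 7), optionally halved as a safety
    factor."""
    effective = (nrr - 7.0) / 2.0 if derate_50 else nrr - 7.0
    return twa_dba - effective

# Earmuffs labeled NRR 29 worn in a 100 dBA TWA environment:
print(protected_twa(100.0, 29.0))                  # 78.0
print(protected_twa(100.0, 29.0, derate_50=True))  # 89.0
```

If the derated estimate is still at or above the action level, the fit of the device should be re-evaluated or an HPD with greater attenuation selected.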
Potentially Hazardous Noise Environment - An environment in which workers must raise their voices in order to communicate while standing three feet away from each other.
Standard Threshold Shift - A change in hearing threshold relative to the baseline audiogram of an average of 10dB or more at 2000, 3000, and 4000Hz in either ear.
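The STS definition is mechanical enough to check in code. The sketch below is illustrative only; a real determination would be made by the audiometric testing provider and may also apply the age corrections of Appendix F of the OSHA standard:

```python
STS_FREQS = (2000, 3000, 4000)  # Hz

def standard_threshold_shift(baseline: dict, annual: dict) -> bool:
    """True if the average threshold change at 2000, 3000 and 4000 Hz
    is 10 dB or more (thresholds in dB HL; larger = worse hearing)."""
    shift = sum(annual[f] - baseline[f] for f in STS_FREQS) / len(STS_FREQS)
    return shift >= 10.0

# One ear's baseline vs. annual audiogram (dB HL at each frequency):
baseline = {2000: 10, 3000: 15, 4000: 20}
annual   = {2000: 20, 3000: 25, 4000: 35}
print(standard_threshold_shift(baseline, annual))  # True (avg shift ~11.7 dB)
```

An STS in either ear triggers the follow-up actions described under Step 3 of the audiometric testing procedure.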
A. Engineering Controls
The preferred method for reducing noise to safe levels is to implement engineering controls. Engineering controls modify the equipment producing the noise, the characteristics of the receiver's (exposed employee's) environment, or the path through which the noise travels. Some examples of engineering controls are the use of absorption materials, muffling devices and vibrational dampening equipment. If engineering controls successfully reduce noise to below the action level, affected employees will no longer be included in the Hearing Conservation Program.
B. Administrative Controls
If engineering controls are not feasible or are ineffective, administrative controls, although sometimes less desirable, may be an alternative course of action. Administrative noise controls include replacement of old equipment with quieter new models, establishment of equipment maintenance programs and changes in employee work schedules to reduce noise doses by limiting exposure time.
C. Hearing Protective Devices (HPD's)
1. Required Use
When neither engineering nor administrative controls are feasible, or if they fail to reduce noise to acceptable levels, exposed employees must be included in the Hearing Conservation Program. As part of this program, employees must be issued and be required to wear HPD's. Prior to issuing HPD's, employees must be trained as required in Section VI.
2. Optional Use
Use of HPD's is encouraged in noisy work environments that have been determined not to expose employees to noise at or above the action level. The Department of Environmental Health and Safety is available to provide worker training on the proper selection, use and care of HPD's for University Departments that decide to issue HPD's to their employees.
A. Supervisors will:
1. Sound Surveys
a) Request sound surveys for potentially hazardous noise environments (see Section II-Definitions).
b) Request additional sound surveys whenever a change in the workplace noise level may occur (due to new equipment, increased production, etc.)
c) Permit affected employees to observe the sound level monitoring, if they so choose.
2. Noise Policy Compliance
a) Identify and schedule all exposed employees for annual hearing conservation training with Environmental Health & Safety.
b) Implement engineering and/or administrative controls whenever feasible.
c) Send exposed employees for audiometric testing.
i. Schedule hearing tests to establish baseline audiograms for newly-exposed employees. Newly-exposed employees must be sent for baseline audiograms within 6 months of initial exposure. Employees must not be exposed to workplace noise for 14 hours prior to the testing. Employees should also be notified to avoid high levels of non-occupational noise 14 hours prior to testing.
ii. Schedule hearing tests annually to obtain follow-up audiograms. Testing should be scheduled during normal work shift hours.
d) Ensure that effective hearing protective devices are being worn by employees in required areas or while performing duties which require their use.
e) Post a copy of Appendix B of this policy in the workplace of the exposed employee(s).
3. Employee Notification
a) Inform employees about the University's Noise Policy and of their responsibilities under the policy.
b) Notify employees of the sound survey results and make a copy accessible to them.
c) Notify exposed employees of their individual annual hearing test results. In the event the employee has experienced a standard threshold shift, he/she must be notified of this fact in writing within 21 days of the determination.
4. Record Keeping
a) Maintain a list of high noise level areas and activities.
b) Maintain copies of sound survey results.
c) Keep attendance lists of annual training sessions.
d) Maintain copies of annual audiometric test results for the duration of employment of the exposed employees.
e) Maintain records of the number of exposed employees.
B. Exposed Employees will:
C. Environmental Health and Safety will:
1. Written Program
Develop, implement and maintain the University's Noise Policy.
2. Sound Surveys
a) Identify problem areas by conducting basic, detailed and engineering sound surveys (see Section IV).
b) Recommend steps to be taken when noise levels are at or above the action level.
3. Employee Training
Provide information and training on hearing conservation and noise control for exposed employees.
4. Record Keeping
a) Maintain records of sound surveys conducted in each department.
b) Maintain records of employee attendance at training and hearing protective device fitting sessions.
D. Audiometric Testing Center will:
Appendix A
Possible Causes of Standard Threshold Shift and Their Solutions
- Inadequate or improper HPD use: refit and retrain the employee on correct HPD usage and/or re-select an HPD with greater attenuation.
- Age effects on hearing: see Appendix F of OSHA's Occupational Noise Exposure Standard (calculations and application of age corrections to audiograms).
- Suspected medical problems: referral for an audiological/medical examination or to an Otolaryngologist for a complete audiological examination.
Appendix B
OSHA Occupational Noise Exposure Standard (29 CFR 1910.95)
- 1910.95 Occupational Noise Exposure
- 1910.95 App. A Noise exposure computation
- 1910.95 App. B Methods for estimating the adequacy of hearing protector attenuation
- 1910.95 App. C Audiometric measuring instruments
- 1910.95 App. D Audiometric test rooms
- 1910.95 App. E Acoustic calibration of audiometers
- 1910.95 App. F Calculations and application of age corrections to audiograms
- 1910.95 App. G Monitoring noise levels nonmandatory informational appendix
- 1910.95 App. H Availability of referenced documents
- 1910.95 App. I Definitions
(For the week of November 29, 2010)
Proper Disposal of Deer Carcass
by James L. Cummins
With each passing year, more and more deer carcasses seem to turn up on roadsides and in streams and rivers. These dumping practices are not only illegal but unsightly and unhealthy.
I am an avid hunter and spend most of my working days trying to further the sport of hunting and the conservation of the natural resources of our state and our nation. But not everyone feels as I do. Some spend as much time, or more, trying to outlaw hunting as I do trying to further it. While I totally disagree with those in our society who hold such beliefs, we, as a group of hunters, should not do things that further the cause of the anti-hunting public, or give those who are indifferent about hunting a reason to oppose it. In other words, we should not leave the byproduct, or the carcass, of a successful hunt in areas where other people have to view it or in areas that could cause a public health issue.
There is already a law prohibiting such activity. Section 97-15-29 of the Mississippi Code of 1972 prohibits the dumping of dead fish and wildlife, their parts, or waste on Mississippi’s roadways or their right-of-ways or on private property without the landowner’s consent. If caught, an offender can be charged with a misdemeanor and fined up to $250.00.
Dead deer on the side of the road can be a hazard to drivers. In some instances, they can cause serious damage to a car, or injury to the driver. Because the dumping happens when no one is around, catching the culprits is very hard.
Deer carcasses dumped in streams and rivers can also pose a human health risk to anyone who drinks, or swims in, water contaminated by the decomposing carcasses.
Roadsides, streams and rivers are not options. Two recommended methods of disposal are digging a pit in which to place the carcass or taking it to a deer processor who will properly dispose of or compost it. This is legal and respectful to the sport. And if you can’t do that, place the carcass in an area where it cannot be viewed and it is not near any homes. It won’t take long for the coyotes and buzzards to salvage the rest of it.
Anyone who finds a deer carcass on his property is obligated to clean it up and report it to law enforcement agencies. Please take time for appropriate carcass disposal. To report a violation, call your local sheriff’s office or the Mississippi Department of Wildlife, Fisheries and Parks at 1-800-BE-SMART (1-800-237-6278).
There's nothing worse than the sound of someone snoring if you're trying to fall asleep. Or maybe it's you who snores, and people tease you about the noise you make in your sleep.
Snoring isn't just noisy. Sometimes it's a sign of a serious medical problem that should be treated by a doctor. Read on to find out more about the snore!
Snoozing or Snoring?
Snoring is a fairly common problem that can happen to anyone — young or old. Snoring happens when a person can't move air freely through his or her nose and mouth during sleep. That annoying sound is caused by certain structures in the mouth and throat — the tongue, upper throat, soft palate (say: pa-lut), uvula (say: yoo-vyuh-luh), as well as big tonsils and adenoids — vibrating against each other.
People usually find out they snore from the people who live with them. Kids may find out they snore from a brother or sister or from a friend who sleeps over. Snoring keeps other people awake and probably doesn't let the snoring person get top quality rest, either.
People snore for many reasons. Here are some of the most common:
Seasonal allergies can make some people's noses stuffy and cause them to snore.
Blocked nasal passages or airways (due to a cold or sinus infection) can cause a rattling snore.
A deviated septum (say: dee-vee-ate-ed sep-tum), which is the tissue and cartilage that separates the two nostrils in your nose, may be crooked. Some people with a very deviated septum have surgery to straighten it out. This also helps them breathe better — not just stop snoring.
Enlarged or swollen tonsils or adenoids may cause a person to snore. Tonsils and adenoids (adenoids are glands located inside of your head, near the inner parts of your nasal passages) help trap harmful bacteria, but they can become very big and swollen all of the time. Many kids who snore have this problem.
Drinking alcohol can relax the tongue and throat muscles too much, which partially blocks air movement as someone is breathing and can contribute to snoring noises.
Being overweight can cause narrowing of the air passages. Many people who are very overweight snore.
Snoring is also one symptom of a serious sleep disorder known as sleep apnea. When a person has sleep apnea, his or her breathing is irregular during sleep. Typically, someone with sleep apnea will actually stop breathing for short amounts of time 30 to 300 times a night! It can be a big problem if the person doesn't get enough oxygen.
People with this disorder often wake up with bad headaches and feel exhausted all day long. They may be very drowsy and have difficulty staying awake while having a conversation or even while driving. Kids affected by sleep apnea may be irritable and have difficulty concentrating, particularly in school and with homework.
According to the government's patent office (this is where you go to register an idea or invention), there are hundreds of anti-snoring devices on the market. Some of them startle you awake when they sense you are snoring. Unfortunately, they may only work because they keep you awake!
Those small, white strips some football players wear across their noses that kind of look like a bandage are another anti-snoring device. Football players wear them during the game to breathe easier while running a play or making a tackle. People also wear these breathing strips to try to stop snoring.
Other snoring solutions include tilting the top of a bed upward a few inches, changing sleeping positions (from the back to a side), and not eating a heavy meal (or for an adult, not drinking alcohol) before bedtime. These kinds of "cures" may work only for someone who snores occasionally and lightly — or they may not work at all.
If you can't stop snoring or the snoring becomes heavy, it's a good idea to see a doctor. He or she might tell you how to keep your nasal passages clear and will check your tonsils and adenoids to be sure they aren't enlarged and don't have to be removed.
Some people need to lose weight, change their diets, or develop regular sleeping patterns to stop snoring. It may be helpful to remove allergy triggers (stuffed animals, pets, and feather/down pillows and comforters) from the person's bedroom. The doctor might also suggest medications for allergies or congestion due to a cold.
If a doctor thinks someone has sleep apnea, he or she will order a test to monitor the patient during sleep. This is usually done in a sleep center (a medical building that has equipment to monitor breathing during sleep). A patient is attached to machines that check heart rate, oxygen and carbon dioxide levels, eye movement, chest wall movement, and the flow of air through the nose.
The doctor can then tell if a patient has a disorder like sleep apnea. The best thing about the test is that it doesn't hurt at all. After all, you sleep right through it! Once doctors know what's wrong, you can be treated for it, usually with lifestyle changes, sometimes medicines, or even surgery, if necessary.
Solving a snoring problem lets everyone breathe and sleep a little easier!
California's Safer Consumer Products Law
There are some chemicals and applications that warrant urgent action, such as phthalates in toys or carcinogens in cosmetics.
When a chemical is particularly harmful or used in products marketed to vulnerable age groups, the Breast Cancer Fund takes targeted action.
Unfortunately, there's a bigger issue at work: our entire chemicals management system is broken. Right now, chemicals linked to diseases like breast cancer can be used in everyday products without warning to consumers. That's where the California Safer Consumer Products program fits in.
The foundation of this initiative is two state laws that the Breast Cancer Fund helped secure passage of in 2008. These laws granted the state authority—for the very first time—to regulate the chemicals used in everyday consumer products and to create a public online database of health hazards associated with these chemicals.
The Breast Cancer Fund has been working for the last four years to ensure proper implementation of this important program.
Now the Brown administration is poised to finally begin doing the work. Regulations outlining the specifics of the program are currently being vetted through a formal approval process and are slated to be finalized on July 1, 2013.
Once the program begins, there is much work to be done, but we are anxious for this important milestone so that we can stop talking about how to eliminate toxics in products and actually start doing the work.
Related Blog Posts
Breast Cancer Fund Director of Program and Policy Janet Nudelman explores the threats to men's health from personal care products in this Huffington Post blog.
Reaction to Gov. Jerry Brown's proposed changes to Prop. 65, the state's Safe Drinking Water and Toxic Enforcement Act.
UC-Berkeley study finds metals linked to breast cancer in lipsticks and lip glosses.
Breast Cancer and the Environment (National Institute of Environmental Health Sciences podcast, 3/15/2013)
Breast Cancer Fund President and CEO Jeanne Rizzo talks about why translating breast cancer research is critical for the decisions we make in our everyday lives.
By Elizabeth C. McCarron
Art by Judith Moffatt
|This is a summer burrow of a woodchuck. Burrows can be five feet deep and as long as a school bus.|
The woodchuck sits up on its hind legs, chewing a wild strawberry. Looking around, the chuck freezes when it spies the farmer's dog. The dog sniffs the air, spots the chuck, and charges toward it. The woodchuck watches the enemy coming closer and closer, then POOF! The chuck disappears from sight, and the dog is left puzzled. The woodchuck has dropped into its burrow to escape.
A woodchuck burrow is more than just a hole in the ground. It is a complex system of entrances, tunnels, and rooms called chambers. Burrows give woodchucks a place to sleep, raise young, and escape enemies. When a woodchuck hibernates (sleeps through the winter), it makes a simple burrow and plugs the entrance with sand.
A woodchuck uses its strong claws to dig its own burrow. In soft soil, a woodchuck can dig an entire burrow in one day.
Each summer burrow usually has several entrances. This lets the woodchuck roam and still have a safe hole nearby in case danger comes along.
For the main entrance, a chuck may choose the woods at the edge of a meadow. The hole must be hidden from view but close to food.
The plunge hole is a special burrow entrance. It goes straight down two or more feet. When an enemy comes near, the woodchuck may give a shrill whistle, then drop straight down into the hole. This is how the woodchuck "disappeared" from the dog's sight!
Under the ground, tunnels and chambers connect the entrances. There is a sleeping chamber, a turn-around chamber, and a nursery chamber. A woodchuck burrow can even have a bathroom! A woodchuck may bury its waste in a chamber. Sometimes it adds waste to the mound of sand that marks the main entrance. This mound lets other animals know whether or not a burrow is active (being used).
Many animals look for empty woodchuck burrows. And why not? The burrows are warm in winter, cool in summer, and ready-made. Rabbits use empty burrows to avoid summer heat. They may even pop into an active burrow to escape an enemy. Skunks, weasels, and opossums use empty burrows as woodchucks do--for sleeping, hiding, and raising their young. Foxes may take over active burrows to raise their own young in the warm dens.
Now you can see that a burrow is more than just a hole in the ground. It's the perfect place for woodchucks--or other animals--to sleep, hide, and raise young. To a woodchuck, there's no place like its burrow!
Woodchucks are also known as groundhogs. They can be found in many eastern and central states and in most of Canada.
Australian Bureau of Statistics
4102.0 - Australian Social Trends, 1995
Previous ISSUE Released at 11:30 AM (CANBERRA TIME) 20/06/1995
Population Distribution: Internal migration
MIGRATION PATTERNS, 1993
Source: Estimated Resident Population; Labour Force Survey
10% of the population in 1991 had lived in a different state/territory, or overseas, in 1986. However this varied between states. Only 7% of the populations of Victoria and South Australia in 1991 had moved there from another state/territory, or from overseas, in the previous five years. Other states had much higher proportions moving; 27% of the population of the Northern Territory, 25% of the population of the Australian Capital Territory, and 14% of the population of Queensland had moved there in the previous five years.
Queensland, the Australian Capital Territory and Western Australia had more interstate arrivals than departures. Queensland had nearly twice as many arrivals (113,000) as departures (60,000). Three-quarters of the net interstate migration to Queensland (53,000) came from Victoria (21,000) and New South Wales (20,000). Queensland gained population from every state in 1993 while Victoria lost population to every state.
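As a quick sanity check, the Queensland figures in the paragraph above can be reconciled with a few lines of arithmetic (the numbers below are quoted from the text, not taken from the ABS source tables):

```python
# Queensland interstate migration, 1993 (figures quoted in the text)
qld_arrivals = 113_000
qld_departures = 60_000

net_gain = qld_arrivals - qld_departures
print(net_gain)  # 53000 -- "nearly twice as many arrivals as departures"

# Net flows from Victoria and New South Wales
from_vic, from_nsw = 21_000, 20_000
share = (from_vic + from_nsw) / net_gain
print(round(share, 2))  # 0.77 -- i.e. about three-quarters of the net gain
```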
While 33% of all interstate movers moved to Queensland, only 12% of net migration from overseas went to Queensland. New South Wales and Victoria attracted the largest numbers of overseas migrants, together accounting for 67% of net migration to Australia.
Queensland has had the largest population growth due to interstate migration of any state since 1971. Net interstate migration to Queensland averaged 14,000 a year in the early 1970s, rising to 49,000 in the early 1990s. Over the same period, Victoria had a net loss of population in each period, with annual average losses ranging from 8,000 in the early 1980s to 28,000 in the early 1990s.
New South Wales has had a net interstate loss in each period since 1971. This loss has been larger than the loss from Victoria except in the late 1970s and early 1990s.
NET INTERSTATE AND OVERSEAS MIGRATION, 1993
Source: Estimated Resident Population; Overseas Arrivals and Departures
ANNUAL AVERAGE NET INTERSTATE MIGRATION
Source: Estimated Resident Population
Distribution of interstate movers
Because capital cities contain large numbers of people and are the main employment base in each state/territory, they also attract large numbers of interstate movers and overseas migrants. Areas outside the capital cities also attract large numbers of interstate movers, and smaller numbers of overseas migrants.
Most moves are of relatively short distances, people moving within their local area or within their city. Long distance moves are much less common. People who live near borders such as those living in Tweed Heads, the Gold Coast, Albury or Wodonga are more likely to move interstate than others.
Areas where regional centres are in a different state from their region also have high levels of interstate mobility. For example, Broken Hill is in New South Wales, but has a high level of movement to and from South Australia. Similarly, the Australian Capital Territory attracts and supplies many movers to and from the surrounding areas of southern New South Wales. Central Australia also has a relatively high number of interstate movers despite the small population in the area.
While Queensland attracts large numbers of interstate movers, they tend to congregate in the south east corner of the state, especially along the coast. 70% of the Queensland population lived in the south east corner of the state in 1991, yet 78% of people who moved to Queensland settled in that area between 1986 and 1991.
DISTRIBUTION OF INTERSTATE AND OVERSEAS ARRIVALS, 1986-1991
Each dot represents 100 people moving into a statistical sub-division
Source: Census of Population and Housing
Areas with declining population
Overall, population growth in Australia in 1992-93 was 1%; 0.8% was due to natural increase, i.e. the excess of births over deaths, and 0.2% to net overseas migration. However, there was considerable regional variation in these figures. At the regional level net migration is composed mainly of internal migration and its effect varied from a gain of 9% in part of Caboolture shire in Queensland to a loss of 4% of the population in Whyalla, South Australia.
The areas with the largest population decline in 1992-93 differed considerably in their population characteristics. In Whyalla, 30% of employed people worked in manufacturing basic metal products in 1991. Reductions in employment at the BHP smelter in Whyalla have significantly reduced employment opportunities, and so people have moved away in search of better prospects.
Weston Creek and Belconnen have a relatively large number of people in their 20s. These people, who have grown up in the area, are forming new households and moving away to other areas. Recent large residential developments in the Australian Capital Territory have also attracted people away from the older areas.
The decline in Glenelg (in Victoria) reflects that experienced in many rural areas around Australia over the past few decades. A number of factors have contributed to this rural decline. Goods once produced in the local area are now produced in centralised locations and transported around. Increased personal mobility has also resulted in services being centralised in larger towns, reducing employment and therefore population in local centres. Technological changes in agriculture have reduced agricultural employment, and this has flowed through to other industries1.
The inner areas of Australia's capital cities have a high proportion of older people, and consequently, a low rate of natural increase. Large parts of these areas have been redeveloped for non-residential use and the areas have therefore had a net loss of population1.
AREAS WITH THE MOST RAPID DECLINE IN POPULATION(a) 1992-93
(a) Statistical Sub-divisions with a population greater than 25,000.
Source: Estimated Resident Population; Census of Population and Housing
Capital city migration
Overall, between 1986 and 1991, the capital cities had a net loss of 116,000 people to the rest of the country. This was made up of a net loss of 78,000 people from the capital city to other areas of the same state, plus 38,000 to non-capital city areas of other states.
Between 1986 and 1991, Sydney had a net loss of 139,000 people to other areas of Australia. About half of this movement (68,000) was to other areas of New South Wales, especially coastal areas. There was also a large net migration to other capital cities (35,000), especially Brisbane (22,000) and Perth (6,000), and to other areas of other states, especially south east coastal Queensland.
There was a large net migration of people from Melbourne to other areas in Victoria (20,000). The net migration from Melbourne to other states was most likely to go to areas other than the capital cities.
Brisbane had a net gain of 46,000 people, with about 73% of this coming from other capital cities. The rest of Queensland had a net gain of 79,000 people, with 66% of this coming from capital cities other than Brisbane.
NET INTERNAL MIGRATION PATTERNS, 1986-1991
Source: Census of Population and Housing
1 Hugo, G. (1989) Atlas of the Australian People Bureau of Immigration Research.
This page last updated 2 June 2006
Humans can be hit by lightning directly when outdoors. Contrary to popular notion, there is no 'safe' location outdoors. People have been struck in sheds and makeshift shelters. However, shelter is possible within an enclosure of conductive material such as an automobile, which is an example of a crude type of Faraday Cage.
Nearly 2,000 people per year worldwide are injured by lightning strikes. In the USA, between 9% and 10% of those struck die; annual fatality estimates range from about 75 to 100, with roughly ten times that number injured, making lightning the country's #2 weather killer (second only to floods). The odds of an average person living in the USA being struck by lightning in a given year are about 1 in 700,000. Roy Sullivan holds the record for being struck by lightning the most times: working as a park ranger, he was struck seven times over the course of his 35-year career. He lost a nail on his big toe and suffered multiple injuries to the rest of his body.
In a direct hit the electrical charge strikes the victim first. Counterintuitively, if the victim's skin resistance is high enough, much of the current will flash around the skin or clothing to the ground, resulting in a surprisingly benign outcome. Metallic objects in contact with the skin may concentrate the lightning strike, preventing the flashover effect and resulting in more serious injuries. At least two cases have been reported where a lightning strike victim wearing an iPod suffered more serious injuries as a result.
Splash hits occur when lightning prefers a victim (with lower resistance) over a nearby object that has more resistance, and strikes the victim on its way to ground. Ground strikes, in which the bolt lands near the victim and is conducted through the victim and his or her connection to the ground (such as through the feet, due to the voltage gradient in the earth, as discussed above), can cause great damage.
Several different types of devices, including lightning rods and electrical charge dissipators, are used to prevent lightning damage and safely redirect lightning strikes.
A lightning rod (or lightning protector) is a metal strip or rod, usually of copper or similar conductive material, used as part of lightning safety to protect tall or isolated structures (such as the roof of a building or the mast of a vessel) from lightning damage. Its formal name is lightning finial or air terminal. Sometimes the system is informally referred to as a lightning conductor, arrester, or discharger; however, these terms actually refer to lightning protection systems in general or to specific components within them. Lightning protection systems work by altering lightning streamer behavior; the field offers almost no systems or concepts designed to deal with the storm as a whole. Two exceptions are chaff and silver iodide crystals, devised to act on the cloud cells directly and dispensed into the clouds from an overflying aircraft: chaff was intended to counter the electrical manifestations of the storm from within, while silver iodide seeding was devised to deal with the storm's mechanical forces.
Although commonly associated with close thunderstorms, lightning strikes can occur on a day that seems devoid of clouds. This occurrence is known as "a bolt from the blue" because lightning can strike up to 10 miles from a cloud.
Lightning interferes with AM (amplitude modulation) radio signals much more than FM (frequency modulation) signals, providing an easy way to gauge local lightning strike intensity. To do so, tune a standard AM medium-wave receiver to a frequency with no transmitting stations and listen for crackles amongst the static. Stronger or nearby lightning strikes will also cause crackling if the receiver is tuned to a station.
Lightning prediction systems have been developed and may be deployed in locations where lightning strikes present special risks, such as public parks. Such systems are designed to detect the conditions which are believed to favor lightning strikes and provide a warning to those in the vicinity to allow them to take appropriate cover.
The National Lightning Safety Institute recommends using the F-B (flash to boom) method. The flash of a lightning strike and resulting thunder occur at roughly the same time. But light travels at 300,000 kilometers in a second, almost a million times the speed of sound. Sound travels at the slower speed of 344 m/s so the flash of lightning is seen before thunder is heard. To use the method, count the seconds between the lightning flash and thunder. Divide by 3 to determine the distance in kilometers, or by 5 for miles. All of the precautions above should be taken from the time the F-B is 25 seconds or less, that is, the lightning is closer than 8 km (5 miles). Do not rely on the F-B method for determining when to relax the safety measures, because lightning typically occurs in multiple locations, and just because some strikes are far away does not mean another is not close. Precautions should not be relaxed until thunder cannot be heard for 30 minutes.
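The count-and-divide rule above is easy to sketch in code. The function names below are illustrative, not from any NLSI tool; the divide-by-3 (km) and divide-by-5 (miles) shortcuts fall directly out of the 344 m/s speed of sound:

```python
def storm_distance_km(flash_to_bang_seconds, speed_of_sound_m_s=344.0):
    """Estimate distance to a lightning strike from the flash-to-bang delay.

    Light arrives effectively instantly (~300,000 km/s), so the delay is
    almost entirely the travel time of the thunder.
    """
    return flash_to_bang_seconds * speed_of_sound_m_s / 1000.0

def take_cover(flash_to_bang_seconds, threshold_s=25):
    """NLSI guidance: take precautions when F-B is 25 seconds or less (~8 km)."""
    return flash_to_bang_seconds <= threshold_s

# 15 seconds between flash and thunder: roughly 15 / 3 = 5 km away
print(round(storm_distance_km(15), 1))  # about 5.2 km
print(take_cover(15))                   # True: closer than 8 km
```

Note that `take_cover` only says when to start precautions; as the text warns, do not use it to decide when to relax them — wait until no thunder has been heard for 30 minutes.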
The US National Lightning Safety Institute advises everyone to have a plan for their safety when a thunderstorm occurs and to commence it as soon as the first lightning or thunder is observed. This is important, since lightning can strike without rain actually falling. If thunder can be heard at all then there is a risk of lightning. The safest place is inside a building or a vehicle. Risk remains for up to 30 minutes after the last observed lightning or thunder.
If a person is injured by lightning, they do not carry an electrical charge and can be safely handled; apply first aid before emergency services arrive.
As a rainy winter plods on, scientists, resource managers and property owners from coast to coast cast worried eyes on America’s hillsides for signs of this common natural geologic hazard – a landslide.
Landslides can happen anyplace that has unstable hillslopes and a contributing factor such as an earthquake, a wildfire severe enough to alter soil properties, human modification of the landscape or, most commonly, rain. The West Coast, particularly California, is vulnerable to landslides because of its steeplands, its seasonal rainfall patterns, and the way its ongoing tectonic deformation exposes vulnerable sediments, not to mention the large population that lives in areas that could be affected by landslides, said Lynn Highland, a geographer with the USGS National Landslide Information Center in Golden, Colo.
The USGS is working on several fronts to better understand landslide hazards both in the United States and internationally and to provide information that will help resource managers and the public make safe choices in the face of landslide hazards. USGS scientists have mapped landslides, studied why landslides can repeat in the same places, created near-real-time monitoring systems, and calculated the economic costs of landslides. A “Did You See It” feature recently added to the USGS Landslide Hazards Program’s website allows people to report landslides anywhere in the United States and to see where landslides have been reported in the past.
In California, most people think of landslides as following prolonged heavy rain. They reference such tragedies as the San Francisco Bay Area’s Love Creek landslide, which claimed 10 lives in January 1982, or the Scenic Drive landslide in nearby La Honda, which rendered uninhabitable more than $40 million worth of homes in February 1998. Intense West Coast storms that hit from mid-December to April are much more likely to trigger landslides than storms earlier in the season, because they add water to steepland soil that has already accumulated significant moisture in its fissures and pores, said USGS research geomorphologist Jonathan Stock. In late November 2012, a heavy storm brought more intensive rain to some parts of the Bay Area than they received during the 1982 disaster. “But there was no widespread landsliding, because the soil pores were not yet fully saturated,” Stock said.
Stock and USGS research civil engineer Brian Collins have created a near-real-time landslide monitoring system in the San Francisco Bay Area. Sensors at four stations monitor pore water pressure and soil moisture conditions that could precede widespread landsliding. Similar USGS monitoring systems exist in parts of Oregon, southern California and Colorado. Years of observation using such systems are typically required to understand what combinations of soil moisture and precipitation lead to landslides.
It’s already known that many California landslides are associated with the airborne masses of water vapor called atmospheric rivers, which, because they originate in the semitropics of the Pacific Ocean and travel west, can be tracked several days before they reach the West Coast. Now, Stock, Collins and others are working on ways to integrate precipitation and soil moisture data to forecast susceptibility to widespread landsliding.
Hillsides that have been swept by wildfires also pose risk of post-fire debris flows. These post-fire debris flow warnings are issued by the National Weather Service to local emergency managers and the public. Here, the triggering process is different. Not only is vegetation destroyed that might intercept rainfall, but soil particles themselves can become coated with volatilized organic matter that makes them water-repellent, explained USGS emeritus research geologist Sue Cannon. More rainwater cascades off hillslopes than soaks in.
As it moves downslope, the rainwater collects increasing amounts of soil and rocks. The mix becomes denser and heavier as it travels, and in as little as 15 minutes can become a debris flow powerful enough to threaten structures or block highways, as happened after the 2009 Station fire in southern California and the June 2012 High Park fire west of Fort Collins, Colo.
Because this process does not depend on soil saturation, post-fire debris flows can be triggered by the first big rain of the year, Cannon said. USGS scientists created debris-flow hazard maps of the High Park burn area and found that some drainage basins had up to an 84 percent chance of producing debris flows if they received 25 mm (just under an inch) of rain in an hour. USGS has also installed monitoring sensors in areas with high potential for post-fire debris flows, including basins burned by the 2009 Station Fire that threatened several cities at the base of the San Gabriel Mountains in southern California. The potential for post-fire debris flows generally lessens after two to three years, however, as burned areas become revegetated and sediment supplies are depleted.
USGS’ Landslide Hazards Program has developed methods to characterize debris-flow hazards from recently burned areas, including empirical models to predict, for a given point along a drainage network: 1) the volume of debris flow; 2) the probability of debris flow; and 3) the downstream path that the debris flow would take. “A set of models were originally developed for burned areas in the Intermountain West,” Cannon said, “and we are now in the process of developing models specific to conditions within Southern California because of differences in the region’s fire behavior and the large number of people potentially impacted there.”
An important part of mitigating landslide hazards is the understanding that landslides can recur at the same sites, Stock said. Water can seep off the toe of an old slide and cause failures there, he said. Other slope failures calve off earlier failures, as happened in the seaside community of La Conchita, Calif., in 1995 and again in January 2005. No one was injured in the 1995 event, but a debris flow a decade later killed 10 people. Both landslides were within the footprint of an ancient landslide on the slope behind them, as aerial photography and LIDAR imagery reveal. La Honda’s Scenic Drive homes sat atop an ancient landslide as well.
Landslide hazard maps of many areas are available from USGS as well as the California Geological Survey and other states’ geological offices, along with resources and tips on how to help avoid landslide hazards. These landslide maps identify areas where the greatest threat to property exists from the movement of deep-seated landslides.
Other USGS science investigates how landslides fit into larger ecological processes and economic assessments. Highland studies direct economic losses from landslides in addition to indirect losses. Landslide losses in many cases are ascribed to earthquakes or floods, but actually are due to landslides that the earthquakes or floods have triggered. In the case of the 1964 Alaska earthquake, much of the economic damage was actually done by landslides, Highland said. Other USGS scientists are studying large-scale landslide hazards threatening people and property in coastal Oregon and Washington, Appalachia, China, Micronesia, and Haiti, as well as rockfall associated with the 2001 earthquake in Nisqually, Wash., and the 2011 earthquake in Mineral, Va.
More USGS landslide resources:
FAQs on landslides
Other USGS publications on landslides
Images of landslides around the world
Pilot project toward a national landslide inventory
USGS geologists working on landslide research
How to find landslide information in your state
[From Samuel Smiles's Self-Help (1859). Text courtesy of Professor Mitsuharu Matsuoka of Nagoya University, Japan. Translation to html and links by GPL.]
One of the first grand results of Watt's invention, — which placed an almost unlimited power at the command of the producing classes, — was the establishment of the cotton-manufacture. The person most closely identified with the foundation of this great branch of industry was unquestionably Sir Richard Arkwright, whose practical energy and sagacity were perhaps even more remarkable than his mechanical inventiveness. His originality as an inventor has indeed been called in question, like that of Watt and Stephenson.
Arkwright probably stood in the same relation to the spinning-machine that Watt did to the steam-engine and Stephenson to the locomotive. He gathered together the scattered threads of ingenuity which already existed, and wove them, after his own design, into a new and original fabric. Though Lewis Paul, of Birmingham, patented the invention of spinning by rollers thirty years before Arkwright, the machines constructed by him were so imperfect in their details, that they could not be profitably worked, and the invention was practically a failure. Another obscure mechanic, a reed-maker of Leigh, named Thomas Highs, is also said to have invented the water-frame and spinning-jenny; but they, too, proved unsuccessful.
When the demands of industry are found to press upon the resources of inventors, the same idea is usually found floating about in many minds; — such has been the case with the steam-engine, the safety-lamp, the electric telegraph, and other inventions. Many ingenious minds are found labouring in the throes of invention, until at length the master mind, the strong practical man, steps forward, and straightway delivers them of their idea, applies the principle successfully, and the thing is done. Then there is a loud outcry among all the smaller contrivers, who see themselves distanced in the race; and hence men such as Watt, Stephenson, and Arkwright, have usually to defend their reputation and their rights as practical and successful inventors.
Richard Arkwright, like most of our great mechanicians, sprang from the ranks. He was born in Preston in 1732. His parents were very poor, and he was the youngest of thirteen children. He was never at school: the only education he received he gave to himself; and to the last he was only able to write with difficulty. When a boy, he was apprenticed to a barber, and after learning the business, he set up for himself in Bolton, where he occupied an underground cellar, over which he put up the sign, "Come to the subterraneous barber — he shaves for a penny." The other barbers found their customers leaving them, and reduced their prices to his standard, when Arkwright, determined to push his trade, announced his determination to give "A clean shave for a halfpenny." After a few years he quitted his cellar, and became an itinerant dealer in hair. At that time wigs were worn, and wig-making formed an important branch of the barbering business. Arkwright went about buying hair for the wigs. He was accustomed to attend the hiring fairs throughout Lancashire resorted to by young women, for the purpose of securing their long tresses; and it is said that in negotiations of this sort he was very successful. He also dealt in a chemical hair dye, which he used adroitly, and thereby secured a considerable trade. But he does not seem, notwithstanding his pushing character, to have done more than earn a bare living.
The fashion of wig-wearing having undergone a change, distress fell upon the wig-makers; and Arkwright, being of a mechanical turn, was consequently induced to turn machine inventor or "conjurer," as the pursuit was then popularly termed. Many attempts were made about that time to invent a spinning-machine, and our barber determined to launch his little bark on the sea of invention with the rest.
Like other self-taught men of the same bias, he had already been devoting his spare time to the invention of a perpetual-motion machine; and from that the transition to a spinning-machine was easy. He followed his experiments so assiduously that he neglected his business, lost the little money he had saved, and was reduced to great poverty. His wife — for he had by this time married — was impatient at what she conceived to be a wanton waste of time and money, and in a moment of sudden wrath she seized upon and destroyed his models, hoping thus to remove the cause of the family privations. Arkwright was a stubborn and enthusiastic man, and he was provoked beyond measure by this conduct of his wife, from whom he immediately separated.
In travelling about the country, Arkwright had become acquainted with a person named Kay, a clockmaker at Warrington, who assisted him in constructing some of the parts of his perpetual-motion machinery. It is supposed that he was informed by Kay of the principle of spinning by rollers; but it is also said that the idea was first suggested to him by accidentally observing a red-hot piece of iron become elongated by passing between iron rollers.
However this may be, the idea at once took firm possession of his mind, and he proceeded to devise the process by which it was to be accomplished, Kay being able to tell him nothing on this point.
Arkwright now abandoned his business of hair collecting, and devoted himself to the perfecting of his machine, a model of which, constructed by Kay under his directions, he set up in the parlour of the Free Grammar School at Preston. Being a burgess of the town, he voted at the contested election at which General Burgoyne was returned; but such was his poverty, and such the tattered state of his dress, that a number of persons subscribed a sum sufficient to have him put in a state fit to appear in the poll-room. The exhibition of his machine in a town where so many workpeople lived by the exercise of manual labour proved a dangerous experiment; ominous growlings were heard outside the school-room from time to time, and Arkwright, — remembering the fate of Kay, who was mobbed and compelled to fly from Lancashire because of his invention of the fly-shuttle, and of poor Hargreaves, whose spinning-jenny had been pulled to pieces only a short time before by a Blackburn mob, — wisely determined on packing up his model and removing to a less dangerous locality. He went accordingly to Nottingham, where he applied to some of the local bankers for pecuniary assistance; and the Messrs. Wright consented to advance him a sum of money on condition of sharing in the profits of the invention. The machine, however, not being perfected so soon as they had anticipated, the bankers recommended Arkwright to apply to Messrs. Strutt and Need, the former of whom was the ingenious inventor and patentee of the stocking-frame. Mr. Strutt at once appreciated the merits of the invention, and a partnership was entered into with Arkwright, whose road to fortune was now clear. The patent was secured in the name of "Richard Arkwright, of Nottingham, clockmaker," and it is a circumstance worthy of note, that it was taken out in 1769, the same year in which Watt secured the patent for his steam-engine.
A cotton-mill was first erected at Nottingham, driven by horses; and another was shortly after built, on a much larger scale, at Cromford, in Derbyshire, turned by a water-wheel, from which circumstance the spinning-machine came to be called the water-frame.
Arkwright's labours, however, were, comparatively speaking, only begun. He had still to perfect all the working details of his machine. It was in his hands the subject of constant modification and improvement, until eventually it was rendered practicable and profitable in an eminent degree. But success was only secured by long and patient labour: for some years, indeed, the speculation was disheartening and unprofitable, swallowing up a very large amount of capital without any result. When success began to appear more certain, then the Lancashire manufacturers fell upon Arkwright's patent to pull it in pieces, as the Cornish miners fell upon Boulton and Watt to rob them of the profits of their steam-engine. Arkwright was even denounced as the enemy of the working people; and a mill which he built near Chorley was destroyed by a mob in the presence of a strong force of police and military. The Lancashire men refused to buy his materials, though they were confessedly the best in the market. Then they refused to pay patent-right for the use of his machines, and combined to crush him in the courts of law. To the disgust of right-minded people, Arkwright's patent was upset. After the trial, when passing the hotel at which his opponents were staying, one of them said, loud enough to be heard by him, "Well, we've done the old shaver at last;" to which he coolly replied, "Never mind, I've a razor left that will shave you all." He established new mills in Lancashire, Derbyshire, and at New Lanark, in Scotland. The mills at Cromford also came into his hands at the expiry of his partnership with Strutt, and the amount and the excellence of his products were such, that in a short time he obtained so complete a control of the trade, that the prices were fixed by him, and he governed the main operations of the other cotton-spinners.
Arkwright was a man of great force of character, indomitable courage, much worldly shrewdness, with a business faculty almost amounting to genius. At one period his time was engrossed by severe and continuous labour, occasioned by the organising and conducting of his numerous manufactories, sometimes from four in the morning till nine at night. At fifty years of age he set to work to learn English grammar, and improve himself in writing and orthography. After overcoming every obstacle, he had the satisfaction of reaping the reward of his enterprise. Eighteen years after he had constructed his first machine, he rose to such estimation in Derbyshire that he was appointed High Sheriff of the county, and shortly after George III. conferred upon him the honour of knighthood. He died in 1792. Be it for good or for evil, Arkwright was the founder in England of the modern factory system, a branch of industry which has unquestionably proved a source of immense wealth to individuals and to the nation.
Last modified 22 December 2007
Do you understand the terms watt and kilowatt? Do you have trouble figuring out how your electric company calculates the amount of energy you consume on a monthly basis? Payless Power, a retail electric company providing cheap electricity in Dallas with low power company rates, can help you understand and make sense of all the terminology associated with electricity.
Electric power is measured in the basic unit of a watt. A watt is the rate at which electricity is used at any particular instant. A kilowatt is equal to 1,000 watts, and using power at a rate of one kilowatt for one hour consumes one kilowatt-hour (kWh). The number of kilowatt-hours you use is totaled on a monthly basis. Simply put, watt-hours measure how much electricity is used over a period of time; dividing the watt-hours used by 1,000 gives the kilowatt-hours used.
Are you still a bit confused? Perhaps understanding the cost of running some common household devices will make this easier to understand. Consider the following examples:
- A 100-watt light bulb used for 500 hours during the month would mean it used 50 kWh.
- A large window air-conditioning unit using 1500 watts for ten hours during the month would equal 15 kWhs used.
- A small window air-conditioning unit using 500 watts of electricity for ten hours during the month would equal 5 kWhs used.
The formula you should have picked up on is:
(Wattage × hours used ÷ 1,000) × price per kWh = cost of electricity used
Electric companies’ rates vary by location, but electricity use is always measured in kWhs. For instance, the average price of residential electricity in the United States in August 2011 was $0.12/kWh, ranging from only $0.082 in Washington State to $0.29 in Hawaii! Hopefully Payless Power, your low-rate retail electric power company, has made some sense of what’s watt and kilowatt!
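The formula and the three appliance examples above can be checked directly. This is a minimal sketch (the function name is illustrative), priced at the August 2011 U.S. average of $0.12/kWh:

```python
def energy_cost(watts, hours, price_per_kwh):
    """Cost of running a device: watts x hours / 1,000 gives kWh, then x price."""
    kwh = watts * hours / 1000.0
    return kwh * price_per_kwh

# The article's examples at $0.12/kWh:
print(round(energy_cost(100, 500, 0.12), 2))   # 100 W bulb, 500 h: 50 kWh, $6.00
print(round(energy_cost(1500, 10, 0.12), 2))   # large AC unit: 15 kWh, $1.80
print(round(energy_cost(500, 10, 0.12), 2))    # small AC unit: 5 kWh, $0.60
```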
A tooth (plural teeth) is a small, calcified, whitish structure found in the jaws (or mouths) of many vertebrates and used to break down food. Some animals, particularly carnivores, also use teeth for hunting or for defensive purposes. The roots of teeth are covered by gums. Teeth are not made of bone, but rather of multiple tissues of varying density and hardness.
The general structure of teeth is similar across the vertebrates, although there is considerable variation in their form and position. The teeth of mammals have deep roots, and this pattern is also found in some fish, and in crocodilians. In most teleost fish, however, the teeth are attached to the outer surface of the bone, while in lizards they are attached to the inner surface of the jaw along one side. In cartilaginous fish, such as sharks, the teeth are attached by tough ligaments to the hoops of cartilage that form the jaw.
Children's tooth development begins while the baby is in the womb. Teething usually occurs between the ages of six and nine months. Children usually have their full set of 20 primary teeth (milk teeth, baby teeth or deciduous teeth) by the age of three years. At about the age of six years, the first permanent teeth erupt (push through the gum).
More U.S. Kids Get High-Radiation Scans, Study Says
MONDAY, Dec. 3 (HealthDay News) -- Increasing numbers of U.S. children undergo diagnostic imaging tests such as MRIs and CT scans, and higher-radiation tests account for a growing proportion of these procedures, researchers report.
Their study of 2001-2009 insurance claims in southern California found that high-radiation procedures, which could raise the risk of cancer years later, are most commonly ordered for hospitalized children or those seen in emergency departments because of abdominal pain, headache and head injury.
Overall, physicians affiliated with a pediatric hospital in San Diego ordered 200,000 diagnostic tests using radiation for 63,000 children during the period reviewed, the researchers said. Almost 8,000 of the children had higher-radiation procedures, such as CT scans, angiography (x-ray of the inside of blood vessels) and/or fluoroscopy (moving images).
Older children and boys were more likely than younger kids and girls to undergo these tests, the study said.
"Our findings may help guide clinical practice to reduce unnecessary imaging-related radiation exposure in youth," said lead researcher Dr. Jeannie Huang, an associate professor in the division of pediatric gastroenterology at Rady Children's Hospital, at the University of California, San Diego. The study's goal was to identify where most of these pediatric imaging procedures are ordered, and for which children.
"We focused in particular on diagnostic imaging procedures associated with higher ionizing radiation, including CT, fluoroscopy and angiography," Huang said. "We found that . . . they were done mostly for gastrointestinal complaints and congenital conditions. Trauma and injuries and neurologic complaints also contributed to the use of these tests."
The report was published online Dec. 3 and will appear in the January print edition of the journal Pediatrics.
According to the U.S. National Cancer Institute, some 5 million to 9 million CT examinations are done each year on U.S. children. Their use in children and adults has increased about eightfold since 1980, growing about 10 percent each year, the agency says.
"Despite the many benefits of CT, a disadvantage is the inevitable radiation exposure," the agency says. Children are more sensitive to radiation exposure than adults because their bodies are still developing. Also, children have a longer life expectancy than adults, meaning more time for cancer to develop, and multiple scans further up the risk for developing cancer.
Over the past decade, awareness of the potential dangers of radiation has led children's hospitals in the United States to limit CT scans, said Dr. Nolan Altman, chief of radiology at Miami Children's Hospital in Florida.
"Our CT numbers are way down," said Altman. "We are doing much less CTs than we did 10 years ago." Many of these are being replaced by ultrasound and MRIs, which don't use radiation, he added.
Also, manufacturers have reduced radiation doses, Altman said. "In general, the dose is 30 to 40 percent less than it used to be," he said.
"For the average child, who has one CT scan or X-ray, parents should not be concerned," Altman added. However, a very sick child might need multiple scans, and "then there more reasons for concern," he said.
Imaging saves lives, said Dr. Marta Schulman, chair of the American College of Radiology Pediatric Imaging Commission. "Even if you believe the worst prophecy that you would get cancer, the chances of dying from the injury are 100 percent if you don't do the CT scan," she said.
Schulman said it is to be expected that most of these tests are done in the hospital, where patients are the sickest. Also, emergency department doctors don't know a child as well as the child's own doctor, so they need these tests to make a diagnosis, she said.
"You could turn this around and say there is less imaging done when you go to your doctor, because your doctor knows you, knows your family, knows your history," Schulman said. "In the emergency department, they don't have that luxury."
The key for doctors is to carefully evaluate the patient and do the test only if it's necessary and at the lowest possible dose of radiation, she said.
Also, proper interpretation of the results is key so that the test doesn't need to be repeated, she said.
Parents can play a role, she added. "Parents should ask about the test and why it is needed, but they shouldn't be so concerned that they avoid a test that is really necessary," Schulman said.
For more on radiation risks for children, visit the U.S. National Cancer Institute.
SOURCES: Jeannie Huang, M.D., M.P.H. associate professor, division of pediatric gastroenterology, Rady Children's Hospital/University of California, San Diego; Marta Schulman, M.D., chair, American College of Radiology Pediatric Imaging Commission; Nolan Altman, M.D., chief, radiology, Miami Children's Hospital; January 2012 Pediatrics
|
<urn:uuid:326ac137-4800-44a7-b421-80ec2ede337e>
|
CC-MAIN-2013-20
|
http://www.bridgeporthospital.org/healthlibrary/default.aspx?view=doc&pageid=671105&typeid=6&cTOCKey=12SCC53&cContentSource=healthnews&fontsize=3
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964683
| 1,073
| 2.75
| 3
|
A crowd gathers outside a bakery in Cairo, Egypt, that sells government-subsidized bread.
This story first appeared on the TomDispatch website.
What can a humble loaf of bread tell us about the world?
The answer is: far more than you might imagine. For one thing, that loaf can be "read" as if it were a core sample extracted from the heart of a grim global economy. Looked at another way, it reveals some of the crucial fault lines of world politics, including the origins of the Arab spring that has now become a summer of discontent.
Consider this: Between June 2010 and June 2011, world grain prices almost doubled. In many places on this planet, that proved an unmitigated catastrophe. In those same months, several governments fell, rioting broke out in cities from Bishkek, Kyrgyzstan, to Nairobi, Kenya, and most disturbingly, three new wars began in Libya, Yemen, and Syria. Even on Egypt's Sinai Peninsula, Bedouin tribes are now in revolt against the country's interim government and manning their own armed roadblocks.
And in each of these situations, the initial trouble was traceable, at least in part, to the price of that loaf of bread. If these upheavals were not "resource conflicts" in the formal sense of the term, think of them at least as bread-triggered upheavals.
Growing Climate Change in a Wheat Field
Bread has classically been known as the staff of life. In much of the world, you can't get more basic, since that daily loaf often stands between the mass of humanity and starvation. Still, to read present world politics from a loaf of bread, you first have to ask: of what exactly is that loaf made? Water, salt, and yeast, of course, but mainly wheat, which means when wheat prices increase globally, so does the price of that loaf—and so does trouble.
To imagine that there's nothing else in bread, however, is to misunderstand modern global agriculture. Another key ingredient in our loaf—call it a "factor of production"—is petroleum. Yes, crude oil, which appears in our bread as fertilizer and tractor fuel. Without it, wheat wouldn't be produced, processed, or moved across continents and oceans.
And don't forget labor. It's an ingredient in our loaf, too, but not perhaps in the way you might imagine. After all, mechanization has largely displaced workers from the field to the factory. Instead of untold thousands of peasants planting and harvesting wheat by hand, industrial workers now make tractors and threshers, produce fuel, chemical pesticides, and nitrogen fertilizer, all rendered from petroleum and all crucial to modern wheat growing. If the labor power of those workers is transferred to the wheat field, it happens in the form of technology. Today, a single person driving a huge $400,000 combine, burning 200 gallons of fuel daily, guided by computers and GPS satellite navigation, can cover 20 acres an hour, and harvest 8,000 to 10,000 bushels of wheat in a single day.
Next, without financial capital—money—our loaf of bread wouldn't exist. It's necessary to purchase the oil, the fertilizer, that combine, and so on. But financial capital may indirectly affect the price of our loaf even more powerfully. When there is too much liquid capital moving through the global financial system, speculators start to bid up the price of various assets, including all the ingredients in bread. This sort of speculation naturally contributes to rising fuel and grain prices.
The final ingredients come from nature: sunlight, oxygen, water, and nutritious soil, all in just the correct amounts and at just the right time. And there's one more input that can't be ignored, a different kind of contribution from nature: climate change, just now really kicking in, and increasingly the key destabilizing element in bringing that loaf of bread disastrously to market.
When these ingredients mix in a way that sends the price of bread soaring, politics enters the picture. Consider this, for instance: The upheavals in Egypt lay at the heart of the Arab Spring. Egypt is also the world's single largest wheat importer, followed closely by Algeria and Morocco. Keep in mind as well that the Arab Spring started in Tunisia when rising food prices, high unemployment, and a widening gap between rich and poor triggered deadly riots and finally the flight of the country's autocratic ruler Zine Ben Ali. His last act was a vow to reduce the price of sugar, milk, and bread—and it was too little too late.
With that, protests began in Egypt and the Algerian government ordered increased wheat imports to stave off growing unrest over food prices. As global wheat prices surged by 70 percent between June and December 2010, bread consumption in Egypt started to decline under what economists termed "price rationing." And that price kept rising all through the spring of 2011. By June, wheat cost 83 percent more than it had a year before. During the same time frame, corn prices surged by a staggering 91 percent. Egypt is the world's fourth-largest corn importer. When not used to make bread, corn is often employed as a food additive and to feed poultry and livestock. Algeria, Syria, Morocco, and Saudi Arabia are among the top 15 corn importers. As those wheat and corn prices surged, it was not just the standard of living of the Egyptian poor that was threatened, but their very lives as climate-change driven food prices triggered political violence.
In Egypt, food is a volatile political issue. After all, one in five Egyptians live on less than $1 a day and the government provides subsidized bread to 14.2 million people in a population of 83 million. Last year, overall food-price inflation in Egypt was running at more than 20 percent. This had an instant and devastating impact on Egyptian families, who spend on average 40 percent of their often exceedingly meager monthly incomes simply feeding themselves.
Against this backdrop, World Bank President Robert Zoellick fretted that the global food system was "one shock away from a full-fledged crisis." And if you want to trace that near full-fledged crisis back to its environmental roots, the place to look is climate change, the increasingly extreme and devastating weather being experienced across this planet.
When it comes to bread, it went like this: In the summer of 2010, Russia, one of the world's leading wheat exporters, suffered its worst drought in 100 years. Known as the Black Sea Drought, this extreme weather triggered fires that burnt down vast swathes of Russian forests, bleached farmlands, and damaged the country's breadbasket wheat crop so badly that its leaders (urged on by western grain speculators) imposed a year-long ban on wheat exports. As Russia is among the top four wheat exporters in any year, this caused prices to surge upward.
At the same time, massive flooding occurred in Australia, another significant wheat exporter, while excessive rains in the American Midwest and Canada damaged corn production. Freakishly massive flooding in Pakistan, which put some 20 percent of that country under water, also spooked markets and spurred on the speculators.
|
<urn:uuid:dbbc2c66-f51a-4e1b-a252-cd13ef3d5ddd>
|
CC-MAIN-2013-20
|
http://www.motherjones.com/politics/2011/07/climate-change-food-crisis-price-bread-political-instability?page=1
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.96156
| 1,511
| 2.953125
| 3
|
"Getting Moore from Solar Cells" by David J. Norris and Eray S. Aydil, Science 2012, 238, 625.
After describing some new and interesting materials for solar cells, the authors state:
"Although this sounds exotic, these materials are known to behave like semiconductors, allowing them to absorb the sunlight and create electrons"At the risk of sounding pedantic, electrons are not created--nor are they destroyed. They are there in the dark in the beginning, and they are still there after the lights go out. The electrons are merely excited by the light.
Photons knock up electrons and then leave the seen.
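The point can be made numerically: a photon doesn't create an electron, it excites an existing one, and only if the photon's energy hc/λ exceeds the semiconductor's band gap. A minimal sketch (the ~1.1 eV gap of crystalline silicon is assumed here purely for illustration):

```python
# Photon energy vs. semiconductor band gap: light excites (never creates)
# electrons, and only when the photon carries enough energy.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

BAND_GAP_EV = 1.1  # approximate gap of crystalline silicon (assumption)

for nm in (400, 700, 1500):  # violet, red, infrared
    e = photon_energy_ev(nm)
    print(f"{nm} nm photon: {e:.2f} eV -> excites electron: {e >= BAND_GAP_EV}")
```

Visible light clears silicon's gap comfortably; a 1500 nm infrared photon, at about 0.83 eV, does not — which is exactly why band-gap choice matters for the new materials the article describes.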
|
<urn:uuid:6940873f-ee6a-4952-8f34-d93720050feb>
|
CC-MAIN-2013-20
|
http://acrazychicken.blogspot.com/2012/11/correcting-misconception.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.937162
| 132
| 2.75
| 3
|
What's the Latest Development?
This morning, scientists at the CERN laboratories in Geneva, Switzerland, announced they have found the Higgs boson, the world's most sought-after particle. "A Higgs boson with a mass of around 125 to 126 gigaelectronvolts (GeV) was seen separately by the twin CMS and ATLAS detectors at the Large Hadron Collider, each with a confidence level of 5 sigma, or standard deviations, the heads of the experiments announced today at CERN." The level of 5 sigma is a rigorous statistical benchmark meaning that the evidence of the Higgs particle has just a 5-in-10 million chance of being a fluke.
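The 5-sigma benchmark quoted above can be converted into a probability with the complementary error function. A short sketch (the two-sided convention is assumed here, which gives roughly the "5-in-10 million" figure in the article; particle physicists conventionally quote the one-sided value, closer to 3-in-10 million):

```python
import math

def sigma_to_pvalue(n_sigma, two_sided=True):
    """Probability that a normally distributed fluctuation lies at
    least n_sigma standard deviations from the mean."""
    p = math.erfc(n_sigma / math.sqrt(2.0))  # two-sided tail probability
    return p if two_sided else p / 2.0

print(f"5 sigma, two-sided: {sigma_to_pvalue(5):.2e}")        # ~5.7e-07
print(f"5 sigma, one-sided: {sigma_to_pvalue(5, False):.2e}") # ~2.9e-07
```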
What's the Big Idea?
The Higgs boson is the last remaining particle predicted by the Standard Model, the set of equations which describe how the Universe functions on a subatomic level. Many see the discovery of the Higgs as a triumph of contemporary physics and a confirmation of our ability, as a species, to grasp the strange mechanics of the Universe. Still, the results from CERN are preliminary, and whether the new particle exactly fits the Standard Model's predictions remains unknown. Since the Model does not account for either gravity or dark matter, scientists will continue to gather data on the Higgs to determine the accuracy of the Standard Model.
Photo credit: CERN
|
<urn:uuid:74f45b03-2a4d-4258-ba88-4e966d796a85>
|
CC-MAIN-2013-20
|
http://bigthink.com/ideafeed/scientists-announce-they-found-the-higgs-particle
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.917133
| 279
| 2.859375
| 3
|
If we assume that the individual has an indisputable right to life, we must concede that he has a similar right to the enjoyment of the products of his labor. This we call a property right. The absolute right to property follows from the original right to life because one without the other is meaningless; the means to life must be identified with life itself. If the state has a prior right to the products of one’s labor, his right to existence is qualified . . . no such prior rights can be established, except by declaring the state the author of all rights. . . . We object to the taking of our property by organized society just as we do when a single unit of society commits the act. In the latter case we unhesitatingly call the act robbery, a malum in se. It is not the law which in the first instance defines robbery, it is an ethical principle, and this the law may violate but not supersede. If by the necessity of living we acquiesce to the force of law, if by long custom we lose sight of the immorality, has the principle been obliterated? Robbery is robbery, and no amount of words can make it anything else.
Contributed by: peter
|
<urn:uuid:2fe61ec6-e2f1-47f5-9402-ef205852d782>
|
CC-MAIN-2013-20
|
http://blog.gaiam.com/quotes/authors/frank-chodorov
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939759
| 252
| 2.515625
| 3
|
Webster's Revised Unabridged Dictionary (1913 + 1828)
U"ri*nose (?), U"ri*nous (?), a. [Cf. F. urineux. See Urine.] Of or pertaining to urine, or partaking of its qualities; having the character or odor of urine; similar to urine.
U"rite (?), n. [Gr. tail.] (Zoöl.) One of the segments of the abdomen or post-abdomen of arthropods.
U"rith (?), n. The bindings of a hedge. [Obs. or Prov. Eng.]
Urn (?), n. [OE. urne, L. urna; perhaps fr. urere to burn, and so called as being made of burnt clay (cf. East): cf. F. urne.]
1. A vessel of various forms, usually a vase furnished with a foot or pedestal, employed for different purposes, as for holding liquids, for ornamental uses, for preserving the ashes of the dead after cremation, and anciently for holding lots to be drawn.
A rustic, digging in the ground by Padua, found an urn, or earthen pot, in which there was another urn.
His scattered limbs with my dead body burn,
And once more join us in the pious urn.
2. Fig.: Any place of burial; the grave.
Or lay these bones in an unworthy urn,
Tombless, with no remembrance over them.
3. (Rom. Antiq.) A measure of capacity for liquids, containing about three gallons and a half, wine measure. It was half the amphora, and four times the congius.
4. (Bot.) A hollow body shaped like an urn, in which the spores of mosses are contained; a spore case; a theca.
5. A tea urn. See under Tea.
Urn mosses (Bot.), the order of true mosses; -- so called because the capsules of many kinds are urn-shaped.
Urn, v. t. To inclose in, or as in, an urn; to inurn.
When horror universal shall descend,
And heaven's dark concave urn all human race.
Urn"al (?), a. Of or pertaining to an urn; effected by an urn or urns. Urnal interments."
Sir T. Browne.
Urn"ful (?), n.; pl. Urnfuls (). As much as an urn will hold; enough to fill an urn.
Urn"-shaped` (?), a. Having the shape of an urn; as, the urn-shaped capsules of some mosses.
U"ro- (?). A combining form fr. Gr. o'y^ron, urine.
U"ro-. A combining form from Gr. o'yra`, the tail, the caudal extremity.
U`ro*bi"lin (?), n. [1st uro- + bile + -in.] (Physiol. Chem.) A yellow pigment identical with hydrobilirubin, abundant in the highly colored urine of fever, and also present in normal urine. See Urochrome.
U"ro*cele (?), n. [1st uro + Gr. tumor.] (Med.) A morbid swelling of the scrotum due to extravasation of urine into it.
U`ro*cer"a*ta (?), n. pl. [NL., fr. Gr. tail + , , horn.] (Zoöl.) A division of boring Hymenoptera, including Tremex and allied genera. See Illust. of Horntail.
U"ro*chord (?), n. [2d uro- + chord.] (Zoöl.) The central axis or cord in the tail of larval ascidians and of certain adult tunicates. [Written also urocord.]
U`ro*chor"da (?), n. pl. [NL. See Urochord.] (Zoöl.) Same as Tunicata.
U`ro*chor"dal (?), a. (Zoöl.) Of or pertaining to the Urochorda.
U"ro*chrome (?), n. [1st uro- + Gr. color.] (Physiol. Chem.) A yellow urinary pigment, considered by Thudichum as the only pigment present in normal urine. It is regarded by Maly as identical with urobilin.
U"rochs (?), n. (Zoöl.) See Aurochs.
U"ro*cord (?), n. (Zoöl.) See Urochord.
U"ro*cyst (?), n. [1st uro- + cyst.] (Anat.) The urinary bladder.
U`ro*de"la (?), n. pl. [NL.; Gr. tail + visible.] (Zoöl.) An order of amphibians having the tail well developed and often long. It comprises the salamanders, tritons, and allied animals.
U"ro*dele (?), n. (Zoöl.) One of the Urodela.
U`ro*de"li*an (?), a. (Zoöl.) Of or pertaining to the Urodela. -- n. One of the Urodela.
U`ro*e*ryth"rin (?), n. [See 1st Uro-, and Erythrin.] (Physiol. Chem.) A reddish urinary pigment, considered as the substance which gives to the urine of rheumatism its characteristic color. It also causes the red color often seen in deposits of urates.
U`ro*gas"tric (?), a. [2d uro- + gastric.] (Zoöl.) Behind the stomach; -- said of two lobes of the carapace of certain crustaceans.
U`ro*gen"i*tal (?), a. [1st uro- + genital.] (Anat.) Same as Urinogenital.
U`ro*glau"cin (?), n. [1st uro- + L. glaucus bright.] (Physiol. Chem.) A body identical with indigo blue, occasionally found in the urine in degeneration of the kidneys. It is readily formed by oxidation or decomposition of indican.
U`ro*hæm"a*tin (?), n. [1st uro- + hæmatin.] (Physiol. Chem.) Urinary hæmatin; -- applied to the normal coloring matter of the urine, on the supposition that it is formed either directly or indirectly (through bilirubin) from the hæmatin of the blood. See Urochrome, and Urobilin.
U`ro*hy"al (?), a. [2d uro- + the Gr. letter (Anat.) Of or pertaining to one or more median and posterior elements in the hyoidean arch of fishes. -- n. A urohyal bone or cartilage.
U*rol"o*gy (?), n. [1st uro- + -logy.] (Med.) See Uronology.
U"ro*mere (?), n. [2d uro- + -mere.] (Zoöl.) Any one of the abdominal segments of an arthropod.
U`ro*nol"o*gy (?), n. [Gr. urine + -logy.] (Med.) That part of medicine which treats of urine.
U"ro*pod (?), n. [2d uro- + -pod.] (Zoöl.) Any one of the abdominal appendages of a crustacean, especially one of the posterior ones, which are often larger than the rest, and different in structure, and are used chiefly in locomotion. See Illust. of Crustacea, and Stomapoda.
U*rop"o*dal (?), a. (Zoöl.) Of or pertaining to a uropod.
U`ro*po*et"ic (?), a. [1st uro- + Gr. to make.]
1. (Med.) Producing, or favoring the production of, urine.
2. (Zoöl.) Of, pertaining to, or designating, a system of organs which eliminate nitrogenous waste matter from the blood of certain invertebrates.
U`ro*pyg"i*al (?), a. [See Uropygium.] (Anat.) Of or pertaining to the uropygium, or prominence at the base of the tail feathers, in birds.
Uropygial gland, a peculiar sebaceous gland at the base of the tail feathers in most birds. It secretes an oily fluid which is spread over the feathers by preening.
U`ro*pyg"i*um (?), n. [NL., fr. Gr. , (corrupted form) ; the end of the os sacrum + rump.] (Anat.) The prominence at the posterior extremity of a bird's body, which supports the feathers of the tail; the rump; -- sometimes called pope's nose.
U`ro*sa"cral (?), a. [2d uro- + sacral.] (Anat.) Of or pertaining to both the caudal and sacral parts of the vertebral column; as, the urosacral vertebræ of birds.
U*ros"co*py (?), n. [1st uro- + -scopy: cf. F. uroscopie.] The diagnosis of diseases by inspection of urine.
Sir T. Browne.
U"ro*some (?), n. [2d uro- + -some body.] (Zoöl.) The abdomen, or post-abdomen, of arthropods.
U"ro*stege (?), n. [2d uro- + Gr. roof.] (Zoöl.) One of the plates on the under side of the tail of a serpent.
U*ros"te*on (?), n.; pl. L. Urostea (#), E. Urosteons (#). [NL., fr. Gr. the tail + a bone.] (Anat.) A median ossification back of the lophosteon in the sternum of some birds.
U`ro*ster"nite (?), n. [2d uro- + sternum.] (Zoöl.) The sternal, or under piece, of any one of the uromeres of insects and other arthropods.
U"ro*style (?), n. [2d uro- + Gr. a pillar.] (Anat.) A styliform process forming the posterior extremity of the vertebral column in some fishes and amphibians.
U"rox (?), n. [See Aurochs, and cf. Urus.] (Zoöl.) The aurochs.
U*rox"a*nate (?), n. (Chem.) A salt of uroxanic acid.
U`rox*an"ic (?), a. [Uric + alloxan.] (Chem.) Pertaining to, or designating, an acid, C5H8N4O6, which is obtained, as a white crystalline substance, by the slow oxidation of uric acid in alkaline solution.
U`ro*xan"thin (?), n. [1st uro- + xanthin.] (Physiol. Chem.) Same as Indican.
Ur*rho"din (?), n. [1st uro- + Gr. a rose.] (Physiol. Chem.) Indigo red, a product of the decomposition, or oxidation, of indican. It is sometimes found in the sediment of pathological urines. It is soluble in ether or alcohol, giving the solution a beautiful red color. Also called indigrubin.
Ur"ry (?), n. [Cf. Gael. uir, uireach, mold, clay.] A sort of blue or black clay lying near a vein of coal.
Ur"sa (?), n. [L. ursa a she-bear, also, a constellation, fem. of ursus a bear. Cf. Arctic.] (Astron.) Either one of the Bears. See the Phrases below.
Ursa Major [L.], the Great Bear, one of the most conspicuous of the northern constellations. It is situated near the pole, and contains the stars which form the Dipper, or Charles's Wain, two of which are the Pointers, or stars which point towards the North Star. -- Ursa Minor [L.], the Little Bear, the constellation nearest the north pole. It contains the north star, or polestar, which is situated in the extremity of the tail.
Ur"sal (?), n. (Zoöl.) The ursine seal. See the Note under 1st Seal.
Ur"si*form (?), a. [L. ursus, ursa, a bear + -form.] Having the shape of a bear.
Ur"sine (?), a. [L. ursinus, from ursus a bear. See Ursa.] Of or pertaining to a bear; resembling a bear.
Ursine baboon. (Zoöl.) See Chacma. -- Ursine dasyure (Zoöl.), the Tasmanian devil. -- Ursine howler (Zoöl.), the araguato. See Illust. under Howler. -- Ursine seal. (Zoöl.) See Sea bear, and the Note under 1st Seal.
Ur"son (?), n. [Cf. Urchin.] (Zoöl.) The Canada porcupine. See Porcupine.
Ur"suk (?), n. (Zoöl.) The bearded seal.
Ur"su*la (?), n. (Zoöl.) A beautiful North American butterfly (Basilarchia, ∨ Limenitis, astyanax). Its wings are nearly black with red and blue spots and blotches. Called also red-spotted purple.
Ur"su*line (?), n. [Cf. F. ursuline.] (R. C. Ch.) One of an order of nuns founded by St. Angela Merici, at Brescia, in Italy, about the year 1537, and so called from St. Ursula, under whose protection it was placed. The order was introduced into Canada as early as 1639, and into the United States in 1727. The members are devoted entirely to education.
Ur"su*line, a. Of or pertaining to St. Ursula, or the order of Ursulines; as, the Ursuline nuns.
Ur"sus (?), n. [L., a bear.] (Zoöl.) A genus of Carnivora including the common bears.
Ur*ti"ca (?), n. [L., a nettle.] (Bot.) A genus of plants including the common nettles. See Nettle, n.
Ur`ti*ca"ceous (?), a. (Bot.) Of or pertaining to a natural order (Urticaceæ) of plants, of which the nettle is the type. The order includes also the hop, the elm, the mulberry, the fig, and many other plants.
Ur"tic*al (?), a. Resembling nettles; -- said of several natural orders allied to urticaceous plants.
Ur`ti*ca"ri*a (?), n. [NL. See Urtica.] (Med.) The nettle rash, a disease characterized by a transient eruption of red pimples and of wheals, accompanied with a burning or stinging sensation and with itching; uredo.
Ur"ti*cate (?), v. t. & i. [imp. & p. p. Urticated (?); p. pr. & vb. n. Urticating.] To sting with, or as with, nettles; to irritate; to annoy.
G. A. Sala.
Ur`ti*ca"tion (?), n. (Med.) The act or process of whipping or stinging with nettles; -- sometimes used in the treatment of paralysis.
U*ru*bu" (?), n. [Cf. Pg. urub\'a3 a certain Brazilian bird.] (Zoöl.) The black vulture (Catharista atrata). It ranges from the Southern United States to South America. See Vulture.
U"rus (?), n. [L.; of Teutonic origin. See Aurochs.] (Zoöl.) A very large, powerful, and savage extinct bovine animal (Bos urus ∨ primigenius) anciently abundant in Europe. It appears to have still existed in the time of Julius Cæsar. It had very large horns, and was hardly capable of domestication. Called also, ur, ure, and tur.
Ur"va (?), n. [NL.] (Zoöl.) The crab-eating ichneumon (Herpestes urva), native of India. The fur is black, annulated with white at the tip of each hair, and a white streak extends from the mouth to the shoulder.
Us (?), pron. [OE. us, AS. s; akin to OFries. & OS. s, D. ons, G. uns, Icel. & Sw. oss, Dan. os, Goth. uns, L. nos we, us, Gr. we, Skr. nas us. . Cf. Nostrum, Our.] The persons speaking, regarded as an object; ourselves; -- the objective case of we. See We. "Tell us a tale."
Give us this day our daily bread.
Matt. vi. 11.
Us"a*ble (?), a. Capable of being used.
Us"age (?), n. [F. usage, LL. usaticum. See Use.]
1. The act of using; mode of using or treating; treatment; conduct with respect to a person or a thing; as, good usage; ill usage; hard usage.
Is prisoner to the bishop here, at whose hands
He hath good usage and great liberty.
2. Manners; conduct; behavior. [Obs.]
A gentle nymph was found,
Hight Astery, excelling all the crew
In courteous usage.
3. Long-continued practice; customary mode of procedure; custom; habitual use; method.
It has now been, during many years, the grave and decorous usage of Parliaments to hear, in respectful silence, all expressions, acceptable or unacceptable, which are uttered from the throne.
4. Customary use or employment, as of a word or phrase in a particular sense or signification.
5. Experience. [Obs.]
In eld [old age] is both wisdom and usage.
Syn. -- Custom; use; habit. -- Usage, Custom. These words, as here compared, agree in expressing the idea of habitual practice; but a custom is not necessarily a usage. A custom may belong to many, or to a single individual. A usage properly belongs to the great body of a people. Hence, we speak of usage, not of custom, as the law of language. Again, a custom is merely that which has been often repeated, so as to have become, in a good degree, established. A usage must be both often repeated and of long standing. Hence, we speak of a "new custom," but not of a "new usage." Thus, also, "the customs of society" is not so strong an expression as "the usages of society." "Custom, a greater power than nature, seldom fails to make them worship." Locke. "Of things once received and confirmed by use, long usage is a law sufficient." Hooker. In law, the words usage and custom are often used interchangeably, but the word custom also has a technical and restricted sense. See Custom, n., 3.
|
<urn:uuid:d2505c85-5b92-4412-b650-63cfb21f5867>
|
CC-MAIN-2013-20
|
http://machaut.uchicago.edu/?action=search&resource=Webster%27s&page=1587&quicksearch=on
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.804909
| 4,390
| 2.8125
| 3
|
phenopticon writes "Researchers at Berkeley are attempting to revive the extinct passenger pigeon in order to set up a remote island theme park full of resurrected semi-modern extinct animals. (Well, maybe not that last part.) Quoting: 'About 1,500 passenger pigeons inhabit museum collections. They are all that's left of a species once perceived as a limitless resource. The birds were shipped in boxcars by the tons, sold as meat for 31 cents per dozen, and plucked for mattress feathers. But in a mere 25 years, the population shrank from billions to thousands as commercial hunters decimated nesting flocks. Martha, the last living bird, took her place under museum glass in 1914. ... Ben Novak doesn't believe the story should end there. The 26-year-old genetics student is convinced that new technology can bring the passenger pigeon back to life. "This whole idea that extinction is forever is just nonsense," he says. Novak spent the last five years working to decipher the bird's genes, and now he has put his graduate studies on hold to pursue a goal he'd once described in a junior high school fair presentation: de-extinction. ... Using next-generation sequencing, scientists identified the passenger pigeon's closest living relative: Patagioenas fasciata, the ubiquitous band-tailed pigeon of the American west. This was an important step. The short, mangled DNA fragments from the museums' passenger pigeons don't overlap enough for a computer to reassemble them, but the modern band-tailed pigeon genome could serve as a scaffold. Mapping passenger pigeon fragments onto the band-tailed sequence would suggest their original order."
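The scaffolding step the article describes — ordering short, mangled museum-DNA fragments by matching them against a close living relative's genome — can be caricatured in a few lines. This is only an illustrative sketch with made-up sequences; real ancient-DNA pipelines use sophisticated gapped aligners, not this naive exhaustive search:

```python
# Toy illustration of reference scaffolding: each short fragment is placed
# at its best ungapped match against a stand-in "band-tailed pigeon"
# reference, suggesting the fragments' original order.

def best_placement(fragment, reference):
    """Return (position, mismatches) of the best ungapped placement."""
    best = (None, len(fragment) + 1)
    for i in range(len(reference) - len(fragment) + 1):
        window = reference[i:i + len(fragment)]
        mm = sum(a != b for a, b in zip(fragment, window))
        if mm < best[1]:
            best = (i, mm)
    return best

reference = "ACGTTGACCTGAAACGTTAGGCT"    # stand-in relative's sequence
fragments = ["TGACC", "AACGT", "AGGCT"]  # stand-in museum-DNA reads

for frag in fragments:
    pos, mm = best_placement(frag, reference)
    print(f"{frag} -> position {pos}, {mm} mismatch(es)")
```

Sorting fragments by their placement position recovers an ordering the fragments alone (too short and non-overlapping to assemble) could not provide — which is the whole point of using the band-tailed genome as a scaffold.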
|
<urn:uuid:a3fcb577-d082-4f7f-991b-b95b470bb21d>
|
CC-MAIN-2013-20
|
http://science.slashdot.org/story/13/03/15/1639254/berkeley-scientists-plan-to-jurassic-park-some-extinct-pigeons-back-to-life?sdsrc=next
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.971055
| 341
| 2.734375
| 3
|
If you want to spend some high-quality time with your children, look no farther than your backyard or community garden.
Gardening with your children offers an opportunity to relax, be active and have fun together while preparing them for life.
The National Gardening Association surveyed garden program leaders and found that gardening improves the following characteristics in children:
Gardening not only helps children gain lifelong skills, it also provides an excellent way to increase physical activity for parents and children. Raking and bagging leaves, digging, spading, tilling, laying sod and general gardening can help you burn 160 to 200 calories for every 30 minutes of activity.
Have a picky eater in the family? Gardening may be your answer. Research shows that children are more likely to eat their fruits and vegetables (or at least try them) if they help grow them. Gardening also provides a mini nutrition lesson for your kids. Be sure to discuss how plants, like people, need food and water to grow and stay healthy.
What activities can my child do in the garden? Children can help with nearly any gardening task, such as planting the seeds, watering the plants and picking the food. Here is a list of foods that are easy for kids to grow:
Nurturing their plants teaches children a sense of responsibility and gives them a feeling of accomplishment.
Starting a garden does not have to be a huge expense. The main idea is to simply get outside, dig in the dirt and see what you can grow together.
In case you’re looking for a garden plot, there are still four left to rent at the Valley City Community Gardens. Contact Ellen at the Barnes County Extension Office, (701) 845-8549, or firstname.lastname@example.org for more information.
For more gardening tips and helpful nutrition information, visit www.ndsu.edu/eatsmart or see these publications:
*“Gardening with Kids A Win-Win Opportunity,” www.ag.ndsu.edu/foodwise/eatsmart/2010-eat-smart-play-hard-magazine/gard...
*"Gardening with Children," www.ag.ndsu.edu/pubs/plantsci/hortcrop/fn1372.html
*“Garden with Your Kids,” www.eatright.org/Public/content.aspx?id=6442463750&terms=gardening
Sources: Abby Plucker, NDSU student dietitian, and Julie Garden-Robinson, NDSU Extension food and nutrition specialist
This data set consists of a subset of a 1-degree gridded global freshwater wetlands database (Stillwell-Soller et al. 1995). This subset was created for the study area of the Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) in South America (i.e., 10° N to 25° S, 30° to 85° W). The data are in ASCII GRID format.
The global freshwater wetlands database was assembled from two data sets: Aselmann and Crutzen's (1989) wetlands data set and Klinger's political Alaska data set (pers. comm. to L. M. Stillwell-Soller, 1995). The aim of Stillwell-Soller's global data set was to provide an accurate, comprehensive and uniform set of files for convenient specification of wetlands in global climate models. The main source of data was Aselmann and Crutzen's global maps of percent cover for a variety of wetlands categories at 2.5-degree latitude by 5-degree longitude resolution. There was some reorganization for seasonally varying categories. Aselmann and Crutzen's data were interpolated to a standard 1-degree by 1-degree grid through bilinear interpolation. Their data were geographically complete except for the Alaskan region, for which Klinger's data set provided values.
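The regridding step described above — resampling coarse 2.5-degree by 5-degree cells onto a 1-degree grid via bilinear interpolation — can be sketched as follows. The grid coordinates and percent-cover values here are illustrative, not taken from the actual data set.

```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Bilinearly interpolate f at (x, y) given values at the four
    corners (x0, y0), (x1, y0), (x0, y1), (x1, y1)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    # Interpolate along x on the bottom and top rows, then along y.
    bottom = f00 * (1 - tx) + f10 * tx
    top = f01 * (1 - tx) + f11 * tx
    return bottom * (1 - ty) + top * ty

# Illustrative example: percent-wetland values at the corners of one
# coarse 5-degree x 2.5-degree cell, sampled at a 1-degree grid point.
value = bilinear(x=1.0, y=1.0, x0=0.0, x1=5.0, y0=0.0, y1=2.5,
                 f00=10.0, f10=30.0, f01=20.0, f11=40.0)
print(round(value, 2))  # → 18.0
```

The same weighting, applied at every 1-degree grid point falling inside each coarse cell, reproduces the kind of smooth downscaling described in the documentation.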
More information can be found at ftp://daac.ornl.gov/data/lba/land_use_land_cover_change/soller_wetlands/comp/soller_readme.pdf.
LBA was a cooperative international research initiative led by Brazil. NASA was a lead sponsor for several experiments. LBA was designed to create the new knowledge needed to understand the climatological, ecological, biogeochemical, and hydrological functioning of Amazonia; the impact of land use change on these functions; and the interactions between Amazonia and the Earth system. More information about LBA can be found at http://www.daac.ornl.gov/LBA/misc_amazon.html.
Cite this data set as follows:
Stillwell-Soller, L. M., L. F. Klinger, D. Pollard, and S. L. Thompson. 2003. LBA Regional Freshwater Wetlands, 1-Degree (Stillwell-Soller et al.). Available on-line [http://daac.ornl.gov] from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A. doi:10.3334/ORNLDAAC/674.
Aselmann, I., and P. J. Crutzen. 1989. Global distribution of natural freshwater wetlands and rice paddies, their net primary productivity, seasonality and possible methane emissions. Journal of Atmospheric Chemistry 8:307-358.
Stillwell-Soller, L. M., L. F. Klinger, D. Pollard, and S. L. Thompson. 1995. The Global Distribution of Freshwater Wetlands. TN-416STR. National Center for Atmospheric Research, Boulder, Colorado, U.S.A. Available on-line [http://www.cisl.ucar.edu/isg/].
Information about the data format, wetland classification, and the procedure used to create the LBA subset are in the following file: ftp://daac.ornl.gov/data/lba/land_use_land_cover_change/soller_wetlands/comp/soller_readme.pdf
by Aymenn Jawad Al-Tamimi and Oskar Svadkovsky*
It’s become an article of faith among policy makers and analysts in the West that Syria is a nation of minorities. Various sources put the share of non-Sunni Muslim minorities at around one quarter of the population. These minorities are believed to constitute the bulk of the support base of the Syrian regime. Some ventured as far as to suggest that the regime was deliberately stoking sectarian tensions with the massacres in Houla and Qubeir in order to consolidate its minority support base.
The commonly accepted percentages of Syrian minorities are: Alawites and Shia — 13%, Christians — 10%, and Druze — 3%. Syria, however, does not collect or publish data related to the sectarian composition of its population and trying to track the origin of common estimates usually leads nowhere.
For example, all observers commenting on Syria believe that Syrian Druze live primarily in Jabal al Druze and constitute 3% of the Syrian population. This claim, however, does not square with the results of Syria's last population census. According to the census, the province of Sweida, where Jabal al Druze is located, had only 313,231 inhabitants in 2004 against Syria's total population of 17,920,844. This makes for 1.7% and not 3% of the population. On top of this, in 2004 the birth rate of Sweida stood at 1.7% against the national average of 2.5%. At this rate, discounting migration flows between Syrian provinces, by 2012 Sweida should have already shrunk to 1.6%, including not only the Druze but also a sizeable Christian community in the city of Sweida and some Muslim population.
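The arithmetic behind those figures can be checked directly. The census numbers and birth rates are the ones cited in the article; the projection below assumes simple compound growth and ignores migration, which is an illustrative simplification rather than the authors' stated method.

```python
# 2004 census figures as cited in the article.
sweida_2004 = 313_231
syria_2004 = 17_920_844
share_2004 = sweida_2004 / syria_2004
print(f"Sweida share in 2004: {share_2004:.1%}")  # about 1.7%

# Project both populations to 2012 at their respective birth rates
# (1.7% for Sweida, 2.5% nationally), compounded annually.
years = 2012 - 2004
sweida_2012 = sweida_2004 * (1 + 0.017) ** years
syria_2012 = syria_2004 * (1 + 0.025) ** years
print(f"Projected share in 2012: {sweida_2012 / syria_2012:.1%}")  # about 1.6%
```

Under these assumptions the province's share does indeed fall from roughly 1.7% to roughly 1.6% over the eight years, consistent with the article's claim.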
Activists in Sweida often explain the low level of Druze participation in the Syrian uprising by widespread emigration of young Druze. Many young Druze have left the unemployment-stricken province for greener pastures. If they left for Damascus and other bigger cities, this could compensate for the decline of Sweida’s share in the general population. The contention that Syrian Druze remain concentrated in Jabal al Druze would be still wrong, though. Yet, according to the same sources, many of these young people have emigrated out of the country altogether. If true, it leaves almost a half of the estimated Druze population unconfirmed.
Another case in point is that of Syrian Christians, who are generally believed to have declined from 14% in 1943 to 10% today. Syria Comment is one of the most comprehensive blogs and link aggregators on Syria. One of its contributors, Ehsani, recently estimated that Christians make up only between 4% and 5% of Syria's population. Ehsani attributed this dramatic decline, again, to emigration and anemic birth rates.
Ehsani’s research into the subject was triggered by a conversation with a priest in Aleppo who remarked on his futile attempts to dissuade young Christians from emigrating. It turned out that Christians priests and bishops in Aleppo keep track of the families under their respective churches as well as the births and marriages of their members. After the examination of available data, Ehsani’s conclusion was that the share of Christians in the population of Aleppo is not 12% as claimed by Wikipedia and other sources, but can be as low as 3.5%.
The difference in birth rates between Syrian provinces, by the way, can be rather dramatic. In Sweida, Latakia, and Tartous, the three provinces with a Druze or Alawite majority, the birth rate ranged in 2004 from 1.7% to 1.9%. In the heavily Sunni provinces of Idlib, Deraa, and Deir ez Zor, it was 3.1%.
The census of 1943 put the share of the Sunni population at 69%. Almost 70 years later, it's estimated to have grown only to 74%. Yet, considering the emigration and paltry birth rates of the non-Sunni minorities, it seriously beggars belief that they can still be retaining a share of as much as 26% of the population.
As far as Syria’s most important minority is concerned, the consensus goes, the Alawites dominate Syria’s armed forces. At the very least they dominate that part of the army that remains loyal to Bashar Assad, while the rest of the army is locked in barracks.
Yet, this estimation of the sectarian composition of the Syrian army conflicts with numerous interviews with army defectors published during the last year. According to their presentation of the situation in their units, the rank and file soldiers appear to be mostly Sunni. True, many officers seem to be Alawites, but other officers don’t. David Enders who traveled to Idlib with a convoy of UN monitors, used that opportunity to interview government soldiers unobstructed by the presence of minders. The soldiers told him that four months ago the commander of their unit defected himself and started a rebel brigade. It’s highly unlikely that that officer was an Alawite.
According to the census of 2004, the combined population of Latakia and Tartous does not reach even 9% of the population. It’s true that there is a significant Alawite presence outside the Alawite heartland. But it’s also true that the numbers for Tartus and Latakia also include a significant Sunni minority. Cities like Banyas in Tartous and even the capital of Latakia itself are majority Sunni. In fact, parts of Latakia are now infested with insurgents. So it’s not that Syria is teeming with Alawites, either.
Besides, the notion of an Alawite-dominated Syrian army simply does not square with the daily death tolls published by the Syrian official agency which list both the names and home provinces of fallen soldiers. For example, on June 9, one of the bloodiest days for the Syrian army until now, 57 army and law-enforcement martyrs were laid to rest according to the official SANA. To these Tartous and Latakia had contributed ten martyrs. While it’s more than their share in the population, they are hardly dominating the list. “We all know that most of the security forces shooting at us and killing us are Sunnis, not Alawites,” a Sunni activist from the Damascus suburb of Douma was quoted by Phil Sands on Jun 21, 2012.
As the civil war in Syria has escalated and taken on an increasingly sectarian dimension, many observers took to predicting a prolonged and drawn-out conflict. With the minorities rallying behind the regime of Bashar Assad, these people reason, the regime can mobilize enough support in the population and armed forces to delay the inevitable. They are wrong. Wikipedia notwithstanding, Syria is not such a nation of minorities as it used to be in 1943. Nor are these minorities present in Syria's armed forces in such overwhelming numbers. Their loyalty alone is not enough to prolong the agony.
It remains a very underappreciated fact, but at the beginning of the uprising the regime in Syria was commanding loyalty of a significant section of its Sunni Arab population.
Since the beginning of the uprising and until quite recently, reporters in Damascus have repeatedly noted that the regime appeared to enjoy widespread support among urban classes in the capital that transcended sectarian affiliations.
A rebel leader in Aleppo, quoted by Anthony Loyd on June 19, 2012, has confirmed that many Sunnis in the province joined the pro-government shabiha militias and identified two clans, the Bari and Baqqarah, as supporters of the regime in Aleppo. With more than one million members, the Baqqara is also a major tribe in Deir ez Zor.
Even the notion of the Syrian uprising as a poor Sunni man revolt does not do full justice to this reality. According to Phil Sands, as late as January of this year, a senior tribal figure in the impoverished Deir ez Zor estimated that the Sunni tribesmen in the province were still almost evenly split between supporters and opponents of the regime.
It’s this hidden minority of Sunni supporters that was keeping the regime on its feet until now. Losing this support to the sectarian polarization would set the regime on fast track to oblivion.
Meanwhile, according to the latest reports from Deir ez Zor, the alliance between the Sunni tribes in the province and the regime has finally unraveled. But, once it happened, large chunks of the province and the city of Deir ez Zor quickly fell under opposition control. This is not the first time that the opposition has taken over the center of the city of Deir ez Zor. But this was the first time a government assault to recapture the city was repelled, leaving the streets of Deir ez Zor strewn with destroyed tanks and other military equipment.
At stake have been most of Syria’s oil and control over the border with Iraq which is known to be used to smuggle weapons and foreign fighters into the country. In fact, Deir ez Zor has well-armed and battle-hardened tribal allies on the Iraqi side of the border. Bashar Assad had been having it bad enough in Homs. But “Benghazi” turned out to be an even tougher nut, with the Free Syrian Army claiming to control 70% of Deir ez-Zor.
Now, as fighting reaches Damascus itself, with the Defense Minister reportedly killed in a suicide bombing, things look ever more bleak for the regime. The end appears to be at hand, with chaos set to rule the day. Where is this supposed Syrian army of more than 600,000 now?
Aymenn Jawad Al-Tamimi is a student at Brasenose College, Oxford University, and an adjunct fellow at the Middle East Forum. Oskar Svadkovsky is a computer networking professional based in Tel Aviv, and the owner of the Happy Arab News Service blog. He graduated in Indian and Chinese Studies at the Hebrew University of Jerusalem.
Name: Regina M.
I would like to try to propagate pine trees with a group
of students (grades 3 to 5). We are located in South Jersey. The pine
trees around the school are mostly White Pine and Scotch Pine. I also
have a bag of pine cones (I think they are Loblolly) that I just
collected from a park in Delaware. Do you think we could be successful
at this in a classroom setting? (We have a Grow Lab.) Where exactly are
the seeds on the pine cone? I cannot seem to figure that part out.
American Presidents 4

These brain teasers rely on your ability to recognize groups of common attributes. For each of these puzzles you'll need to figure out why the words or letters are grouped as they are. Sometimes you will be asked to pick the odd-one-out or to place a new word into the correct group.
Which president in Group B can be added to Group A? Why?
Group A: James Monroe, Franklin Pierce, Chester Arthur, Gerald Ford
Group B: Thomas Jefferson, Zachary Taylor, James Buchanan, Ulysses Grant, Warren Harding, Richard Nixon
AnswerThe presidents in Group A were members of the Episcopalian Church. Zachary Taylor can be added to Group A. Thomas Jefferson was Deist, James Buchanan was Presbyterian, Ulysses Grant was Methodist, Warren Harding was Baptist, and Richard Nixon was Quaker.
|
(Why you end up spilling coffee…)
The properties of mugs, legs and liquid conspire to cause spills, most often at some point between the seventh and tenth step, a new study has revealed.
It just so happens that the human stride has almost exactly the right frequency to drive the natural oscillations of coffee, when the fluid is in a typically sized coffee mug.
A pair of fluid physicists at the University of California at Santa Barbara (UCSB) investigated the science of sloshing and calculated the natural frequency at which coffee sloshes back and forth when held in mugs of a variety of sizes, from a dainty espresso cup to a cappuccino behemoth.
They found that a normal human gait moves at nearly the same frequency, so each step amplifies the coffee's heave-ho motion. Stumbling or changing pace — common occurrences when you're low on caffeine — make matters worse by causing chaos in your cup, increasing the chance of a splash over the rim.
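The frequency matching described above can be illustrated with a textbook estimate of the lowest sloshing mode in a cylindrical container. This is standard linear sloshing theory, not code from the UCSB study itself, and the mug dimensions below are illustrative assumptions.

```python
import math

def sloshing_frequency_hz(radius_m, depth_m, g=9.81):
    """Natural frequency (Hz) of the lowest antisymmetric sloshing
    mode in a cylindrical container, from the classic dispersion
    relation omega^2 = g * (e/R) * tanh(e * H / R), where
    e ~= 1.841 is the first zero of the Bessel derivative J1'."""
    epsilon = 1.841
    k = epsilon / radius_m  # wavenumber of the lowest mode
    omega = math.sqrt(g * k * math.tanh(k * depth_m))
    return omega / (2 * math.pi)

# Illustrative mug: 4 cm radius, coffee filled to 8 cm depth.
f = sloshing_frequency_hz(0.04, 0.08)
print(f"{f:.2f} Hz")  # → 3.38 Hz, within reach of a brisk stride
```

A wider cup lowers the natural frequency, which hints at why cup geometry matters for spill-resistant designs.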
By modelling the fluid and walking dynamics of the situation, and comparing the math with some real-world walking-with-coffee experiments, the UCSB scientists have uncovered a few tips for bleary-eyed coffee cup carriers.
"Of course, there are ways to control coffee spilling," Discovery news quoted study co-author Rouslan Krechetnikov as telling Life's Little Mysteries.
Coffee drinkers often attempt to walk quickly with their cups, as if they might manage to reach their destination before their sloshing java waves reach a critical height.
This method is scientifically flawed. It turns out that the faster you walk, the closer your gait comes to the natural sloshing frequency of coffee. To avoid driving the oscillations that lead to a spillage, walk slowly.
Secondly, watch your cup, not your feet. The researchers found that when study participants focused on their cups, the average number of steps they took before spilling coffee increased greatly.
Krechetnikov and his graduate student Hans Mayer, the primary author of the study, suggested two explanations for this result: First, focusing on one's cup tends to engender slower walking, and second, it dampens the noise, or chaotic sloshing, in the cup.
Whether focused carrying decreases the amount of noise because we perform "targeted suppression", automatically counteracting the sloshing of the liquid with small flicks of our wrists, or because we simply hold the cup more steadily when we're looking at it, the researchers could not say.
Third, accelerate gradually. If you take off suddenly, a huge coffee wave will build up almost instantly, and it will crash over the rim after just a few steps.
But the best way to prevent coffee spilling might be to find an unusual cup. According to Krechetnikov, ideas from liquid sloshing engineering studies, which historically were done to stabilize fuel tanks inside missiles, indicate three possibilities for spill-free cup designs - "a flexible container to act as a sloshing absorber in suppressing liquid oscillations, a series of annular ring baffles arranged around the inner wall of the container to achieve sloshing suppression, or a different shape cup."
The study has been published in the journal Physical Review E.
A massive dust plume blew off the western coast of Africa and over the Atlantic Ocean on October 8, 2012. The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured this image the same day, showing dust extending from the Western Sahara-Mauritania border westward past Cape Verde. It was the second consecutive day of dust activity in this region.
Sand seas sprawl across Mauritania and neighboring countries, and those vast reservoirs of sand provide plentiful material for dust storms. The Saharan Air Layer—an arid, dust-laden air mass that forms over the Sahara between late spring and early fall—frequently transports dust westward across the Atlantic Ocean.
- Hurricane Research Division. (2012, March 17) Saharan Air Layer. National Oceanic and Atmospheric Administration. Accessed October 8, 2012.
NASA image courtesy Jeff Schmaltz, LANCE MODIS Rapid Response Team at NASA GSFC. Caption by Michon Scott.
EYP at School
The European Youth Parliament actively engages young people directly at the school level. Hence, in some countries small sessions are organised at schools using the EYP’s approach for the discussion and debate of European topics:
- to provide an opportunity to discuss and debate European issues and encourage students to discuss political questions and articulate their own opinions;
- to provide a possibility to deepen one’s knowledge of the issues in question;
- to provide a framework and a practical tool for teachers to include European issues in their teaching;
- to make students aware of the EYP and to provide them with the chance to participate in future international events of the EYP of a longer duration.
Those small sessions also provide an outreach dimension, seeking to engage students who have often not been in touch with European projects before.
The EYP sessions at schools are run by EYP alumni in cooperation with the teachers in charge.
A guidebook on the methodology can be downloaded here.
Politics has been with us for as long as people have had to cooperate to achieve their goals. Over a half-million people currently hold full- or part-time elective offices in the United States, making decisions that affect communities on local, state, and national levels. For those who wish to participate in society’s decisions, a career in politics should absolutely be considered. Politicians have a hand in thousands of decisions important to their communities, from questions of dividing tax revenue for local schools to police funding to issues of federal tax policy. The profession offers great rewards to those with a combination of negotiation and public presentation skills. In addition to full-time political jobs, many find that part-time community boards, town councils, or even state assembly jobs make valuable and rewarding adjuncts to their full-time careers.
Politics is not for the shy. At all levels, it is characterized by publicity. Most successful politicians enjoy visibility, while those who leave the profession often cite loss of privacy as its greatest drawback. Whether in a small town or in the White House, politicians are subject to intense scrutiny. Elected officials have to campaign for reelection every time their term is up, but, for the most part, the first time is the real challenge; incumbency is a strong advantage in elections. More than 90 percent of the U.S. House of Representatives is reelected every two years, and the reelection rates at the lower levels of politics are similar.
There is no one career path which reliably leads to an elective office. Working as an aide for an established politician is one common way to meet contacts in the local political party apparatus. Law school is another common first step to a political career, since many lawyers achieve public notice and visibility or do work for state political parties. In general, political careers begin with an elective office in state government; most politicians in Washington start as state legislators and work their way up the party hierarchy. In politics, however, the exception is the rule, and people of all backgrounds pursue successful political careers, from peanut farmers to actors. Charisma is important, and being independently wealthy to finance campaigns doesn’t hurt either.
A significant majority of full-time, career politicians are lawyers, and many return to private practice after leaving office. Many represent clients doing business with the government offices they vacated, putting their knowledge of politics in this specific area to work for financial gain; others just go on to ordinary practice. Other former politicians become lobbyists or run professional organizations or foundations that can benefit from the politicians' stature and experience. Finally, jobs in academia or appointed positions in government are also quite common for former politicians.