J. Tuzo Wilson was the first to propose a mechanism for the creation of volcanic island chains. Inspired by Hawaii, Wilson envisioned a column of magma erupting to the surface. As tectonic plates move over this column, a chain of islands, growing progressively older with distance, develops. W. Jason Morgan of Princeton expounded upon this by proposing a mode of convection independent of plate tectonics. Mantle plumes are created by the temperature differential at the core/mantle boundary, he suggested, when a hotter region within the mantle rises as a result of decreased density. The plumes' narrow tails (~100 km) are rooted in the lower mantle, below the level of vigorous convection associated with plate tectonics. The plume head gradually enlarges itself and becomes cooler, while the tail remains hot (Morgan, 1971). In November 2003 he won the prestigious National Medal of Science for his work.
The idea is not difficult to grasp, but its simplicity belies the discord that currently surrounds the theory. Plumes have never been an area of complete agreement; geologists have argued over the years about their number and depth. In addition, there are degrees of belief in plumes: some believe they are responsible for all anomalous volcanic activity, while others think they cause only a small fraction of it. The arguments on both sides have been heating up, and G.R. Foulger, a seismologist at the University of Durham in England, is often at the forefront of the fray, publishing numerous papers criticizing the Mantle Plume Model and maintaining a Web site at www.mantleplumes.org.
The alternative to the Mantle Plume Model lies not in refuting the existence of hot spots (which are plain to see) but in asserting the completeness of plate tectonic theory - that plumes are not necessary to explain the various processes in question. Advocates of this model point out that the Mantle Plume theory was originally formulated to explain hot spots that remain stationary, yet it currently encompasses the Icelandic hot spot, which is clearly on the move (Foulger 2002).
Some suggest that the debate stems from a lack of a formal definition for the word "plume" (Wikipedia 2004). Others contend that the word is ill-defined because of the frequent ad hoc changes made to plume theory in order to reconcile conflicting data. "If plume hypothesis cannot be adapted to fit the observations, then the observations are commonly adapted to fit the hypothesis," declares Foulger (Geology News 2003a). Yet others take issue with the current state of the investigations undertaken to verify plume theory, insisting that a statistical approach is necessary for a truly comprehensive theory (Anderson, 2003).
The question is more than merely academic. Volcanism releases carbon dioxide, which can lead to global warming; if the carbon dioxide is released into the ocean, ocean currents can be affected. Plumes could be the cause of massive eruptions that have drastically altered the environment on earth, perhaps to the point of mass extinction. They may also play a role in the formation of mountains and the reversal of the earth's magnetic poles (Larson 1995). There is almost no end to the variety of methods used to detect plumes, or to their implications for earth science. This Web site will explore the most frequently cited geochemical and geophysical evidence for mantle plumes, as well as their more profound implications.
Seismic tomography is the state of the art for mantle plume detection. It involves the use of seismic waves, which can be categorized as S or P waves. S waves are transverse waves, which displace the ground perpendicularly to the direction of wave propagation. P waves are compressional, and alternately compress and dilate the earth in the direction of wave propagation. P waves generally travel about twice as fast as S waves and, unlike S waves, can travel through any type of material, including liquids. Both P and S waves are created by earthquakes, although seismologists can also generate them artificially. The times at which the waves arrive at seismic stations can then be used to calculate their speed through the Earth. By combining the data from many earthquakes across the globe, a three-dimensional map of wave speed through the earth can be generated. Surface waves, which cause displacement along the ground, can also be used to detect subsurface features; the process is then called surface-wave tomography.
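The inversion step described above - turning many arrival times into a speed map - can be illustrated with a toy example. This is only an illustrative sketch, not any published tomography code: the two-cell model, path lengths, and velocities are all made-up numbers, and real tomography involves millions of rays plus regularization.

```python
import numpy as np

# Toy travel-time tomography. The earth is divided into cells; each ray's
# travel time is the sum of (path length in cell) x (slowness of cell),
# where slowness = 1/speed. Given enough rays, solve for the slownesses.
# All numbers here are made up for illustration.

L = np.array([[100.0,   0.0],   # ray 1 crosses only cell 1 (path lengths, km)
              [  0.0, 100.0],   # ray 2 crosses only cell 2
              [ 70.0,  70.0]])  # ray 3 crosses both cells

true_slowness = np.array([1/5.0, 1/4.0])  # cells at 5 km/s and 4 km/s
t = L @ true_slowness                     # synthetic arrival times (s)

# Least-squares inversion recovers the slowness of each cell
s_hat, *_ = np.linalg.lstsq(L, t, rcond=None)
speeds = 1.0 / s_hat
print(speeds)  # recovers ~[5. 4.] km/s; the slow cell stands out
```

In a real survey a plume would appear as a column of cells whose recovered speed is anomalously low compared to its neighbors.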
Beginning in the early 1980s, seismologists began to accumulate seismological data on the earth's interior. Seismic wave speeds vary with pressure, temperature, and rock composition. Typical speeds for P waves are 330 m/s in air, 1,450 m/s in water, and about 5,000 m/s in granite. Thus P wave velocity can be used to determine how far a plume extends into the mantle. Seismologists have recently discovered a 5- to 40-km-thick region at the base of the mantle where P wave velocities are depressed by as much as ten percent relative to the overlying mantle. This area is termed the ultra-low-velocity zone (ULVZ) and is conjectured to coincide with partial melting of the mantle. After an extensive survey, some geologists concluded that there was less than a one percent probability that the spatial correlation between hot spots and ULVZs arose by chance (Williams 1998).
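The effect of such a low-velocity layer on travel times is simple to estimate. The sketch below is a back-of-the-envelope calculation: the 5- to 40-km thickness and ten percent velocity reduction come from the paragraph above, while the ~13 km/s reference P-wave speed for the lowermost mantle is an assumed illustrative value.

```python
# Extra travel time a vertically crossing P wave accumulates in a ULVZ,
# relative to mantle with no velocity reduction.

def ulvz_delay(thickness_km, v_ref_km_s, reduction=0.10):
    """Delay (s) from crossing a layer whose speed is reduced by `reduction`."""
    v_slow = v_ref_km_s * (1.0 - reduction)
    return thickness_km / v_slow - thickness_km / v_ref_km_s

# A 40 km thick ULVZ, assuming a 13 km/s reference velocity:
print(f"{ulvz_delay(40.0, 13.0):.3f} s")  # -> 0.342 s
```

Delays of a few tenths of a second are within what seismic networks can resolve, which is why the ULVZ shows up in travel-time data at all.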
For the purpose of analysis, seismic waves are treated as a collection of rays. As the waves pass through molten rock, they slow. However, with a narrow body such as a plume, the delay can later be erased as the wavefront is "healed" by energy from adjacent parts of the wave. Guust Nolet and F.A. Dahlen of Princeton propose that viewing the region of the mantle to which a ray is sensitive as a hollow banana is a more useful model: only the peel of the banana is detectable to the curving ray path (Nolet 2003). Graduate student Raffaella Montelli of Princeton used this new technique, analyzing 87,806 seismic recordings. She concluded that deep mantle plumes are indeed present beneath Hawaii, Tahiti, and Easter Island. Some hot spot plumes (Reunion in the Indian Ocean and the Azores in the Atlantic) branch off of a superplume that rises beneath the South Pacific and Africa (Kerr, 2003).
Some of this conflict may be due to the different ways of interpreting the data. "It seems to me that the critics don't understand some aspects of the basic problem," says Donald J. DePaolo, a geophysicist at UC Berkeley. "For example, they claim that the mantle beneath Iceland is not hot, and the mantle beneath Yellowstone is not flowing upward. My view of the data in the papers they cite is that it leads straight to the conclusion that the rock is hot and flowing upward, just as we expect for mantle plumes" (AIG 2003).
Because of these difficulties in detecting plumes and interpreting the results, the number of accepted hot spot areas on the earth varies from year to year. In 1999, the number of proposed hot spots peaked at 5,200; now fewer than ten remain to be explained by this process. In 2004, Montelli et al. reported that P-wave velocities clearly show a plume model at work in six locations: Ascension, Azores, Canary, Easter, Samoa, and Tahiti (Montelli, 2004).
Another line of evidence popular with plume enthusiasts is the ratio of helium-3 to helium-4. A higher ratio is characteristic of deep mantle origin, they argue. Similar information can be gleaned from isotopes of the elements neodymium, strontium, lead, and hafnium. The 3He/4He ratio decreases over time as 4He is produced by the decay of uranium and thorium. The present-day atmospheric 3He/4He ratio is 1.39 x 10^-6 and is referred to as RA. Geochemists infer deep origins whenever 3He/4He ratios exceed 9 to 10 RA. Such ratios have been found at hot spot locations such as Hawaii, and they are consistently higher than those of mid-ocean ridge basalts.
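Converting a raw measured ratio into RA units is a one-line calculation. The sketch below uses the atmospheric value and the 9-10 RA threshold from the paragraph above; the sample ratio fed in at the end is a hypothetical value, not a published measurement.

```python
R_A = 1.39e-6  # present-day atmospheric 3He/4He ratio (from the text)

def in_ra_units(ratio):
    """Express a measured 3He/4He ratio in multiples of the atmospheric value."""
    return ratio / R_A

def likely_deep_origin(ratio, threshold=9.0):
    """Crude screen: above ~9-10 RA, geochemists infer a deep mantle source."""
    return in_ra_units(ratio) > threshold

# A hypothetical ocean-island basalt sample:
sample = 2.5e-5
print(f"{in_ra_units(sample):.1f} RA")  # -> 18.0 RA, a deep-mantle signature
```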
Foulger, naturally, takes issue with this line of reasoning. The concentration of 3He predicted by this model would be as high as that found in gas-rich chondritic meteorites, she insists (Foulger, 2002). This conflicts with cosmological data. A better interpretation, she argues, is that the high 3He/4He ratio arises from a deficiency of 4He in the upper mantle, caused by regions low in U and Th and thus a low rate of addition of radiogenic 4He.
Many hot spot volcanic systems display a temporal variation in their 3He/4He ratios. At Mauna Loa, Hawaii, 3He/4He ratios decreased from 18-20 RA about 250,000 years ago to 8-9 RA today. "Plume contamination" is the standard explanation for these findings: on its way to the crust, the magma could be affected by material from the upper mantle or by seawater.
From time to time the earth spews forth vast quantities of lava, which blanket the surface in basalt many kilometers thick. These areas, which exist both on land and under the sea, are called large igneous provinces (LIPs). To account for them, geophysicists have developed a two-fold model of mantle convection: ninety percent of the mantle's heat is dissipated by orderly convection cells, while ten percent remains to erupt in massive plumes (Coffin, 1993). The events that result in LIPs are commonly called "superplumes."
A vast LIP exists beneath the Western Pacific, where the ocean floor is covered in layers of basalt dating from the mid-Cretaceous, much younger than tectonic theory stipulates. This LIP is larger than any other on earth, and it led geophysicist Roger Larson to conclude that superplumes have built vast suboceanic plateaus with "pulses" of lava. The rate of ocean crust formation doubled at the onset of a pulse and tapered off over the next 70 to 80 million years (Larson 1995).
Others have gone on to say that superplumes are not just responsible for LIPs, but are also driving forces of plate tectonics. In 1999 Richard Kerr suggested that two vast superplumes cool the earth from opposite sides (beneath the Pacific and Africa). These are critical to the stability of the core, which would otherwise collapse under the weight of subducting plates. Surface waves, earthquake waves, and S waves were used to locate a mass of magma beneath Southern Africa, which bends to the northeast and rises beneath the Afar Triangle in the form of a plume (Kerr, 1999).
Mantle convection could be related to the geomagnetic field. Heat dissipation from the core by plume convection may upset the convective cycle of the geodynamo, causing a decline in geomagnetic reversals (Larson 1991). The Cretaceous Long Normal Superchron (CLNS), an unusually long period of consistent polarity, supports this idea. Others suggest that increased plume flux and heat removal would instead increase the rate of geomagnetic reversal (Gubbins 1994).
Mantle plumes are an active topic, and new studies are emerging almost daily. From the recent controversy, it seems that geologists have come to one conclusion: there are several types of plumes, not just a single category. Otherwise, the topic remains as divisive as ever. Because of the vast literature accumulated on the subject, anyone wishing to characterize a particular area as the result of a mantle plume will be faced with an array of conflicting studies. Some suggest that scientists' imperative to publish prodigiously has created the need for a theory that does not really explain anything. Foulger notes (tongue-in-cheek) that "the assume-a-plume approach has also relieved researchers of the hard work of thinking up new theories, a welcome relief in these days when we are all expected to publish six papers a year or else" (Geology News 2003b). A better understanding of plumes should be a top priority. They could elucidate how the cooling of the earth drives mantle convection, where the mantle's compartments lie, and how large igneous provinces form. They can also illuminate such far-reaching topics as the formation of the lithosphere and the chemical constitution of the mantle (Kerr, 2003). Luckily, new technology offers hope. The deployment of large networks of broadband ocean-bottom seismic instruments could fill the gap in the data (Solomon 2000). In addition, recently developed electromagnetic techniques, which are more sensitive to rock composition and temperature, could be used in concert with seismic wave profiling for a more exact image of the earth's interior (Tajima 2000).
Anderson, Don L. 2003. "A Short History of the Plume Hypothesis: The Inside Story." URL:http://www.mantleplumes.org/Penrose/BookChapterPDFs/AndersonHistory.pdf. Retrieved 4/18/2004.
AIG 2003. "Volcanoes Inspire Fiery Debate." URL:http://www.aig.asn.au/volcano_debate.htm. Retrieved 4/16/2004.
Bijwaard, H., Spakman, W. and Engdahl, E. R. 1998. Closing the gap between regional and global travel time tomography. J. Geophys. Res. 103:30,055-30,078.
Coffin, Millard F. and Eldholm, Olav 1993. Large igneous provinces. Scientific American. 270:10,42-49.
Foulger, G.R. 2002. Plumes, or Plate Tectonic Processes? Astronomy & Geophysics 43:6.19-6.23.
Geology News 2003a. Making the evidence fit the plume. Geological Society Homepage URL:www.geolsoc.org.uk/template.cfm?name=Mikado. Retrieved 4/13/04.
Geology News 2003b. Plumes, plates and Popper. Geological Society Homepage URL:www.geolsoc.org.uk/template.cfm?name=NakedEmperor. Retrieved 4/13/04.
Gubbins, D. 1994. Geomagnetic polarity reversals - a connection with secular variation and core-mantle interaction. Rev. Geophys. 32 (1):61 - 85.
Ji, Y., and Nataf, H.C. 1998. Earth Planet. Sci. Lett. 159:99-115.
Katzman, R., Zhao, L. and T. H. Jordan 1998. J. Geophys. Res. 103:17,933-17,971.
Kerr, Richard A. 1999. Great African plume emerges as a tectonic player. Science. 285:5425,187-188.
Kerr, Richard A. 2003. Plumes from the Core Lost and Found. Science. 299:5603,35-36.
Larson, Roger L. 1995. The mid-Cretaceous superplume episode. Scientific American. 272:2,82-86.
Larson, R. L. and Olsen, P. 1991. Mantle plumes control magnetic reversal frequency, Earth Planet. Sci. Lett. 109:437-447.
Montelli, Raffaella; Nolet, Guust; Dahlen, F.A.; Masters, Guy; Engdahl, E. Robert; and Hung, Shu-Huei 2004. Finite-frequency tomography reveals a variety of plumes in the mantle. Science. 303(5656):338-343.
Morgan, W.J. 1971. Convection plumes in the lower mantle. Nature. 230:42-43.
Nolet, G. and F.A. Dahlen 2000. Wave front healing and the evolution of seismic delay times. J. Geophys. Res. 105:19043-19054.
Solomon, Sean C. 2000. Seismic Imaging of Mantle Plumes: Progress and Prospects. Plume 3 Conference, Kohaloa, Hawaii. URL:http://www.ciw.edu/plume3/abstracts/Solomon.doc. Retrieved 4/20/2004.
Tajima, Fumiko 2000. Modeling of mantle electrical conductivity anomalies associated with an upwelling hot plume. Berkeley Seismological Laboratory. URL:http://www.seismo.berkeley.edu/seismo/annual_report/ar00_01/node28.html. Retrieved 4/21/2004.
UC Berkeley Campus News 2002. Press Release. URL:http://www.berkeley.edu/news/media/releases/2002/12/05_plume.html. Retrieved 4/14/2004.
Wikipedia 2004. Mantle Plumes. URL:http://en.wikipedia.org/wiki/Mantle_plumes. Retrieved 4/18/2004.
Williams, Q.; Revenaugh, J.; Garnero, E. 1998. A correlation between ultra-low basal velocities in the mantle and hot spots. Science. 281:5376,546-549.
The Khitomer Conference of 2293 was the first full peace negotiation between the United Federation of Planets and the Klingon Empire, held at Camp Khitomer, a Klingon colony on the planet Khitomer near the border with the Federation.
Following the destruction of the Klingon moon of Praxis, Chancellor Gorkon reached out to the Federation for détente and an end to the lengthy and costly hostilities between the two superpowers. Following Gorkon's assassination by anti-peace operatives from Starfleet, his daughter, Azetbur, became chancellor and continued her father's peace initiative. For security reasons, the location of the conference was kept secret. However, both sides underestimated the extent of the Khitomer conspiracy. Eventually, personnel from the USS Enterprise-A and the USS Excelsior intervened, preventing an assassination attempt against the Federation President and exposing the conspiracy.
Ultimately, the Khitomer Conference led to the signing of the Khitomer Accords and the beginnings of nearly a century of peace between the Federation and the Klingon Empire. (Star Trek VI: The Undiscovered Country)
Captain Spock and Ambassador Pardek first met during the Khitomer conference and began setting forth the goal of reuniting the Vulcan and Romulan governments. (TNG: "Unification I")
One of the Federation ambassadors present at the conference was Curzon Dax. (DS9: "You Are Cordially Invited")
One of Chancellor Azetbur's delegation members was Colonel Worf. (Star Trek VI: The Undiscovered Country)
Courtesy is gentle politeness and courtly manners. In the Middle Ages in Europe, the behaviour expected of the gentry was compiled in courtesy books. The greatest of these was Il Cortegiano (The Courtier) which not only covered basic etiquette and decorum but also provided models of sophisticated conversation and intellectual skill.
- Courtesy is fundamental: sometimes it keeps at bay even snarling people.
- Fausto Cercignani in: Brian Morris, Simply Transcribed. Quotations from Writings by Fausto Cercignani, 2014, quote 50.
Hoyt's New Cyclopedia Of Practical Quotations (1922)
Quotes reported in Hoyt's New Cyclopedia Of Practical Quotations (1922), p. 144.
- A moral, sensible, and well-bred man
Will not affront me, and no other can.
- Cowper, Conversation (1782), line 193.
- Life is not so short but that there is always time enough for courtesy.
- Emerson, Social Aims.
- How sweet and gracious, even in common speech,
Is that fine sense which men call Courtesy!
Wholesome as air and genial as the light,
Welcome in every clime as breath of flowers,
It transmutes aliens into trusting friends,
And gives its owner passport round the globe.
- James T. Fields, Courtesy.
- Their accents firm and loud in conversation,
Their eyes and gestures eager, sharp and quick
Showed them prepared on proper provocation
To give the lie, pull noses, stab and kick!
And for that very reason it is said
They were so very courteous and well-bred.
- John Hookham Frere, Prospectus and Specimen of an Intended National Work.
- When the king was horsed thore,
Launcelot lookys he upon,
How courtesy was in him more
Than ever was in any mon.
- Morte d'Arthur, Harleian Library. (British Museum.) Manuscript 2,252.
- In thy discourse, if thou desire to please;
All such is courteous, useful, new, or wittie:
Usefulness comes by labour, wit by ease;
Courtesie grows in court; news in the citie.
- Herbert, The Church Porch, stanza 49.
- Shepherd, I take thy word,
And trust thy honest offer'd courtesy,
Which oft is sooner found in lowly sheds
With smoky rafters, than in tap'stry halls,
And courts of princes.
- Milton, Comus (1634).
- The thorny point
Of bare distress hath ta'en from me the show
Of smooth civility.
- Shakespeare, As You Like It, Act II, scene 7.
- Dissembling courtesy! How fine this tyrant
Can tickle where she wounds!
- Shakespeare, Cymbeline, Act I, scene 1.
- I am the very pink of courtesy.
- Shakespeare, Romeo and Juliet, Act II, scene 4.
- That's too civil by half.
- Richard Brinsley Sheridan, The Rivals (1775), Act III, scene 4.
- High erected thoughts seated in a heart of courtesy.
- Sir Philip Sidney, The Arcadia, Book I, Part II.
Our breed standard allows eyes of any pigment color or combination of pigment colors. Aussie eyes have been seen that are golden, lemon yellow, amber, light brown, dark brown, green, orange, and blue. On very dark individuals they may even appear black. The iris can be monochromatic, have concentric rings of color, flecks of darker pigment, flecks of blue, or be split or marbled with blue (heterochromia iridis). The two eyes of one dog are not always both the same color; one may be pigmented while the other eye is blue, or both may be pigmented but be of different colors (heterochromia irides). Blue eyes are not confined to merles; there is a recessive gene in the breed that produces blue or split blue eyes in solid colored dogs as well. There are probably multiple genes which together affect eye color and it is not possible to predict with certainty eye color from a planned breeding. As a generalization, brown eyes tend to be dominant to lighter eyes. There is some relationship between eye color and coat color as well, since black pups will tend to have slightly darker eyes than red pups in the same litter. Below are some examples of the extraordinary variety of eye color we find in this breed.
This black and tan Aussie pup is 6 weeks old. His eyes are beginning to change from the dark grayish blue of early puppyhood to his adult color of medium brown. From the beginning the grayish blue shade was fairly dark compared to the light shade of a dog who will have blue eyes. A puppy that will develop amber eyes will have eyes a bit lighter than these, but they will still be considerably darker than a puppy with blue eyes. Pups that are destined to have eyes that are very dark, almost black, will have dark irises of a midnight blue color from the beginning. In a puppy with split or marbled eyes part of the iris is dark like this pup's eyes, and the blue area is pale.
This female from the same litter has eyes that stayed blue. Note that they are much lighter in shade and bluer in hue than her brother's eyes. Puppies with marbled eyes will typically have indistinct flecks or smudges of blue like this puppy with the dark areas like those of the darker eyed puppy above. As eye pigment develops the colors will become more easily identifiable and crisp.
This photo of three lovely raven pups is courtesy of Sandy Cornwell. These 3 pups are nearly all black, and their adult eye color will be dark brown. At 5 weeks, their irises are a medium shade of gray and are darkening gradually. They are not the ice blue of the puppy above, whose eyes will remain blue. Pups with eyes destined to be fairly light, such as amber, would have irises a lighter shade of gray than these.
These are two photos of Frank (Shawtowns Frank James), courtesy of Jan Branham. On the left, Frank is 7 weeks old and his eye clearly shows a difference in pigment color. At this point the dark quarter is just an indistinct smudge of darker brownish gray on a greenish blue field. On the right Frank is an adult with a distinct quarter split.
When a dog's iris contains two or more colors, the medical term is heterochromia iridis. If the irises of his eyes are different from each other (one blue/one brown, etc) the term is heterochromia irides. Common terms for multicolored irises are split eyes and marbled eyes. Common terms for two eyes different from each other include odd eyed, walleyed, glass eyed. In domestic mammals having one pigmented eye and one blue eye is not uncommon. It's been observed in horses, donkeys, cattle, water buffalo, cats, ranched foxes, and dogs.
This blue merle Aussie belonging to Melanie Magamoll has a sky blue field with an indigo ring around her pupil. She also has deep blue flecks in the lower left of the indigo ring. It is not known at this time why some Aussies inherit the indigo ring while others' blue eyes are uniform blue. There are probably several genes involved.
Jackson's eye shows dark amber with blue marbling. The light blue area has some fascinating heavy blue striations near the pupil. His right eye is amber with similar looking striations of darker amber. Jackson is a red merle.
Not all flecks are blue. Chaps has a blue spot in the brown half of his iris and a brown spot in the blue half! Chaps is a grandson of Krackers, a Slyrock dam.
This is the eye of Scrappy, a blue merle. Scrappy has the sky blue field, the indigo ring, and some very dark brown marbling in the upper left corner of his eye. He has a partial mascara line on the right side of his lower eye. Pigment on the eyerim is highly desirable for preventing sunburn and skin cancer. The pink half is much more prone to sunburn than the black half.
Hazel has blue marbling on her light brown eye. Sometimes a dog has two marbled eyes, and sometimes one is marbled while the other is not. This eye resembles a geometric split, but notice that the edges where blue meets brown are soft edged rather than sharp.
This rather striking eye belongs to Bonzer, a blue merle. His genetic eye color is dark brown, but the effect of the merling is to remove pigment from most of the iris. This gives him the very striking dark brown areas on a bright blue field. This is a very impressive marbled eye!
Zeke has some very interesting and attractive marbling. His base eye color is dark amber, but the iris is encircled by a darker brown ring. The eye is marbled in blue (left corner) and by a grayish color (bottom center). Here the distinction between blue and dark amber is a very soft blending without sharp distinctions.
Blue, pictured in the blue merle section, has dark amber eyes. In both eyes he has a darker amber ring around the iris and several brown flecks in the iris.
Vicky was a homozygous merle with blue eyes. Notice that a large piece of iris on the right side of the pupil is missing. This incomplete development of the iris is called iris hypoplasia. It is not uncommon in homozygous merles, though it can occur in other dogs also. Vicky had missing pieces of iris in both eyes. She also had a coloboma at about the 1 o'clock position in her iris. She tended to squint in bright light since her irises could not close down adequately to block out the excess light. She was able to navigate the furniture and toys in the yard without difficulty!
This is Fanfare, a lovely black bi with bright yellow eyes. Bold yellow irises like these are most often found on reds. They are definitely attention grabbing on a black dog. Research shows that livestock are quicker to retreat from the predatory threat of a dark colored dog with light eyes than to dogs with brown irises that don't stand out from the coat as much.
This is Mina, photo courtesy of Danette Brake. Her eyes have an arresting quality! They are just a shade darker than yellow and would be classified as light amber. Amber ranges in tone from very light eyes like Mina's through all shades of hazel.
This is Willow, photo courtesy of Judy Rolff. She is a red tri with greenish amber eyes! Green eyes are most often seen in reds and red merles, much more rarely in blacks or blues. As an adult her eyes retained their arresting quality, though in the closeup of the iris they look more of a golden greenish.
This is Monanee, a sable merle Aussie, photo courtesy of Leona Stabler. His eyes are an intense wolf yellow. Although they are as light as a blue eye, he was never observed to squint in bright light. Most wild canids have eyes approximately this hue; it is a very durable serviceable color. Monanee is now a service dog for a young man in a wheelchair due to his large size and his desire to work.
Tessa, owned by Mary Fillerup, is an example of a blue merle with a very light amber eye, almost yellow. Probably any red littermates of hers had even yellower irises. Her right eye is blue.
Breezy's irises are just a bit darker than Tessa's, and they are both bright amber. Breezy is a blue merle, and probably any red or red merle littermates of hers have irises even more toward the yellow range. I currently don't have any photos of a red or red merle with lemon yellow eyes, but I would be glad to add one if it is sent to me.
This striking blue merle guy is Tag (Touchstone Catch Me If U Can), photo courtesy of Jane McNee. He shows irises that are two separate and distinct shades of brown! His right eye (on our left) is medium dark brown, and his left eye is a caramel brown, approaching the lightness of amber. Presently it is not known if this is a heritable trait or just a phenomenon of fetal development.
This handsome blue merle guy is Mystic (ASCA Ch. Callisto's Into The Mystic), photo courtesy of Gail Karamalegos. A closeup of his eyes is shown on the right, photo courtesy Lisa McDonald. His right eye is caramel brown, and his left eye is darker brown marbled with blue. Gail reports that he has no known relatives with this trait. She also says the difference in color did not become apparent until the irises started turning hazel from the normal puppy grayish blue. At that point there was clearly a difference that became most pronounced when the color change was complete from hazel to brown.
This lovely red merle girl is Marley, photograph courtesy of Wendy Cushnie of Canada. Marley is a red merle and she has two brown eyes of different shades. In her case one is medium brown and the other dark brown. Again, the exact cause of this difference in color is unknown, but it is a fully acceptable in the show ring as a normal variation.
And here we have a beautiful black tri named Delta (Shartooz Tale Ov Love), photo courtesy of Josie in Australia. We have a striking degree of contrast in color. The presence of this pigment pattern in a nonmerle suggests that some other factor is involved in producing it. Aussie eyes can be any color, and this is within the normal range of variation. It is a very unusual and fascinating variation that is very eye catching when it occurs.
Is the tendency to have differently pigmented irises hereditary? Possibly. This is Matches, a red merle belonging to Kathi Linn. His right eye is amber with many flecks of brown pigment, and his left eye is brown. His son Maki also has two irises with different shades of brown. Unfortunately the colors of Maki's different eyes did not photograph well, even though it was very obvious when he was viewed in person. Of all the cases of nonblue heterochromia irides shown, this is the only case reported in which a parent produced offspring with the same type of eyes.
This gorgeous guy is CH Howard's Wanagi Ishna Ghost Eyes "Indy". He is almost entirely black with only a few white hairs on his toes and chin. He is a very minimal black bi. The presence of blue eyes does not always indicate a merle. This type of recessive blue eye can be observed both in merles and nonmerles. In Aussies, recessive blues like this are caused by a recessive gene similar to that found in Siberian huskies and Border collies. Recessive blues can be entirely blue like Indy's, or they can be geometrically split. This gene does not cause marbling. If a blue merle has solid blue or geometrically split blue, it's not really possible to tell by appearance alone whether the blue eyes are due to this recessive gene or due to the effects of merling.
This is Mistretta's Chaca, a very dark red tri puppy with bright blue eyes that will remain blue. In some bloodlines, including this one, there is a recessive gene that causes blue eyes independently of coat color and not related to merling. These recessive blue eyes are most striking on nonmerles. One or both eyes may be blue. They may even be geometrically split, but they are generally not marbled.
This is Princess, owned by Mari Flippen, photo courtesy of Lyndy Jacob. Princess has two very striking eyes showing crisp hard edged geometric splits. In this case the split is diagonal. Interestingly enough, Princess has a full sister from another litter who also has a split eye. We don't know for sure if this is inherited, but it could be.
These are detail shots of Jasper's eyes, photos courtesy of Debbie Brown. His eyes show a similar split pattern to those of Princess, and here is what they look like close up. There's no intermixing of the colors at their borders, and you could just about lay a ruler down and draw the demarcation line.
This is Dix (Redwest's Dixie-Dog, CGC), owned by Brenda Hutton, photo courtesy of Camelback Photography. Dix shows both types of eye! Her right eye (our left) is marbled. Her left eye is a very crisp geometric split. As with Princess, her split is also on the diagonal. She is an example of both types of eye patterns coexisting on the same dog.
This is Boo (Moonlight's Witchy Woman), photo courtesy of Mark Raymond. Boo has a nearly vertical split in her right eye, very hard edged. Because her iris color is a medium amber the effect is more subtle than it is with the dogs above who have dark brown pigment in their irises.
This is Belle, photo courtesy of Bekka Borg. In her very intense expression the geometric split is seen in her left eye (our right). She has a horizontal split in which the top of the eye is dark brown and the bottom is blue. When she is sleepy and her eyelids are partially shut she appears to have a blue eye on that side. The split can occur in any direction.
This is Remuda Bar Nitty Gritty For Topeka (Grit), photo courtesy of Linda Whyman of England and owned by Susan Beavers. This spectacular girl has both a split face AND a geometrically split eye! She also has a highly desirable very dark liver nose for excellent UV protection. Just a point of interest, her split eye happens to be on the same side as her merle half. It's not known at this time whether the same mechanism caused both types of split (face and iris).
This is a detail shot of the eye of Austin (AKC/UKC/INT'L CH Keepsake Outta My Way), photo courtesy of Becky Tellalian. Not all geometric splits yield 2 equal halves. Austin's eye shows a nice sharp edged split, with the color division approximately 1/4 brown and 3/4 blue.
At the back of the eye there is a layer of reflective pigment called the choroid layer. This mirrorlike layer reflects available light and allows dogs to see well even in very dim light. In an eye with normal pigmentation the eye reflects a silvery greenish color in bright light. In eyes lacking pigment the eye reflection is red. Some dogs will have one of each. This layer is responsible for eyeshine in cats as well.
This is the eye of a black tri. The iris is dark amber and the choroid layer is well pigmented. This greenish reflection is the norm in most dog breeds and wild canids. This dog should see quite well in dimly lit conditions. No dog can see in total darkness, but they can easily navigate in conditions that would have us tripping over dirt clumps.
This is another black tri whose eyes have been caught in the flash. The angle is more head-on than in the previous picture, so the silvery aspect predominates here.
This red merle girl has blue eyes due to the action of the merle gene, and the choroid layer has been depigmented. Without pigment to cover the choroid layer, the bed of capillaries underneath reflects red in bright light. A similar effect is seen in humans and horses with light colored irises. In practice, most blue eyed dogs with depigmented choroid layers see better in the dark than we do. But they probably do not see as well under the dimmest conditions as dogs with normal choroid pigmentation.
This is CH Watermark's Harris o' Fairoaks "Tweed", photo courtesy of Kim Monti. Tweed has a partial split face and both of his irises show reduction in iris pigment due to merling. The blue eye reflects red in the camera flash because the choroid is depigmented like the iris due to merling. The mostly brown eye also shows red eyeshine; the iris and the choroid layer do not always show the same degree of merle depigmentation. In Tweed's case the choroid layer of his brown eye is more depigmented than his iris.
© 1999-2009 Lisa McDonald Comments
|
<urn:uuid:153b944d-636f-4276-b2b4-2bd83e6c3e8a>
|
CC-MAIN-2016-26
|
http://color.ashgi.org/color/aussie_eye_color.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00014-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955738
| 3,704
| 2.65625
| 3
|
I actually wrote about the development of RoboEarth almost four years ago, but since a lot of you weren't even born yet I thought I'd touch on it again. Touch = beat with a croquet mallet. RoboEarth: basically a real-life "central brain" Skynet that's being tested so robots can learn information from each other, take commands, plot murder, etc. TOTALLY NOTHING TO WORRY ABOUT.
"At its core RoboEarth is a world wide web for robots: a giant network and database repository where robots can share information and learn from each other," said Rene van de Molengraft, the RoboEarth project leader.
Author James Barrat, who has written extensively about the dangers of robots gaining their own intelligence, thinks there need to be safeguards.
"In the short term, RoboEarth adds security by building in a single point of failure for all participating robots," he said.
"In the longer term, watch out when any of the nodes can evolve or otherwise improve their own software. The consequences of sharing that capability with the central 'mind' should be explored before it happens."
Hey, I like this James Barrat guy. I think he and I could be great friends. Unless he doesn't drink, in which case it would just be a lot of sitting around watching me drink, and I don't like an audience.
Thanks to STEVE, who agrees our first line of protection against the robots should be all the people who said we had nothing to worry about.
|
<urn:uuid:3836ad2a-6bad-448d-a1d7-4fc6fa4925bd>
|
CC-MAIN-2016-26
|
http://geekologie.com/2014/01/roboearth-the-internet-built-just-for-ro.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00118-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.959936
| 316
| 2.515625
| 3
|
In response to a lawsuit filed by the Center for Biological Diversity, the Environmental Protection Agency is advising coastal states to start collecting data on ocean acidification in their coastal waters.
The Center wants all coastal states to list their waters as “impaired” under the Clean Water Act, and sued when Washington State failed to do so. In March, EPA settled the lawsuit and started collecting comments on how the states could start tracking acidification on their coastlines.
In a memo released this week, the EPA recognized “the seriousness of aquatic life impacts” associated with ocean acidification, and explained how those impacts are handled under the Clean Water Act. But the agency also acknowledged the fact that states don’t have the data they would need to list coastal waters as impaired by acidic conditions. Not yet, anyway.
A key problem here, the EPA hinted, is that scientists say ocean acidification is increasing because of carbon dioxide emissions (current and throughout the past 50 years). And bringing ocean water into compliance with Clean Water Act standards takes us right back to the debate over how to regulate CO2 emissions.
|
<urn:uuid:4e86398d-0e92-4b53-a7bb-de138fdecfb2>
|
CC-MAIN-2016-26
|
http://www.opb.org/news/blog/ecotrope/epa-to-states-start-measuring-ocean-acidification/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00049-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.957953
| 227
| 3.03125
| 3
|
A Stanford University study reported that children who play sports are less likely to become obese. The report also found that overweight children are often too shy to join a sport. With this in mind, Napa Valley USD implemented Athletes as Readers and Leaders to introduce elementary students to high school athletes. They knew it would be a success with the students, but were surprised that it had such an impact on the players. Football coach Troy Mott said, "I don’t know who gained more from this experience, the players or the kids."
- Implementation: Talk to a coach or the athletic director about implementing this program. We recommend talking to an on-site and/or teacher coach as they have more access to the players. Form a board including the librarian, coach, and an elementary principal. Show the two-minute embedded video (Animoto) to highlight the program.
- Start up: After each high school athletic season ends, participating coaches and players visit their feeder elementary schools to read sports-related picture books. The books usually relate to the players’ sport and often have a theme celebrating diversity. If you need picture books, ask the elementary school or borrow from your local bookstore. Give players the opportunity to pick out the book and to practice reading it.
- The Visit: After the coach introduces the players and the players read the book, start a discussion devoted to reading, wellness, nutrition, academics, leadership, and training.
REGISTER HERE if you start the program at your school. This program is good for students, meets curriculum standards, and is an excellent example of participation and collaboration in sharing knowledge. Start planning NOW. Implement this school year!
|
<urn:uuid:87e02b70-28e7-4f24-80d5-824dbb93ce72>
|
CC-MAIN-2016-26
|
http://athletereadersleaders.csla.net/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00041-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.968716
| 341
| 3.5625
| 4
|
When you order food at a restaurant, would you make different choices if there was calorie info on the menu? The answer might surprise you.
HealthDay is reporting a new study done by researchers at Carnegie Mellon University. It found that making general calorie consumption guidelines available to restaurant customers doesn’t change their eating habits.
As you may know, a number of cities and states are now mandating that chain restaurants post calorie info on menus or menu boards. But some government types think that’s not enough—they believe restaurants should also post guidelines for daily calorie intake—or even how many calories we should consume per meal.
Researcher Dr. Julie Downs decided to test that theory, and she says “We found it didn’t help at all.”
Currently, New York City, Philadelphia, San Francisco, and Seattle, as well as the entire states of California and Oregon, require calorie labeling. And soon we may see that across the country, as part of the federal health care reform act.
The study found that providing calorie guidance did NOT seem to help consumers make better use of calorie labeling. It also failed to prompt a drop in the total number of calories informed patrons purchased.
Instead, those who received daily or per-meal calorie guidance chose to eat slightly more calories, not fewer.
Dr. Downs speculates that people see the calorie recommendations and compare them to the number of calories in a given item. The number of calories in the one item seems low, so they actually get a bigger dish and add side items to it. As a result, they go over the recommended guidelines for one meal.
For more on healthy food consumption guidelines, visit the U.S. Department of Agriculture’s website at usda.gov.
I’m Bill Maier for Shine.FM.
|
<urn:uuid:f238eccc-43e9-43fd-8885-07f3171026b6>
|
CC-MAIN-2016-26
|
http://chicago.shine.fm/familyexpert/does-calorie-information-on-restaurant-menus-cause-people-to-eat-better/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00171-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.941166
| 383
| 2.703125
| 3
|
Macbeth Act 3, Scene 2
Lady Macbeth enters asking if Banquo has left the palace yet. Learning that Banquo is gone but will return again tonight, Lady Macbeth sends the servant to tell her husband that she wishes to speak with him in private. The servant goes to bring the king to his wife. Lady Macbeth feels that if her husband does not enjoy his royalty, then all of their deceit and treachery has been for nothing. If he does not seem happy, it would have been better if they had not killed the king to take his throne in the first place.
When Macbeth enters the room, she asks him why he is still thinking about Duncan when nothing can be done to revive the murdered king. "Things without all remedy / Should be without regard: what's done is done." Act 3, Scene 2, lines 11-2 Macbeth responds that they have not yet finished securing his throne and they are not yet safe. He says that Duncan lying quiet in his grave has it better than Macbeth who lives in fear and guilt after murdering the king. Lady Macbeth asks him to at least fake cheerfulness at dinner that night so that his guests will feel at ease and suspect nothing. He promises his wife that he will pretend to be happy and at ease and tells her to play up to Banquo and to speak well of him so that no one will suspect the malice that both Macbeth and his lady feel toward him. Macbeth's only comfort is that Banquo and Fleance can be killed. He warns Lady Macbeth that before the night is over another terrible deed will be done, but he does not tell her of his conspiracy to kill Banquo and his son. Night begins to fall around the castle.
|
<urn:uuid:48493a30-ffa5-4293-a077-759441e66662>
|
CC-MAIN-2016-26
|
http://www.bookrags.com/notes/mac/part13.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00164-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.989267
| 371
| 2.734375
| 3
|
By Anai Rhoads, Friends of Animals
A prominent international animal advocacy organization sent a strong message to environmental leaders on Friday: “Put your carbon credits where your mouth is.”
Friends of Animals believes that addressing animal agriculture, a significant contributor to global warming, is long overdue. As leaders in Copenhagen have been scrambling to come up with solutions to our climate crisis, most are contributing to it through their diets.
“While we applaud your actions to reduce industrial emissions, we feel that not enough is being done to reduce greenhouse gasses caused by animal agriculture,” wrote Friends of Animals President, Priscilla Feral. “That’s why we are calling on you…to make a New Year’s resolution to ‘Go Vegan.’”
Numerous studies prove that the demand for meat and dairy products is a major source of greenhouse gas emissions. Recent findings released by the Worldwatch Institute demonstrate that 51 percent of the world’s emissions can be attributed to dairy and meat production. Others show that animal agriculture alone emits a whopping 80 percent of all methane gas emissions. Even a small, family-owned organic dairy or chicken farm can produce more greenhouse gases than an industrial factory.
Environmental leaders should be committed to reducing all of their emissions, not just those they find convenient. By going vegan, these leaders will be letting people know that they can make a significant contribution to reducing greenhouse gasses just by changing their diet.
Feral added, “Going vegan is the easiest and most effective step a person can take to reduce greenhouse emissions…Animal agriculture is responsible for everything from deforestation to factory farming waste ponds —and if you eat meat or dairy, you are contributing to these destructive practices.”
Veganism is a diet that avoids all use of animal products, including meat, poultry, eggs, dairy and honey. In addition, vegans shun all use of leather, wool, silk and furs. For those interested in learning more about veganism, please download a free copy of the Vegan Starter Guide, courtesy of Friends of Animals, today.
|
<urn:uuid:c5b0dcfd-383b-4b6e-99dc-f7dd4f7c70f2>
|
CC-MAIN-2016-26
|
http://www.all-creatures.org/articles/env-carboncredits.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00195-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.929029
| 432
| 2.8125
| 3
|
Singing is the act of producing musical sounds with the voice, and augments regular speech by the use of both tonality and rhythm. A person who sings is called a singer or vocalist. Singers perform music (arias, recitatives, songs, etc.) that can be sung either with or without accompaniment by musical instruments. Singing is often done in a group with other musicians, such as in a choir of singers with different voice ranges, or in an ensemble with instrumentalists, such as a rock group or baroque ensemble.

In many respects, human song is a form of sustained speech; nearly anybody who can speak can also sing. Singing can be formal or informal, arranged or improvised. It may be done for pleasure, comfort, ritual, education, or profit. Excellence in singing may require time, dedication, instruction, and regular practice. If practice is done on a regular basis, the sounds are said to become clearer and stronger. Professional singers usually build their careers around one specific musical genre, such as classical or rock. They typically take voice training provided by voice teachers or vocal coaches throughout their careers.
|
<urn:uuid:c01c2af7-0995-49db-806f-b469c638a974>
|
CC-MAIN-2016-26
|
https://sco.wikipedia.org/wiki/Singin
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00096-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.770042
| 290
| 2.796875
| 3
|
Copyright © University of Cambridge. All rights reserved.
'Sorting Numbers' printed from http://nrich.maths.org/
Why do this problem?
This problem provides an opportunity for children to sort and categorise, both of which are important mathematical processes. This open activity challenges children to find their own categories and then name them, which might make it a good way to introduce specific vocabulary.
You could begin by using the interactivity with the whole class, putting some numbers into the box yourself and asking the children to describe the set. Encourage learners to find different ways of naming a particular group of numbers or introduce them to new vocabulary as appropriate. Pairs of children could come to the board to create their own set in a similar way.
If you have access to a computer suite, children could work in pairs on a computer to create sets of numbers which they record elsewhere. Alternatively, they could easily work with pencil and paper. As they work, it is a great opportunity for you to listen to their justifications and how well they are able to use mathematical language.
As a plenary you could drag several numbers into the box and ask pupils to say which is the odd one out and why. Encourage creative responses to this - in fact any could be the odd one if we give an appropriate reason.
The interactivity could also be used as a starter activity on subsequent occasions. You could also use this interactivity
where you drag numbers you "like" (i.e. are part of a set) to one side and numbers you "don't like" (i.e. are not in your set) to the other. The children then have to ask questions with yes/no answers to determine the name of the set.
What is the same about these two numbers?
Can we find others that could go with them?
What could we call this set?
Is there another way we could describe the set?
The range of numbers could be extended to include, for example, up to 100.
Some children might want to use just the numbers up to 20, for example, to start with. You could get them started by suggesting categories to make. Digit cards would be useful for many children as well.
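The sorting-and-naming activity described above can be mimicked in a few lines of code, which may help a teacher prepare example sets in advance. This is an illustrative sketch only; the property names and the particular tests are assumptions, not part of the NRICH materials.

```python
# Hypothetical property tests a class might use to name sets of numbers.
properties = {
    "even": lambda n: n % 2 == 0,
    "odd": lambda n: n % 2 == 1,
    "multiple of 5": lambda n: n % 5 == 0,
    "greater than 10": lambda n: n > 10,
}

def describe_set(numbers):
    """Return every property name that all the given numbers share."""
    return [name for name, test in properties.items()
            if all(test(n) for n in numbers)]

print(describe_set([4, 8, 12]))   # ['even']
print(describe_set([5, 15, 25]))  # ['odd', 'multiple of 5']
```

Because a set can satisfy several properties at once, this also mirrors the plenary question above: the same group of numbers can often be described, or have its "odd one out" justified, in more than one valid way.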
|
<urn:uuid:7f6e86aa-5888-4621-9f59-e2cc59322b69>
|
CC-MAIN-2016-26
|
http://nrich.maths.org/5998/note?nomenu=1
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00084-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.951795
| 459
| 4.65625
| 5
|
GNU/Linux Desktop Survival Guide
by Graham Williams
A list of useful books for Debian GNU/Linux is available from http://www.linux-books.us/debian.php.
A Quarter Century of Unix, by Peter H. Salus, 256 pages published by Addison Wesley, 1994, ISBN 0-201-54777-5. This is a good review of the history of Unix with many interesting insights. Well worth a read if you are interested in where Unix came from and you can find a copy of the book.
The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, by Eric S. Raymond, 288 pages, published October 1999 by O'Reilly & Associates, ISBN 1565927249. Discusses the free software business model. The author is an identity in the Open Source movement and here captures a model of Open Source and Free Software development.
Linux in A Nutshell: A Desktop Quick Reference, Second Edition, by Ellen Siever, 632 pages, published February 1999 by O'Reilly & Associates, ISBN 1565925858. An excellent reference to many standard GNU and Unix tools. An intermediate resource between this current book which aims to get you started with the tools and fully fledged manuals.
The Linux Sampler: A Linux Resource Guide, by Belinda Frazier and Laurie Tucker, 240 pages, published November 1994 by Specialized Systems Consultants, ISBN 0916151743. Presents an overview of Linux from the point of view of where, how, and why it is being used, with a little technical help thrown in.
Linux Rute User's Tutorial and Exposition, by Paul Sheer, 500 pages, published January 2002 by Prentice Hall, ISBN 0130333514. Presents an excellent guide to many aspects of the Linux operating system. Also available from http://www.icon.co.za/~psheer/book/.
Copyright © 1995-2014 Togaware Pty Ltd
|
<urn:uuid:7ab90fc5-3a5d-4cd4-bb11-47489f1a0caa>
|
CC-MAIN-2016-26
|
http://www.togaware.com/linux/survivor/Books.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00143-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.829569
| 411
| 2.796875
| 3
|
Learn how to create an interesting paper text effect in Photoshop using custom shapes, paper textures and patterns. You can easily reshape the paper text using the Photoshop vector tools like the Pen Tool, the Convert Point Tool and so on. This is not a tutorial for beginners: you have to know how to work with vector shapes in Photoshop to create and edit a custom shape. I hope you will enjoy reading this text tutorial, and I am really curious to see your paper text effects, so upload your result in our comments section.
In this tutorial we will create a paper text effect in Photoshop using torn paper shapes and paper textures. This paper text will look like a notebook with leather cover. The final result will be a vector shape so you can resize it to any size without losing the quality.
Let's start with a simple text using the font Impact; the size is not important and depends on how big your canvas is. The color for the text is #d0d0d0
Call this first layer Text Paper 1 and duplicate it; call the second layer Text Paper 2. Open the layer style window of the Text Paper 1 and add a simple Drop Shadow. Be careful when you make the settings, especially the Angle.
Hide the Text Paper 1 for now and let's work with the Text Paper 2. Right click on the layer to convert the text in shape; or you can use the Photoshop menu as shown in the image. If you don't know how to do it read this quick tut on how to turn text in shape in Photoshop.
This step is really important! Pick the Path Selection Tool and select all the letters from the paper text, and press the Combine button.
Method 1:Now let's create the torn paper effect; for that you need to download these torn paper shapes. In a new layer add some torn paper shapes; use the Add to shape area to add all the shapes in a single layer. Make sure that the torn paper covers the paper text except the bottom area like shown in the image.
Select the paper text using the Path Selection Tool; press CTRL+C to copy the shape. Go to the torn paper shape created in the previous step, and press CTRL+V to paste the text shape.
Press the Intersect Shape Areas from the Path Selection Tool menu. The result is a torn paper text. Press the Combine button to combine the two shapes.
Method 2: If you are not familiar with vector shapes drawing you can simply rasterize the text layer and use the Lasso Tool to create the torn paper selection by hand. Use the selection and delete it from the rasterized text layer.
Rename the new shape layer Text Paper Shape 1 and delete the old Text Paper 2. Add a Gradient Overlay for the Text Paper Shape 1; use a combination of light gray and white for the gradient.
Add also a Drop Shadow effect like shown in the image.
In a similar way create another torn paper text. Duplicate it and call the original shape Shadow and the copy layer name it Text Paper Shape 2. Add a Gradient Overlay to the Shadow layer.
Rasterize the layer style effect of the Shadow layer and apply a Gaussian Blur filter like shown in the image.
For the Text Paper Shape 2 layer add a Gradient Overlay similar to the one used for the Text Paper Shape 1. Also use the Edit>Transform Path>Warp Tool like shown in the image.
Duplicate the Text Paper Shape 2 and change the old Gradient Overlay with this new one presented in the image below. The idea is to add some color to the letters. Use whatever colors you want but be careful when you place the color locations.
To make the paper text effect more realistic I have duplicated the Text Paper Shape 2 two times and I used the Freeform Pen Tool to create the torn paper effect; make sure you check the Intersect Shape Areas button before you begin.
As you can see one paper page is colored and one is white; I did that to obtain a nice looking effect and contrast. This is how the paper text looks after adding the two little torn paper layers.
Let's create a notebook cover for our paper text. To create the notebook cover, first duplicate one of the text paper layers and use the Rectangle Tool to subtract the rectangle from the paper text area as shown in the image.
Open the layer styles window and add the following Photoshop styles: Drop Shadow, Bevel and Emboss, Color Overlay and Pattern Overlay. All the settings can be changed and adjusted to match with your text effect and background color.
You can of course choose another leather texture for the cover; this is how your result should look so far:
For the final touches you can add a notebook pattern for each letter or you can even find a nice notebook texture.
I have also added these scotch shapes to the paper text. I have used vector shapes instead of the usual scotch tape brushes because our text is a vector based shape, so it is better to make all the items vector.
Use the color #f5efe4 for the scotch tapes and the opacity of the layer around 50%.
To make the scotch tape look more realistic I have added a Pattern Overlay and also a layer mask using Filter>Render>Clouds filter with black/white as foreground/background colors. If you don't know how to add Photoshop mask you can read this Layer Mask Guide for Beginners.
You can make the text look vintage and old if you add some stain vector shapes, blend mode Overlay; you can add again a layer mask to obtain opacity variations. And in the end add an old paper texture. So here is the final result for the paper text created in Photoshop using only vector shapes.
You might also like to try our premium Photoshop plugin that allows you to create torn paper corner for photo effects.
|
<urn:uuid:c4c02af0-0135-4572-bbe5-f41a8bc1f461>
|
CC-MAIN-2016-26
|
http://www.psd-dude.com/tutorials/create-a-paper-text-in-photoshop.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.843395
| 1,217
| 2.734375
| 3
|
A Short History
Afghanistan first emerged as a defined territory under the reign of Ahmad Shah Abdali, who was chosen as its leader by an assembly of Pashtun elders in 1747. Using a mixture of conquest and diplomacy, Ahmad Shah succeeded in extending his writ from Kandahar to Kabul and territory west of the Indus. He occupied Delhi in 1756, but military victories did not translate into political control, and he was forced to withdraw from India.
The full text of this essay is only available to subscribers of the London Review of Books.
|
<urn:uuid:cb897185-93cf-4856-a53b-4b68967163cb>
|
CC-MAIN-2016-26
|
http://www.lrb.co.uk/v30/n06/jolyon-leslie/a-short-history
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00130-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.972274
| 113
| 3.59375
| 4
|
The latest news from academia, regulators
research labs and other things of interest
Posted: Dec 28, 2011
Describing reactions in a fuel cell on the nanoscale
(Nanowerk News) For the first time, physical chemistry reactions in a fuel cell can now be observed and described in detail on the nanoscale (see paper in Nature Chemistry: "Measuring oxygen reduction/evolution reactions on the nanoscale"). This innovation is due to a new microscopy technique devised by an international research team involving Heidelberg mathematician Dr. Francesco Ciucci and scientists from the United States and Ukraine. The new technique means that oxygen reduction, which is significant for the energy generated in a fuel cell, can now be represented at a resolution of one millionth of a millimetre. The research findings will be used to develop more efficient and powerful hydrogen fuel cells.
Visualisation of platinum coating in fuel cells. The red areas show high ion transfer activity, the turquoise-green elevations represent the nanoparticles. The picture shows that ion transfer is not uniform all over the coating. (Image: Sergei Kalinin)
Fuel cells convert the energy in a fuel like hydrogen into electric energy. A hydrogen fuel cell consists of two electrodes facing one another and separated by an ion conductor. Electric energy is gained via transfer of ions between the two electrodes. The oxygen in the air reacts with the hydrogen brought in from outside. In this process of oxygen reduction, a catalyst – frequently the rare and expensive platinum – plays an essential role as reaction accelerator. In all this, says Francesco Ciucci, the oxygen reduction process is the limiting factor in connection with the longevity and efficiency of fuel cells.
"To optimise ion transfer between the electrodes, a number of fundamental questions have to be answered," says the mathematician. "How and where exactly does oxygen reduction occur and how does platinum function as a catalyst? Up to now we have had to do without a suitable instrument for investigating the reaction dynamics involved."
Francesco Ciucci's research has been funded by a grant from Heidelberg University's Graduate School of Mathematical and Computational Methods for the Sciences. In its course, and in conjunction with Dr. Amit Kumar at the Oak Ridge National Laboratory in the USA and Dr. Anna Morozovska of the Ukrainian National Academy of Sciences, Dr. Ciucci has developed a new microscopy technique that can monitor ion transfer on the nanoscale. The technique is called "Electrochemical Strain Microscopy" (ESM).
The ESM technique is based on a mathematical model, a so-called partial differential equation, that describes the movement of oxygen in various materials. By way of this mathematical description, the measurement data from "Electrochemical Strain Microscopy" could be visualised on the computer screen.
"What we have found out from this," says Dr. Ciucci, "is that the catalyst layer of 50-nanometre platinum particles does not allow an equal degree of ion transfer at all points." The innovative microscopy technique ESM is not restricted to the optimisation of fuel cells. Dr. Ciucci points out that it is suitable for investigating chemical processes on all surfaces where materials interact via ion transfer.
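The article does not give the model itself, but a diffusion-type partial differential equation for an oxygen (or oxygen-vacancy) concentration $c(x, t)$ would typically take a form such as the following; this is a generic sketch of that class of equation, not the specific model used by the researchers.

```latex
% Generic diffusion equation for an oxygen concentration c(x, t);
% D is the (possibly position-dependent) oxygen diffusivity.
\[
  \frac{\partial c}{\partial t} = \nabla \cdot \bigl( D \, \nabla c \bigr)
\]
```

Solving an equation of this type for the material under the microscope tip is what allows the ESM strain signal to be interpreted as local ion transport activity.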
Source: University of Heidelberg
General Information about Hound Dogs
All of the dogs listed below belong to the collection of dogs referred to as Hound Dogs.
As its name implies, hound dogs have been bred to chase (or "hound") a quarry by sight or smell, or a combination of both senses.
Sighthounds have exceptional eyesight, combined with the speed and stamina necessary to catch the intended prey once seen, typical examples being the Greyhound and the Whippet.
Hounds which rely strongly on the sense of smell to follow the trail of a prey, such as the Bloodhound, quite literally follow their noses; speed and eyesight are of less importance.
The Churches Grow (1817–1843)
The Second Great Awakening was the dominant religious development among Protestants in America in the first half of the nineteenth century. Through revivals and camp meetings sinners were brought to an experience of conversion. Circuit riding preachers and lay pastors knit them into a connection. This style of Christian faith and discipline was very agreeable to Methodists, United Brethren, and Evangelicals, who favored its emphasis on the experiential. The memberships of these churches increased dramatically during this period. The number of preachers serving them also multiplied significantly.
Lay members and preachers were expected to be seriously committed to the faith. Preachers were not only to possess a sound conversion and divine calling but were also to demonstrate the gifts and skills requisite for an effective ministry. Their work was urgent and demanding. The financial benefits were meager. But, as they often reminded one another, there was no more important work than theirs.
The deep commitment of the general membership was exhibited in their willingness to adhere to the spiritual disciplines and standards of conduct outlined by their churches. Methodists, for example, were to be strictly guided by a set of General Rules adopted at the Christmas Conference of 1784 and still printed in United Methodism’s Book of Discipline. They were urged to avoid evil, to do good, and to use the means of grace supplied by God. Membership in the church was serious business. There was no place for those whom Wesley called the “almost Christians.”
The structure of the Methodist, United Brethren, and Evangelical Association churches allowed them to function in ways to support, consolidate, and expand their ministries. General Conferences, meeting quadrennially, proved sufficient to set the main course for the church. Annual Conferences under episcopal leadership provided the mechanism for admitting and ordaining clergy, appointing itinerant preachers to their churches, and supplying them with mutual support. Local churches and classes could spring up wherever a few women and men were gathered under the direction of a class leader and were visited regularly by the circuit preacher, one who had a circuit of preaching placed under his care. This system effectively served the needs of city, town, village, or frontier outpost. The churches were able to go to the people wherever they settled.
The earlier years of the nineteenth century were also marked by the spread of the Sunday school movement in America. By 1835 Sunday schools were encouraged in every place where they could be started and maintained. The Sunday school became a principal source of prospective members for the church.
The churches’ interest in education was also evident in their establishment of secondary schools and colleges. By 1845 Methodists, Evangelicals, and United Brethren had also instituted courses of study for their preachers to ensure that they had a basic knowledge of the Bible, theology, and pastoral ministry.
To supply their members, preachers, and Sunday schools with Christian literature, the churches established publishing operations. The Methodist Book Concern, organized in 1789, was the first church publishing house in America. The Evangelical Association and United Brethren also authorized the formation of publishing agencies in the early nineteenth century. From the presses of their printing plants came a succession of hymnals, Disciplines, newspapers, magazines, Sunday school materials, and other literature to nurture their memberships. Profits were usually designated for the support and welfare of retired and indigent preachers and their families.
The churches were also increasingly committed to missionary work. By 1841 each of them had started denominational missionary societies to develop strategies and provide funds for work in the United States and abroad. John Stewart’s mission to the Wyandots marked a beginning of the important presence of Native Americans in Methodism.
The founding period was not without serious problems, especially for the Methodists. Richard Allen (1760–1831), an emancipated slave and Methodist preacher who had been mistreated because of his race, left the church and in 1816 organized The African Methodist Episcopal Church. For similar reasons, The African Methodist Episcopal Zion Church was begun in 1821. In 1830 another rupture occurred in The Methodist Episcopal Church. About 5,000 preachers and laypeople left the denomination because it would not grant representation to the laity or permit the election of presiding elders (district superintendents). The new body was called The Methodist Protestant Church. It remained a strong church until 1939, when it united with The Methodist Episcopal Church and The Methodist Episcopal Church, South, to become The Methodist Church.
From The Book of Discipline of The United Methodist Church - 2012. Copyright 2012 by The United Methodist Publishing House. Used by permission.
The recent explosion in the development of nanomaterials with enhanced performance characteristics for use in commercial and medical applications has increased the likelihood of people coming into direct contact with these materials.
There are currently more than 800 products on the market -- including clothes, skin lotions and cleaning products -- claiming to have at least one nanocomponent, and therapeutic nanocarriers have been designed for targeted drug delivery inside the human body. Human exposure to nanomaterials, which are smaller than one one-thousandth the diameter of a human hair, raises some important questions, including whether these "nano-bio" interactions could have adverse health effects.
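The size comparison can be checked with quick arithmetic; the hair diameter used here is an assumed typical value (~100 micrometres), not a figure from the article.

```python
# Rough size check for "smaller than one one-thousandth the diameter
# of a human hair". Hair diameter is an assumed typical value.
hair_diameter_nm = 100_000                # ~100 micrometres, in nanometres
nano_upper_bound_nm = hair_diameter_nm / 1000
print(nano_upper_bound_nm)                # 100.0
```

This is consistent with the common working definition of engineered nanomaterials as structures under roughly 100 nanometres.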
Now, researchers at UCLA and the California NanoSystems Institute (CNSI), along with colleagues in academia and industry, have taken a proactive role in examining the current understanding of the nano-bio interface to identify the potential risks of engineered nanomaterials and to explore design methods that will lead to safer and more effective nanoparticles for use in a variety of treatments and products.
In a research review published in the July issue of the journal Nature Materials (and currently available online), the team provides a comprehensive overview of current knowledge on the physical and chemical properties of nanomaterials that allow them to undergo interactions with biological molecules and bioprocesses.
"What we have established here is a blueprint that will serve to educate the first generation of nanobiologists," said Dr. Andre Nel, leader of the team and chief of the division of nanomedicine at the David Geffen School of Medicine at UCLA and the California NanoSystems Institute.
Despite remarkable advances in nanoscience, relatively little is known about the intracellular activity and function of engineered nanomaterials, an area of study particularly important for the development of effective and safe nanoparticle drug-delivery systems. Much of the current knowledge derives from the study of tagged or labeled nanoparticles and their effects on cells after cellular uptake -- without any detailed understanding of what these interactions may lead to, good or bad.
The review article examines the variety of ways in which nanomaterials interface with biological systems and presents a roadmap of the physical and chemical properties of the materials that could lead to potentially hazardous or advantageous interactions at the nano-bio interface. A better understanding of the biological impact, combined with appropriate stewardship, will allow for more informed decisions about design features for the safe use of nanotechnology.
In addition to Nel, the team included Tian Xia, a researcher in UCLA's nanomedicine division, UCLA associate professor of civil and environmental engineering Eric Hoek, Lutz Mädler of the University of Bremen, Darrell Velegol of Penn State University, Ponisseril Somasundaran of Columbia University, Fred Klessig of Pennsylvania Bio Systems, Vince Castranova of the National Institute for Occupational Safety and Health, and Mike Thompson of FEI Co.
"We are committed to ensuring that nanotechnology is introduced and implemented in a responsible and safe manner," said Nel, who also directs the Center for Environmental Implications of Nanotechnology, which is funded by the National Science Foundation and the Environmental Protection Agency and is headquartered at the CNSI.
"Based on our rapidly improving understanding of nano-bio interactions, we have done a thorough examination of the literature and our own research progress to identify measures that could be taken for safe design of nanomaterials," he said. "Not only will this improve the implementation and acceptance of this technology, but it will also provide the cornerstone of developing new and improved nanoscale therapeutic devices, such as drug-delivering nanoparticles."
The review article spotlighted several important research advancements:
- A classification of the interactions when nanomaterials contact and bind to biological systems will help scientists understand how man-made materials may react when exposed to cells, tissues and various life forms in different natural environmental contexts.
- When nanomaterials enter a biological fluid -- for example, blood, plasma or interstitial fluid -- the materials' surface may be coated with proteins. Understanding how these protein layers change the properties of the nanomaterials and the ways in which they interact in the body can provide valuable information on how to alter the protein coatings to allow for targeted delivery of nanomaterials to specific tissues, such as in cancer treatments.
- Physicochemical properties such as size, charge, shape and other characteristics could greatly affect the ability of nanomaterials to enter a cell; this could determine whether a material can be useful in nanomedicine applications or could cause harm if taken in by life forms in an ecosystem or food chain.
- Nanoparticles can elicit a wide range of intracellular responses, depending on their properties, concentrations and interactions with biological molecules. These properties and their relationships to cellular function can induce cellular damage or induce advantageous cellular responses, such as increased energy production and growth.
Based on the link between certain nanomaterial properties and potential toxic effects, the team asserts that scientists can reengineer specific nanomaterial properties that are hazardous while maintaining catalytically useful function for industrial use.
As an example of a safe design feature, some nanoparticles now receive a surface coating designed to improve safety by preventing bioreactivity. Nanoparticles in cosmetic formulations such as suntan lotions, for instance, may be coated with a water-repelling polymer to reduce direct contact with human skin. An extension of this principle uses polymers and detergents to decrease cellular uptake. However, there is the potential that when the coating wears off, the material may become hazardous. It is therefore important to consider improving the stability of coating substances. Coating nanoparticles with protective shells is also an effective means of preventing the breakup of materials that could release toxic substances upon dissolution.
"Instead of waiting for knowledge to unfold randomly, we can already begin to view the events at nano-bio interface as a discoverable scientific platform that can be used for setting up a deliberate inorganic-organic roadmap to new, better and safer products," Nel said. "What we can identify by understanding the rules that shape the nano-bio interface will have a massive impact on the ability to develop safe nanomaterials in the future."
The California NanoSystems Institute (CNSI) is an integrated research center operating jointly at UCLA and UC Santa Barbara whose mission is to foster interdisciplinary collaborations for discoveries in nanosystems and nanotechnology; train the next generation of scientists, educators and technology leaders; and facilitate partnerships with industry, fueling economic development and the social well-being of California, the United States and the world. The CNSI was established in 2000 with $100 million from the state of California and an additional $250 million in federal research grants and industry funding. At the institute, scientists in the areas of biology, chemistry, biochemistry, physics, mathematics, computational science and engineering are measuring, modifying and manipulating the building blocks of our world -- atoms and molecules. These scientists benefit from an integrated laboratory culture enabling them to conduct dynamic research at the nanoscale, leading to significant breakthroughs in the areas of health, energy, the environment and information technology.
Investigations in Number, Data, and Space (TERC)
Testbed for Collaboration, Eisenhower Regional Consortia
A K-5 mathematics curriculum with four major goals: to offer students meaningful mathematical problems; emphasize depth in mathematical thinking rather than superficial exposure to a series of fragmented topics; communicate mathematics content and pedagogy to teachers; and substantially expand the pool of mathematically literate students. This site includes descriptions of the units in each of first through fifth grades, ordering information, and information on a professional development workshop designed to facilitate the introduction of this curriculum.
Resource Types: Lesson Plans and Activities
Math Ed Topics: Non-traditional, Workshops/Inservice/Training, Curriculum/Materials Development
© 1994- The Math Forum at NCTM. All rights reserved.
|
Enough: Staying Human in an Engineered Age. Bill McKibben. Times Books. $25. 288 pp.
It's not easy staying sane in a world of progress. Bill McKibben's solution is to stop the progress altogether.
In a thoroughly rational and extremely depressing book, he makes a persuasive case for saying "Enough!" to the hurly-burly modern world.
With the same well-reasoned and philosophical insights that marked his manifesto The End of Nature, McKibben questions the necessity of many technological marvels.
To some, robotics, genetic engineering and nanotechnology signal the beginning of a brave new future. But to McKibben, they present the risk of humans losing their humanity.
At some stage, he argues, people must reach a saturation point where technology has made our lives so comfortable that it's ridiculous to go further.
This "enough point" is rushing pell-mell toward us, if not already here, McKibben argues. Chapter titles hint at his frustration at the pace: "Too Much" is followed by "Even More" and then "Enough?"
Extensive research and interviews inform his view. Gently, but firmly, he shows how some science-fiction aficionados have gone beyond the point of reason in envisioning a future where robots replace human beings. He celebrates the environmental successes of the past few decades -- decreasing population, a healing ozone hole -- while never giving up hope for more progress.
In many areas, his pleas for self-restraint ring true. Human cloning was once thought near impossible, yet it may be achieved within years. New forms of genetic engineering have allowed unprecedented tinkering with animal and plant genes. The possibilities of what may be are both apocalyptic and frightening.
|
<urn:uuid:4a407d5e-bc0c-49ff-8c6c-7ac1af227cbe>
|
CC-MAIN-2016-26
|
http://articles.sun-sentinel.com/2003-07-27/entertainment/0307250550_1_bill-mckibben-genetic-engineering-nanotechnology
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00126-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.937956
| 362
| 2.578125
| 3
|
Definition of Khmer Rouge
1. Noun. A communist organization formed in Cambodia in 1970; became a terrorist organization in 1975 when it captured Phnom Penh and created a government that killed an estimated three million people; was defeated by Vietnamese troops but remained active until 1999.
Category relationships: Act Of Terrorism, Terrorism, Terrorist Act
Generic synonyms: Foreign Terrorist Organization, Fto, Terrorist Group, Terrorist Organization
Geographical relationships: Cambodia, Kampuchea, Kingdom Of Cambodia
Definition of Khmer Rouge
1. Proper noun. A Cambodian communist guerrilla force active from the 1970s to the 1990s under the leadership of Pol Pot. ¹
¹ Source: wiktionary.com
Literary usage of Khmer Rouge
Below you will find example usage of this term as found in modern and/or classical literature:
1. Cambodia at War by Dinah PoKempner, Human Rights Watch/Asia, Arms Project (Human Rights Watch), Human Rights Watch (Organization) (1995)
"HUMAN RIGHTS ABUSES BY THE Khmer Rouge There is little sign that the ideology, leadership, or social regulations of the Khmer Rouge have changed ..."
2. Land Mines in Cambodia: The Coward's War, September 1991 by Eric Stover, Asia Watch, Asia Watch Committee (U.S.), Asia Watch Committee (U.S., Physicians for Human Rights, Rae McGrath, Physicians for Human Rights (U.S.) (1991)
"The Khmer Rouge The break between the Khmer Rouge and Vietnamese led to tension ... In the less than four years that the Khmer Rouge ruled, more than one ..."
3. Political Control, Human Rights, and the UN Mission in Cambodia by Dinah PoKempner (1992)
"Khmer Rouge violations of the ceasefire have been numerous and continuing. Since January 1992, the Khmer Rouge have engaged Phnom Penh forces in Kompong ..."
4. Low-Intensity Conflict in the Third World by Stephen Blank, Lewis B. Ware, Air University (U.S.). Press (1988)
"That party, which they later renamed the Kampuchean Communist party, guided what [Cambodia's leader, Prince] Sihanouk called the Khmer Rouge ..."
5. Landmines: A Deadly Legacy by Arms Project (Human Rights Watch), Physicians for Human Rights (U.S.) (1993)
"The Khmer Rouge seemed to use mines in a somewhat more sophisticated ... The Khmer Rouge often used mines to channel and control population movements. ..."
6. The Cold War in Asia edited by James G. Hershberg (1996)
"Only a handful survived the Khmer Rouge regime, and only two or three returned to work in the library once the Khmer Rouge were driven out in 1979. ..."
7. Human Rights Watch World Report 2000 by Human Rights Watch (Organization), Human Rights Watch Staff, Human Rights Watch (1999)
"By midyear, all surviving members of the top Khmer Rouge leadership—Khieu ... When former Khmer Rouge Standing Committee members Khieu Samphan and Nuon Chea ..."
8. Asia in the 21st Century: Evolving Strategic Priorities by Michael D. Bellows (1994)
"And Thailand, although there's still some leakage across the border and some lower military units look like they're still cooperating with the Khmer Rouge, ..."
|
Sledge, George W. Jr., MD
There are few more tired metaphors than “The War on Cancer.” We have been fighting this war, if Richard Nixon had it right, since 1971. Practically from the beginning there were those who opposed the “war” metaphor's use. The objections were many: too bloodthirsty, too masculine, and too scary for kids with cancer (wars are about death and dying)—even, some would have it, unbiologic and simplistic and unhelpful.
Add to this the failed promise—the war would be over in 10 years—and the metaphor wore out its welcome long ago. No less an expert than the Director of the National Cancer Institute, Harold Varmus, has said “It's inaccurate, in my view, to think of a war on cancer as though cancer were a single, individual enemy—nor is the metaphor of war exactly right.”
Part of the issue was that wars, like cancers, differ greatly. Some end quickly, with lightning strikes (the Six-Day War between Israel and the Arab nations), while some drag on forever (the Hundred Years War between England and France). Some wars end in total victory—perhaps the World War II veteran Nixon was thinking in these terms—while some end in a negotiated surrender on terms. Some require total mobilization of resources (World War II, again, but think high-dose chemotherapy and bone marrow transplantation) while some are hardly noticed by the citizens of at least one of the combatants (the U.S. in Afghanistan, many carcinomas in situ). Some are wars of annihilation, while some are almost polite and rule-bound. So the metaphor isn't just tired, it is inexact and confusing.
Paul Kennedy's new book on World War II, Engineers of Victory, has made me re-think the “war” metaphor. Kennedy, a Yale historian, is perhaps best known for his The Rise and Fall of the Great Powers, one of the cultural zeitgeist books of the late 1980s and early 1990s. The book is his take on why the Allies beat the Axis powers in World War II.
The standard narrative of how the Allies won involves, respectively, the blood of millions of Russians, the overwhelming industrial might of the United States, and the bulldog tenacity of the British. Popular histories focus on defining moments, turning points, that together were said to decide the war: Midway, Stalingrad, El Alamein, Normandy, and a few others. Or, alternatively, they suggest that some “killer app” (to use an anachronistic term) like Bletchley Park's code-breaking efforts were responsible for victory.
Kennedy does not deny the importance of any of these, but he considers them both inadequate and too focused on nationalistic mythologies. Reality was far too complex, and this complexity leads him to resist “all efforts at reductionism, such as that the winning of the war can be explained by brute force, or by some wonder weapon, or by some magical decrypting system.”
Instead he thinks of World War II as a series of engineering problems, where many of the important “battles” were fought far from the front lines, through new solutions to seemingly intractable problems: How do you stop a Blitzkrieg? How do you overcome the “tyranny of distance” to defeat a country on the other side of the Pacific Ocean? How do you get a fleet of bombers to Berlin and back against determined resistance? How do you get a convoy across the Atlantic through submarine wolf packs?
Kennedy details how there was almost never a single “killer app.” Rather, the means of victory always required many separate solutions, from multiple sources, coming together to unravel complex problems.
An example, one among many, is the defeat of the German Air Force by British and American forces. The P-51 Mustang fighter plane was the closest thing to a “killer app” for this problem—a guardian for the bombers headed for the Reich, it prevented the collapse of the Allies bomber forces in 1944. Prior to the Mustang, no fighter could make it all the way and back, so the bomber forces were regularly slaughtered whenever they travelled over Germany.
But the Mustang was very much a series of linked events, rather than a solitary invention: originally a British-ordered American design with a weak engine, it only became powerful enough to take on German fighters when it received a replacement British Rolls Royce engine. And it was only able to fly to Berlin because of the invention of drop tanks that extended its range, a remarkably low-tech solution that somehow eluded the Air Forces for years.
Leadership mattered. The Mustang was bitterly opposed by the powers that be for some time—the original “Not Invented Here” syndrome that occurred because of its hybrid British-American lineage. It was not until a high-ranking Air Force officer took on the problem that the Mustang was put into production. Thousands of Allied airmen died while the leaders dithered.
But Kennedy is at pains, even in describing the Mustang's path to Berlin, to insist that this was only one part of the eventual solution, with dozens of other decisions along the way playing a part.
Again and again, in Kennedy's telling, it was the mid-level engineers (he uses the term in a very broad sense) who came up with the solutions, often creative types low on the totem pole. The cavity magnetron, the basis for centimeric radar, arguably the most important invention in the war, was created by two Birmingham scholars using materials scrounged from a local scrap metal dealer.
So back to the “War on Cancer.” Maybe the problem is not that the metaphor is tired, but that we never really understood what the metaphor actually implies, and what it takes to win the sort of global war that it will take to defeat the disease.
When Kennedy speaks of over-simplistic appeals to “brute force,” to “wonder weapons,” or to “some magical decrypting system,” the student of cancer can readily translate these as “high-dose chemotherapy,” “kinase inhibitor,” or “genomics.”
Some lessons from Kennedy's book, applied to the “War on Cancer”:
1. Don't focus just on “Turning Points.” We automatically think in terms of imatinib for CML or trastuzumab for HER2-positive cancer. Not to gainsay those great, and very real, victories, but neither drug ended the war. The war is much larger than CML or HER2-positive breast cancer, and the tools needed for those victories are not the tools that will be applicable to most other battles, nor do they even represent a final defeat for those diseases. One is tempted to remember Churchill's description of El Alamein: not the beginning of the end, but the end of the beginning.
2. No single solution exists. This doesn't mean just that imatinib won't cure every cancer, but also that no single category of solutions (kinase inhibitors, antibodies, chemotherapeutics, radiation, surgery) will ever do the trick. Instead, we need to accept the need for combinations of solutions, not just to win the war, but also even to win individual battles. Every problem requires a different solution, or collection of solutions (multidisciplinary clinics, anyone?). Magic bullets didn't end World War II and won't end the “War on Cancer.”
3. Continuous improvement cycles matter. We went from a 20 percent cure rate to a 90+ percent cure rate for ALL without introducing any new drugs. Sometimes learning how to use imperfect tools better (the Mustang was nearly tossed away early on because of its imperfections, as were cisplatin and trastuzumab) is a large part of the solution for some problems. Though not all: remember 20 maddening years of 5-FU studies in colorectal cancer.
4. Logistics matter more than Great Generals. The Germans lost at El Alamein with the war's best general (Rommel) because they could not get enough supplies to their troops. We still lose all too many cancer “battles” because of access and supply issues. It does not matter that we can cure childhood ALL if the dirt-cheap drugs cannot make it to an American clinic, or to Sub-Saharan Africa.
5. Solutions are often either low-tech (drop tanks for the Mustang, arsenic trioxide for APL) or emerge from a place at the bottom of the hierarchy. Often, in both World War II and the War on Cancer, it is the sergeants or the Assistant Professors who find a solution.
6. The role of leadership is to identify problems, and then enable the solutions of others; making sure that the best solutions (not the perfect solutions, for they don't exist) are adequately resourced. And for heaven's sake, when a solution exists, leaders should at least have the good sense to get out of the way and let it bubble up to the top. Churchill, for instance, had an almost uncanny ability to discover and enable low-level problem-solvers over the best (worst?) efforts of his own bureaucracy, in Kennedy's telling.
7. The experts are not always the best judges of what works. Kennedy discusses the lack of functioning American torpedoes for the first two years of World War II: if they didn't work, it was the fault of the “practitioners” (in this case, the submariners), according to the Navy bureau that created the devices. It turns out that the Navy bureau didn't actually understand the problem in “the clinic.” Their models were wrong, better suited to Rhode Island than the South Pacific.
Sound like anything we've dealt with? Effective translational science (bench to bedside to bench) is not an invention of the “War on Cancer.” The necessity of getting the end-users together with the basic researchers was learned through bitter experience in the 1940s.
8. Strategy matters, but only if solutions exist. Brute force rarely does the trick: in Kennedy's words, it is the intelligent application of force that matters. If anyone believes we have applied our forces intelligently (at either a scientific or societal level), I have a bridge I would like to sell you.
So the “War on Cancer” may well be the right metaphor. We just didn't understand what the metaphor actually required of us: a committed effort to complexity, the rejection of simplicity, the enabling of talent at every level, and the effective delivery of the right resources to those fighting in the trenches against a deadly and vicious foe.
© 2013 Lippincott Williams & Wilkins, Inc.
Let's pretend you're out for a jog in the woods. Suddenly, you find yourself surrounded by a pack of wolves. There must be at least eight of them. Panic sets in. What should you do?
Here are some of the key points (Starr gives instructions for the more likely scenario that you've entered a pen of captive wolves, but some of the advice can still apply in the wild).
- DON'T RUN! This will make you look like prey, which is a bad thing. Remember, wolves are HUNTERS
- Don't "stare the animal down." This looks like a threat
- Don't turn your back on the wolves
- Make yourself appear scary: shout, throw stones, raise your arms over your head
- If you've entered an enclosure, back away slowly, moving toward the exit with your back against the fence
- Don't look scared or fall; either will encourage an attack
- If things get really bad, curl into a ball and protect your face
Read Starr's full answer below:
Let me preface this answer by stating that the likelihood of being attacked by wolves is incredibly poor. Not only because wild wolves are very fearful of people and try their best to avoid them, but also because your chances of being where wolves roam freely is also fairly poor. Wolves require about 10 square miles per wolf but since wolves form packs this really means something like 7 wolves together somewhere in a 70 square mile territory. If you are near wolves it's because you worked hard to be there.
A more likely scenario would be if you entered the enclosure of a captive wolf pack that had been habituated to people to some degree. With animals like this the risk of an attack is much higher. For the sake of accuracy, I'd prefer to use the second, much more likely scenario for my response.
Whatever you do, don't run. Wolves are what are known as coursing predators, meaning they take their prey on the run.
If you watch wolves hunt you'll immediately see this in action. Wolves will attempt to get the animals they prey upon to run. If they don't run wolves usually don't pursue the attack.
(Starr's full answer links a video that demonstrates this with arctic wolves and musk oxen.)
Do not try to "stare the animal down." Wolves appear to regard a direct stare as a challenge or a threat.
Do not turn your back on the animal(s). If multiple wolves are threatening you, some of them may try to flank you, as you can see the lead animal doing in the picture above.
Try to make yourself appear large. If you have a jacket or shirt on, raise it above your head.
Shout at the animals.
If you can do so without making yourself vulnerable grab a few stones and throw them at the animals.
Back slowly away.
If you are working in an enclosure, get yourself to a position with your back to the fence and then keeping your back to the fence move towards an exit. Be careful not to trip. A fall could encourage an attack. In any case if you're working with captive animals in enclosures you should be working in pairs or at a minimum be connected to nearby help via a radio. This last advice holds true working with any large and/or wild animals. When you need help, you need it immediately.
If you are noisy, don't exhibit excessive fear and maintain control of yourself, this should be enough to get you out of trouble.
However, if things go downhill from there, your chances become worse the more animals there are. I survived an attack from a single captive male wolf and wrote about that experience in detail in another Quora answer.
Given what I experienced with just a single animal I find it hard to believe a person could fend off two or more wolves for any length of time should they commit to an attack.
At more than one wolf, I myself expect I would curl into the tightest ball possible and try to protect my head, neck, face and sides. Chances are, if the wolves really meant to hurt me, this strategy would only be effective for a very short time...
So, there you go. Safe walking out there.
National 4 Administration and IT
The National 4 Administration and IT Course develops learners' administrative, organisational and IT skills. Learners develop an understanding of administration in the workplace and the key legislation affecting employers, enabling them to contribute to the effective functioning of organisations through administrative positions.
Updates and announcements
This explains the overall structure of the Course, including its purpose and aims and information on the skills, knowledge and understanding that will be developed.
These provide an outline of what each Unit will cover within the Course and detail the Outcomes and Assessment Standards.
Added Value Unit Specification
These define the mandatory requirements for the Added Value Unit, including the Outcomes and Assessment Standards.
They also include the further mandatory information on Course coverage for the National 4 Course and include information on the assessment method to be used and the conditions of assessment.
Advice and guidance
Course and Unit Support Notes
These provide advice and guidance for teachers/lecturers on learning, teaching and assessment within the Course and its Units.
Past Papers and Marking Instructions
Added Value Unit Assessment
The Administration and IT Added Value Unit assessment is an assignment. It is supported by SQA-devised electronic files, so centres should not normally make any changes to the assessment. In the assessment, candidates have to undertake a series of tasks to plan and prepare for an event, communicate information and complete follow-up activities, making use of technology where appropriate.
Centres can choose from a bank of these assessments, available from September 2013. Centres can access the assessments through their SQA Co-ordinator.
As with all SQA Unit assessments, Added Value Unit assessments must be internally verified by centres.
Unit Assessment Support
These documents contain details of Unit assessment task(s), show approaches to gathering evidence and how the evidence can be judged against the Outcomes and Assessment Standards. Teachers/lecturers can arrange access to these confidential documents through their SQA Co-ordinator.
Understanding Standards materials
We are publishing examples of candidate evidence with commentaries as part of our Understanding Standards programme. These materials are for teachers and lecturers to help them develop their understanding of the standards required for assessment. As these materials become available, they are being published in the following locations:
- Available from our secure website Materials relating to Unit assessment, internally assessed components of Course assessment, and externally assessed components of Course assessment which are subject to visiting assessment. Teachers and lecturers can arrange access to these materials through their SQA Co-ordinator.
- Available from our Understanding Standards website Materials relating to externally assessed components of Course assessment, with the exception of those subject to visiting assessment.
More information on our Understanding Standards programme, can be found on our Understanding Standards page.
Panchayati Raj Institutions (PRIs)
Pranitha BT 008

Introduction
It was Gandhi who realized the importance of the village panchayat as an instrument of rural development, and for promoting and nurturing democracy at the grass roots. In January 1957, the Government of India appointed a committee to examine the Community Development Programme (CDP) and suggest how best it could be implemented. The committee recommended a three-tier system of local government, christened "Panchayati Raj" by Jawaharlal Nehru. The committee offered two broad directional thrusts:
a) There should be administrative decentralization for effective implementation of development programmes, and
b) The decentralized administrative system should be under the control of elected bodies.

The new system of PRIs was first adopted in Rajasthan and Andhra Pradesh in 1959. The Indian Parliament passed the 73rd Constitution Amendment Act in December 1992. It envisages the establishment of panchayats as units of local self-government in all states and union territories, except the tribal areas of Nagaland, Meghalaya and Mizoram and certain other scheduled areas.

The panchayats receive funds from three sources:
(i) local body grants, as recommended by the Central Finance Commission;
(ii) funds for the implementation of centrally-sponsored schemes; and
(iii) funds released by the state governments on the recommendations of the State Finance Commissions.

Powers and responsibilities delegated to panchayats at the appropriate level include:
- preparation of plans for economic development and social justice;
- implementation of schemes for economic development and social justice in relation to the 29 subjects given in the Eleventh Schedule of the Constitution; and
- the power to levy, collect and appropriate taxes, duties, tolls and fees.

Salient features of the 73rd Constitution Amendment Act, 1992
1. The gram sabha has been envisaged as the foundation of the Panchayati Raj system.
2. There shall be three tiers of panchayats: at the village, intermediate and district levels.
3. Seats in a panchayat at every level are to be filled by direct election from territorial constituencies demarcated for this purpose.
4. Seats shall be reserved at every level of panchayat for the Scheduled Castes and Scheduled Tribes in proportion to their population in a given panchayat area, and for women to the extent of not less than one third of the total number of seats.
5. The term of office of panchayats shall be five years, and elections must be completed before the expiry of that term. If a panchayat is dissolved earlier, elections must be completed within six months of the date of dissolution.
6. A State Finance Commission shall be constituted in every state to go into the principles governing the distribution and devolution of financial resources between the panchayats and the state.
7. The superintendence, direction and control of the preparation of electoral rolls and the conduct of all elections to panchayats shall be vested in a State Election Commission.
8. The Eleventh Schedule has been added to the Constitution; it lists the 29 subjects/functions that may be entrusted to the PRIs.

Village-level panchayat
The local body at the village level is called a Panchayat; it works for the good of the village. The number of members usually ranges from 7 to 31; occasionally groups are larger, but they never have fewer than 7 members. The block-level institution is called the Panchayat Samiti, and the district-level institution the Zilla Parishad.

Intermediate-level panchayat
The Panchayat Samiti is a local government body at the tehsil or taluka level. It works for the villages of the tehsil or taluka that together form a Development Block, and it is the link between the Gram Panchayat and the district administration. There are a number of variations of this institution across the states: it is known as the Mandal Praja Parishad in Andhra Pradesh, the Taluka Panchayat in Gujarat, the Mandal Panchayat in Karnataka, and so on.

Constitution: The samiti is composed of ex-officio members (all sarpanchas of the panchayat samiti area, the MPs and MLAs of the area, and the SDO of the subdivision), co-opted members (representatives of the SCs/STs and of women), associate members (a farmer of the area, a representative of the cooperative societies and one of the marketing services) and some elected members. It is elected for five years and is headed by a chairman and a deputy chairman.

Departments: The common departments in the samiti are general administration, finance, public works, agriculture, health, education, social welfare, information technology and others.

Functions:
- implement schemes for the development of agriculture;
- establish primary health centres and primary schools;
- supply drinking water, provide drainage, and construct and repair roads;
- develop cottage and small-scale industries and open cooperative societies; and
- establish youth organizations.

Sources of income: The main sources of income of the panchayat samiti are grants-in-aid and loans from the state government.

District-level panchayat
At the district level of the Panchayati Raj system sits the Zilla Parishad. It looks after the administration of the rural areas of the district, and its office is located at the district headquarters. (The Hindi word "parishad" means council, so "Zilla Parishad" translates as "District Council".) It is headed by the District Collector (also styled the District Magistrate or Deputy Commissioner) and is the link between the state government and the panchayat samitis.

Functions:
1. Provide essential services and facilities to the rural population, and plan and execute development programmes for the district.
2. Supply improved seeds to farmers and inform them of new techniques; undertake construction of small-scale irrigation projects and percolation tanks; maintain pastures and grazing lands.
3. Set up and run schools in villages, execute programmes for adult literacy, and run libraries.
4. Start primary health centres and hospitals in villages, mobile hospitals for hamlets, vaccination drives against epidemics, and family welfare campaigns.
5. Construct bridges and roads.
6. Execute plans for the development of the Scheduled Castes and Tribes; run ashram shalas for adivasi children; set up free hostels for Scheduled Caste students.
7. Encourage entrepreneurs to start small-scale industries such as cottage industries, handicrafts, agricultural produce processing mills and dairy farms, and implement rural employment schemes.
8. Construct and maintain roads, schools and other public property.
9. Provide work for the poor, including tribal people and members of the Scheduled and lower castes.

Sources of income:
1. Taxes on water, pilgrimage, markets, etc.
2. A fixed grant from the state government in proportion to the land revenue, and money for the works and schemes assigned to the Parishad.

Conclusion
It is hoped that PRIs will emerge stronger and more dynamic to face the various challenges and problems that still lie ahead of them. Their success will depend on the extent to which rural development functions, and financial and administrative powers, are transferred to them by the state governments. Given these features of panchayats and their elected heads, there is a need to build the capacity of elected representatives through education and training. This is all the more necessary given the variety of administrative and financial functions the elected leaders are expected to perform, as well as the ambivalent attitude of the bureaucracy and the reluctance of state leaders to part with power.
UNGEI global conference on girls’ education focuses on preventing “56 million wasted opportunities”
The “Engendering Empowerment: Education and Equality” (E4) conference on gender equality and education is being organized by the United Nations Girls' Education Initiative (UNGEI) and marks the tenth anniversary of the UNGEI global partnership. In the last decade, there has been progress in girls’ education and many more girls and boys have been enrolled in schools worldwide. Gender gaps have closed or are closing in most regions, including central and Eastern Europe, East Asia and Latin America.
But despite this progress, some 56 million children – over half of whom would be girls – could still be out of school in 2015 if current trends continue. Many countries missed the 2005 Millennium Development Goal (MDG) benchmark and other international targets on education and gender parity. More than two-thirds of the children not in school today live in sub-Saharan Africa and south and west Asia.
“At current rates of progress, 56 million lives will be blighted by lack of access to education in 2015, and 56 million opportunities to promote economic development will have been missed,” said Anthony Lake, UNICEF Executive Director. “This conference aims to map a better future for children who are already marginalized and vulnerable and who may fall farther behind unless we can provide them with access to education.”
The conference brings together a global mix of more than 200 scholars, government representatives, civil society representatives and other development partners to examine how to improve children’s access to a classroom and so to a better life. Poverty, violence, poor health and climate change often prevent girls from enrolling in school. Poor educational quality keeps them from staying in school.
Various studies have shown that educated girls grow into agents of change for their families, economies, and societies. Unleashing the potential of girls by providing them a quality education is a highly effective tool to address poverty, fight disease, and improve economies.
Additional funding of approximately US $16 billion annually is required to achieve universal primary education and meet education goals by 2015, according to the latest Education for All Global Monitoring Report estimates. Uncertainty about existing commitments is also inhibiting education planning in some countries most in need.
If the 2015 MDG targets of universal primary education and gender equality are to be met, then urgent action is needed now.
In addition to the UNICEF Executive Director and the Prime Minister of Senegal, speakers addressing the conference include the World Bank Director of Education, the Head of Education Indicators and Data Analysis at the UNESCO Institute of Statistics, and a panel of experts on violence against women and girls in post-conflict contexts.
Attn: Broadcasters: Video packages, B-roll and high resolution photographs will be available on www.thenewsmarket.com/unicef
The United Nations Girls’ Education Initiative (UNGEI) is a partnership of organizations committed to narrowing the gender gap in primary and secondary education. It also seeks to ensure that, by 2015, all children complete primary schooling, with girls and boys having equal access to free, quality education. UNGEI was launched in April 2000 at the World Education Forum in Dakar, Senegal, by then United Nations Secretary-General Kofi Annan in response to a troubling reality: Of the millions of children worldwide who were not in school, more than half were girls – a reality that continues today. To read more about UNGEI, visit: www.ungei.org
For further information, please contact:
Martin Dawes, UNICEF Media, West and Central Africa, Tel: + 221 775 69 19 26, email@example.com
Gaelle Bausson, UNICEF Media, West and Central Africa, Tel: + 221 338 69 76 42, firstname.lastname@example.org
Shimali Senanayake, UNICEF Media, New York, Tel: + 1 917 265 4516, email@example.com
John Perkins (1930- ), community organizer and social and racial peacemaker, was born into a sharecropper's family in New Hebron, Mississippi. After he endured economic and racial hardship in Mississippi, his relatives sent him to California. After serving in the military during the Korean War, Perkins returned to Pasadena, where he experienced a conversion during a Sunday-school class with his son in 1957. He subsequently became a missionary--employing what he described as a holistic mission--to young black individuals, and returned to Mississippi to help those oppressed by Jim Crow laws.
Perkins’s winsome personality and connections with white evangelicals allowed him to gain a significant amount of emotional and financial support from both white evangelicals and the black community for his organization, the Voice of Calvary. With this support Perkins employed his holistic mission, which centered on the development of individuals’ inner spiritual and behavioral needs together with their outer material needs. Perkins is perhaps best known for his model of Christian community development. Employing this model he created the Southern Cooperative Development Fund which provided finances and leadership to a number of communities in Mississippi, then to communities in other states and finally abroad. In 1983 this same model was used to create the John M. Perkins Foundation for Reconciliation and Development which provides leadership to hundreds of faith-based community development organizations.
For further reading see S.E. Berk, A Time to Heal: John Perkins, Community Development and Racial Reconciliation (Baker, 1997).
Extended-duration stratospheric flights of large science instruments at mid-latitudes are a goal of the National Aeronautics and Space Administration's (NASA) Balloon Program Office. Balloons near the poles fly in almost constant sunlight, but balloons flying at mid-latitudes experience day-night cycles, which limit the flight duration of conventional balloons. Super Pressure Balloons offer the promise of extended-duration mid-latitude flights. The goal of these flights is to lift a ton of science to greater than 33.5 km (~110,000 ft).
The project aims to develop a large pumpkin-shaped Super Pressure Balloon (SPB) that will fly at a near-constant pressure altitude for extended periods. Compared with traditional zero-pressure balloons, float altitude excursions can be reduced by an order of magnitude or more, and no ballast need be flown for altitude stabilization, even at mid-latitudes. The project approach has focused on incremental steps up in balloon volume and payload-carrying capability toward the defined program-level requirements.
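To get a feel for the scale such a balloon implies, the displaced volume needed to float a given system mass can be estimated from buoyancy. The sketch below is illustrative only: it assumes a simple isothermal-atmosphere density model (sea-level density 1.225 kg/m^3, 7 km scale height), a helium-filled envelope, and a guessed total system mass of 3,000 kg (payload plus balloon and gondola); none of these figures are NASA's.

```python
import math

def air_density(alt_m, rho0=1.225, scale_height=7000.0):
    """Approximate air density (kg/m^3) from an isothermal-atmosphere model."""
    return rho0 * math.exp(-alt_m / scale_height)

def required_volume(total_mass_kg, alt_m):
    """Displaced volume (m^3) at which helium buoyancy supports total_mass_kg.

    Net lift per m^3 is rho_air - rho_helium; at equal temperature and
    pressure the gas densities scale with molar mass (~4 g/mol for He
    versus ~29 g/mol for air).
    """
    rho_air = air_density(alt_m)
    rho_he = rho_air * (4.0 / 29.0)
    return total_mass_kg / (rho_air - rho_he)

# Assumed system: ~1,000 kg of science plus ~2,000 kg of balloon and gondola.
vol = required_volume(3000.0, 33500.0)
print(f"{vol:,.0f} m^3")
```

Even with these rough assumptions the answer lands in the hundreds of thousands of cubic metres, which is why stepping up balloon volume incrementally is itself the engineering challenge.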
Check back for the latest updates from NASA's Balloon Program Office.
Water quality affects property values
Residents of Potato Lake, convening for the annual meeting, reviewed a study on the relationship between lakeshore property values and water quality in the Mississippi Headwaters Region.
Bottom line: While properties on "prestige lakes" with lawns down to the shoreline currently bring the highest prices, they are likely to degrade water quality.
"Fine lawns can foul lakes," former Bemidji State University professor Dr. Charles Parson told his audience.
And in the long run, a decline in water clarity will cause a significant decrease in property values. The study concluded that a one-meter clarity loss on 3,700 lakes in Minnesota (a third of the state's total) could result in a $100 billion drop in property value.
At present, the overall lakeshore values are endangered by the actions of property owners who wish to maximize their personal gains, the study states.
Dr. Patrick Welle, who holds a Ph.D. in economics, and Parson, with a Ph.D. in alpine geomorphology, said the question they were asked to address was "if water quality declines, what does it do to lakeshore property values?"
Secchi disc readings, measuring water transparency on lakes, were a key tool in the study.
"How much more are people willing to pay for a five-foot reading as opposed to 15?" Parson asked. "Water clarity is the only measure of lakes we have over time," he said, as opposed to dissolved oxygen, eutrophic indicators or the quality of fishing.
The study sample included 37 lakes with property sold between 1996-2001, just over 1,200 properties. Lakes were assigned to six groups that best approximated market areas, similar to a pilot study in Maine.
The region included lakes within the Park Rapids, Walker, Bemidji, Brainerd, Aitkin and Grand Rapids areas.
A shared resource
"We found the market was rewarding predominantly damaging land use practices," Welle said. People tend to look at economics from a personal impact, Welle said. "But lakes are a shared resource... That's why we need lake associations."
The challenge, he said, is to move from thinking individualistically (this is my property) to considering water bodies as a shared resource.
Water clarity was found to have a positive influence on property values, but site quality also had a significant impact. "More damaging lakeshore practices increased sales prices in most cases," the study found.
The carefully manicured, "prestige" lakeshore sends fertilizer into the lake. The lakeshore often is damaged by wave erosion and lack of shaded water, which is amenable to "undesirable species," all of which is potentially damaging to clarity.
One of the first hedonic pricing studies was conducted in Chicago neighborhoods, with air quality as the variable, Welle said. "The study found a substantial market reward for better air quality."
"Water quality is a variable people can relate to," Welle said of a walk on the dock. "Lakeshore is individual property, but a critical part of the ecosystem."
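The hedonic approach the researchers describe can be illustrated with a toy regression: regress the log of sale price on water clarity and a site attribute, then read the clarity coefficient as the percentage price premium per metre of transparency. Everything below is synthetic and assumed for illustration (a 12% per-metre premium, a frontage effect, the sample size); it is not the study's data or method in detail.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
clarity = rng.uniform(1.0, 5.0, n)      # Secchi depth, metres (assumed range)
frontage = rng.uniform(30.0, 100.0, n)  # lakeshore frontage, metres (assumed)

# Synthetic "market": a 1 m clarity gain adds ~12% to price.
log_price = 11.0 + 0.12 * clarity + 0.008 * frontage + rng.normal(0.0, 0.05, n)

# Ordinary least squares fit of log price on an intercept, clarity, frontage.
X = np.column_stack([np.ones(n), clarity, frontage])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

premium_pct = 100.0 * (np.exp(beta[1]) - 1.0)
print(f"estimated clarity premium: {premium_pct:.1f}% per metre")
```

The same log-linear reading is why a one-metre clarity loss translates into a roughly constant percentage hit across properties, and hence a very large aggregate dollar figure when summed over thousands of lakes.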
Professors and students from BSU studied Beauty Lake in northern Hubbard County in regard to implementation of best management practices (BMP).
The study models "what can happen when lakeshore development is not done using current knowledge of landscaping impacts."
The lake, Parson said, was pristine up until a decade ago when Pan o' Gold sold it and a developer platted it into lots.
The BSU research process involved developing a shoreland erosion potential model, based on an established one. Global information system base files were created for future evaluation and to run simulations of the effects caused as development took place.
The study found if the lake were left in a heavily forested natural state, the lake would fill in over a 37,000-year lifecycle to become a wetland.
"Normal landscaping," removing some canopy trees and planting grass, would decrease the lake's lifespan to 370 years, due to erosion.
"But what if we clear more of the trees and clear the beach and add a path that leads to the water?" which is "pretty much business as usual." The study concludes the lake will fill in 90 years, nutrient loading will shift the lake quickly to eutrophic, and finer sediment will cloud the water.
"Environmental quality is the big loser," the study concludes, "from a real beauty to a soupy pond in a few decades."
"Academically, it's an interesting site; otherwise it's a pending disaster," the study states, warning, "Property values may plummet."
Lessons from the model: A single lot with poor practices and significant erosion can offset 100 lots using BMPs.
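The scaling behind both the Beauty Lake scenarios and this offset rule is simple proportionality: if sediment delivery scales with erosion, the lake's fill time varies inversely with the erosion multiplier. The multipliers below are back-calculated from the lifespans quoted above (37,000, 370 and ~90 years), not taken from the study's model itself.

```python
NATURAL_LIFESPAN_YR = 37_000  # heavily forested, undisturbed shoreline

def lifespan(erosion_multiplier):
    """Fill time, assuming sediment delivery scales linearly with erosion."""
    return NATURAL_LIFESPAN_YR / erosion_multiplier

print(lifespan(100))          # "normal landscaping": 100x erosion -> 370 years
print(round(lifespan(411)))   # cleared beach plus path: ~411x -> ~90 years

# The offset rule: one lot eroding at 100x the BMP rate delivers as much
# sediment as 100 well-managed lots combined.
bmp_rate = 1.0                       # arbitrary units per lot per year
one_poor_lot = 100 * bmp_rate
hundred_bmp_lots = 100 * bmp_rate
print(one_poor_lot == hundred_bmp_lots)
```

The inverse relationship is why a single badly managed parcel dominates: halving the worst lot's erosion does more for the lake's lifespan than marginal improvements spread across many already-compliant lots.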
"It won't take many lots with insensitive management to decrease everyone's property values as the lake's quality declines," the study states.
Lakeshore associations should exert peer pressure, Parson said.
Lakes may experience a loss in market among people who shun constraint, but there's security in knowing your neighbors abide with restrictive covenants, Welle said.
|A practical generator. In this diagram the iron core
that fills the space between the axle and the rotating coils has been
removed for clarity. The magnetic field is provided by the two electromagnet
coils ('field windings') which also have iron cores.
Also called a dynamo, a generator is a device for converting
mechanical energy into electrical energy.
Generators originated with the discovery of induction by Michael Faraday
in 1831; the considerable advantages of electromagnets over permanent magnets
were first exploited by E. W. von Siemens in 1866.
|The top picture shows a single coil of wire placed
in a magnetic field. As the coil is turned it cuts across the lines
of force and (if it forms part of a complete circuit) a current is
produced. When the coil is in the position shown in the second picture
it is moving along the lines of force without cutting them. No current
is produced here. In the last two pictures the red side of the coil
again cuts lines of force but this time it is moving upward, so the
direction of current is reversed.
Traditional forms are based on inducing electric
fields by changing the magnetic field
lines through a circuit (see electromagnetic
induction). All generators can be, and sometimes are, run in reverse
as electric motors.
The simplest generator consists of a permanent magnet
(the rotor) spun inside a coil of wire (the stator); the magnetic field
is thus reversed twice each revolution, and an AC voltage is generated at
the frequency of rotation (see also magneto).
Equivalent to this is rotating a coil of wire between the poles of a permanent
magnet, as shown immediately below and in the illustration to the right.
|A simple generator. On the right the coil is seen
from the end making one complete revolution. The size of the current
in each of the eight stages varies as shown by the curve. At 'e' (coil
vertical) the current reverses.
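The curve described in that caption follows the familiar relation e(t) = N·B·A·ω·sin(θ). A minimal numeric sketch of the eight stages (the turn count, field strength, coil area and rotation speed are illustrative assumptions, not values from the article):

```python
import math

def coil_emf(theta_deg, turns=100, field_t=0.5, area_m2=0.01,
             omega=2 * math.pi * 50):
    """Instantaneous EMF (volts) of a coil rotating in a uniform
    magnetic field: e = N * B * A * omega * sin(theta), where theta
    is the angle turned from the position of maximum flux."""
    return turns * field_t * area_m2 * omega * math.sin(math.radians(theta_deg))

# The eight stages of one revolution, as in the illustration:
for theta in range(0, 360, 45):
    print(f"{theta:3d} deg: {coil_emf(theta):+8.2f} V")
```

The zero crossings at 0° and 180° correspond to the 'vertical' coil positions in the diagram, and the sign flips between the two halves of the revolution, giving the alternating current described above.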
In practical designs (see top illustration), the rotor is usually an electromagnet
driven by a direct current obtained by rectification of a part of the voltage
generated, and passed to the rotor through a pair of carbon brush/slip ring
contacts. The use of three sets of stator coils 120° apart allows generation
of a three-phase supply (see also armature).
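The three-phase arrangement can be illustrated numerically: three coils spaced 120° apart produce three sinusoids whose instantaneous sum is zero, which is what allows a balanced three-phase supply. A minimal sketch (unit peak amplitude is an assumption for illustration):

```python
import math

def three_phase(theta_deg, peak=1.0):
    """EMFs of three stator coils spaced 120 degrees apart,
    for a rotor at angle theta (unit peak amplitude assumed)."""
    t = math.radians(theta_deg)
    return tuple(peak * math.sin(t - k * 2 * math.pi / 3) for k in range(3))

# At any rotor angle the three phases sum to (numerically) zero.
for theta in (0, 30, 77):
    a, b, c = three_phase(theta)
    print(f"{theta:3d} deg: {a:+.3f} {b:+.3f} {c:+.3f}  sum = {a + b + c:+.1e}")
```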
Direct current generation
Simple DC generators consist of a coil rotating in the field of a permanent
magnet: the voltage induced in the coil alternates at the frequency of rotation,
but it is collected through a commutator – a split-ring broken into
two semicircular parts, to each of which one end of the coil is connected,
so that the connection between the coil and the brushes is reversed twice
each revolution – resulting in a rapidly pulsating direct voltage.
A steadier voltage can be achieved through the use of multiple coil/commutator
arrangements, and except in very small generators, the permanent magnet
is again replaced by an electromagnet driven by part of the generated voltage.
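The smoothing effect of multiple coil/commutator pairs can be sketched numerically. This is an idealized model, not a real machine: unit amplitudes are assumed, and the brushes are taken always to carry the coil with the largest instantaneous EMF.

```python
import math

def dc_output(theta_deg, n_coils):
    """Brush voltage of n_coils identical coils spaced evenly around
    the axle, each on its own commutator segments: idealized as the
    largest instantaneous |EMF| among the coils (unit peak)."""
    theta = math.radians(theta_deg)
    spacing = math.pi / n_coils
    return max(abs(math.sin(theta + k * spacing)) for k in range(n_coils))

# Ripple (peak minus trough) shrinks as coils are added:
for n in (1, 2, 4):
    samples = [dc_output(t, n) for t in range(360)]
    print(f"{n} coil(s): ripple = {max(samples) - min(samples):.3f}")
```

A single coil dips to zero twice per revolution; with two or four coils the trough never falls far below the peak, which is the "steadier voltage" described above.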
A generator can be made to give a 'one-way' or direct
current by connecting the ends of the coil to the two halves of a
split-ring or commutator. This device neatly puts whatever is the
'outgoing' end of the coil onto the same brush at the moment the coil
comes up to the vertical and reverses the flow. In the first of the
diagrams above, the red side of the coil is moving downward and the
current produced in it flows out of the coil into the right hand brush.
In the second diagram the red side of the coil is moving upward and
now the current produced in it flows into the coil. But by this time
the red side of the coil is connected to the left hand brush. So the
current still flows out of the right hand brush, through the lamp,
and re-enters the generator at the left hand brush as it did in the
first diagram.
Although the commutator ensures that current always
flows in the same direction, it does not prevent the current from
falling to zero each time the coil reaches the vertical position.
No current is produced when the coil is vertical because it moves
along the lines of force instead of cutting across them. With a number
of coils it is possible to have the current in one reaching a maximum
when the current in another is zero. The commutator in that case consists
of several pairs of segments arranged around the axle instead of the
two halves of the split ring. The segments are insulated from each
other, and the ends of each coil are connected to opposite segments.
For large-scale generation, the mechanical power is usually derived from
steam turbines, or from dam-fed water turbines,
and the process is only moderately efficient. The magnetohydrodynamic generator
avoids this step and has no moving parts either. A hot conducting fluid
(treated coal gas, or reactor-heated liquid)
passes through the field of an electromagnet, so that the charges are forced
in opposite directions producing a DC voltage. In another device, the electrogasdynamic
generator, the voltage is produced by using a high speed gas stream to
pump charge from an electric discharge, against the electric field, to a
collector electrode.
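The MHD principle is just the motional-EMF relation: charges in a conducting fluid moving at speed v across a field B, between electrodes a distance d apart, see an ideal open-circuit voltage V = B·v·d. A quick sketch with made-up illustrative numbers:

```python
def mhd_voltage(field_t, speed_m_s, gap_m):
    """Ideal open-circuit voltage of an MHD channel: V = B * v * d."""
    return field_t * speed_m_s * gap_m

# e.g. a 2 T magnet, gas at 1000 m/s, electrodes 0.5 m apart:
print(mhd_voltage(2.0, 1000.0, 0.5))  # 1000.0 volts
```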
Tuesday, July 12, 2005
Nearly one-half of the women who gave birth in Canada in 2003 were age 30 and older, according to new data on births. In fact, in Ontario and British Columbia, mothers age 30 and older were already in the majority.
This reinforces the long-term trend among Canadian women; they have been waiting longer and longer to start families. Two decades ago, three-quarters of moms in Canada were under 30.
Nationally in 2003, 48% of mothers were age 30 and older when they gave birth, and 52% under. But in Ontario, 54% were age 30 and older, as were 53% in British Columbia.
Older mothers have been in the majority in Ontario since 1999, and in British Columbia since 2001.
Conversely, in the territory of Nunavut, over three-quarters (77%) of mothers in 2003 were under the age of 30.
There were 335,202 births in Canada in 2003, up 1.9% from the previous year.
Slight increase in crude birth rate
The crude birth rate (the number of live births for every 1,000 people in the population) rose to 10.6 in 2003, recovering slightly from the record low set in 2002.
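The crude birth rate is straightforward arithmetic, and the figures in this release can be cross-checked. The population value used here, roughly 31.6 million for 2003, is an assumption; the release itself does not state it:

```python
def crude_birth_rate(births, population):
    """Live births per 1,000 people in the population."""
    return births / population * 1000

print(round(crude_birth_rate(335_202, 31_600_000), 1))  # 10.6
```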
During the past 10 years, the number of births and the crude birth rate rose only twice: in 2001 and in 2003.
In 2003, the number of births increased in all provinces and territories except Newfoundland and Labrador, Nova Scotia and Yukon, which recorded small decreases.
Among the provinces, the biggest gain occurred in Prince Edward Island at 6.7%. However, the Northwest Territories had Canada's biggest increase in births at 10.4%.
The neighbouring provinces of Alberta and British Columbia provided an interesting contrast. The number of births in the two provinces was virtually equal, around 40,300. The number of births rose 4.1% in Alberta, but only 1.1% in British Columbia. In terms of absolute numbers, this was a gain of almost 1,600 babies in Alberta, but only 431 in British Columbia.
As a result, the crude birth rate in Alberta was 12.8 for every 1,000 population in 2003, compared with 9.8 in British Columbia.
Average age on long upward trend
The average age of women giving birth in Canada in 2003 was 29.6 years, continuing a long-established upward trend. Two decades ago, the average age was 26.9 years.
The oldest mothers on average were in Ontario and British Columbia (30.3 years and 30.2 years respectively), while the youngest were in Nunavut (25.3 years).
Among women giving birth for the first time, the average age was 28.0 years in 2003. The oldest first-time mothers on average were in British Columbia, at 28.8 years, followed closely by Ontario at 28.7.
Nunavut had the youngest first-time mothers with an average age of 21.7.
Low birth weight higher in younger and older mothers
Low birth weight has long been a public health concern because of its relationship to poor infant health and mortality.
It is mothers at the lower and upper ends of the age spectrum who have the highest rates of low birth weight babies. In 2003, 6.7% of babies born to teenage mothers and the same proportion of babies born to mothers age 35 to 39 weighed less than 2,500 grams at birth.
However, it was mothers age 40 and older who had the highest proportion of low birth weight babies, 8.4% in 2003.
The vast majority of babies born in Canada have a healthy weight at birth. Fewer than 6% of babies born in 2003 had a birth weight under 2,500 grams, the same proportion seen in each year of the last two decades.
Slight gain in fertility rate
The total fertility rate estimates the average number of children women aged 15 to 49 will have in their lifetime. In 2003, it increased slightly to 1.53 children per woman, up from 1.50 in 2002.
The lowest fertility rate for Canada was set in 2000, at 1.49 children per woman.
Nunavut continued to have the highest total fertility rate of any province or territory, at 3.1 children per woman in 2003, followed by the Northwest Territories at 2.0 children per woman.
In contrast, Newfoundland and Labrador recorded the lowest total fertility rate, 1.3 children per woman in 2003.
Despite having the highest average ages of first-time mothers in Canada, Ontario and British Columbia did not have the lowest fertility rates.
Ontario's fertility rate in 2003 was 1.5 children per woman, in the middle of the range, while British Columbia's fertility rate of 1.4 children per woman ranked third lowest.
Definitions, data sources and methods: survey number 3231.
The publication Births, 2003 (84F0210XIE, free), which contains tables on live births and stillbirths, is now available.
Day-Night Temperatures and CO2 Enrichment
Carbon dioxide is an odorless gas which makes up only about 300 ppm of our atmosphere, yet dried plant material contains an average of 40% carbon, all of which comes from CO2. We therefore need to consider CO2 a major plant nutrient: one that affects growth rate and yield, and one that needs to be supplied in adequate quantities if crop growth is to be maximized.
The main plant process a grower needs to consider is 'photosynthesis', as this is what drives growth, development and production. Photosynthesis is a reaction which occurs within the leaf tissue and requires light of the correct wavelength, water and carbon dioxide to produce assimilates (sugars), which are used for growth and development; as a by-product, oxygen is released into the environment. When artificial lights are used to grow plants, the aim is to provide just the right intensity and wavelengths for optimal photosynthesis. Hydroponic plants also usually have more than sufficient water and nutrients, so in an enclosed environment the limiting factor in photosynthesis becomes the availability of carbon dioxide (CO2). In a well-sealed growing environment under good lighting, CO2 begins to limit photosynthesis very rapidly. Ambient CO2 levels in the air are around 360 ppm, which is relatively low, and even a small population of actively photosynthesizing plants can use this up within a couple of hours. In fact, CO2 can drop to only a few ppm in a well-sealed growing environment, and when this happens, if the CO2 is not replaced, photosynthesis and plant growth stop.
Not only is it important to prevent CO2 depletion; enrichment to levels much greater than atmospheric is known to boost plant growth by over 40%. Both the level and the timing of enrichment need to be considered, since all methods of CO2 enrichment have a cost involved. Since plants only require, take up and use CO2 when photosynthesizing in light, enrichment only needs to occur when the lights are on or during daylight hours; enrichment at night is pointless, since the extra CO2 won't be taken up by the plants and will just accumulate. Secondly, enrichment levels need to be high enough to replace the CO2 used by the plants and to raise the CO2 in the environment to a level where it will accelerate photosynthesis and therefore plant growth. Levels of 800 - 1800 ppm have proven optimal for the majority of crops grown under protected cultivation, and CO2 monitoring equipment then becomes important to make sure this level is reached and maintained. CO2 enrichment will have its greatest effect on accelerating photosynthesis and growth where other factors are also optimal - that is, where there is sufficient light for photosynthetic reactions and temperatures are not limiting. Temperatures can be run a little higher where CO2 is enriched and light levels are optimal - generally in the range 27°C (80°F) to 32°C (92°F) day temperatures for most flowering and fruiting plants.
CO2 enrichment to levels of at least 800 ppm has been shown to increase the growth rate, yields and earliness of harvest of many crops and is certainly economically viable for most high value crops.
Supplying CO2
The two most commonly used methods for CO2 enrichment of a growing area are burning hydrocarbon fuels such as natural gas or propane, and compressed, bottled CO2. There are a few other, less practical ways: dry ice, fermentation, burning candles and oil lamps, and decomposition of organic matter.
CO2 generators are widely available for use in growing areas, and this is less expensive than using bottled CO2. The major problem with burning fuel to create CO2 is that heat is produced as a by-product - this may be useful under cooler conditions, but not if the environment is already sufficiently warm. As the CO2 is introduced to the greenhouse, it needs to be thoroughly mixed with the use of a circulation fan.
Compressed, bottled CO2 is the safer option for plant enrichment, in that no toxic by-products or additional heat are produced. Compressed CO2 comes in cylinders stored under high pressure (1600-2200 psi). Equipment such as a pressure regulator, flow meter, solenoid valve and timer is required to set up this type of enrichment system. CO2 is injected into the growing area via the pressure regulator and flow meter, which are controlled by the solenoid and timer. One pound of compressed CO2 yields about 8.5 cubic feet of CO2 gas at normal atmospheric pressure.
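These figures are enough to size an enrichment dose. A minimal sketch (the room dimensions are hypothetical; ambient is taken as the roughly 360 ppm mentioned earlier, and leakage and plant uptake are ignored):

```python
def co2_needed_ft3(room_ft3, target_ppm, ambient_ppm=360):
    """Cubic feet of pure CO2 to raise a sealed volume from
    ambient_ppm to target_ppm (no leakage or uptake assumed)."""
    return room_ft3 * (target_ppm - ambient_ppm) / 1_000_000

def co2_needed_lb(room_ft3, target_ppm, ambient_ppm=360, ft3_per_lb=8.5):
    """Pounds of bottled CO2 for the same dose, using ~8.5 ft^3
    of gas per pound at atmospheric pressure."""
    return co2_needed_ft3(room_ft3, target_ppm, ambient_ppm) / ft3_per_lb

room = 10 * 10 * 8  # hypothetical 10 ft x 10 ft room with an 8 ft ceiling
print(f"{co2_needed_ft3(room, 1200):.2f} ft^3")  # 0.67 ft^3
print(f"{co2_needed_lb(room, 1200):.3f} lb")     # 0.079 lb
```

The small quantities involved show why replenishment, rather than the initial charge, dominates CO2 consumption: the plants draw the level back down continuously.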
Very small tightly sealed growing areas can use dry ice to provide CO2 enrichment - this also gives some cooling effect. Dry ice is solid, very cold CO2 and needs to be stored and handled with care. Dry ice can also 'melt' very rapidly in warm conditions, so may need to be well managed to ensure a continual supply of CO2 at the correct level.
No matter which method of enrichment is used, it is important first to bring the environment up to the predetermined level and then to constantly replenish to this level as the plants absorb the CO2. The rate of CO2 absorption will change with plant size, temperature and light level, and this is why constant monitoring of levels is important.
For the majority of flowering and fruiting plants produced hydroponically, plant growth and flowering will be optimal under conditions where the night temperature is lower than the day temperature. Most plant species exhibit 'diurnal rhythms', where certain plant processes such as the rate of growth of the flower buds, stomatal opening, discharge of perfume from flowers, cell division and metabolic activity occur more rapidly at a certain time within a 24-hour period. For example, photosynthesis in most plants is known to reach a maximum just before noon, and cell division seems to reach a maximum just before dawn. Many species flower or grow well only when temperatures during the part of the diurnal cycle that normally comes at night are lower than temperatures during the day. Also, light given during the normal night period may actually inhibit some plant processes.
Plants such as tomatoes seem to be particularly sensitive to the alternation in temperature between day and night: they produce more flowers when night temperatures are lower than day temperatures. This effect is called 'thermoperiodism' and is common among many plant species. Pepper plants also require lower night than day temperatures for good production; it has been found that many more buds on pepper plants will develop into open flowers when night temperatures are at least 6°C (11°F) lower than day temperatures. Where day and night temperatures remain at similar levels on a long-term basis, flowering and fruiting can be adversely affected, particularly where temperatures are warm. Bud, flower and fruitlet abscission is much more common on crops which do not receive lower night temperatures, and this often limits production of crops such as tomatoes and peppers under tropical conditions.
Night temperatures for most plants are optimal at around 18°C (65°F) to 24°C (75°F), lower than day temperatures, provided day temperatures are held at optimal levels for photosynthesis. At night, when the 'sinks' which receive the assimilates (sugars) produced via photosynthesis become cooler, transport of sugars into them is promoted. 'Sinks' on most plants are the developing flower buds, flowers and fruit, which have the greatest affinity for the sugars produced by the plant. The 'source' is the producer of the assimilates - usually the leaves, but sometimes also the stem in some plant species. So cooler 'sinks' get more assimilate pumped into them at night than if they remained as warm as they were during the daylight hours.
Apart from the physiological effects on plant growth and flower development, a lower night temperature setting has other beneficial effects on plant processes. Firstly, root pressure is greater at night under cooler conditions - this increases the pressure in the xylem vessels, so that calcium and other plant growth compounds carried in the xylem stream are forced out to the leaf tips and into developing buds, flowers and fruits. This turgor pressure is often essential in the prevention of tip burn, as it ensures calcium is carried to the very edges of the leaves. Often this root or xylem pressure can be seen in the form of 'guttation': visible droplets of water at the tips of leaves in the early morning. It is this root or xylem pressure which also acts to 'pump up' the plant during the cooler night temperatures, particularly after a day when high transpiration rates and warm temperatures have resulted in some wilting and loss of turgor.
Maintaining cooler night temperatures also ensures that plant respiration does not occur at too great a rate. Respiration uses up valuable assimilates, and the rate of respiration increases rapidly with temperature. Under very warm night conditions, night respiration can burn nearly as much assimilate as was produced via photosynthesis during the day and can severely limit plant growth.
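The temperature sensitivity of respiration is often approximated with a Q10 of about 2, i.e. the rate roughly doubles for each 10°C rise. That rule of thumb is an assumption here, not a figure from the article:

```python
def respiration_rate(temp_c, base_rate=1.0, base_temp_c=18.0, q10=2.0):
    """Relative dark-respiration rate under the Q10 approximation:
    rate = base_rate * q10 ** ((T - T_base) / 10)."""
    return base_rate * q10 ** ((temp_c - base_temp_c) / 10.0)

# A night held 10 C warmer roughly doubles the assimilate burned:
print(respiration_rate(28.0))  # 2.0
```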
There are a few devices which can be used to measure and monitor CO2 levels in your growing environment. These range from simple 'syringe' test kits, which allow a grower to take a sample of the air in the growing environment and determine the CO2 level, to timed devices and electronic controllers and meters which accurately monitor CO2 levels and display the reading on an LCD.
CO2 enrichment to increase plant growth and yields is a well proven method of crop production which can benefit even the smallest grower and is widely used on a commercial scale.
I've recently written regarding the unique role of the Meiji Constitution in establishing the Emperor's governance over the national politics of that era. However, it would be inaccurate (for me) to suggest that the Japanese did not already possess a long-standing notion of the Emperor's island-wide pre-eminence. The two primary instances of this are Prince Shotoku's "Constitution" of 604 AD and the Taika Reform Edicts of 645-650 AD.
We'll cover here the Taika Reform Edicts (and elsewhere Shotoku's Constitution) which, apparently for the first time, subordinated local governance to the national Emperor.
You'll notice that 645 AD is very early in the Japanese literary timeline. (For example, Japan's distinctive hiragana script is the much later creation of Kobo Daishi (774 - 835), also known as Kukai.) Thus, in essence, everything written during this era is wholly in (ancient) Chinese, using Chinese terminology and vocabulary definitions. And as students of language fully realize, words require concepts. Perhaps primarily for this reason, the Taika Reform Edicts were written under the supervision of Confucian scholars. (During this era, and up until the Maoist Revolution, China viewed the Confucian texts as the primary guide to proper, moral government.)
It could be argued that at this moment in time, Japan cultivated much of its understanding of the divine status of its Emperor directly from Chinese intuitions. It is here, for example, that the Japanese ruler was no longer described as a regional leader, but rather as "Tenno" or "heavenly Sovereign", a title used in Japan to this day.
There's a bit of ambiguity regarding the specific Taika Reform Edicts. There were in fact only four edicts declared during the period of 645 to 650 AD. The confusion arises from the fact that during this same period, several other statements and "decrees" were made by Emperor Kotoku which undoubtedly relate to the edicts but are not themselves such.
Although I provide the text of the Edicts below, the following summaries are undoubtedly helpful:
Corruption of Regional Officials
- abolished private ownership of land and workers deriving from "namesake", succession, or other means of appropriation.
Regulation of the Capital; Taxes; Women
- established a central capital metropolitan region, called the Kinai, or Inner Provinces; a capital city was to be built there, and governors would be appointed.
The Role of the Emperor
- established population registers, as well as the equitable redistribution of rice-cultivating land; it also provided for the appointment of rural village heads.
The Monarchy and the People
- abolished the old forms of taxes, and established a new system.
Corruption of Regional Officials
645 AD 8th Month 19th day
Commissioners were sent to all the provinces to take a record of the total numbers of the people. The Emperor on this occasion made an edict, as follows:
"In the times of all the Emperors, from antiquity downwards, subjects have been set apart for the purpose of making notable their reigns and handing down their names to posterity. Now the Omi 11 and Muraji 12, the Tomo no Miyakko 5 and the Kuni no Miyakko 2, have each one set apart their own vassals, whom they compel to labor at their arbitrary pleasure. Moreover, they cut off the hills and seas, the woods and plains, the ponds and rice-fields belonging to the provinces and districts, and appropriate them to themselves. Their contests are never-ceasing. Some engross to themselves many tens of thousands of shiro 13 of rice-land, while others possess in all patches of ground too small to stick a needle into. When the time comes for the payment of taxes, the Omi, the Muraji, and the Tomo no Miyakko, first collect them for themselves and then hand over a share. In the case of repairs to palaces or the construction of misasagi 14, they each bring their own vassals, and do the work according to circumstances. The Book of Changes 15 says, " Diminish that which is above: increase that which is below: if measures are framed according to the regulations, the resources of the State suffer no injury, and the people receive no hurt."
"At the present time, the people are still few. And yet the powerful cut off portions of land and water, and converting them into private ground, sell it to the people, demanding the price yearly. From this time forward the sale of land is not allowed. Let no man without due authority make himself a landlord, engrossing to himself that which belongs to the helpless."
Regulation of the Capital; Taxes; Women
646 AD 1st month, 1st day
As soon as the ceremonies of the new year's congratulations were over, the Emperor promulgated an edict of reforms, as follows:
"I. Let the people established by the ancient Emperors, etc., as representatives of children be abolished, also the Miyake 16 of various places and the people owned as serfs by the Wake, 17 the Omi, the Muraji, the Tomo no Miyakko, the Kuni no Miyakko and the Mura no Obito. 18 Let the farmsteads in various places be abolished."
"Further We say. It is the business of the Daibu to govern the people. If they discharge this duty thoroughly, the people have trust in them, and an increase of their revenue is therefore for the good of the people.
II. The capital is for the first time to be regulated, and Governors appointed for the Home provinces and districts. 20 Let barriers, outposts, guards, and post-horses, both special and ordinary, be provided, bell-tokens made, 21 and mountains and rivers regulated. 22
For each ward in the capital let there be appointed one alderman, and for four wards one chief alderman, who shall be charged with the superintendence of the population, and the examination of criminal matters. For appointment as chief aldermen of wards let men be taken belonging to the wards, of unblemished character, firm and upright, so that they may fitly sustain the duties of the time. For appointments as aldermen, whether of rural townships or of city wards, let ordinary subjects be taken belonging to the township or ward, of good character and solid capacity. If such men are not to be found in the township or ward in question, it is permitted to select and employ men of the adjoining township or ward.
The Home provinces shall include the region from the River Yokogaha at Nabari on the east, from Mount Senoyama in Kii on the south, from Kushibuchi in Akashi on the west, and from Mount Afusaka-yama in Sasanami in Afumi on the north. Districts of forty townships 23 are constituted Greater Districts, of from thirty to four townships are constituted Middle Districts, and of three or fewer townships are constituted Lesser Districts. For the district authorities, of whatever class, let there be taken Kuni no Miyakko 2 of unblemished character, such as may fitly sustain the duties of the time, and made Tairei and Shorei. 24 Let men of solid capacity and intelligence who are skilled in writing and arithmetic be appointed assistants and clerks.
The number of special or ordinary post-horses given shall in all cases follow the number of marks on the posting bell-tokens. When bell-tokens are given to (officials of) the provinces and barriers, let them be held in both cases by the chief official, or in his absence by the assistant official.
III. Let there now be provided for the first time registers of population, books of account and a system of the receipt and re-granting of distribution-land.
Let every fifty houses be reckoned a township, and in every township let there be one alderman who shall be charged with the superintendence of the registers of population, the direction of the sowing of crops and the cultivation of mulberry trees, the prevention and examination of offences, and the enforcement of the payment of taxes and of forced labor.
For rice-land, thirty paces in length by twelve paces in breadth shall be reckoned a tan. Ten tan make one cho. For each tan the tax is two sheaves and two bundles (such as can be grasped in the hand) of rice; for each cho the tax is twenty-two sheaves of rice. On mountains or in valleys where the land is precipitous, or in remote places where the population is scanty, such arrangements are to be made as may be convenient.
IV. The old taxes and forced labor are abolished, and a system of commuted taxes instituted. These shall consist of fine silks, coarse silks, raw silk, and floss silk, all in accordance with what is produced in the locality. For each cho of rice land the rate is ten feet of fine silk, or for four cho one piece forty feet in length by two and a half feet in width. For coarse silk the rate is twenty feet (per cho), or one piece for every two cho of the same length and width as the fine silk. For cloth the rate is forty feet of the same dimensions as the fine and coarse silk, i.e. one tan for each cho. Let there be levied separately a commuted house tax. All houses shall pay each twelve feet of cloth. The extra articles of this tax, as well as salt and offerings, will depend on what is produced in the locality.
For horses for the public service, let every hundred houses contribute one horse of medium quality. Or if the horse is of superior quality, let one be contributed by every two hundred houses. If the horses have to be purchased, the price shall be made up by a payment from each house of twelve feet of cloth.
As to weapons, each person shall contribute a sword, armour, bow and arrows, a flag, and a drum.
For servants, the old system, by which one servant was provided by every thirty houses, is altered, and one servant is to be furnished from every fifty houses [one is for employment as a menial servant] for allotment to the various functionaries. Fifty houses shall be allotted to provide rations for one servant, and one house shall contribute twenty two feet of cloth and five masu 25 of rice in lieu of service.
For waiting-women in the Palace, let there be furnished the sisters or daughters of district officials of the rank of Shorei or upwards - good-looking women [with one male and two female servants to attend on them] - and let 100 houses be allotted to provide rations for one waiting-woman. The cloth and rice supplied in lieu of service shall, in every case, follow the same rule as for servants."
The Role of the Emperor
646 AD 8th month, 14th day
An edict was issued, saying,
"Going back to the origin of things, we find that it is Heaven and Earth with the male and female principles of nature, 29 which guard the four seasons from mutual confusion. We find, more over, that it is this Heaven and Earth which produces the ten thousand things. Amongst these ten thousand things Man is the most miraculously gifted. Among the most miraculously gifted beings, the sage takes the position of ruler. Therefore the Sage Rulers, that is, the Emperors, take Heaven as their model in ruling the World, and never for a moment dismiss from their breasts the thought of how men shall gain their fit place. . . ."
The Monarchy and the People
647 AD 4th month, 29th day
An edict was issued as follows,
"The Empire was entrusted (by the Sun-Goddess to her descendants, with the words) 'My children, in their capacity as Deities, shall rule it.' For this reason, this country, since Heaven and Earth began, has been a monarchy. From the time that Our Imperial ancestor first ruled the land, there has been great concord in the Empire, and there has never been any factiousness. In recent times, however, the names, first of the Gods, and then of the Emperors, have in some cases been separated (from their proper application) and converted into the Uji of Omi or Muraji, or they have been separated and made the qualifications of Miyakko, etc. In consequence of this, the minds of the people of the whole country take a strong partisan bias, and conceiving a deep sense of the "me" and "you," hold firmly each to their names. Moreover the feeble and incompetent Omi, 11 Muraji, 12 Tomo no Miyakko 5 and Kuni no Miyakko 2 make of such names their family names; and so the names of Gods and the names of sovereigns are applied to persons and places in an unauthorized manner, in accordance with the bent of their own feelings. Now, by using the names of Gods and the names of sovereigns as bribes, they draw to themselves the slaves of others, and so bring dishonor upon unspotted names.
The consequence is that the minds of the people have become unsettled and the government of the country cannot be carried on. The duty has therefore now devolved on Us in Our capacity as Celestial Divinity, to regulate and settle these things. In order to make them understood, and thereby to order the State and to order the people, We shall issue, one after another, a succession of edicts, one earlier, another later, one to-day and another to-morrow. But the people, who have always trusted in the civilizing influence exercised by the Emperors, and who are used to old customs, will certainly find it hard to wait until these edicts are made. We shall therefore remit to all, from Princes and Ministers down to the common people of all classes, the tax in lieu of service."
That's the head that the New Scientist chose for the print version (in the 21 June issue) of its story (by Andy Coghlan) on the Savic/Lindström studies that Mark Liberman reported on here on Language Log (with a link to the New Scientist's 16 June on-line version, which had a different head: "Gay brains structured like those of the opposite sex"). Mark noted that different publications headed their stories in different ways: as the discovery of a similarity between gay people and straight people of the opposite sex; as a discovery about homosexuals; or (mostly) as the discovery of a similarity between homosexual men and heterosexual women. Now the New Scientist has promoted the "decided at birth" or "born that way" interpretation of the experiments from the story's lead paragraph to its head.
And it featured the story in an editorial:
It's a queer life
We need to ditch the idea that homosexuality is unnatural
First, the main story. The two versions begin in slightly different ways, with "biologically fixed trait" on-line and "biology rather than choice" in print, and with "aggressiveness" on-line and "aggression" in print. On-line:
Brain scans have provided the most compelling evidence yet that being gay or straight is a biologically fixed trait.
The scans reveal that in gay people, key structures of the brain governing emotion, mood, anxiety and aggressiveness resemble those in straight people of the opposite sex.
In print:
Brain scans have provided the most compelling evidence yet that being gay or straight is down to biology rather than choice.
Tantalisingly, the scans reveal that in gay people, key structures of the brain governing emotion, mood, anxiety and aggression resemble those in straight people of the opposite sex.
There were two sets of findings, one concerning asymmetry vs. symmetry of the two hemispheres of the brain and one concerning patterns of connection between the amygdalas and other parts of the brain. In each substudy, gay subjects of one sex and straight subjects of the other sex resembled each other. As Mark argued at length in his earlier posting, the straight-gay differences in the first substudy were very small (unable to support "essentialist" claims that gay and straight are categorically different), and such differences in the second substudy couldn't be evaluated from the information in the published report (though Mark suspected that the differences there would turn out to be equally unimpressive).
In the New Scientist, these second differences were communicated by images of amygdalas, with areas said to be strongly connected to other parts of the brain indicated in red. This picture is labeled:
HOW GAY EMOTIONAL CONNECTIONS CROSS THE GENDER DIVIDE
Brain connections from emotional centres, the amygdalas, clearly show that "gay" patterns match those in "straight" people of the opposite gender
These patterns of connectivity are described as follows:
In straight women and gay men, the signals from the amygdala ran mainly into the regions of the brain that mediate mood and anxiety [in the on-line version: "that manifest fear as intense anxiety"].
In straight men and lesbians, the amygdala fed their signals mainly into the sensorimotor cortex and the striatum, regions of the brain that trigger "fright or flight" [a typo for "fight or flight"; this was correct on-line] in response to fear. "It's a more action-related response than in straight women," says Savic.
(Side point of interest to linguists: the occurrence of both the regular plural amygdalas and the zero plural amygdala in the article.)
First we get an (unsupportable) essentialist interpretation of the statistics, and then this feeds into some vulgar phrenology, in which the areas of the brain are seen as serving particular high-level functions: the amygdalas are the seat of emotion, other regions of the brain regulate mood and anxiety, and still others are action-oriented. What's communicated as a result reproduces folk theories of sex differences, with moody, passive, anxious women opposed to active, aggressive men. And it reproduces one folk theory of sexuality (there are several) — that gay men are feminine in nature, lesbians masculine. Indeed, it appears to support this folk theory by providing evidence that this cross-identification is anatomical, not cultural.
Not just anatomical, but probably present at birth. As Coghlan notes in the printed story:
Savic and her colleague, Per Lindström, chose to measure brain parameters that are probably [on-line: "are likely to have been"] fixed at birth.
That is, brain parameters that probably are either genetic or determined in utero (or, of course, some of each).
Two comments here. First, I know absolutely nothing about the development of brain structures in childhood, but someone ought to be looking at these two parameters, to see if they are indeed fixed at birth and not affected by experience (or processes of maturation).
Second, I wonder about the selection of these two parameters, from among the great many aspects of brain structure that the investigators might have looked at. In particular, I wonder if they (or associates of theirs) looked at some other parameters in pilot studies but came up short, so that what we're seeing now is their two lucky shots, with everything else languishing in that famous file drawer.
In any case, the studies are now being taken as showing that sexuality is determined at birth, though this conclusion doesn't follow at all from the results. Nor do the studies shed any light on the causes of homosexuality. In fact, they make the whole topic more mysterious than ever: how could these particular very small (and non-categorical) differences in brain anatomy work themselves out as sexual desire for persons of the same sex?
(Topic for a future posting: the bewilderingly large number of ways in which people use the term gay and related terms.)
Now to the editorial, which was set off by a radio interview:
Iris Robinson held nothing back this month when interviewed on BBC radio. "Homosexuality is disgusting, nauseating, shameful, wicked and vile". Her Christian belief told her it is "an abomination" and she advised homosexuals to seek psychiatric help. Such intolerance may be bread and butter for preachers of the fire-and-brimstone variety but it is rare in UK politics. Robinson is a Member of Parliament and of the Northern Ireland Assembly.
The editorial goes on to argue that homosexuality is neither "unnatural" nor a mental disorder, concluding:
Does it matter that a high-profile politician is peddling ideas not backed by scientific or medical evidence? For one particular reason, yes. Robinson is chair of the Northern Ireland Assembly's health committee. One can't help wondering about the quality of healthcare the people of Northern Ireland can expect.
Stars wheel across the heavens as evening gives way to morning. The Moon recycles its phases every month. Planets creep through the starry vault, sometimes gathering together at dusk or dawn. Meteors occasionally rain down from the sky. Stars, including our Sun, intermittently disappear behind the Moon. Constellations come and go as the seasons pass by.
When planning an observing session, skywatchers depend on celestial predictability. But predictable doesn’t mean mundane. During 2005, the Sun, Moon, planets, and stars will intermingle in fascinating ways, providing a year full of variety. And every event, no matter how simple, invites us outside to explore the heavens and, perhaps, discover the unexpected.
Every month as the Moon glides eastward through the sky, it passes some of the bright, naked-eye planets. Occasionally, two or more planets gather in one part of the sky; if they’re close it’s called a conjunction. These celestial meetings make for interesting skywatching and often provide fine astrophotography opportunities. Daily celestial happenings featuring the Moon and planets are described in "This Week's Sky at a Glance," but here are a few planetary gatherings that are sure to be eye-catching.
Planets on Parade
Whether in the dawn or dusk sky, brilliant Venus is always eye-catching. During 2005 it will be a dazzling object before sunrise until mid-February, when it vanishes in the solar glare. In early May it reappears in the evening sky, where it remains until year-end. The other planet that never strays far from the Sun is Mercury. The best periods in 2005 to seek out this elusive planet will be the first two weeks of March (after sunset) and the first half of December (before sunrise). Mercury is visible at other times, but it’s usually near the horizon or difficult to find in twilight. But for several days in late June Venus will be nearby, and on the 27th the two will be a mere 0.1° apart, making Mercury easy to spot.
As the year begins, Mars will be low in the east at dawn in the constellation Scorpius. Mars remains in the morning sky for more than half the year as it moves rapidly through the constellations Ophiuchus, Sagittarius, Capricornus, Aquarius, and Pisces. By August it’ll rise before midnight in Aries, where it will remain, except for a brief sojourn into Taurus in late September, for the rest of the year. Mars will be closest to Earth on October 30th and at opposition (the point when a planet is directly opposite the Sun as seen from Earth) on November 7th. It will not appear as bright or as large in a telescope as it did during its 2003 opposition, but it’ll certainly be a red beacon in a constellation that’s otherwise devoid of bright objects.
Jupiter rises around midnight in Virgo at the start of the year and earlier as the months progress. After a pretty conjunction with Venus in early September, Jupiter will be lost in the solar glare until it reappears in the east at dawn, still in Virgo, in early November.
Saturn will be prominent in the evening sky in Gemini for the first half of 2005. In late June the ringed planet meets Venus and Mercury low in the west at dusk, but by mid-July it will vanish into the solar glare. Saturn reappears in the east, in Cancer, just before sunrise in late August. During September it lingers near M44, the Beehive Cluster; binoculars or a low-power field telescope will provide fine views.
For a week-by-week update of where the planets are and what they're doing, see "This Week's Sky at a Glance."
As the Moon moves through the sky, it occasionally occults (passes in front of) a planet, star, or other celestial body, snuffing out its light. The object reappears on the Moon’s opposite side up to an hour later. When a star is occulted, its light vanishes (and reappears) instantly. But when a planet or star cluster is involved, the event takes longer to unfold.
During 2005 the bright stars Antares and Spica will be occulted for the first time in several years. The Moon will also hide Jupiter, Venus, Mars, and the globular star cluster Messier 4 (M4). Many of these (and other) events are discussed in the article "Lunar Occultation Highlights for 2005," though some occultations are visible only from rather remote sites. For further details prior to the more spectacular events, check Sky & Telescope’s Occultation page.
Lunar and solar eclipses are rare and beautiful sights. Although the cause of each type of eclipse is essentially the same (the Sun, Earth, and Moon form a straight line in space, with Earth’s shadow falling on the Moon or vice versa), the visual result is strikingly different.
There are four eclipses during 2005: two solar (April 8–9 and October 3rd) and two lunar (April 24th and October 17th). More information about each eclipse will be available prior to the event on S&T's Eclipse page.
On April 8–9 the Sun will undergo an unusual annular-total hybrid eclipse. Because the region of visibility straddles the International Date Line, the event begins as an annular, or “ring,” eclipse visible off the eastern coast of New Zealand at sunrise on the 9th. It becomes total for a maximum of 42 seconds in the mid-Pacific, and then is annular again before sunset from Panama, Colombia, and Venezuela on the 8th. New Zealanders will see the Sun rise partially eclipsed, while observers on the west coast of South America, throughout all of Central America and Mexico, and in the southeastern US will see the partial phases in late afternoon on the 8th. Remember, observing a partial eclipse of the Sun without a safe viewing filter is always potentially dangerous.
Two weeks later the Moon will pass through the outer edge of Earth’s faint penumbral shadow, but this lunar eclipse on April 24th will likely go unnoticed by most observers. The slight shading will be greatest from about 2:30 to 3:20 a.m. Pacific Daylight Time.
The year’s second solar eclipse, on October 3rd, is a true annular eclipse: the Moon never completely hides the Sun’s disk and at mideclipse is surrounded by a ring of sunlight. The path of annularity starts in the Atlantic Ocean, runs through Spain, Algeria, Tunisia, Libya, Sudan, and Kenya, and ends in the Indian Ocean. Maximum annularity (in Sudan) is 4½ minutes. All of Europe, the Middle East, western Asia, and all but the extreme southern tip of Africa will see the Sun partially eclipsed.
The final eclipse of 2005 is a partial lunar eclipse on October 17th. The Moon nicks the umbra (the dark portion of Earth’s shadow) for about an hour, and at mideclipse (8:03 a.m. EDT, 5:03 a.m. PDT) observers across most of North America (except the northeast) will see some minor darkening of the Moon’s southern limb. This partial eclipse will also be visible from the Pacific region, all of Australia, and much of the Far East.
At one time or another, almost everyone has glimpsed a bright streak of light dashing across the night sky. These sudden celestial visitors are meteors, commonly called falling or shooting stars. Meteors are meteoroids (pieces of interplanetary debris) that vaporize as they plow into Earth’s upper atmosphere. If a meteoroid survives its fiery plunge to the ground, it’s called a meteorite.
An average of six sporadic (random) meteors per hour may appear on any night. But at certain times of the year our planet slices through streams of dust and dirt left behind by passing comets. When this happens Earth experiences a meteor shower, and you might see meteors at a rate of one every few minutes, though there are often bursts and lulls. Shower meteors can appear anywhere in the sky, but their direction of motion can always be traced back to the constellation whose name the shower bears. This apparent point of origin is known as the radiant. Meteor showers are best observed in the predawn hours. A list of the major upcoming showers can be found in S&T's Web article "Upcoming Meteor Showers."
It’s hard to take a photo through a window without picking up reflections of the objects behind you. To solve that problem, professional photographers sometimes wrap their camera lenses in dark cloths affixed to windows by tape or suction cups. But that’s not a terribly attractive option for a traveler using a point-and-shoot camera to capture the view from a hotel room or a seat in a train.
At the Computer Vision and Pattern Recognition conference in June, MIT researchers will present a new algorithm that, in a broad range of cases, can automatically remove reflections from digital photos. The algorithm exploits the fact that photos taken through windows often feature two nearly identical reflections, slightly offset from each other.
“In Boston, the windows are usually double-paned windows for heat isolation during the winter,” says YiChang Shih, who completed his PhD in computer science at MIT this spring and is first author on the paper. “With that kind of window, there’s one reflection coming from the inner pane and another reflection from the outer pane. But thick windows are usually enough to produce a double reflection, too. The inner side will give a reflection, and the outer side will give a reflection as well.”
Without the extra information provided by the duplicate reflection, the problem of reflection removal is virtually insoluble, Shih explains. “You have an image from outdoor and another image from indoor, and what you capture is the sum of these two pictures,” he says. “If A+B is equal to C, then how will you recover A and B from a single C? That’s mathematically challenging. We just don’t have enough constraints to reach a conclusion.”
Thinning the field
The second reflection imposes the required constraint. Now the problem becomes recovering A, B, and C from a single D. But the value of B for one pixel has to be the same as the value of C for a pixel a fixed distance away in a prescribed direction. That constraint drastically reduces the range of solutions that the algorithm has to consider.
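The counting argument can be made concrete with a toy model. The sketch below is not the authors' algorithm: it is a hypothetical one-dimensional NumPy illustration in which the offset k is assumed known, the natural-image prior is replaced by a simple smoothness regularizer, and every signal and parameter is invented.

```python
import numpy as np

n, k, lam = 64, 5, 1e-2        # signal length, reflection offset, prior weight (all assumed)

# Invented ground truth: a smooth "scene" A and a smooth "reflection" B,
# standing in for images in one dimension.
t = np.linspace(0, 1, n)
A_true = np.sin(2 * np.pi * t)
B_true = 0.5 * np.cos(6 * np.pi * t)

# S circularly shifts a signal by k samples: the second pane's offset copy.
S = np.roll(np.eye(n), k, axis=0)

# Forward model: the camera records D = A + B + S B (scene plus two offset reflections).
D = A_true + B_true + S @ B_true

# D = A + B alone has 2n unknowns and only n equations. The known shift ties the
# two reflection copies together; a smoothness prior then picks out a solution:
#   minimize ||M x - D||^2 + lam * ||G x||^2,  with x = [A; B]
M = np.hstack([np.eye(n), np.eye(n) + S])          # n x 2n data term
G1 = np.diff(np.eye(n), axis=0)                    # finite-difference operator
G = np.block([[G1, np.zeros((n - 1, n))],
              [np.zeros((n - 1, n)), G1]])

stacked = np.vstack([M, np.sqrt(lam) * G])
target = np.concatenate([D, np.zeros(2 * (n - 1))])
x, *_ = np.linalg.lstsq(stacked, target, rcond=None)
A_est, B_est = x[:n], x[n:]

print("data residual:", np.linalg.norm(M @ x - D))
```

Dropping the shifted term reproduces the hopeless A + B = C situation Shih describes: the data term alone then admits infinitely many equally good splits.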
Nonetheless, a host of solutions still remain. To home in on one of them, Shih and his coauthors — professors of computer science and engineering Frédo Durand and Bill Freeman, who were Shih’s thesis advisors, and Dilip Krishnan, a former postdoc in Freeman’s group who’s now at Google Research — assume that both the reflected image and the image captured through the window have the statistical regularities of so-called natural images.
The basic intuition is that at the level of small clusters of pixels, in natural images — unaltered representations of natural and built environments — abrupt changes of color are rare. And when they do occur, they occur along clear boundaries. So if a small block of pixels happens to contain part of the edge between a blue object and a red object, everything on one side of the edge will be bluish, and everything on the other side will be reddish.
In computer vision, the standard way to try to capture this intuition is with the notion of image gradients, which characterize each block of pixels according to the chief direction of color change and the rate of change. But Shih and his colleagues found that this approach didn’t work very well.
Playing the odds
Ultimately, they settled on a new technique co-developed by Daniel Zoran, a postdoc in Freeman’s group. Zoran and Yair Weiss of the Hebrew University of Jerusalem created an algorithm that divides images into 8-by-8 blocks of pixels; for each block, it calculates the correlation between each pixel and each of the others. The aggregate statistics for all the 8-by-8 blocks in 50,000 training images proved a reliable way to distinguish reflections from images shot through glass.
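The full prior in Zoran and Weiss's method is a Gaussian mixture learned from 50,000 images, which is beyond a short sketch; the hedged NumPy fragment below only illustrates the block statistic described above (the correlation between each pair of pixels inside 8-by-8 blocks), computed on a small synthetic image rather than a training corpus. The image and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a natural image: smooth gradients plus one sharp edge.
h = w = 40
img = np.fromfunction(lambda y, x: np.sin(x / 6.0) + (x > w // 2), (h, w))
img += 0.05 * rng.standard_normal((h, w))      # mild noise

# Slide an 8x8 window over the image and flatten each block into a 64-vector.
P = 8
patches = np.array([img[y:y + P, x:x + P].ravel()
                    for y in range(h - P + 1)
                    for x in range(w - P + 1)])

# Remove each block's mean, then compute the 64x64 covariance: entry (i, j)
# measures how strongly pixel i co-varies with pixel j across all blocks.
patches -= patches.mean(axis=1, keepdims=True)
cov = np.cov(patches, rowvar=False)

print("covariance shape:", cov.shape)
```

In a natural image neighbouring pixels co-vary strongly, which is exactly the regularity a learned patch prior exploits to tell a crisp transmitted scene apart from a pair of offset reflections.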
In their paper, Shih and his colleagues report performing searches on Google and the Flickr photo database using terms like “window reflection photography problems.” After excluding results that weren’t photos shot through glass, they had 197 images, 96 of which exhibited double reflections that were offset far enough for their algorithm to work.
“People have worked on methods for eliminating these reflections from photos, but there had been drawbacks in past approaches,” says Yoav Schechner, a professor of electrical engineering at Israel’s Technion. “Some methods attempt using a single shot. This is very hard, so prior results had partial success, and there was no automated way of telling if the recovered scene is the one reflected by the window or the one behind the window. This work does a good job on several fronts.”
“The ideas here can progress into routine photography, if the algorithm is further robustified and becomes part of toolboxes used in digital photography,” he adds. “It may help robot vision in the presence of confusing glass reflection.”
The Name Tekesta/Tequesta
We here use the Indian spelling of Tekesta rather than the Spanish
Tequésta or the more familiar English Tequesta.
Cacike Pedro Guanikeyu Torres, Principal Chief, Taino Tribe of
Jatibonicù, analyzes the Tekesta name as follows:
- Te (short form of Tei): "To be," or "to be constituted as a people". For example,
- Teitoca: To be still, estate quieto
- Tekína: To be an envoy or messenger
- Ké: "Earth" or "land." For example,
- Borikén, the proper name for Puerto Rico
- Ké refers to the Earth, as it would be used to refer to a woman of great value.
- (Qué Radical indo-antillana, significando tierra. Mejor seria para fijar la fonética escribir ké;
Qué Indo-Antillean radical which signifies land. It is best written phonetically as ké).
- Sta (Ta, short for Tai): "Good" (as in the term Taino). For example,
- Késhta: Good Earth?
- (The root word for sta and its concept may be a misspelling or corruption of Taino sounds like Késh
or Kishkéya. The only other Taino sound that is related is the "sh" sound found in proper names
like Guarionesh(X), Caguash(X), Orocobish(X). The X sound in these Chiefs' names is a Spanish corruption of
the shé sound found in words like warishé [woman]).
In short, the term Tekeshta or Tekesta means "We the People of the Good Earth." But this interpretation is based
on a dialect and should be checked against Dr. Julian Granberry's Timucua dictionary.
In 1860, there was a man named Ernst Haeckel who believed in evolution. He was a German professor at the University of Jena. During his years of teaching, he tried to convince his students that evolution is true. To “prove” this to his students and fellow teachers, he made up the idea that a human baby goes through different evolutionary stages as it grows. According to Ernst Haeckel, a human embryo (a baby in its early stages) starts out in a one-celled stage, just as its ancient amoeba-like ancestor. It develops gill slits, just like its ancient fish ancestor. And it even has a tail, just as its ancient ape-like ancestor. Therefore, suggested Dr. Haeckel, if we will just watch a human embryo grow, then we will see the different stages of evolution.
In order to prove his theory, he made several drawings of the different stages. But when he published these drawings, other professors began to question Haeckel’s accuracy. Upon further investigation, it seemed that Dr. Haeckel had not only been inaccurate, but he had even been dishonest. Not only had he faked some of his drawings, but he also used the same picture three different times, and labeled one a human, the second a dog, and the third a rabbit. Haeckel was proven to be wrong and his idea about humans going through their evolutionary family tree as embryos was shown to be completely false.
That should be the end of the story, but it is not. Even though Haeckel’s false theory and drawings were disproved about 150 years ago, they are still being used today in many science textbooks to “prove” evolution. Why are textbook writers still using drawings that were faked, altered, and falsified? That is the real mystery. On May 29, 2010, I was speaking to a group of teenagers in Michigan about Ernst Haeckel and his false evidence. I explained to them that many textbooks still use the false idea that human embryos are similar to animal embryos to “prove” evolution, even though this idea was disproven over 100 years ago. A few days after my visit, a ninth-grader sent me an e-mail that said:
“This week in my Biology class we learned about the theory of evolution. During this segment we had to do worksheets on evolution. Two of the main things we did were on the pepper[ed] moths and similarity in embryos. Those were two things you proved false during your sermon. You taught us that these things were proven false, but still put in textbooks and taught in schools today. I was both astonished and humored that these two false teachings showed up in my high school the week following your sermon.”
You see, even though Haeckel’s ideas were proven false, they are still used to teach evolution. Why do you think that is? One reason is because if textbooks took out all the “evidence” for evolution that we know is false, then they would not have anything left that they could use to “prove” evolution.
Let me give you another example. In August of 2009, a man named Jack sent an e-mail to Apologetics Press. He is a person who believes in evolution and who thinks our writing about God and creation is not right. When he read our information on the Discovery Web site, he said: “Your website is absolutely horrible.” And he said that in many instances, our answers were “dead wrong.” I asked him to provide us with information that proved evolution and showed our information to be wrong. He wrote back and said: “Also, evolution predicts that in the womb we produce gill sacs and a coat of fur which we shed before we are born. How does ‘creation’ explain this phenomena?” He used the false idea that humans have gill sacs to “prove” that our information was wrong. He did not know that humans never have gills, and that the idea was proven false more than 100 years ago. But, as you can see, it is still being used as evidence that evolution is true.
In 2006, a very well-known biology teacher named Francisco Ayala wrote a book titled Darwin and Intelligent Design. In that book, he tried to prove that evolution is true. In fact, he actually teaches evolutionary biology. He wrote: “The embryos of humans and other nonaquatic [not living in water] vertebrates [animals with backbones] exhibit gill slits even though they never breathe through gills. These slits are found in embryos of all vertebrates because they share a common ancestor: the fish in which these structures first evolved.” Dr. Ayala should know better. Humans never have gill slits. Haeckel was wrong, and we have known that for many years. But, as you can see, even the “top” evolutionists still use these false, disproven ideas in their attempts to “prove” evolution.
The next time you see drawings of “similar” embryos, remember that Ernst Haeckel lied to us about evolution.
Please note: This information was current at the time of publication. But medical information is always changing, and some information given here may be out of date. For regularly updated information on a variety of health topics, please visit familydoctor.org, the AAFP patient education Web site.
Information from Your Family Doctor
Toxoplasmosis in Pregnancy
Am Fam Physician. 2005 Oct 15;72(8):1580.
What is toxoplasmosis?
Toxoplasmosis (say: tox-oh-plas-MOH-sis) is an infection caused by a parasite. This parasite lives in the intestines of cats and is spread through cat feces, usually into litter boxes and garden soil. You can get the parasite by handling cat litter or soil where there is cat feces. You also can get it from eating undercooked meat from infected animals, such as rare beef.
What happens if I’m infected?
Healthy adults usually don’t get sick from toxoplasmosis. Most people with the infection don’t have symptoms, but those who do may feel like they have the flu. If you get infected while you are pregnant, your baby also can get infected. Babies with toxoplasmosis don’t always get sick. Sometimes, though, the infection can cause eye problems and brain damage.
If you were infected with the parasite at least six months before you got pregnant, you will be immune to it. This means there is very little risk to your baby.
How do I know if I’m infected?
Your doctor can do a blood test to see if you’ve been exposed to the parasite, but this test is not done routinely. If you are not tested and don’t know if you’re immune, you can take steps to protect yourself and your baby.
Here are some things you can do to protect yourself and your baby from toxoplasmosis while you are pregnant:
Don’t let your cat go outside, where it can come into contact with the parasite.
Try to find someone who will take care of your cat while you are pregnant. Have him or her change the cat litter and clean the litter box with boiling water for five minutes. If you have to change the cat litter yourself, wear gloves and wash your hands with warm, soapy water as soon as you are done.
Wear work gloves when you are gardening, and wash your hands afterward. Cover children’s sandboxes when no one is playing in them. Cats like to use sandboxes as litter boxes.
Control flies and cockroaches as much as you can. They can track soil or cat feces onto food.
Don’t eat raw or undercooked meat or poultry. Wash fruits and vegetables before eating them.
Wash your hands well before you eat and after you touch raw meat, soil, sand, or cats.
Don’t rub your eyes or face while you are cooking. Wash all cutting boards, knives, and countertops after you cook.
Don’t eat raw eggs or drink unpasteurized milk. (Most milk sold in stores has been pasteurized, but check the label if you’re not sure.)
This handout is provided to you by your family doctor and the American Academy of Family Physicians. Other health-related information is available from the AAFP online at http://familydoctor.org.
This information provides a general overview and may not apply to everyone. Talk to your family doctor to find out if this information applies to you and to get more information on this subject.
Copyright © 2005 by the American Academy of Family Physicians.
This content is owned by the AAFP. A person viewing it online may make one printout of the material and may use that printout only for his or her personal, non-commercial reference. This material may not otherwise be downloaded, copied, printed, stored, transmitted or reproduced in any medium, whether now known or later invented, except as authorized in writing by the AAFP. Contact email@example.com for copyright questions and/or permission requests.
Integers – add subtract multiply divide negative numbers
iOS iPad Education
Every Integers – add subtract multiply divide negative numbers App provides a virtually limitless supply of questions to let students DO math.
Intended for classroom use but also suitable for individual work. Questions can be repeated as often as needed - the numbers are random. The list of questions is color coded to show the degree of difficulty, and the question difficulties range over several grades. Students who need to can start with easier questions, while those who finish quickly can move on to more challenging ones. The teacher and student choose where to work; they can go back or skip forward at any time to get questions that are appropriate. There are tutorial screens intended for revision, and every question has an example screen showing a worked example, but with different numbers. The selection list is updated to show how many times each question has been correctly answered, so both teacher and student can monitor progress. The app generates the question and marks the answer, leaving the teacher free to work one-to-one with individuals or groups of students.
In this Integers – add subtract multiply divide negative numbers App the questions are carefully graded from easy up to difficult in 28 steps.
▪The number line
▪What comes next?
▪1, 0, -1, -2, ?
▪7, 5, 3, 1, ?
▪add integers: -a + b, a + -b, -a + -b
▪subtract integers: a - b, -a - b, a - -b, -a - -b
▪multiply integers: -a × b, a × -b, -a × -b
▪divide integers: -a ÷ b, a ÷ -b, -a ÷ -b
▪evaluate powers of integers: (-a)², (-a)³, (-a)⁴, (-1)⁸
▪use BEDMAS to evaluate integer expressions
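The sign rules listed above can be checked directly; here is a quick sanity check in Python (my own illustration, not part of the app), which follows the same order-of-operations convention the app calls BEDMAS:

```python
# Spot-checking the integer sign rules. Note that Python's // floors toward
# negative infinity, so the division example uses numbers that divide evenly,
# as the app's drill questions presumably do.
assert -3 + 5 == 2        # -a + b
assert 4 - -2 == 6        # a - -b: subtracting a negative adds
assert -3 * -4 == 12      # -a × -b: two negatives multiply to a positive
assert -12 // 3 == -4     # -a ÷ b
assert (-2) ** 2 == 4     # (-a)² is positive: the minus sign is squared too
assert -2 ** 2 == -4      # without brackets, BEDMAS applies the exponent first
print("all sign rules hold")
```

The last two lines are the classic BEDMAS trap: `(-2)²` and `-2²` are different expressions because exponentiation binds more tightly than the unary minus.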
The Educational App Store gave Integers five stars and this review:
This integers app is one of a series of mathematics apps branded “XR Math” available on iTunes. It sets out to provide practice questions on a wide variety of integer topics, starting with the very basics (example question: “What comes next? 4,3,2,1,?”). The app is very clearly laid out, meaning that students should easily be able to access the material themselves with minimal teacher input.
Each subtopic contains a brief explanation and a set of questions which are randomly generated each time, meaning that it stands up to repeated use should a student require further consolidation. Once the required number of questions are completed, the student can progress on to new subtopics. The progression throughout the app is well designed such that students are not faced with a huge leap forward at any point, but a gradual increase in complexity of the questions will keep them challenged.
The feedback between questions is particularly good, with students given gentle (and relevant) reminders if they've made a mistake and then the opportunity to try again. If they get answers right, they get congratulated, sometimes in a new language – this doesn't help towards the learning objectives of the app, but is an interesting feature to help maintain student engagement.
Overall, this app provides a quality set of questions that would really help with practising and assessing how much a learner had understood of the given topic. Its ease of use and clear layout are real bonuses and I could see it being very useful when used in conjunction with other apps in the same series – giving students a familiar interface with which to consolidate their skills. It's not the prettiest app, but it delivers integers topic content very well.
The Integers – add subtract multiply divide negative numbers App is very easy to use and user friendly, helping both teachers and students in the learning process.
···· Download Integers – add subtract multiply divide negative numbers App today ····
Update to latest iOS
Journal of Nervous and Mental Disease Commences Series of Historical Papers
Source Newsroom: Wolters Kluwer Health: Lippincott Williams and Wilkins
1878 Call for Asylum Reform Still Has Echoes for Psychiatry Today
Newswise — Philadelphia, Pa. (August 29, 2011) – In the late nineteenth century, reform of insane asylums was a hotly debated topic that pitted two emerging medical specialties—psychiatry and neurology—against each other, according to a historical paper presented and discussed in the September issue of The Journal of Nervous and Mental Disease. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
The historical article and accompanying commentaries are the first in a series commemorating the 200th volume of The Journal of Nervous and Mental Disease, America's oldest continuously published independent psychiatric journal. The series will reprint articles from each decade since JNMD began publication in 1874, along with expert commentaries setting the historical papers in a contemporary context. While some of the historical topics may seem "quaint and archaic," the issues they raise are still relevant today, according to Dr. John A. Talbott, Editor in Chief of JNMD.
Contentious Debate over 'Insane Asylum Reform'
The first paper in the special series was written by Dr. Edward C. Spitzka in 1878, amidst a tumultuous debate over the operation of insane asylums. "We may not call them asylums anymore but the reform of the public mental health system is as critical today as it was in 1878," writes Dr. Talbott in an introductory editorial.
Spitzka was a leading neurologist whose address to the New York Neurological Society was a furious attack on the country's insane asylums. At the time, the medical superintendents of insane asylums were psychiatrists—or, in the terminology of the day, "alienists." Spitzka accused the asylum superintendents of a wide range of malfeasances: from lack of accountability to sloppy recordkeeping to inhumane treatment of patients. The superintendents countered that the neurologists were ignorant of the nature of insanity and the practical challenges of operating an asylum.
A key objection was that the asylum superintendents were standing in the way of a scientific approach to understanding insanity—including the ability to perform autopsy examinations of the brains of insane patients after death. Spitzka urged a long list of reforms, including limited use of restraints, improved conditions for patients, and accurate recordkeeping.
More than a Century after Reforms, Psychiatry Still Faces Many of the Same Issues
Were Spitzka's charges against psychiatry justified? In a commentary, Dr. Jeffrey Geller of University of Massachusetts Medical School presents an overview of the state of psychiatry in 1878 to address that question. He concludes that, while there were major shortcomings in the infant science of psychiatry, the "alienists" of the era were well aware of them, and were "engaged in all manner of deliberations about important issues of the day relevant to the practice of psychiatry."
Spitzka's call for reform was far from the end of the controversy over insane asylums, or of the tensions between psychiatry and neurology. In an accompanying article, Dr. Kenneth J. Weiss of University of Pennsylvania Perelman School of Medicine discusses an 1894 address by famed neurologist Silas Weir Mitchell—who leveled many of the same charges against the insane asylums.
Mitchell's critique of the asylum and asylum doctors for their "isolationism and backward ways" is regarded as a landmark moment in the history of psychiatry. But from a historical perspective, Dr. Weiss believes that Mitchell's speech was more a reflection of changes that were already in the air. He writes, "Although Mitchell is often credited with delivering psychiatry a wake-up call, it is equally feasible that he was merely channeling the organic reforms from within the profession."
Dr. Geller notes that the issues with which alienists were grappling—such as competing interests of the patient and society, outside interference with psychiatric practice, lack of funding in the face of high demands for treatment, and stigma associated with psychiatry and psychiatric patients—were not that different from the problems facing psychiatry today. "In fact, to an absolutely remarkable degree, the issues of 1878 are the same as those of American psychiatry in the twenty-first century," Dr. Geller adds. "That might say much more about psychiatry than Spitzka could ever have known 133 years ago."
About The Journal of Nervous and Mental Disease
Founded in 1874, The Journal of Nervous and Mental Disease is the world's oldest independent scientific monthly in the field of human behavior. Articles cover theory, etiology, therapy, social impact of illness, and research methods.
About Lippincott Williams & Wilkins
Lippincott Williams & Wilkins (LWW) is a leading international publisher for healthcare professionals and students with nearly 300 periodicals and 1,500 books in more than 100 disciplines publishing under the LWW brand, as well as content-based sites and online corporate and customer services.
LWW is part of Wolters Kluwer Health, a leading global provider of information, business intelligence and point-of-care solutions for the healthcare industry. Wolters Kluwer Health is part of Wolters Kluwer, a market-leading global information services company with 2010 annual revenues of €3.6 billion ($4.7 billion).
The trichina worm in it lodges in the muscles, travels through the spinal fluid and finally to the brain, and is the cause of a lot of illnesses and disease.
Despite what the so-called experts say, this worm is damn near indestructible; even when pork is cooked thoroughly they can still be found.
Sure, it can be made to taste good, but just about anything can.
Definition of Trichina spiralis
Trichina spiralis: a parasitic worm that lives in the intestines and causes a serious illness known as trichinosis.
The eggs usually enter the body via raw or undercooked pork, sausage or bear meat. In the intestines, the eggs hatch, mature, and migrate to other parts of the body through the bloodstream and the lymphatic system.
Early symptoms include vomiting, diarrhea, and abdominal cramps. In time, a high fever, puffiness of the face and muscle pain develop.
Eventually the worms can penetrate the muscles, the heart and the brain and can cause death.
Treatment with an anti-worm drug such as thiabendazole, as well as bed rest and a physician's care, can cure trichinosis. Recovery may take several months. Diagnosis of trichinosis sometimes requires analysis of a tissue sample (biopsy) taken from muscle.
Definitions for punta arenas (ˈpun tɑ ɑˈrɛ nɑs)
This page provides all possible meanings and translations of the word punta arenas
a city in southern Chile on the Strait of Magellan; the southernmost city in the world
Punta Arenas is a commune and the capital city of Chile's southernmost region, Magallanes and Antartica Chilena. The city was officially renamed Magallanes in 1927, but in 1938 it was changed back to Punta Arenas. It is the largest city south of the 46th parallel south. Since 1977 Punta Arenas has been one of only two free ports in Chile. Located on the north shore of the Strait of Magellan, Punta Arenas was originally established in 1848 as a tiny penal colony. During the remainder of the 1800s Punta Arenas grew in size and importance due to the increasing maritime traffic and trade destined to the west coast of both South and North America. This period of growth also coincided with a gold rush and sheep farming boom in the 1880s and early 1900s. Chile used Punta Arenas to firm up its sovereignty in this southernmost part of South America, which led to the Strait of Magellan being recognized subsequently as Chilean territory in the Boundary treaty of 1881 between Chile and Argentina. The geopolitical importance of Punta Arenas has remained high in the 20th and 21st centuries because of its logistic importance in accessing the Antarctic Peninsula.
The numerical value of punta arenas in Chaldean Numerology is: 5
The numerical value of punta arenas in Pythagorean Numerology is: 4
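The Pythagorean value quoted above can be reproduced mechanically: map each letter to 1 through 9 cyclically (a=1 ... i=9, j=1, ...), add, then reduce the sum to a single digit. A minimal Python sketch (the function name is my own; the Chaldean system uses a different letter mapping and is not covered here):

```python
def pythagorean_value(name: str) -> int:
    """Reduce a name to its Pythagorean numerology value (1-9)."""
    # Letters map to 1-9 cyclically: a=1 ... i=9, j=1, k=2, ...
    total = sum((ord(c) - ord('a')) % 9 + 1 for c in name.lower() if c.isalpha())
    # Repeatedly sum the digits until a single digit remains.
    while total > 9:
        total = sum(int(d) for d in str(total))
    return total

print(pythagorean_value("punta arenas"))  # → 4, matching the value quoted above
```

"punta" sums to 18 and "arenas" to 22, giving 40, which reduces to 4.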
Use the citation below to add this definition to your bibliography:
"punta arenas." Definitions.net. STANDS4 LLC, 2016. Web. 25 Jun 2016. <http://www.definitions.net/definition/punta arenas>.
Monday, November 07, 2011
View the Risk of Flooding with Google Maps
Flood Map allows you to view the risk of flooding at any location in the world.
Using Flood Map you can set a water elevation height for any location and view the likely effects on a Google Map. Areas that are likely to be flooded are displayed on the map with a blue overlay.
It is also possible to right-click on any location and view the elevation level at that point. If you want to share a Flood Map search you can cut and paste a link to the current map view.
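Conceptually this kind of tool is a "bathtub model": compare each point's elevation against the chosen water level and shade the points that fall below it. A minimal sketch in Python (the grid values and function name are my own illustration; a real service queries a digital elevation model, and more careful models also require flooded cells to be connected to the sea):

```python
def flood_mask(grid, water_level):
    """Return a grid of booleans: True where the cell would be underwater."""
    return [[elevation <= water_level for elevation in row] for row in grid]

# Hypothetical elevation samples in metres; a real service would query a
# digital elevation model (DEM) covering the area shown on the map.
elevation_grid = [
    [5.0, 3.2, 1.1],
    [4.0, 2.5, 0.8],
    [6.1, 3.9, 2.2],
]

mask = flood_mask(elevation_grid, 2.0)
# Cells marked True are the ones a tool like Flood Map would tint blue.
print(mask[0])  # → [False, False, True]
```

Raising the water level simply flips more cells to True, which is why dragging the elevation slider makes the blue overlay spread across the map.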
Global Sea Level Rise Map - view the world at different rates of sea level rise; also looks at the effects of those rises on the world.
The Sea Level Rise Explorer - explore the areas of the earth most vulnerable to sea level rise.
British Girls in the Third Reich: 'We Had the Time of Our Lives'
In the 1930s, many English families sent their daughters to finishing school in Nazi Germany. Rachel Johnson, sister of the London mayor, interviewed several for her most recent book. She told SPIEGEL ONLINE about Britain's enthusiasm for Hitler's Reich.
In February 1936, Daphne and Betsy, two girls from Oxford, discover the charms of Munich in Nazi Germany. Rachel Johnson, 47, tells the unique story of young British women in Hitler's Third Reich from the perspective of two fictional characters. The British press has praised the book for being both entertaining and historically accurate. Johnson, who is the sister of London Mayor Boris Johnson, only recently discovered that her own family had close ties to Nazi Germany.
SPIEGEL ONLINE: Ms. Johnson, how did you find out that some members of your family were in Bavaria in the 1930s?
Johnson: A couple of years ago, the BBC did a program on my brother Boris and our family history. We had always been told that my paternal grandmother was French, but it turned out she was German. Her last name was originally von Pfeffel, and we had descendants from Munich. My maternal grandmother went to Bavaria as a schoolgirl in the 1930s. Later, when I married, I discovered that my mother-in-law had been to Munich at roughly the same time.
SPIEGEL ONLINE: A strange coincidence.
Johnson: The strangest thing of all was that my mother-in-law was sent from England to Munich in April 1938, when Hitler was already preparing to invade Czechoslovakia and Poland. She watched as the Annexation of Austria took place. She even ran out to Hitler's car.
SPIEGEL ONLINE: How did you decide to address that past in a novel?
Johnson: I did a radio documentary about the English colony in Germany before the war, but didn't have enough for a non-fiction book. The English girls in Bavaria were fascinating nonetheless, so I decided to write a novel from their perspective. These girls were there just before the outbreak of war, and in some cases they were even close to the government, hanging out with Hitler and Hess. Sending your daughters to finishing school in Germany was the thing to do.
SPIEGEL ONLINE: Why?
Johnson: Germany was probably our closest European partner at that time. And don't forget that George V changed the name of his family from "Saxe-Coburg and Gotha" to "Windsor" only in 1917, during the First World War. There were still aristocratic connections and friendships with Germany between the wars. Two newspapers dealt with Anglo-German relations and printed articles about how wonderful Germany was, how amazing the scenery and how great Hitler was. The British liked that Germany was very clean.
SPIEGEL ONLINE: Where did the British girls in Germany typically go?
Johnson: Some moved to Berlin or Dresden, but Bavaria with its mountains, castles, museums and beer cellars was more attractive. Oberammergau was well known in England. My maternal grandmother was in Bavaria in the 1930s; she was Jewish. She enjoyed the opera in Munich, skiing in the mountains and later fell in love with a ski instructor from Freiburg, a member of the National Socialist party. His family called her "die Jüdin," the Jewess. Their relationship went disastrously wrong and she came back to England. I met a dozen English women while researching my book who were in Germany between 1935 and 1938, most of them over 90 by the time I interviewed them.
SPIEGEL ONLINE: What did they tell you?
Johnson: They said: "We had the best time of our lives." They felt fantastic being in Germany during the Third Reich. "It was the highlight of my life," one told me. To them, it was a rich experience, because England was very stuffy at that time -- lots of unemployment, terrible food and nasty weather. In Bavaria they had the crisp mountain air, a healthy life, the opera, the mountains and handsome Germans in uniform. They couldn't believe their luck! No chaperons, no parents. They had everything, including sex.
SPIEGEL ONLINE: What did they think about the Germans?
Johnson: They loved them! I asked the women: "Were you in love at that time?" And they said: "All the time, with everybody." They typically spent six months there, went to parties and were celebrated. Of course, they were not poor. The exchange rate was favorable for them.
SPIEGEL ONLINE: Were they aware of the dangers posed by the Nazis?
Johnson: They weren't aware of anything at all. They would see a sign at a swimming pool saying "No Jews," and they'd think: "What is a Jew?" They didn't know any Jews. Also, they were upper-middle class English girls, so almost by definition their fathers were probably quite anti-Semitic. It was an anti-Semitic time, not only in Germany. We had the rise of the far right, the brown shirts, and Oswald Mosley, leader of the British Union of Fascists. My mother-in-law's family was typical of aristocratic attitudes of this period. They were very pro-German. My mother-in-law's father was chairman of the Anglo-German Alliance, which was set up to bring the two countries closer together. He would make speeches in the House of Lords saying Hitler is a sound chap.
SPIEGEL ONLINE: What did the women say about Hitler in the interviews with you?
Johnson: They're not saying anything good about him, but they won't change their opinion of what they felt before the war. To them, it was the perfect time. Maybe they saw the SS marching on the street, but basically they enjoyed themselves. "Hitler was marvellous, the problem was, he went a little bit too far," one of the women told me. Others said they couldn't believe that these wonderful people they spent such a happy time with could be capable of things like these. You have to remember England in the 1930s suffered from a widespread depression. And then these girls go to Germany, and on the surface everything looks good. They didn't know what the regime was doing, they didn't know about the Nuremberg laws. One of them told me about her music professor, who suddenly disappeared. He was Jewish and had to flee. Nobody became suspicious. It was wilful blindness.
SPIEGEL ONLINE: When did this change?
Johnson: The English turned against Germany in September 1939, after the invasion of Poland. Most Britons had to leave Germany that summer. The only one left in Munich was Unity Mitford, a prominent British Nazi, big Hitler fan and part of his inner circle. In some way, Unity was an extreme example of the English fascination and admiration for Hitler. Her parents went to Germany and tried to get her to return to England, but she refused. They had to leave without her.
SPIEGEL ONLINE: Despite its subject, "Winter Games" is not an unhappy or even tragic novel. What has the response been like?
Johnson: It's been a quite difficult book to promote. People still think it's a dangerous topic. I talked about it during the Jewish Book week in London. The audience was almost entirely Jewish. The first question was: "What was the appeal of the Nazis for you, Rachel?"
SPIEGEL ONLINE: You went to Berchtesgaden, where many of the top Nazis had vacation homes, on a research trip. What was your impression?
Johnson: I found it really dark. By accident, I went there on Hitler's birthday. People lit candles on the site of the Berghof, his former residence. That was quite weird. The mountains and the scenery around the Königsee lake are beautiful, but it's very hard to avoid the history -- or, as the tourism people call it, "the challenging past."
SPIEGEL ONLINE: Why are the British still so obsessed with Nazis, Hitler and World War II?
Johnson: It's bizarre, isn't it? I think there are more English books published on Nazism than on any other subject. It remains a period of great fascination, a time of great danger, but also of great English bravery. I thought it was important to try to tell this part of our past from the perspective of some young and slightly naive women.
Interview conducted by Christoph Scheuermann
© SPIEGEL ONLINE 2013
All Rights Reserved
Reproduction only allowed with the permission of SPIEGELnet GmbH
336 pages; 18.99
The systematic attempt to murder the Jews, gypsies, and others challenges the enlightenment assertion that humanity is progressing and that God is good. In terms of their pre-war program, Irwin believes the Nazis succeeded since so many citizens and nations participated in, or acquiesced to, the eradication of European Jews. Genocide has continued in Cambodia, Rwanda and Bosnia. The Holocaust is not an historical aberration but an aspect of human behavior. Can we still claim to be good? Jewish tradition asserts that human beings are neither inherently good nor innately evil, but a mixture of both. In us is the spark of good that comes from being made in God's image. Our lives are a moral struggle with small signs of kindness and faith. And we must realize and remember that the oppressed can become the oppressor. Why be Good? Because both God and humanity need us to be.
In any discussion of morality in our age, we cannot avoid the Holocaust. The systematic attempt to murder the Jews (as well as Gypsies and others) represents a challenge to the Enlightenment's assertion of human progress and to traditional understandings of God's goodness. So immense and horribly efficient was the Holocaust (called, in Hebrew, the Shoah), so vast was the scale of destruction, that no excuse of unawareness holds moral sway. The Nazis succeeded because so many citizens, in so many nations, participated in or acquiesced to evil behaviour. Since 1945 we have seen genocide repeated, in Cambodia and Uganda, in Rwanda and Bosnia. Modernity, with its access to science and technology, has perfected the killing of others in a way that makes the carnage exacted by religious wars of the past pale by comparison. In fact, the very question "Why be good?" challenges the assumption of modern Western thought that goodness is innate. If we have to ask the question, then perhaps we are not good; what we are trying desperately to do is to find reasons to keep at bay the chaos unleashed by seeing what we human beings really are. The Shoah is not an historical aberration, but a paradigm of human behaviour.
In posing the question "Why be good?" we must confront the Holocaust and the burning moral issues it raises. These questions are the focus of my reflection: Which is most truly human: good or evil? Where was God, and what does our answer to this question mean? And, finally, can we learn anything about how to be good from the Holocaust?
The Reality of Evil... and Good
Many years ago I visited Dachau, a Nazi death camp less than 20 kilometres from Munich on the outskirts of a small Bavarian town. As I walked under the gate, with its mocking words Arbeit macht Frei, what struck me was the beauty of the place. Little was left to suggest that this was once Hell. Teenagers were laughing, birds were chirping. There were no echoes of the screams of horror, no residue of the stench of burning bodies. Inside the remaining barbed wire fence and the lookout towers, the only hint that this was a death camp was one reconstructed, spotlessly clean, wooden bunker with only the merest suggestion of the poverty of the accommodations. Most chilling of all was the lushness of the burial pit next to the crematorium where bits of bone and ashes were all that remained of fathers and daughters, hasidim and labour Zionists, sages and labourers.
If nature could so effectively obscure the evil of genocide, it is not hard to imagine how human beings could try to deny the Holocaust. Walking through Dachau it became clear that covering up the horrors of the Shoah is not a contemporary revisionist phenomenon. It was part and parcel of the Nazi attempt to wipe out a people, yet surreptitiously defend such action as good, useful and necessary. In this war against the Jews (and others), propped up by an Orwellian "newspeak," the forced removal of people from their homes was spoken of as "relocation," murder was a "solution," genocide was merely ridding the world its unwanted human refuse.1 Throughout the winter and early spring of 1945 camp after camp, in which Jews had been exterminated, was dismantled. In a desperate effort to destroy the physical evidence of their crimes, SS guards tore down barracks, planted grass over mass graves and destroyed records. That so little remained in Dachau, therefore, had nothing to do with the ravages of time or the disregard of later governments. Dachau was given over to nature because the Nazis wanted it forgotten.
Was this obfuscation of the truth a stratagem to fool Jews into passive cooperation with their own extermination, facilitating more efficient killing? No doubt, though this does not explain the continuation of the ruse after most of Europe's Jews were killed. Was the destruction of the camps done out of fear of retribution from the Allies which might follow the war? To some degree, perhaps, although at a stage when it was just as likely that the Nazis would triumph, Heinrich Himmler, SS Reichsführer, told an assembly of his high-ranking officers: "the killing of the Jews is the most glorious page in our history, one not written and which shall never be written."2
Herein, I believe, lies an inherent moral dilemma within National Socialism. Were the Jews not an inferior race, "rootless subhumans," vermin and "bacillus" to be exterminated?3 Why, if the Jewish nation was so evil, was their destruction not something of which to be proud? Why, with one breath, glorify the act of murder, and with the other try to hide the act of ridding the world of such evildoers? I would suggest that at least part of the reason is guilt and shame. Although the murder of the Jews was couched in innocuous, bureaucratic language, the stain of so much blood could not be washed off. And this, I think, made even the Nazis question themselves. The language and behaviour of deception, therefore, was meant to fool not only the victims, but the perpetrators also. Thus did evil blossom, as those who participated in the murder of millions justified their duplicity and those who turned aside were given the moral "out" to salve their consciences. By defining evil as good, Nazi ideology pushed aside the voice of good within their own souls.
Do I, then, wish to forgive those who committed such crimes? Let me answer by quoting a story told of someone who once came to the great Jewish scholar, Abraham Joshua Heschel, who himself escaped the Nazis. This person said, "Dr. Heschel, you are a man of deep faith. Do you not believe that it is time to forgive?" Heschel replied with a story that implied, "Forgiveness can only be granted by those who are wronged. Ask the dead for forgiveness."
My intent in raising this issue is simply this: that even among those most involved in genocide there was a recognition that there is a moral good which their heinous choices violated. This argument is not the same as saying people are basically good. Rather, the careful attempt of the Nazis, in word and behaviour, to hide the Holocaust even at the height of its execution, hints at the struggle that existed within them, as it does in all of us, between good and evil.4
Given the horrific evidence of the Holocaust we can no longer accept the idea of human progress or inherent human beneficence. Though Anne Frank's assertion in her diary that "in spite of everything I still believe that people are really good at heart" is touching, with hindsight it rings hollow and naive. After all, less than three weeks after she penned these lines she was captured after an informer turned her and her family in to the authorities. She died a few months before the war's end in a death camp at the age of 15. Sad to say, then, I do not believe people are basically good. The Holocaust rooted out that notion.
Hobbes said that human beings are, by nature, ruled by the law of the jungle and need to be constrained by law, while Rousseau saw humanity as essentially good. In contrast, Jewish traditions deny that human beings are either inherently good or innately evil. Judaism asserts that we are of two hearts or inclinations, our lives an ongoing morality play.5 One is called yetzer ha-tov, the inclination to good. The other is called yetzer ha-ra, often translated as "the evil urge," but which I prefer to translate as "the animal instinct." The yetzer ha-ra is not evil per se. In fact, the Talmud understood it as essential for the continuity of life. "Rabbi Shmuel ben Nahman said: Were it not for the yetzer ha-ra no man would build a house, marry a wife or beget children."6 Thus, the yetzer ha-ra is our innate desire to survive, the animalistic tendency to dominate and control, to have the self triumph and (in the words of current scientific understandings) leave our genetic material to perpetuity. Left to its own devices, however, the yetzer ha-ra would lead us to chaos, to the anarchy of self-fulfilment and the rule of evil. And it is powerful, as the Talmudic rabbis teach: "When the yetzer ha-ra is triumphant, none remember the good." Nevertheless, also within us is a spark of goodness because we are creatures made "in God's image." Goodness is the tendency towards self-sacrifice, devotion, and a caring spirit-no less real a force within us.
The Holocaust gave proof that the struggle for goodness within us is a serious endeavour, with far-reaching ramifications. It also demonstrated that evil is real, that it is (in the words of Hannah Arendt) "banal."7 Evil is not the enemy from without, but from within. Yet the Holocaust also hinted at an innate goodness, about which I will say more later on.
God's Need of Human Goodness
Among those writing about the Holocaust there is debate as to its historical uniqueness. Though my purpose is not to debate the merits of either side, it is important to me to note the different attitudes towards the Divine that each view brings. More traditional Jews tend to see the Shoah as one in a series of Jewish calamities and tragedies, quantitatively greater, but not qualitatively different enough to pose a challenge to the answers provided by past traditions. Liberal Jewish thinkers identify an inherent uniqueness in the Shoah that represents a break with, if not a challenge to, past theological perspectives.8
Like the traditionalists I do not believe that the Shoah is unprecedented in Jewish history, other than in the magnitude and efficiency of its murderous methodology. At the same time, I cannot accept the classic Jewish view that continues to assert a God who may be mysterious, but remains caring, ever-present and good. For me, previous understandings of our relationship with God no longer make sense. What answered our ancestors' doubts and questions two thousand years ago, or four centuries ago, cannot satisfy our need to make sense of a world turned upside down. While it is easy to say human beings chose to act in ways which allowed the Holocaust to happen, we are still stuck with how a God who works in history could allow humanity such moral latitude. In short: Where was God?
To suggest, as some have, that the Shoah was a punishment for human sin or the price for the end of Jewish exile is, to me, both theologically and morally repugnant.9 What God worth having faith in would so mock human reason or our sense of justice? The writer and storyteller Elie Wiesel, in describing a recitation of the traditional confessional in the camps on Yom Kippur, the Day of Atonement, touches on the irony:
It was better to believe our punishments had meaning, that we had deserved them. To believe in a cruel but just God was better than not to believe at all. It was in order not to provoke an open war between God and His people that we had chosen to spare Him, and we cried out: "You are our God, blessed be your name. You smite us without pity, you shed our blood, we give thanks to you for it, O Eternal One, for You are determined to show us that You are just and your name is justice."10
Faced with such a logical incongruity, liberal Jewish theologians since the Holocaust have struggled to understand God's role in it. The American rabbi Richard Rubenstein argues that God is dead (or, at least, the personal God of Jewish tradition).11 Martin Buber speaks of an "eclipse" or of the "hidden face" of God. The Yiddish poet Jacob Glatstein pushes the theological envelope even further. In a 1946 poem entitled "Not The Dead Praise God" he hints that the Shoah ended God's role in our lives. Playing on the ancient Jewish tradition that the covenant with God was accepted when all the people of Israel stood together at Sinai, Glatstein suggests that the vast, communal destruction of the Jews nullifies that bond:
- We received the Torah at Mount Sinai
- and in Lublin we gave it back.
- Not the dead praise God-
- the Torah was given for the living.
- And as we all together
- stood in a body
- at the Granting of the Torah,
- so truly did we all die in Lublin.12
Unlike these writers, I do not believe that the Shoah abrogated the covenantal relationship we have with God. When my ancestors established a link with God, it was an eternal bond. Like it or not, as a Jew I feel commanded by the covenant and by Torah, which constitute the terms of that partnership. What is the alternative? A sacralization of humanity? But what is most human-those who fought the Nazis, or the majority who acquiesced to them? If there is no God what moral ground allows me to say the Nazis were evil? Perhaps, given their world, we should say they were right?! (Not so unthinkable a notion. I have had students argue, "they were entitled to their opinion"!) That I cannot do. So, with all my uncertainty, I turn back to God.
The "but" in this, however, is that the Shoah has changed the nature of our relationship. As a modern, liberal Jew I perceive that the Divine-human connection is not static, but is influenced by historical circumstance. God may be unchanging (or maybe not?), but the bond between Israel and God is ever in flux as the Holy One and human beings move through history.13
Given this, I believe that the covenant remains. What changed in the Holocaust is that God failed. For whatever reason, there was no Heavenly Witness to the Divine presence in the Shoah. God, if it is possible to say so, was present, but came off the Heavenly Throne. Why? I do not know. Like Job, I face the Whirlwind unable to comprehend the often absurd, sometimes even cruel mysteries of the universe. In my troubled faith I sing with the Psalmist: "Why, O Eternal, do You stand aloof, heedless in times of trouble?"14 Yet even when God fails I am called (no, it is more than this ... it is commanded) by the conviction of my ancestors and my partnership in the covenant, to be witness to the Divine. After all, my people failed God in the past, but the bond remained. Now, in the mystery of God's failure it is we who are needed (more than ever) to sanctify the Divine Name.
God's need of humanity has a strong foundation in Jewish sources. In the seventeenth century the Jewish mystic Rabbi Isaac Luria argued that at the moment of creation a moral "Big Bang" occurred, scattering good and evil throughout the universe. Our mission, he argued, is to restore the broken fragments of the world together in what he called tikkun olam, a "repairing of the world." Why God will not (or cannot?) do this, we do not know. What Lurianic kabbalah does, however, is empower humanity to restore the universe to wholeness. After the Shoah the need for a tikkun, restoring God's universe (and thus God's Presence in the world) is even greater, for we now know that the brokenness is deeper and more profound than we previously understood. Thus do I find the power to make a difference, the courage to find meaning. Such questioning faith is not really as radical as it may seem:
A believer once came to the Hasidic rabbi, Menachem Mendel of Kotzk, saying he could no longer believe. The Kotzker did not throw him out, but questioned him. "What do you mean? Why can't you believe?"
"Because I doubt the world has rhyme or reason. The righteous suffer and the wicked prosper." "So why does that concern you?" "What do you mean 'why?'," the student answered, "If there is no justice in the world, I doubt there is a God governing the world."
"So what do you care if there is no God in the world?" "Rebbe, if there's no God in the world, my life makes no sense, there's no meaning at all."
"Do you care so much about the world and God's existence?" the teacher continued.
"With all my heart and soul, Rebbe."
"If you care so much, if you are pained so much, if you doubt so much ... you believe."15
God's failure in the Shoah indicates God's need for human faith. Indeed, the Biblical text itself speaks of God being made "holy" by the people: "And I will be sanctified within the People of Israel."16 And on the verse, "Let them make Me a sanctuary that I will dwell within them," one commentator notes that the text indicates that God's holiness is within them (i.e. in their acts, not the building).17 On the creation of this indwelling of God, the eminent Rabbi Joseph Soloveitchik reflects on the angels of Jacob's dream which "descend and ascend"-a rather unexpected ordering. He comments that it is our deeds which cause the Divine Presence to come to earth, and only then can the "holy ones" (God's Presence?) ascend. Our job, then, is to bring God back to earth.
The Holocaust as a Paradigm of Empathy
If, as I assert, the Holy One did fail us, then how can we know how to act? Is there any hint of righteousness that emerges out of the Holocaust, any possibility of gleaning what is ethical out of this event?
There are some who would deny any possibility of meaning in the Shoah. They look at the evil committed, the horrors of the crime, and angrily denounce any attempt to give "meaning" to such emptiness. Can we find purpose in the death of a child thrown out of a third-story apartment, or thrown still alive into the crematorium in order (and these are the words of a witness at the Nuremberg trials) to "economize on gas"? Is there any sense in the dehumanization of the ghettos? Any meaning in the way families were told to dig communal graves, then killed together, the bodies of parents and children heaped one upon the other? No, say some, there is nothing we can glean from the Holocaust. It was a crime so great, so devoid of any humanity, that to try to give it meaning is to mock the dead. Let us find meaning elsewhere.
To some degree, I think this perspective is true. We can, indeed, learn much from the sorrows we encounter in our lives. In the Talmud there is an assertion that certain sufferings in life are yissurin shel ahavah "sufferings of love." What this means is that God sends tribulation not out of cruelty, but with design, to enable us to come to deeper understanding, compassion and empathy. But the rabbinic mind understood that there are limits.
Rava, in the name of Rav Sehora, in the name of Rav Huna, said ... "As there must be willingness in a trespass offering, so there must be willingness in the suffering" ... And Rabbi Yohanan says, "Leprosy and children are not love sufferings."18
Though growth does come out of loss, therefore, the Talmudic sages could not conceive of a God so cruel as to send suffering which the one in anguish did not see as a test of faith. Should we, then, be brazen enough to find meaning in the Shoah-where a million and a half children were murdered, where few if any accepted their suffering with a willing spirit? The moral absurdity of the Holocaust denies this possibility.
In contrast, there are some who derive ultimate meaning from the Holocaust. The contemporary philosopher Emil Fackenheim argued that the new commandment for the Jew is not "to hand Hitler yet another, posthumous Jewish victory."19 It is a point of view which strikes a popular chord. Several years ago I read a letter from a young man who said that he was staying Jewish to spite Hitler. It made me incredibly sad. This was why this person was a Jew? What, I thought, of the values of our traditions? What of God's call in Torah to defend the orphan, the widow and the stranger-an imperative to protect those most vulnerable in society? What of the prophetic assertions of justice? What of the Talmud's openness to diverse paths to truth, a fine-tuned religious pluralism? What of Yom Kippur's assertion that repentance is possible, that we are not eternally damned by the wrongs we do? What of Torah's clarion message, "You shall be holy, for I the Eternal your God am holy"? "What is good?" has been the keystone of Jewish teaching and reaching towards the Divine since our origins. For this person, however, what was foundational was animus and revenge. That is not a way of life I can find meaningful-and if that is what the Holocaust gives us, then maybe it has no transcendent meaning, nothing which it can teach us fifty years later.
Surely we cannot (and ought not) establish a call to moral goodness in the attempt to nullify Jewish existence. There is, for me, a sense of giving in to the enemy by allowing the Shoah to define who I am. The Nazis wanted a world that was Judenrein. To allow their negation to define me only gives legitimacy to that intent.
Furthermore, if I accept that God enters history (thus, by nature, giving purpose to existence) as a God who needs human goodness to prevail, then I am forced to confront the Shoah as teaching more than just something negative. If there is meaning in the Shoah it is not because of it, but in spite of it. Only thus can it become a paradigm for human compassion, empathy and love, a model that demonstrates the transcendency of good and the dangers of evil.
In the hell of the Holocaust itself also lies the seed of redemption. And it is demonstrated in the goodness of those who saved Jews, in the response of Jewish victims to the Holocaust itself and in the maintenance of hope after the war was over. It is not through philosophical musings, then, but conscious acts of courage, fortitude, endurance and hope that breath is given to the Biblical vision that we are made in God's image. Goodness was and is demonstrated not so much in thought or "meaning," as it is in our behaviour. If there is a tikkun to be made, in the world and in human-divine relations, it must begin in the encounter we make with other human beings. "In a place where no one acts like a human," taught the sage Hillel some two thousand years ago, "strive to be human (or perhaps it is better translated 'humane')."20
In contrast to the evil of the Nazis and their collaborators stood a remnant of righteous men and women who saved Jews. Oskar Schindler represents thousands of people who risked their lives to help Jews during those years. Some were diplomats, like Raoul Wallenberg and Senpo Sugihara. Others, like the circle of friends who hid Anne Frank and her family, were "normal" people who acted in extraordinary ways. We ought not to consider them saints, for to do so would distance them from humanity. Rather, it is because they were human (with all their foibles) that the evil in others is so worthy of our contempt. Their courage (though many of them deny that they were doing anything other than what they felt anyone else would do), no less than their exceptionalness (they represented less than one percent of the European population) demonstrates the tension inherent in human existence. Our lives are a moral struggle and when we emulate those who acted righteously we give testimony to the power of good. As Rabbi Edward Feld concludes in his study of this era, "The extraordinary power of the breath of diaphanous holiness is as real as the boot of the armies of Gog and Magog. To negate the reality of either is to belie the truth of existence."21
Goodness was also seen in the Holocaust's victims. Maintaining human dignity was nearly an impossible task for those who lived during the Shoah, but many survived only because they struggled to do so. A number of survivors speak of the small acts of courage which allowed them to live.22 A kind word, a small piece of extra bread, a prayer book written on a roll of toilet paper-these were signs of faith, goodness, courage and humanity. Eliezer Berkovits responds to these acts of courage with the observation, "If man's [sic] ability to perpetrate incomprehensible crime against his fellow bespeaks the absence of God, the non-existence of divine providence, what shall we say of his equally incomprehensible ability for kindness, for self-sacrificial heroism, for unquestioning faith and faithfulness?"23 In the absence of God's witnessing to His/Her own Presence, therefore, human beings did not fail to serve as witnesses. I am not prepared to say that the six million died al kiddush Hashem, "in sanctification of God's Name." Yet their lives-and the acts of love between them which so many survivors speak about, in the most inhumane of situations-testify to God even when God did not bear witness.
One of the most dramatic examples of this comes from a story told by Hugo Gryn, a rabbi now in England. He recalls how meagre the daily rations were when he was in a Nazi concentration camp with his father. With luck the inmates were given a pat of margarine each week. It was barely enough fat to keep each person alive and was, as a result, among the most precious of items. One December, as the holiday of Hanukkah approached (the eight-day celebration which recalls the ancient Maccabean struggle for Jewish freedom, marked by a daily lighting of a flame), he saw his father and the other prisoners putting aside their margarine. When the first night of Hanukkah came he could not believe his eyes when his father took a small piece of string, placed it in the melted margarine, said the prayers and lit the wick.
"But father," protested the young Hugo, "I don't understand. You've taught me that pikuah nefesh, saving a life, nullifies all the other commandments. How can you give up your margarine, which you need to sustain yourself just to light a Hanukkah light?" "My son," Hugo's father replied, "You and I have seen that it is possible to live up to three weeks without food. We once lived almost three days without water; but without hope we would not be able to survive for even three minutes."24
If, after the Shoah, the Jewish people had given in to despair one could understand. That neither individual survivors nor the Jewish people did so is the most enduring negation of the evil unleashed by the Shoah itself. It is a willingness to laugh, a desire to have children, a willingness to fight for our own future and the future of others, which defeats the emptiness of the Shoah.
This is not a return to Rousseau's vision, that human nature is essentially good. The Holocaust taught the Jews that no one will help us unless we help ourselves (a lesson others should also bear in mind). I say this with neither bitterness nor anger, but as a learned communal reality. As a people we now know that the rhetoric of hate ought to be taken seriously and that those who say they want to harm us will, if given half the chance, do exactly that. If we do not stand up against antisemites and hate-mongers, if we do not pursue them in the courts and try to stymie them through legislation, why should others? Men and women of good will may well join us in our fight for Jewish rights and Jewish survival (all the better if they do), but why should they unless we remain vigilant ourselves?
This does not mean to say that Jews should only take care of their own. Far from it. The Holocaust should make us vigilant that such crimes never occur against any other people. It is a paradigm of empathy, for we know what it is like to suffer tyranny, what it means to be without power. Our experience must not embitter us, or distance us from others. In fact, the response must be the very opposite. "For you know the feelings of the stranger" God taught us soon after we were freed from slavery.25 The Shoah, then, only reinforces Torah's command to "love the stranger."
What, then, is the greatest weapon against the Holocaust? I think it is hope: a hope that despite the ongoing reality and power of evil, tomorrow can be better than today. It is not easy, this articulation of goodness. There is no surety that those who once were oppressed will not turn into oppressors. Nor is it certain that good will triumph, as can be attested to on the nightly news. Asserting the rights of those with little power is, even in the most tolerant of nations, a risky business. Thus, to grasp the yetzer ha-tov, to affirm life, justice, human dignity and equality, remains no less difficult today than it was inside the gates of Dachau. But to do otherwise is to fail the God who needs us, the world that needs us, our human sisters and brothers who need us.
A true story. Elie Wiesel recalls that a number of years ago, around 1980, he went to the border of Cambodia with a friend, Rabbi Marc Tanenbaum, bringing food and medication to refugees. By coincidence it happened to be the day when he commemorated the anniversary of his father's death. Called yahrzeit, it is a time to go to synagogue, pray and say kaddish, a prayer which, in remembering the dead, praises God. Wiesel recalls:
In the morning it was easy: there was an Israeli embassy. I organized a minyan [a quorum of ten Jews that constitutes a prayer "community"] and we prayed. But...for the afternoon prayer we were already at the border...and I turned to Rabbi Tanenbaum and said, "Get me ten Jews. I need ten Jews."
How can you get ten Jews among people you never met, Cambodians, Thais? ...We managed. A correspondent from the New York Times was there, a young philosopher from France, a young Sephardic Jew from England, and Rabbi Tanenbaum managed to get me a minyan.
After the prayer I said kaddish. And all of a sudden I realized there was a young man who was a physician from France, and he repeated the same prayer. When we finished I turned to him. I said, "Do you also have yahrzeit?" He said, "No." I said, "then why do you say kaddish?" And then naively, innocently, but fervently he stretched out his hand across the border to Cambodia and he said, "for them."
For me, that is the task before us. It is asserting God's name wherever suffering exists. It is not allowing God to fail again. Our people, which has walked the "valley of shadows," cannot be indifferent, apathetic, uncaring to wrongdoing, be it grand or small. We cannot be silent. We are commanded to act. Why be good? Because God needs us to be...and (God knows) humanity does, too.
The Evolution of the Role and Office of the First Lady: The Movement Toward Integration with the White House Office
- Using All Available Means of Persuasion: The Twentieth Century First Lady as Public Communicator. Gutin, Myra // Social Science Journal;2000 Index Issue, Vol. 37 Issue 4, p563
Focuses on the public communication skills of first ladies in the United States. First ladies' use of public discourse, media and writings; Social hostesses and ceremonial presences; Independent activists and political surrogates.
- Party Politics: The Political Impact of the First Ladies' Social Role. Mayo, Edith P. // Social Science Journal;2000 Index Issue, Vol. 37 Issue 4, p577
Examines the political impact of first ladies' social role in the United States. Epitome of the chief executive's social entertaining; Development of the social role.
- Role Constraints and First Ladies. Wekkin, Gary D. // Social Science Journal;2000 Index Issue, Vol. 37 Issue 4, p601
Focuses on the role of first ladies in the United States. Capacities or advantages of the role of the first lady; Consigliere role; Shadow president.
- Ssh! // Time;9/1/1975, Vol. 106 Issue 9, p10
The article discusses the statements made by First Lady Betty Ford on the personal questions the media ask her regarding her life. It states that in her interview with Myra MacPherson, Ford recalled the moment their king-size bed was moved to the White House and how she managed to sleep with her...
- HILLARY RODHAM CLINTON 42nd FIRST LADY. // Monkeyshines on America;Jul1997 Arkansas Issue, p4
The article profiles the First Lady of the United States, Hillary Rodham Clinton. From an early age, Hillary Rodham took an active role in the welfare of others in her community in a Chicago, Illinois, suburb. When Hillary Rodham met U.S. President Bill Clinton, she was a student at Yale...
- First Ladies. Gould, Lewis L. // American Scholar;Autumn86, Vol. 55 Issue 4, p528
Discusses the American people's fascination with their First Ladies. Analysis of the performance of First Ladies; Reasons why First Ladies captivate the public; Ways in which the relationship between First Ladies and the American people have evolved in the 20th century; Quality of participation...
- A Dress Is a Dress Is a Dress. GALCHEN, RIVKA // New York;3/23/2009, Vol. 42 Issue 9, p34
The article discusses clothing worn by First Lady Michelle Obama. The author suggests that people read too much into Michelle Obama's fashion choices, since they are most likely not definitive statements about older Americans, racial minorities, possible socialist tendencies, or the benefits of...
- Clinton, Hillary Rodham. G. O.; M. E. R. // Current Biography;Mar2009, Vol. 70 Issue 3, p16
A biography of Hillary Rodham Clinton, former secretary of the U.S. Department of State, Senator and First Lady, is presented. She was born on October 26, 1947 in Chicago, Illinois. She finished her Business Administration degree at Wellesley College in Massachusetts. She then enrolled in law at...
- Farrell: A Nod to Laura Bush. Farrell, John Aloysius // U.S. News Digital Weekly;5/1/2009, Vol. 1 Issue 15, p21
The author evaluates the performance of Laura Bush as a First Lady of the U.S.
When neither MCAR nor MAR hold, we say the data are Missing Not At Random, abbreviated MNAR.
In the likelihood setting (see end of previous section) the missingness mechanism is termed non-ignorable.
What this means is
- Even accounting for all the available observed information, the reason for observations being missing still depends on the unseen observations themselves.
- To obtain valid inference, a joint model of both Y and R is required (that is, a joint model of the data and the missingness mechanism).
- We cannot tell from the data at hand whether the missing observations are MAR or MNAR (although we can distinguish between MCAR and MAR).
- In the MNAR setting it is very rare to know the appropriate model for the missingness mechanism.
Hence the central role of sensitivity analysis: we must explore how our inferences vary under MAR and under a range of plausible MNAR models. Unfortunately, this is often easier said than done, especially under the time and budgetary constraints of many applied projects.
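One common and simple form of such a sensitivity analysis is the delta-adjustment approach: impute the missing values under a MAR assumption, then shift the imputations by a range of offsets delta and watch how the target estimate moves (delta = 0 recovers the MAR analysis). The sketch below is illustrative only, using simulated data and arbitrary parameter choices of my own, not an example from this source:

```python
import random
import statistics

random.seed(0)

# Simulate a toy dataset: Y depends on a covariate X, and the chance
# that Y is observed depends on Y itself -- i.e. the data are MNAR.
n = 2000
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = 2.0 * x + random.gauss(0, 1)
    p_obs = 0.9 if y < 1.0 else 0.4   # large Y values go missing more often
    data.append((x, y if random.random() < p_obs else None))

obs = [(x, y) for x, y in data if y is not None]
mis_x = [x for x, y in data if y is None]

# MAR-style imputation model: regress Y on X using the observed pairs.
mx = statistics.mean(x for x, _ in obs)
my = statistics.mean(y for _, y in obs)
beta = sum((x - mx) * (y - my) for x, y in obs) / sum((x - mx) ** 2 for x, _ in obs)
alpha = my - beta * mx

# Delta adjustment: shift each MAR imputation by delta and recompute the
# estimate of interest (here, the overall mean of Y). delta = 0 is MAR.
estimates = {}
for delta in [0.0, 0.5, 1.0, 1.5]:
    imputed = [alpha + beta * x + delta for x in mis_x]
    estimates[delta] = statistics.mean([y for _, y in obs] + imputed)
    print(f"delta={delta:.1f}  estimated mean of Y = {estimates[delta]:.3f}")
```

If the substantive conclusion is stable across the range of deltas judged plausible by subject-matter experts, the MAR analysis can be reported with more confidence; if it flips, the MNAR risk must be acknowledged.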
A strain of bacteria that can enter the human system through contaminated water or food such as meat or poultry, and eggs with cracked shells. Other foods can be contaminated by touching salmonella-carrying foods or unwashed surfaces (like cutting boards) that have had contact with them. The presence of salmonella is difficult to detect because it gives no obvious warnings (such as an off smell or taste). The bacteria can cause stomach pain, nausea, vomiting, diarrhea, headache, fever and chills. Symptoms can appear in as little as six to seven hours or take as long as three days. It seldom causes death and can be cured with antibiotics.
From The Food Lover's Companion, Fourth edition by Sharon Tyler Herbst and Ron Herbst. Copyright © 2007, 2001, 1995, 1990 by Barron's Educational Series, Inc.
Definition of ocean perch
: any of several marine scorpaenid food fishes (genus Sebastes): a : redfish a; also : a related food fish (S. fasciatus) b : one (S. alutus) abundant in the northeastern Pacific from Japan to the Bering Sea to southern California
RALLY RECALLS 1953 GERMAN REVOLT More than 200 memorial services recall the day in 1953 when Germans in the Soviet Sector attempted to revolt and were defeated by Soviet tanks and guns. Vice-Chancellor Erhard and Mayor Brandt lead 75,000 people in a rally before the town hall as a Freedom Torch is lit.
ADENAUER VISITS LBJ IN TEXAS A helicopter brings West German Chancellor Adenauer to a Texas celebration with Vice-President Lyndon Johnson at his Texas ranch. The Chancellor is presented with a ten-gallon hat.
HITLER APPOINTED CHANCELLOR - HD After a failure to build a majority government and weeks of negotiations, German president Paul von Hindenburg appoints Nazi leader Adolf Hitler to the position of Chancellor. Hitler meets with the newly appointed Vice-Chancellor Franz von Papen and Minister for the Prussian Interior Hermann Goering. The Berlin Storm Troopers hold a parade in Hitler's honor. Master in Apple Pro Res 422 HQ 29.97fps 1080p.
Date: January 30, 1933 - BLACK/WHITE Source: HD Length: 00:00:49:00, With Audio
LBJ AND HUMPHREY VISIT WEST GERMAN OFFICIALS President Johnson and Vice-President Hubert Humphrey make a round of courtesy calls on West German officials in Bonn after attending the funeral for former Chancellor Konrad Adenauer. They pose for pictures with current chancellor Kurt Georg Kiesinger and vice-chancellor Willy Brandt.
Date: April 28, 1967 - BLACK/WHITE Source: Video: BetaSP Length: 00:00:37:00, No Audio
The distinction between tangible and intangible costs can be subtle; for example, time can be assigned a monetary value. Intangible costs most affect individuals and society in the long run. The time it takes to do clerical and technical tasks such as ordering, installing, and securing hardware and software is usually buried in ordinary operating expenses, but individual teachers who spend their time doing these tasks are not performing other instructional tasks. Time to learn how to use a system or new software package may be accounted for in training costs, and many companies and schools are becoming concerned about the productivity effects of ongoing changes in operating systems and software.
Significant amounts of time are invested by teachers in learning how to teach with technology. Although it is sometimes possible to use technology as an add-on to existing lessons, creating new assignments and activities that take advantage of new technology is time-intensive. Teachers who created new Perseus assignments invested many hours exploring the textual and graphic content contained in Perseus and then scores of hours creating new assignments for their students (M et al.). These assignments are then revised and augmented as more experience is gained. Instructors who use teaching theaters [e.g., Gilbert, 1993; Norman, 1994] also invest considerable effort in creating instructional activities and scenarios. In many cases, these materials and strategies must be modified each semester as new hardware, software, and networking upgrades are made in the theaters. For example, simply changing the directory structure on a file server can require that multiple modules be edited and recompiled.
The creation of conceptual infrastructure takes at least as much time and effort as creating physical infrastructure, but assigning costs is more difficult. First, these costs are often personal in that teachers work longer hours. This may be considered the cost of early adoption; early adopters make things work because they are committed to the technology itself and the change associated with it, and they believe that long-term benefits will outweigh the immediate costs. Although long-term personal benefits may accrue, it is wishful thinking to assume that such commitments scale up to the larger instructional community. Although benefits may scale vertically for individuals, it is less apparent that they scale horizontally to groups of learners or teachers. Technological adoption will not proceed at a linear rate of growth but rather over generational time frames.
Consider some of the systemic costs. Many early adopters of computer technology in the 1970s moved from instructional roles to technical or administrative roles. Many mathematics and science teachers who began using time-sharing systems and later microcomputers became computer coordinators and staff development specialists. Although some of these teachers would have moved to administrative posts or left teaching anyway, many of the most innovative teachers in these fields were lost to subsequent students. What are the effects on the overall teaching workforce? On the one hand, these teachers are not teaching math and science, but calculators and computers are now integral to mathematics and science teaching. Would those changes have come more quickly, or were they made possible by the early adopters' efforts to create and share examples? A similar phenomenon is taking place today at the university level as professors who use technology in teaching may not do as much research, thus forgoing rewards such as tenure, promotion, and merit pay. Will those investments amplify or impede technology integration in higher education? How do we balance the personal cost of an assistant professor not getting tenure against the longer-term effects on students, colleagues, and curricula of the examples this person provided?
As difficult as it is to assess the tradeoffs due to time, it is even more challenging to factor in phenomena such as psychological stress and risk taking. Not only are individuals who use technology subject to the stresses of time pressures and career development, but those who do not use technology may find it stressful NOT to adopt it. How these stresses sum in considering the effects on a school system or a society is a classic diffusion-of-innovation problem [Rogers, 1982]. Using technology in teaching requires teachers to take the risk of failure. Teachers have to deal with the usual technical problems that invariably occur and share expertise with students who spend much time using the technology. Today's networked technologies provide rich sources of information, and students can bring all these resources to the class as easily as the teacher. For most teachers this is a truly exciting advance, but it is quite frightening to those who are less secure in roles as facilitators rather than information providers. Technology sometimes leads to power sharing and blurs distinctions between teaching and learning. In some settings, technology allows teachers to actually model research and learning--processes that employ heuristics and iterative hypothesis testing (e.g., estimation, intelligent guessing). Since much instructional theory calls for carefully planned presentations, and students and parents typically expect such presentations, traveling down blind alleys and exploring ideas can easily be interpreted by students and other adults as disorganization or incompetence. It may be even more difficult to assess teaching for critical thinking than it is to assess the extent to which students do think critically.
When more time is devoted to technology, some mechanical advantages are gained that allow skills and facts to be acquired more rapidly. However, reflection on and evaluation of ideas will likely remain dependent on time on task. Some topics must eventually be forced out of curricula as technology enables teachers and learners to address more abstractions. In some cases, such as the use of calculators in mathematics, some skills can be de-emphasized and some concepts can be more rapidly demonstrated. Technology itself has become an object of instruction at K-12 levels, and other topics must be displaced. Efforts to weed curricula (and the associated retraining and updating of materials) must also be considered in assessing how technology changes the educational enterprise.
Technology also changes how learners think and behave; beyond learning how to use hardware and software, students must learn how to learn with technology. Today, this is mainly related to how attention is allocated. Students in classrooms where multiple stimuli are used (e.g., computer projection, overhead projection, chalkboard, teacher's words) must decide which stimulus is most essential at any instant, and how to record it for later study (e.g., memory, written notes, electronic recordings). If technology is used to assist in recording, new strategies for reviewing and studying those recordings will be needed (it is one thing to copy the teacher's electronic notes and materials, quite another to manage the electronic objects and work through them at a later time). As students invest time in developing these skills, they are not allocating time to content. Although many argue that learning such skills is essential to intelligent citizenship in a technological society, others argue that all attentional resources and time should be focused on content. This was illustrated vividly by the commentary of two students interviewed as part of the Perseus evaluation. Students used the primary texts and word analysis tools in Perseus to develop opinions about concepts such as wealth. One student praised the opportunity to explore, discover, and invent an interpretation; another complained that it wasted effort since reading a scholarly paper on the topic would take less time and be more authoritative. Both students were correct, but the technologically-enabled assignment had quite different costs and benefits for each.
|
<urn:uuid:8a3145e1-00fb-441c-abd2-085af2dca422>
|
CC-MAIN-2016-26
|
http://www.ils.unc.edu/~march/costet/costet.3.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00132-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.962688
| 1,456
| 3.078125
| 3
|
MUNICH (Mar. 20)
Jewish communities in West Germany have protested to the Wuerttemberg-Baden provincial government against a decision to delay repayment of discriminatory taxes levied on the Jews by the Nazis, it was reported here today.
The provincial finance department recently announced that it would not repay an emigration tax imposed on Jews permitted to flee the country by the Nazis until the Federal Government put into effect the “Equalization of Burden Bill.” Nor would it make good a property tax levied by Hermann Goering in 1938 following the assassination of a Nazi diplomat in Paris by Herschel Grynszpan, a Jewish youth, provincial authorities said.
The “Equalization of Burden Bill” is a measure which would tax all persons in Germany, including those entitled to restitution payments, who had not suffered major losses in the war or as a result of the postwar currency conversion. The taxes would be used to assist persons who suffered heavy losses under these circumstances.
Dr. Philip Auerbach, Bavarian Commissioner for Persecutees, today spoke to the West German Parliament in Bonn as a representative of persecutees living in Germany. He criticized the federal legislature for its failure to adopt a restitution measure for all of West Germany.
Dr. Auerbach charged that the government was continuing to pay pensions to former Nazi officials and army officers while former victims of the Nazis were being forced out of public office. He requested priority treatment for persecutees and added that as long as Nazi officials were involved in the government Jews would not participate in it.
|
<urn:uuid:4fdab8c3-3ae3-4f6d-ae69-224b9345c4b6>
|
CC-MAIN-2016-26
|
http://www.jta.org/1950/03/21/archive/jewish-communities-in-germany-protest-delay-in-repayment-of-special-taxes-on-jews
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00049-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.978972
| 327
| 2.796875
| 3
|
Comparing the Costs of Different Fuels
While energy prices have dropped from their record highs a few months ago, many area residents are still wondering how they’ll pay for heat this winter. The most common fuel in northern New England, heating oil, is still priced at over $4.00 per gallon.
But how does the price of oil compare with the price of other fuels and electricity? That sounds like a simple enough question, but it’s actually fairly complicated.
For starters, different fuels are sold in different units. Heating oil, kerosene, and propane are sold by the gallon; natural gas by the hundred cubic foot (ccf) or therm (100,000 Btus); firewood by the cord; wood pellets and coal by the ton; and electricity by the kilowatt-hour (kWh).
Second, the amount of useful heat obtained from a given fuel depends on how efficiently it’s burned. Combustion efficiency varies widely—from as low as 30% for the worst of the outdoor wood boilers to over 95% for a top-efficiency, condensing gas boiler. Baseboard electric-resistance heat is 100% efficient, since the electrical energy you’re paying for is converted entirely into heat. Heat pump efficiencies are much higher (typically 200-300%), because electricity is used for moving heat from one place to another, rather than being converted directly into heat. Note that these electric heat efficiencies don’t account for the “upstream” energy costs of electricity generation, such as the waste heat at a coal or nuclear power plant—but for the purposes of comparing your heating costs, that doesn’t matter.
To further complicate fuel cost comparisons, a third factor is how efficiently heat is distributed. With electric baseboard radiators, the heat is produced right in the room, so the distribution is 100% efficient. Baseboard hot water (hydronic) heat is also usually very efficient, though uninsulated hot water pipes running through an unheated basement can lower that efficiency to some extent. With a hot-air furnace and ducts to carry the heat, however, the distribution efficiency can be quite low, especially if poorly insulated, leaky ducts run through an unheated attic or crawl space—distribution efficiency as low as 60-65% is not uncommon.
To calculate the actual delivered efficiency of your heating system, you have to multiply the combustion efficiency by the distribution efficiency. For example, if you have a 78% efficient oil furnace and a relatively leaky duct system running through an unheated attic (65% efficient distribution), your overall efficiency of delivered heat is just over 50% (.78 x .65)—meaning that only half of the energy you’ve paid for is actually being used to keep you warm!
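The arithmetic above is easy to check in a couple of lines of Python, using the 78% furnace and 65% duct figures from the example:

```python
# Overall delivered efficiency = combustion efficiency x distribution efficiency.
combustion_eff = 0.78    # older oil furnace (from the example above)
distribution_eff = 0.65  # leaky ducts running through an unheated attic
delivered_eff = combustion_eff * distribution_eff
print(round(delivered_eff, 3))  # 0.507 -- only about half the energy heats the house
```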
Finally, to compare different fuels (sold, as described above, in different units), you have to convert the costs to an equal basis so you’re comparing apples to apples. The most common standard is dollars per million Btus of delivered heat. The easiest way to do this is with an online calculator like the one our company provides.
This allows you to enter the cost for a particular fuel, your heating system efficiency, and its distribution efficiency. The end result is a figure in dollars per million Btu that reflects your real costs of delivered heat and allows you to compare that with other options. Say you heat with oil and pay $4.40 per gallon (roughly today’s cash price in Brattleboro), using an Energy Star boiler (83% efficient) and hot water baseboard distribution (98% efficient). Your cost of delivered heat with these assumptions will be $39.00 per million Btu. By comparison, electric baseboard heat at the CVPS rate of 12.3¢/kWh converts to $36.05 per million Btu of delivered heat—that’s 8% lower cost for electric heat! Using a heat pump with a coefficient of performance of 2.0 (200% efficient) and ducts fully within the insulated house envelope drops the cost of delivered heat to $18.39 per million Btu. And firewood, at $250/cord burned in an EPA-compliant wood stove (70% efficient), converts to just $16.23 per million Btu of delivered heat. The beauty of an online calculator is that you can quickly and easily vary any of the inputs to compare lots of fuels and heating options.
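If you’d rather script the conversion than use a web calculator, it can be sketched in a few lines of Python. The heat contents assumed below (138,500 Btu per gallon of oil, 3,412 Btu per kWh, and 22 million Btu per cord of seasoned hardwood) are typical published figures, not numbers from this article; with them, the calculation reproduces the dollar figures quoted above to within rounding:

```python
def cost_per_mmbtu(price, btu_per_unit, combustion_eff, distribution_eff=1.0):
    """Dollars per million Btu of *delivered* heat."""
    return price / (btu_per_unit / 1_000_000) / (combustion_eff * distribution_eff)

# Assumed heat contents per sale unit (typical published values):
OIL_BTU_PER_GALLON = 138_500
BTU_PER_KWH = 3_412
BTU_PER_CORD = 22_000_000

oil       = cost_per_mmbtu(4.40, OIL_BTU_PER_GALLON, 0.83, 0.98)  # ~$39.06
baseboard = cost_per_mmbtu(0.123, BTU_PER_KWH, 1.00)              # ~$36.05
heat_pump = cost_per_mmbtu(0.123, BTU_PER_KWH, 2.00, 0.98)        # ~$18.39
firewood  = cost_per_mmbtu(250.0, BTU_PER_CORD, 0.70)             # ~$16.23
```

Changing any input (a different oil price, a better furnace, leakier ducts) is just a matter of editing one argument and rerunning.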
Keep in mind that energy costs are volatile. It probably doesn’t make sense to rip out your oil boiler and put in electric baseboard heat, because either oil prices could go down, or electricity prices could go up. But if you’re thinking about replacing equipment anyway, you might want to consider an electric heat pump or pellet stove.
|
<urn:uuid:34a48977-5f5e-4255-8484-af911b9cd4fc>
|
CC-MAIN-2016-26
|
http://www.greenbuildingadvisor.com/blogs/dept/energy-solutions/comparing-costs-different-fuels
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00048-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.935995
| 1,140
| 3.140625
| 3
|
One way you can find a queen bee in a hive that you can clearly see is to find an area where you see a large clump of bees all looking into one central area for a long period of time. Also, some bees will repeatedly come and go from this same place. In the middle of this clump, if you were to brush the workers away, you would most likely find the queen. She is normally guarded by as many as 10-20 workers at one time.
Many times it is not necessary to locate the queen herself as there is likely a lot of evidence of her presence in the hive. Eggs are the most compelling evidence that you've got a viable queen. Look for a uniform deposition of eggs (if it's spotty it may be a laying worker or an old queen).
Brood is capped nine days after the egg is laid. The best sign of 'queen rightness' is eggs (second to seeing the queen herself). If you can find eggs, they were laid in the last three days. A laying worker will generally stick eggs to the side, rather than the bottom, of a cell, as her abdomen is not long enough. Workers can lay only drones (males), as their eggs will not be fertilised. You can also get a drone-laying queen, however, if she is old, damaged or poorly mated.
|
<urn:uuid:1182c7cb-be58-4358-b82f-e58c09acb91b>
|
CC-MAIN-2016-26
|
https://en.wikibooks.org/wiki/Beekeeping/Queen_Locating
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00005-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.979677
| 296
| 2.6875
| 3
|
An acute inflammatory autoimmune neuritis caused by a T-cell-mediated cellular immune response directed towards peripheral myelin. Demyelination occurs in peripheral nerves and nerve roots. The process is often preceded by a viral or bacterial infection, surgery, immunization, lymphoma, or exposure to toxins. Common clinical manifestations include progressive weakness, loss of sensation, and loss of deep tendon reflexes. Weakness of respiratory muscles and autonomic dysfunction may occur. (From Adams et al., Principles of Neurology, 6th ed, pp1312-1314)
An acute, autoimmune inflammatory process affecting the peripheral nervous system and nerve roots. It results in demyelination. It is often caused by an acute viral or bacterial infection.
Guillain-Barré syndrome is a rare disorder that causes your immune system to attack your peripheral nervous system (PNS). The PNS nerves connect your brain and spinal cord with the rest of your body. Damage to these nerves makes it hard for them to transmit signals. As a result, your muscles have trouble responding to your brain. No one knows what causes the syndrome. Sometimes it is triggered by an infection, surgery or a vaccination. The first symptom is usually weakness or a tingling feeling in your legs. The feeling can spread to your upper body. In severe cases, you become almost paralyzed. This is life-threatening. You might need a respirator to breathe. Symptoms usually worsen over a period of weeks, then stabilize. Most people recover. Recovery can take a few weeks to a few years. Treatment options during the symptom period include medicines or a procedure called plasma exchange.
Progressive ascending motor neuron paralysis of unknown etiology, frequently following an enteric or respiratory infection.
|
<urn:uuid:d583852f-0ff5-4600-a499-ccf94efd651d>
|
CC-MAIN-2016-26
|
http://www.icd9data.com/2011/Volume1/320-389/350-359/357/default.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00000-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.932472
| 350
| 3.5
| 4
|
Experts Confirm Surprising Discovery of Richard III's Skeleton
An undated photo released by the University of Leicester in England 4 February 2013 showing the skeletal remains of English king Richard III, who was killed at the Battle of Bosworth Field in 1485, following the confirmation by the university's archaeologists
Experts have confirmed that a skeleton found beneath a Leicester car park is indeed that of English king Richard III.
Experts from the University of Leicester said DNA from the bones matched that of descendants of the monarch's family, BBC informs.
"Beyond reasonable doubt it's Richard," lead archaeologist Richard Buckley, from the University of Leicester, has said.
The skeleton was discovered buried among the remains of what was once the city's Greyfriars friary, but is now a council car park.
Richard III's remains will be reburied in Leicester Cathedral, close to the site of his original grave, in a memorial service expected to be held early next year, once analysis of the bones is completed.
|
<urn:uuid:2702d6d3-9b4f-46bc-aa46-6d12194b571a>
|
CC-MAIN-2016-26
|
http://www.novinite.com/newsletter/print.php?id=147534
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00028-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.943346
| 232
| 3.015625
| 3
|
The Genesis of the Celtic Cross at Grosse Ile
Speech given by Marianna O`Gallagher at the dinner offered by the AOH during the Centenary celebration of the unveiling of the Celtic Cross at Grosse Ile
There are many things to be said about monuments. They help us remember important things of the past. They help us remember important people of the past. They honour bravery, they commemorate tragedy, but they are an honour also to those who went to the trouble of putting them up.
My father used to recite a line or two:
Lives of great men all remind us
We can make our lives sublime
And departing leave behind us
Footprints on the sands of time.
If there are any footprints on the sands of time, certainly those of our Irish ancestors were planted on the sands of the
I think the idea of a monument to honour the past arises in the hearts of people when they fear that some event or people, particularly people are going to be forgotten.
The reason for my interest in the story of the Celtic Cross at Grosse Ile is the fact that my grandfather, Jeremiah Gallagher, born in Macroom,
In the 1880s, the Quebec Daily Telegraph, a newspaper founded by James Carrel in 1875, proposed that a monument should be erected on the
The feelings of the people on their return from GI you can find in Jeremiah Gallagher’s letter, sent later to the
Quotation from Jeremiah’s letter.
In 1897, mindful of the sad fate of so many of our kindred, and it being the 50th anniversary, our Division of the AOH organized a pilgrimage to the
However, the desolate and neglected aspect of the particular portion of the
After careful consideration of the matter in division meetings we have concluded that it is our duty to see that this hallowed spot where so many thousands of our country people are buried should be reclaimed, be becomingly enclosed and have a befitting monument, with suitable inscriptions (in Gaelic, in Latin, French and English) not only “in memoriam” of the unhappy Irish exiles but also as protest against the misgovernment of which they were the victims.
The Daily Telegraph’s work was local, but there was interest shared by many like Sir Wilfrid Laurier Prime Minister of Canada and Sir Charles Fitzpatrick Chief Justice of Canada; Senator John Costigan, Sir Richard Scott former Secty State.
BUT IT WAS
The AOH went about the preparations in very practical fashion:
FIRST St. Patrick’s School Cadet Corps was called into action. There were lessons given to the boys: the Irish language and the history of
SECOND The Quebec City Branch of the AOH in 1900 sent a delegation to the Boston National Convention. Father Eustace Maguire, the chaplain, and other members of the Quebec City Branch of the AOH attended. The national AOH voted $5000 for the monument. The Quebecers must have been convincing – witness the tone of Jeremiah’s letter quoted above.
THIRD The AOH next applied to the Government of Canada for the right to use the top of Telegraph Hill for the monument. It was not difficult at the time to get things through government agencies either provincial or federal: for there were many Irishmen in government, either as elected members of Parliament, or in other roles: Charles Murphy was Secretary of State for the Dominion; Sir Charles Fitzpatrick was Chief Justice; John Costigan was a Dominion senator; PROV Charles R. Devlin was Minister of colonization and Mines in the Provincial Cabinet; John C Kaine was Irish Catholic Representative in the Provincial Cabinet; in the days when our existence was recognized.
FOURTH Through the AOH newspaper, international, a contest for the design of the monument was organized. Then the AOH taxed every member of the order 10 cents – from the correspondence I cannot find whether it was a one-shot tax or an annual tax… be that as it may, the money began to come in to the Quebec Division.
The results of the contest showed that by far the Celtic Cross in some form was the most desirable way of honouring the Irish people of the past… my grandfather Jeremiah Gallagher was given the task of transferring the idea into a practical monument.
As a civil engineer he knew all about the weights of stone and construction and all that. . . his father had been a stonemason in Ireland… and he himself had trained as an engineer at Ste. Anne de la Pocatiere – 1860s-– where, I might add, it is very likely that he heard stories of the heroic actions of the chaplains who served at Grosse Ile in 1847 – for many of those priests were graduates of the seminary of St. Anne including Father Bernard McGauran, himself from Sligo, head chaplain of GI in 1847 – Father McGauran later served as pastor of St. Patrick’s in Quebec, 1856-1874 – same time as Jeremiah lived here – Jeremiah taught English at the Seminaire in 1867 then later worked at City Hall in the Waterworks Department from about 1870 to 1914. Much of the correspondence I have is on stationery headed: Hotel de Ville – City Hall Bureau de l’Aqueduc Water Works Office Phone 400 . (Jeremiah had a phone at home too – one of the first in a home in
More than the above influences I think was the fact that he worked on the building of the
My father recounted that as more and more money came in for the proposed monument, the drawing that Jeremiah had sketched on the wall in the kitchen at 13 Conroy Street began to grow bigger and bigger.
NEXT Tenders were called for from quarries far and near – in order to get the right stone for the right price: Bids came in from the following quarries
Utopia Granite Works,
Maurice W. Flynn Westerly,
Eugene Sullivan and Sons, Barre
Fallon Brothers of
D.J. McCue Monumental Work,
T.C. Smith Marble Granite and Freestone,
The Stanstead Quarries in Beebe
CHOSEN: BEEBE in the Eastern Townships – granite – near Stanstead
There had been discussion about the quality of stone from the various places and a geologist’s opinion was called for. The sparkling granite of Beebe, Stanstead was chosen. It would withstand the raging easterly winds of winter….the salty winds. And it stands there today. (Table. . . bless the stone. . .) I will tell you more about that piece of stone at the end of my talk.
All this took nine years from the vote of the National Convention in 1900; and twelve years from the 50th anniversary of the worst year of the famine in
During the summer of 1909, the granite was shipped from Stanstead to
But before this happened there was a flow of letters between Jeremiah and Major Edward McCrystal, of the Fighting 69th in
The model for the Irish alphabet was sent to the sculptors by Major McCrystal – and that was only in the month of May – and plans were being made for the unveiling in August.
The four inscriptions on the monument are different: one states the day of dedication by the AOH; another lists the names of the Catholic priests who worked during the summer of 1847, noting those who fell sick , and those who died. The western side panel states a grateful blessing on those priests who came to the island so gallantly to care for the sick and dying.
There is more to say, especially about the work of the priests, both on the island in 1847, and the care of the orphans in the years following, but that can go to another day. Perhaps tomorrow
Through the 1920s and 30s there was a pilgrimage to Grosse Ile from
However…. Let this much be said – that the Irish across
Our foot prints in the sands of time have disappeared. They have been replaced by monuments like the one we venerate tomorrow, and by those that mark our place from coast to coast.
I’ll end with words from John Jordan’s book
“The world thinks better of a people who can thus keep green the memory of their dead.”
|
<urn:uuid:8b732599-0c96-4c0d-85aa-21c84d71d03b>
|
CC-MAIN-2016-26
|
http://irelandmonumentvancouver.com/the-irish-in-canada/grosse-ile/grosse-ile-the-genesis-of-the-aoh-celtic-cross/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00015-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.975481
| 1,725
| 2.65625
| 3
|
Comparing the Divide: Education lies at the heart of inequality — economic and racial — in America and around the world. As the US approaches Martin Luther King Day, two cities still struggling to learn that lesson are Rio de Janeiro and Selma, Alabama. These two cities, which share a history of both economic and racial inequality, also share a close ranking for economic inequality on the Gini Index: 0.523 (Selma) and 0.519 (Brazil).
RIO DE JANEIRO — One of Rio de Janeiro’s finest private schools, the Colegio Teresiano is surrounded by tropical rainforest. Brazilian poetry and novels by Gabriel Garcia Marquez fill the library. Florid works of naïve art decorate the hallways.
This resplendent setting provides an intellectual jump start for children of middle and upper-class families who can afford the $700-a-month tuition. 2011 World Bank stats put annual per capita GDP in Brazil at $11,640, or $970 per month. That’s why it’s jarring to find an 11-year-old slum-dweller wearing the Teresiano’s blue-and-white uniform.
The student, fifth-grader Lucas Junior, hails from Rocinha, a mountaintop ghetto, or “favela,” whose red-brick shacks are visible from the Teresiano school’s top-floor balcony and where many people get by on a few dollars a day. In Rocinha, public schools are overcrowded and lessons are sometimes interrupted by drug shootouts. But Lucas sidestepped that dead-end scenario because his father works at the Teresiano as a hall monitor and family scholarships are an employee benefit.
“My son is learning things here that kids don’t learn until two years later in public school,” said Adilson Junior, 34, while taking a break in the school snack bar. “Lucas has a golden opportunity. But all children should have this opportunity and an equal shot at success.”
An equal shot. That was the holy grail for 19th century US school reformer Horace Mann, who promoted equality in schools as the key to upward mobility for the lower rungs of society. One of the earliest advocates of universal public education, Mann lobbied for public schools to bring children of all social classes together in order to give them a common learning experience.
But in Brazil, a rising global power which sees itself as a peer of the United States, many experts say the two-tiered education system accentuates the country’s huge gap between rich and poor. Despite recent improvements, in 2009 Brazil scored 0.557 in the Gini Index, which placed it as the world’s tenth most unequal nation. In 2012, the Gini coefficient for Brazil was 0.519, according to the CIA World Factbook.
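For readers unfamiliar with the metric, the Gini coefficient cited throughout this piece summarizes an income distribution on a 0-to-1 scale. A minimal illustration in Python, using the standard discrete formula rather than the exact estimation procedures behind the CIA or World Bank figures:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = total concentration."""
    xs = sorted(incomes)                 # rank from poorest to richest
    n, total = len(xs), sum(xs)
    # Rank-weighted income sum (rank 1 = poorest).
    ranked = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * ranked / (n * total) - (n + 1) / n

gini([10, 10, 10, 10])  # 0.0  -- everyone earns the same
gini([0, 0, 0, 100])    # 0.75 -- one person holds everything
```

A value near 0.52, as reported for both Brazil and Selma, sits far above the roughly 0.25-0.35 range typical of the most egalitarian economies.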
This inequality — whether in schools or in economic opportunity — also cuts along a sharp racial divide. Brazil has a brutal history of slavery, was late in accepting the abolition movement, then did little to help freed slaves — and it shows. Today, Brazilians who identify themselves as black or brown represent slightly more than 50 percent of the population, and their income level is half that of whites, according to IPEA, a government-linked think tank.
Given this racial divide, it is revealing that the Gini coefficient for the country of Brazil mirrors that of Selma, Alabama, a city that became synonymous with the struggle for civil rights and voting rights in America. To some degree, this movement was a reaction to a profoundly racist and unfair system of schooling, known as “separate but equal,” which the US Supreme Court finally overturned in the historic 1954 ruling in Brown v. Board of Education.
Education has been at the heart of rising inequality — racial and economic — in America and around the world.
Brazil's class divisions start hardening around the age of five. That’s because, depending on their economic status, Brazilian children are either funneled into rundown public schools that often prep them for mediocrity, or into high-quality private institutions that nurture great expectations and lay the groundwork for achievement.
“This amounts to educational apartheid,” Claudia Costin, the education secretary for the city of Rio de Janeiro, told GlobalPost.
Education and economic success are closely linked. Costin cited studies indicating that each additional year of schooling can provide an income boost of about 15 percent.
But even though more children from the Brazilian underclass are now attending public schools, second-rate teachers, badly equipped buildings, short school days, and lack of parental guidance combine to produce legions of dropouts and ill-prepared graduates.
That’s especially worrisome because Brazil is now the world’s sixth largest economy. Yet largely due to its education woes the country lacks qualified applicants for many high-tech jobs. In some cases, Costin says, headhunters have had to look outside the country for qualified personnel which means fewer Brazilians can move up the economic ladder.
It’s not just a Brazilian storyline. Parallel rich/poor school systems that help maintain a lopsided status quo are the norm throughout Latin America, the region of the world where, according to the Gini Index, inequality is highest.
“When a society becomes more and more divided, it becomes increasingly class-driven,” Nobel-Prize-winning economist Joseph Stiglitz told GlobalPost. “And it’s very hard for democratic processes to work well in that kind of society.”
Thanks in part to deteriorating public schools, the United States is increasingly turning into “that kind of society.”
For the past generation, many parents who could afford it have been removing their kids from public institutions and placing them in private academies. Education isn’t the only factor, but this shift coincides with an alarming jump in income inequality. Between 1980 and 2007, the portion of national income held by the top 1 percent of Americans went from 10 percent to 24 percent.
“Which leads to a question for the United States: why would you allow that to happen, when we in Latin America can show you how difficult it is to achieve the kind of exemplary middle class that you invented in the first place?” wrote Jorge Castañeda, a former Mexican foreign minister who now teaches at New York University.
“So before the United States continues on its current road of dismantling its version of the welfare state, of shredding its social safety net, of expanding the gap between rich and poor,” Castañeda writes, “Americans might do well to glance south.”
Yes We Can?
If Americans fixed their eyes on Brazil, and in particular the northern fringes of Rio de Janeiro, they might spot the Monteiro Lobato K-12 public school. And unlike the palm-fringed courtyards of the Colegio Teresiano, it’s a barren landscape for learning.
The school is framed by dusty streets where drug dealers sell crack. During a recent visit, the toilets didn’t flush due to a lack of water and lunch service had been cancelled. In the 90 degree heat, electric fans struggled to cool the classrooms.
“If I had kids, I would not send them here,” said an exasperated Vivian Fadel, the school principal.
But for many poor children, Monteiro Lobato is the only option. Some of the students were gathered in a cramped outdoor courtyard that doubles as the school gymnasium. When asked about their aspirations, one of them, a curly-haired 13-year-old named Ingrid, said she wanted to be a fashion designer or an architect. But it seems like a stretch.
Ingrid, who comes from a broken family, has never met her father. In fact, 400 of the 1,300 students at the school do not have a father registered on their birth certificates.
Ingrid’s mother works as a maid and is gone all day, so when Ingrid gets home from school there’s no one to help or encourage her to study. And Ingrid spends a lot of time at home because the school day in Brazil lasts just four hours.
During that time Ingrid doesn’t absorb much because her teachers, many of whom work at two or three schools to augment their salaries, sometimes fail to show up. Fadel would like to fire half of her teachers but can’t because of union contracts that protect civil servants.
Despite her interest in architecture, Ingrid has never heard of the late Oscar Niemeyer, a Rio native and a giant of modern architecture who designed the hyperboloid and flying saucer-like structures of Brasilia. In fact, Ingrid doesn’t even know that Brasilia is her nation’s capital. She’s also unfamiliar with the botanical garden, the theaters and the art galleries of Rio, one of the world’s great cities, because no one at her school ever bothers to organize field trips.
A little later, Ingrid and her friends excuse themselves. It turns out four of their nine teachers blew off work today so the students have decided to go home.
One of the few professors who does show up is Lenice Loiola. She tries to inspire students like Ingrid to take on the world. But she suspects many of her pupils will go on to become teenaged welfare mothers or gang members.
Given the barriers to deep learning at the nation's public schools, Loiola has come to a stark conclusion: “Brazil,” she says, “wants its poor people to remain ignorant.”
For much of the nation’s history, that was the quasi-official policy.
Organized as a slave colony for Portugal, Brazil imported as many as 3 million African captives to labor in mines and on sugar cane plantations. Brazil was the last nation in the hemisphere to abolish slavery — in 1888 — and when it did there was little effort to educate the black, Indian and mixed-blood Brazilians who made up the majority of the population.
Many European immigrants to southern Brazil valued education. But the country’s public schools were so bad that they took it upon themselves to found their own private and religious schools. The concept of universal public education was ignored.
“Argentina and Chile and other countries were promoting universal education back in the 19th century but in Brazil you had nothing like that,” Simon Schwartzman, a Rio-based political analyst, told GlobalPost. “Even in the 1950s, half of all Brazilians were illiterate.”
During the country’s military dictatorship that lasted from 1964-85, Brazil’s leaders focused on building highways, ports and other infrastructure rather than schools. The drive to educate all Brazilians finally took hold in the 1990s and became a top priority following the 2002 election of President Luiz Inacio Lula da Silva.
Ironically, Lula, a former shoeshine boy who dropped out after fourth grade, was one of Brazil’s least educated presidents. Yet he became an inspiration for the underclass, a yes-we-can example of upward mobility.
Lula doubled per-student spending on education and introduced Bolsa Familia, a program that provides monthly cash stipends to poor families who keep their children in school. Today, according to UNESCO, 95 percent of Brazilian children, aged 7-14, have access to primary and middle school education.
In a 2010 speech shortly before he stepped down following two terms in office, Lula declared: “I want every child to study much more than I could, much more.”
As if cramming for a final exam after a semester of sloth, Brazil under Lula and his successor, President Dilma Rousseff, has been in a headlong rush to make up for past inaction by bringing education to the masses. But turning such a large ship of state has proven to be extremely difficult.
Indeed, Brazil’s public education system now appears to have grown too big too fast and experts say that educational quality has suffered.
For starters, there aren’t enough buildings to hold all the new students. Many schools operate on a series of four-and-a-half-hour shifts, with the first students arriving at 7 am and the last departing at 10 pm. Classrooms are sometimes jammed with 40 or more students. At the Colegio Teresiano and other private schools, classrooms are often half that size.
Many students are being taught by less-qualified professors because the best and brightest are no longer interested in working at public schools. Costin cited one recent study showing that 60 percent of Brazil’s public school teachers were not in the habit of reading books.
Some policies have just plain backfired. For instance, to encourage underperforming kids to stay in school, the Rio city government issued a no-fail policy for elementary school. But as the lagging students advanced, the result was 28,000 illiterate fourth, fifth and sixth graders, Costin said.
Part of the problem is generational. Universal public education is so new that many families have little notion of its value while illiterate parents, who never went to school, may find it impossible to help their kids with homework, Schwartzman says.
This will likely change with time. But for now, Brazil continues to rank near the bottom of international student surveys.
In 2010, for example, students from 65 nations took part in the Program for International Student Assessment. Brazilian students ranked 53rd in reading and science and 57th in math.
All of this has hastened the ongoing flight of tax-paying middle and upper class families to private schools and produced a vicious circle because their exodus has reduced the pressure on elected officials to improve public education. By contrast, impoverished Brazilians pay no income taxes and that makes them less likely to hold officials accountable for decrepit public schools, Schwartzman said.
“Education is supposed to provide equal opportunities for all,” Costin says. “But this concept is turning into a myth, a utopia.”
Brazil’s slow and frustrating effort to reduce inequality through education and other means stands as a cautionary tale for the United States. As Castañeda points out, “Once inequality becomes entrenched, reversing it becomes incredibly difficult.”
And if its middle class withers, what might the United States look like? The answer, Castañeda says, is “what Latin America used to be, and in some ways still struggles to stop being.”
Back at the Colegio Teresiano, Brazilian journalist Patricia Lopes momentarily draws a blank when asked what she’d do if she lacked the wherewithal to send Clara, her freckle-faced six-year-old, to this private Roman Catholic school. Rather than risk sending Clara to a public school, Lopes finally replies, her family would likely leave the country.
“I look at my child and I know she will have a good future. She will be able to go to college and maybe become a doctor or a lawyer,” Lopes said. “But most kids in Brazil are badly educated and have no future. I feel bad about that.”
Note: The photograph featured in The Great Divide series art is from Sao Paulo, Brazil, by photographer Tuca Vieira.
This story is presented by The GroundTruth Project.
The most important trait for men was loyalty to their fellow men. If one could show loyalty, one was a friend for life. Relationships were based upon duty, not common interests. Men who were brave, loyal, and trustworthy were revered. Men were expected, for example, to defend the territory or kingdom, or to work to provide for their families.
When Grendel begins attacking Hrothgar's hall, men are stationed each night in it to try to thwart the attack. They put their lives on the line to defend Hrothgar's kingdom. Many, many die as Grendel continues his attacks. The deaths of these men were seen as honorable because they died defending their kingdom. Beowulf was the ultimate warrior, then, because he was able to defeat Grendel. He had all of the characteristics of a "manly" hero, including great, almost inhuman, strength, a sense of honor, trust, and loyalty. In fact, he came from another kingdom simply to slay Grendel for the Danes!
Kings were expected to be very wise and intelligent and to make sound decisions, in addition to the other qualities. They had to be calm in times of crisis and be trustworthy to all of their people to make the kingdom feel safe for its inhabitants.
Korean researchers have engineered a new strain of E. coli that can produce a suitable substitute for gasoline. And as they quite rightly point out, bacteria that poops out petroleum could be some valuable shit.
Digging up fossil resources carries tremendous environmental, monetary, and geopolitical costs, which means figuring out a way to feed the world's huge addiction to gasoline without unearthing crude could have a tremendous impact.
Bacteria, meanwhile, have already proven themselves capable of amazing things. They're responsible for making your booze boozy, and in recent years they have been used to produce everything from gold to diesel fuel. When it comes to producing biofuels, we're probably most familiar with bacteria that produce ethanol, but as the Korean researchers point out in a new study published in Nature, petroleum has a 30-percent higher energy content than traditional biofuels.
The new bioengineering process leverages existing E. coli strains to produce short-chain alkane molecules, which they claim are a chemically identical replacement for the combination of short-chain hydrocarbons commonly known as gasoline. In other words, you could put this bacterial excretion into your car and it would run. The WSJ reports:
When the modified E. coli were fed glucose, found in plants or other non-food crops, the enzymes they produced converted the sugar into fatty acids and then turned these into hydrocarbons that were chemically and structurally identical to those found in commercial fuel...
Unfortunately, as the WSJ points out, one liter of glucose produces just 580 milligrams of gas, which is a highly unfavorable yield to say the least. The tech's too new to power cars anytime soon, but it's an important step towards motoring the highways, powered by poop. [Nature via Slashdot and WSJ]
Image by Alexander Raths/Shutterstock
By STEVEN J. FLECK, PHD, Muscular Development
Q: Some lifters at my club train a muscle group several times per week, while others train a muscle group only one time per week. Both the lifters who train a muscle group one time per week and multiple times per week claim that their training program is best for muscle size gains. I have trained using programs with one session per week for a muscle group and with two or three sessions per week that with some exercises involve the same muscle group. I did not notice much difference between my muscle size gains with either type of program. What does sport science research say about how many days per week a muscle group should be trained?
A: Training frequency is an important consideration when training for muscle size. What people think is the best training frequency is many times affected by the training frequency of their favorite athlete. If their favorite athlete trains a muscle group one time per week, they will adopt a similar training program. Similarly, if the training frequency of their favorite athlete is three times per week, they will train three times per week. Training frequency needs to be defined, because without a standard definition, information concerning training frequency can become confusing. For most people, training frequency means how many times per week a particular muscle group is trained. This definition is important, because it would be possible to train six days per week and only train each muscle group one day per week or train each muscle group two, three or even six days per week with six training sessions per week.
So most people look at training frequency in terms of how many days per week a particular muscle group is trained and not as how many days per week training takes place on. Quite a bit of research has been done investigating the effect of training frequency on hypertrophy or muscle size. However, when looking at this research, unfortunately other factors than frequency need to be taken into consideration when trying to reach a conclusion about what is the optimal frequency when training for muscle size increases. For example, it would be possible to train a muscle group two days per week with 4 sets of each exercise or train the muscle group one day per week with 8 sets of each exercise. Assuming number of repetitions and the weight used for each set was approximately equal, total training volume (sets x repetitions x weight) performed per week would also be approximately equal. So even though training frequency was different, training volume is the same. Obviously many combinations of number of sets, number of exercises and weight used could be used to make up one or any other number of training sessions per week. Thus, whenever trying to come to a conclusion concerning training frequency, other training variables need be considered and do possibly affect the conclusion about the optimal training frequency for muscle size gains.
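The volume equivalence described above can be made concrete. A small sketch, using invented numbers, shows how two different frequencies can carry identical weekly volume:

```python
def weekly_volume(sessions_per_week, sets, reps, weight_kg):
    """Weekly training volume = sessions x sets x reps x weight."""
    return sessions_per_week * sets * reps * weight_kg

# Two sessions of 4 sets vs. one session of 8 sets, same reps and load:
twice_weekly = weekly_volume(2, 4, 10, 100)  # kg lifted per week
once_weekly = weekly_volume(1, 8, 10, 100)

# Frequency differs, but total weekly volume is identical.
assert twice_weekly == once_weekly == 8000
```

This is exactly why studies that vary frequency must hold volume constant (or report it) before any conclusion about frequency itself can be drawn.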
Swedish researchers recently published a review of the sports science research concerning muscle hypertrophy to reach a conclusion trying to consider not just training frequency, but all the other training variables that might affect any conclusion concerning the optimal training frequency. The majority of research projects concerning training frequency have looked at either the quadriceps or the biceps muscle group. So, these Swedish researchers chose to examine only studies that trained these two muscle groups. For quadriceps training, there was no difference between training two or three days per week, with both frequencies showing a daily increase of 0.11 percent in muscle size. For the biceps muscle group there is also no difference between training frequencies of two or three days per week, with an average increase of 0.18 percent per day in muscle size. It is interesting to note that these researchers found no studies examining muscle size increases with a training frequency of one day per week. They did find several studies training for five days per week. Thus, although they conclude that there is no difference in muscle size gains between frequencies of two and three days per week, they also concluded that muscle size gains can be made with training frequencies of anywhere between two and four times per week for as long as six months. So more research is badly needed concerning the effect of training frequency on muscle size gains, especially on the aspect of less frequent and more frequent training sessions than two or three days per week.
One consideration when thinking about long-term muscle size gains is how long after a weight-training session muscle protein synthesis goes on. How much net muscle protein synthesis goes on after each training session will affect muscle size gains over the long haul. After a weight-training session, peak protein synthesis rates take place somewhere between 3 and 24 hours after the training session. Muscle protein synthesis increases above resting values are apparent for as long as 48 to 72 hours after a training session. One might interpret protein synthesis rates to mean it does not pay to perform another training session until after protein synthesis rates have returned to normal or are no longer at peak values. Although this might make sense, current information concerning protein synthesis rates could be interpreted to mean that you should train a muscle group anywhere between every day (protein synthesis rates peak within 24 hours after a training session) or approximately two times per week (protein synthesis rates are increased for as long as 72 hours after a training session). So more research is definitely needed concerning the effect of weight training on protein synthesis rates and the long-term effect on muscle size gains.
Many times, discussions about training frequency eventually mention evidence of bodybuilders training a muscle group only once or twice per week and athletes, such as Olympic weightlifters, performing exercises that involve a muscle group like the quadriceps several times per week or in some cases almost daily. Every training session for an Olympic weightlifter will normally include several of the following exercises: back squats, front squats, variations of the Olympic lifts such as clean pulls and snatch pulls, jerks, full cleans and full snatches. All these exercises involve the quadriceps, hamstrings, gluteals, calves and lower back. So these muscles are trained virtually every training session for an Olympic weightlifter and Olympic weightlifters many times train daily. Some bodybuilders train using body part routines, where a muscle group is emphasized or the focus of training only one day per week. Many variations of body part routines are possible, but the major aspect related to training frequency is a body part or muscle group is only emphasized one or two days per week of training. Both bodybuilders and Olympic weightlifters have a great deal of hypertrophy. So, comparison of their different training routines makes it difficult to come to a conclusion concerning the effect of training frequency on muscle size gains. One aspect that could be involved when talking about training programs of athletes is how experienced they are at weight training. A training frequency of one time per week has been shown to increase muscle size and fat-free mass. This, however, does not mean that a training frequency of one time per week is optimal; it only means that it does result in some muscle size gains. Some information from studies does indicate that with experienced weight trainers, greater increases in fat-free mass take place with three training sessions per week, compared to one training session per week per muscle group.
This is even true when weekly training volume is the same. This information indicates that training a muscle group more than one time per week increases muscle size more than training only one time per week.
The effect of training frequency on muscle size gains definitely needs more research. This is especially true when it comes to the optimal training frequency of elite athletes, in part because many athletes who train a muscle group only one day per week do so with a very high training volume (several exercises for a muscle group and several sets of each exercise) per session. With a very high training volume per session, there may be need of a longer recovery time between training sessions. It must also be remembered that simply because one training session per week does result in an increase in muscle size, it does not mean that it is the maximum or optimal increase in muscle size. There may also be differences in the optimal training frequency for muscle size gains depending upon where you are in your training. For example, a very high frequency in combination with a relatively low-to-moderate volume per training session may be a good way to "kick-start" muscle size gains either at the start of a training program or if you are in a training plateau. This last point brings out the idea that some type of planned variation or periodization of training may be necessary to maximize muscle size gains. This could mean changing up the exercises for a certain muscle group, changing the number of sets, repetitions and weight used for different exercises, as well as changing training frequency for a muscle group in order to bring about long-term increases in muscle size over months or years of training.
Wernbom M, Augustsson J and Thomée R. The influence of frequency, intensity, volume and mode of strength training on whole muscle cross-sectional area in humans. Sports Medicine, 37:225-264, 2007.
Methane on Mars can disappear from the planet's atmosphere mysteriously fast, fading in less than a Martian year (or about 22 Earth months), a new study finds.
Researchers mapping out the Red Planet's methane cycle have discovered that concentrations of the gas vary by season, by year and by location, peaking when it's warm in regions home to underground water ice and past witnessed volcanic activity.
The findings add yet another layer to the tricky debate over whether Mars' methane is created by biological processes or has a more mundane geochemical origin. [Image: New Mars methane map.]
"The source of the methane could be geological activity or it could be biological; we can't tell at this point," said Sergio Fonti of the Universita del Salento. "However, it appears that the upper limit for methane lifetime is less than a year in the Martian atmosphere."
Scanning Martian skies
Methane was first detected on Mars in 2003. The finding intrigued scientists, since the gas can be a sign of life. On Earth, methane bubbles up from the bottom of swamps as organic matter decays, and it is emitted by cows, goats and other animals.
But methane also forms from chemical and geophysical processes. For example, it's common in the atmospheres of Jupiter, Saturn, Neptune and Uranus. The Martian atmosphere, which is 95 percent carbon dioxide, has only trace amounts of methane.
Fonti and Giuseppe Marzo of NASA's Ames Research Center in Moffett Field, Calif., performed a comprehensive survey of the methane cycle on Mars. They compiled nearly 3 million observations taken by NASA's Mars Global Surveyor spacecraft to track the amounts of methane in the Mars atmosphere between July 1999 and October 2004 (about three Martian years).
The researchers found that methane, once emitted, sticks around in the Martian atmosphere for less than a single Martian year.
Levels of the gas are highest in the autumn in Mars' northern hemisphere, with localized peaks of 70 parts per billion (about 4 percent of the average methane concentration on Earth), though the gas can still be detected across most of the Red Planet at this time of year. There is a sharp decrease of methane in the northern Martian winter, the researchers found. Then concentrations build again in spring, rise rapidly in summer and spread across the planet, they added.
Three regions in the northern hemisphere had higher-than-normal methane concentrations, the researchers discovered. These were Tharsis and Elysium, the two main volcano provinces, and Arabia Terrae, which has a substantial cache of underground water ice.
"It's evident that the highest concentrations are associated with the warmest seasons and locations where there are favorable geological and hence biological conditions such as geothermal activity and strong hydration," Fonti said. "The higher energy available in summer could trigger the release of gases from geological processes or outbreaks of biological activity."
Researchers aren't sure what's injecting methane into the Martian atmosphere, and they're equally puzzled about why it fades so fast. Photochemical processes (destruction of the gas by sunlight) should not happen so quickly, the scientists said.
However, the winds of Mars may play a role, they added.
High winds can mix strong, reactive chemicals into the Martian atmosphere, quickly breaking down methane. One such destructive compound, perchlorate, has been detected in Martian dirt.
The new study should help scientists get to the bottom of such questions, the researchers said.
"Our observations will be very useful in constraining the origins and significance of Martian methane," Fonti added.
Fonti and Marzo planned to present their results at the European Planetary Science Congress in Rome tomorrow (Sept. 21).
E. Cobham Brewer 1810–1897. Dictionary of Phrase and Fable. 1898.
In Latin cicuta means the length of a reed up to the knot, such as the internodes made into a Pan-pipe. Hence Virgil (Ecl. ii. 36) describes a Pan-pipe as septem compacta cicutis fistula. It is called Cow-bane, because cows not unfrequently eat it, but are killed by it. It is one of the most poisonous of plants, and some think it made the fatal draught given to Socratês.
Sicut cicuta homini venenum est, sic cicutæ vinum. (Pliny, xiv. 7.)
June 13, 2005
First Test of Predictions of Climate Change Impacts on Biodiversity
A new study published in the journal Global Ecology and Biogeography represents the first real test of the performance of models used to forecast how species will change their geographic ranges in response to the Earth's changing climate
Despite the weight of scientific evidence that the Earth is warming and that this is already affecting wildlife, many people - and a few scientists - still refuse to believe it is actually happening. These climate change skeptics usually justify their position by insisting that scientists' forecasts are just too inaccurate. Of course, we can never really know what the future will bring, but in a fascinating new study published this week in the journal Global Ecology and Biogeography a group of Oxford scientists have tested the ability of environmental science to predict the future... by going back to the past.

Dr Miguel Araújo and his colleagues from Oxford University's Biodiversity Research Group imagined they were back in the '70s and were trying to predict the geographic ranges of British birds in 1991, using 16 commonly used climate-envelope models and the real data on how the climate had changed during this period.
Climate envelope model forecasts typically involve a three-step process: First, for each species, mathematical models are developed to link the species to its present climate envelope (actual environmental conditions where the species is found). Second, a climate change scenario for some point in the future, typically 2020 or 2050, is applied to generate a new potential range distribution for the species. Third, this new projected distribution is compared to the present distribution, allowing the scientists to forecast whether the species distribution, will grow, or shrink, or even become extinct.
Unlike previous studies that have provided untestable forecasts of range changes in response to future climate change, the Oxford study was able to directly compare the predicted range changes with what actually occurred. Surprisingly, the ability of any single model to accurately predict the 1991 distribution was very poor. The results of models applied to particular species were spectacularly variable. For 90% of species the models could not agree whether their geographic range would expand or contract. In the small minority of cases (10%) where all the models agreed about the direction of change, they only had a 50% chance of getting that direction right. "It would be just as accurate and a lot less hassle just to toss a coin" says one of the co-authors, Dr Richard Ladle.
So, will we ever be able to predict accurately how climate change will affect the distributions of animals and plants? The Oxford group may have found a solution. "The accuracy of the predictions can be drastically increased if a set of alternative models are compared and used together to create a 'consensus' projection," says Dr Araújo. Using the same data set for British birds, the consensus prediction was shown to be vastly superior to any single model and could predict bird range expansion or contraction with an accuracy of over 75%.
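A simple way to build such a consensus projection is a majority vote across the individual model forecasts. The sketch below illustrates the idea only; the model outputs are invented, and the Oxford study's actual ensemble method may differ:

```python
from collections import Counter

def consensus(projections):
    """Return the majority direction across model forecasts,
    or 'no consensus' when the top directions are tied."""
    counts = Counter(projections)
    top_two = counts.most_common(2)
    if len(top_two) > 1 and top_two[0][1] == top_two[1][1]:
        return "no consensus"
    return top_two[0][0]

# Hypothetical forecasts from five climate-envelope models for one species:
models = ["expand", "expand", "contract", "expand", "expand"]
print(consensus(models))  # expand
```

The intuition is that each model errs in its own way, so directions on which most models agree are more trustworthy than any single model's output.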
To avoid further accusations of crystal ball gazing, environmentalists and scientists now need to find further ways of improving the accuracy of models to provide more meaningful inputs into environmental policy making. "If we don't improve our forecasting soon then not only will the climate skeptics find it easy to criticize climate change research, but we will be left making decisions about the future of the planet based on guesswork" says Dr Ladle.
Leadership is consistently identified as a critical factor in effective economic development. Although leadership can come from many places within the community, local elected officials are particularly well-positioned to take on this role. "The Role of Local Elected Officials in Economic Development: 10 Things You Should Know" identifies fundamental ways elected officials can become informed and strategic decision-makers who can connect the policy "dots," be effective communicators, and take a leadership role in economic development.
The format of the guide is a "top 10 list" of things elected officials should know about economic development in order to be effective leaders. These include:
- Your local economic strengths and weaknesses. A stronger understanding of your community's economic profile will help you create a realistic vision and strategies for economic development.
- Your community's place in the broader regional economy. With a firmer grasp of how your community fits into the broader region, you're better prepared to work with other jurisdictions to share responsibility for regional economic success.
- Your community's economic development vision and goals. Local elected officials can play a key role in building consensus for a vision and goals that provide clear direction for local economic development.
- Your community's strategy to attain its goals. A strategic approach means linking economic development goals to specific activities, allocating a budget and staff to these activities, and evaluating performance based on measurable outcomes.
- Connections between economic development and other city policies. When crafting economic development policies, it is essential to consider how other city policies (e.g., transportation or housing) affect your economic development goals.
- Your regulatory environment. Your community's regulatory process should allow for timely, reliable and transparent resolution of issues facing businesses, while still remaining true to your long-term economic development vision.
- Your local economic development stakeholders and partners. Local officials should think strategically on a project-by-project basis about who needs to be involved, the resources they bring to the table, and what it will take to get them engaged.
- The needs of your local business community. Local officials can help create an environment that supports the growth and expansion of local businesses, primarily by opening lines of communication.
- Your community's economic development message. You will want a clear, accurate and compelling message that reflects your local vision and that helps ensure broad support for economic development projects undertaken by the city and its partners.
- Your economic development staff. Local elected officials will be more effective in leading economic development activities to the extent that they forge strong relationships with staff members who work on these issues on a daily basis.
Education and Research Center
In 1997, the Marsh Belden, Sr. family gave the National First Ladies’ Library the City National Bank Building, located on Market Avenue just a block north of the Ida Saxton McKinley house. Work to renovate the building and adapt it to the library’s growing need for space began almost immediately.
On July 23, 1999, this renovation project was designated an Official Project of Save America’s Treasures, a Millennium Council initiative created by President and Mrs. Clinton. Mrs. Clinton came to Canton, Ohio, to announce the designation at that time. A $2.5 million matching grant was awarded through Save America’s Treasures for the renovation.
The building was constructed in 1895 and has seven floors with approximately 20,000 square feet of usable space. It had a large skylight over the main banking room on the first floor that has been fully restored, as well as an extensive glass block floor under the skylight, a portion of which has been restored. The upper floors of the building are designed in a “U” shape with two wings connected by a lobby that create a “light well” for the skylight below. There is extensive use of marble on the first floor foyer/lobby and main banking room, as well as in the lobbies on the upper floors of the building. Transom windows were used throughout the building to provide additional light into the rooms. The lower level originally was used for public baths and later for small shops.
This building is the National First Ladies’ Library Education and Research Center. There is a 91-seat Victorian Theatre on the lower level, where films and documentaries on the first ladies are shown and author lectures and live presentations are held. The first floor has a large meeting/reception/exhibit room featuring restorations of the original skylight and a portion of the original glass block floor. Also on the first floor is a small library room, with a spiral staircase leading to a mezzanine where rare books are kept. This library also houses a collection of books that replicates the first White House Library created by First Lady Abigail Fillmore.
A monumental staircase constructed of cast iron railings, a wood handrail and slate steps leads to the upper floors of the building. The second floor is the main library area, having an east and west library with research and study space connected by a lobby with marble floors and wainscoting. Most of the office walls have been removed on this floor; however, the transom windows remain to indicate where the office walls were originally located. The third floor has a substantial size conference room where seminars and workshops are held and several small rooms for researchers, and the fourth, fifth and sixth floors have office space for library personnel, additional conference space and archival storage rooms.
|
ESO's Stéphane Guisard captured this stunning panorama from the site of ALMA, the Atacama Large Millimeter/submillimeter Array, in the Chilean Andes. The 5000-metre-high and extremely dry Chajnantor plateau offers the perfect place for this state-of-the-art telescope, which studies the Universe in millimetre- and submillimetre-wavelength light.
When the panorama was taken, the Moon was lying close to the centre of the Milky Way in the sky, its light bathing the antennas in an eerie night-time glow. The Large and Small Magellanic Clouds, the biggest of the Milky Way's dwarf satellite galaxies, appear as two luminous smudges in the sky on the left. A particularly bright meteor streak gleams near the Small Magellanic Cloud.
On the right, some of ALMA’s smaller 7-metre antennas — twelve of which will be used to form the Atacama Compact Array — can be seen. Still further on the right shine the lights of the Array Operations Site Technical Building. And finally, looming behind this building is the dark, mountainous peak of Cerro Chajnantor.
The Daily Galaxy via ESO
|
Addressing the Quiet Crisis:
Origins of the National Environmental Policy Act of 1969
National Environmental Policy Act of 1969
Senator Henry "Scoop" Jackson (D-Wa.), Chairman of the Senate Committee on Interior and Insular Affairs, was looking for a way to ensure all Federal actions reflected the new environmental awareness. Late in the Johnson Administration, Senator Jackson wanted to increase the expertise of his staff in this area, but did not have the funds to do so. He asked Train if the Conservation Foundation would pay for the consultant services of Professor Lynton K. Caldwell of Indiana University to work with the committee. Train agreed.
Jackson introduced the National Environmental Policy Act (NEPA) on February 18, 1969. He said:
The purpose of this legislation is to lay the framework for a continuing program of research and study which will insure that present and future generations of Americans will be able to live in and enjoy an environment free of hazards to mental and physical well-being.
He cited the Santa Barbara oil spill as an example of an ecological disaster that prompted an outcry but not a comprehensive program:
We are still only reacting to crisis situations in the environmental field. What we should be doing is setting up institutions and procedures designed to anticipate environmental problems before they reach the crisis stage.
At this point, NEPA directed the Secretary of the Interior to conduct studies and research relating to ecological systems and environmental quality and to identify risks and ways of reducing them. It also created a Council on Environmental Quality (CEQ) in the Executive Office of the President:
The primary function of the Council shall be to study and analyze environmental trends and the factors that effect [sic] these trends, relating each area of study and analysis to the conservation, social, economic, and health goals of this Nation.
The President would appoint the three members of CEQ, to serve at his pleasure, subject to consent by the Senate:
Each member shall, as a result of training, experience, or attainments, be professionally qualified to analyze and interpret environmental trends of all kinds and descriptions and shall be conscious of and responsive to the scientific, economic, social, esthetic, and cultural needs and interests of the Nation.
The bill did not include a requirement for review of individual projects.
Secretary Hickel, with Train beside him, testified during the committee's single day of hearings on April 16, 1969, that the Administration opposed the bill. The Secretary explained that CEQ was unnecessary because the White House had established an Environmental Quality Council (EQC) on May 29, 1969, to perform a similar function. Train, whose support for the bill was well known to Senator Jackson, believed the EQC had proven ineffective, but could not say so during the hearing. In his memoir, Train recalled:
Subsequently, I (along with others, I am sure) was able to persuade the administration to change its position on NEPA. Aside from the self-evident inadequacy of the EQC, one of my main arguments with the White House in support of NEPA was the fact that the legislation was going to pass overwhelmingly. I was authorized to testify in favor of the legislation in the House, where Representative John Dingell of Michigan had introduced a companion bill. [Train, p. 69]
Representing the DOT before Senator Jackson's committee was Assistant Secretary for Urban Systems and Environment John Braman, a former Mayor of Seattle. As Chairman Jackson noted in introducing Braman, he had taken office only the day before. Braman acknowledged that he and his office were new, but the office was "a new attempt to better organize the capacities of the department to cope with this very, very important and very serious problem which we all face." Secretary Volpe, Braman explained, had implemented the change:
The charter of this particular office is a very broad one. We are only now beginning to get to the point where we can see the manner in which we will attack the specific problems, but certainly it is very clear that one of the things the Secretary expects from this new office is a better decimation [sic] of information and better coordination between all of the activities of the Department which go to the whole field of highways, mass transportation, aviation, railroads, and many others to the end that the utilization of all funds, local, State, and national, can produce for the people the very best system of movement possible, at the same time recognizing that in many instances the determinations will have to be changed from being based on economics alone to a consideration of the economies as tempered by the impact on the environment.
The DOT, he said, shared the Administration's opposition to NEPA. "We believe the argument for maintaining organizational flexibility is a compelling one and would recommend an administrative, rather than a statutory approach at this time."
The "Action-Forcing Mechanism"
Professor Caldwell also testified. He supported creation of the CEQ to advise the President, but was concerned that it placed too much responsibility in the President, who already faced "responsibilities and burdens that no human individual can be expected to manage." He believed that the country needed "an independent forum for a review of the Nation's condition." What was needed was:
...a body that is capable of making assessments not only of our current conditions, but of presenting alternatives for coping not only with the problems that we know about that are before us now, but problems we have yet to face...We cannot afford to continue to learn from experience.
He called for creation of a "body" to serve as an "action-forcing mechanism" that would evaluate Federal actions before they occurred:
For example, it seems to me that a statement of policy by the Congress should at least consider measures to require the Federal agencies, in submitting proposals, to contain within the proposals an evaluation of the effect of these proposals upon the state of the environment, that in the licensing procedures of the various agencies, such as the Atomic Energy Commission or the Federal Power Commission or the Federal Aviation Agency there should also be, to the extent that there may not now exist fully or adequately, certain requirements with respect to environmental protection, that the Bureau of the Budget should be authorized and directed to particularly scrutinize administrative action and planning with respect to the impact of legislative proposals, and particularly public works proposals on the environment.
Now, these are what I mean by action-forcing or operational measures. It would not be enough, it seems to me, when we speak of policy, to think that a mere statement of desirable outcomes would be sufficient to give us the foundation that we need for a vigorous program of what I would call national defense against environmental degradation. We need something that is firm, clear, and operational. [Hearing before the Committee on Interior and Insular Affairs, United States Senate, National Environmental Policy, April 16, 1969, p. 114-116]
When the committee issued its report on July 9, the revised bill included a variation of Professor Caldwell's action-forcing mechanism. For every Federal action significantly affecting the quality of the human environment, the sponsoring Federal Agency would be directed to study the environmental impacts of the proposed action, consider measures for mitigating any adverse environmental effects, and determine if any irreversible and irretrievable impacts were warranted by the need for the action. The report explained:
One of the major factors contributing to environmental abuse and deterioration is that actions-often actions having irreversible consequences-are undertaken without adequate consideration of, or knowledge about, their impact on the environment. Section 201 seeks to overcome this limitation by authorizing all agencies of the Federal Government, in conjunction with their existing programs and authorities, to conduct research, studies, and surveys related to ecological systems and the quality of the environment. This section also authorizes the agencies to make this information available to the public, to assist State and local government, and to utilize ecological information in the planning and development of resource-oriented projects. [Committee on Interior and Insular Affairs, National Environmental Policy Act of 1969, Report No. 91-296, July 9, 1969, p. 9]
Senator Jackson introduced the bill on the Senate floor on July 10. He said that it "directs that all Federal agencies conduct their activities in accordance with these goals, and provides ‘action-forcing' procedures to insure that these goals and principles are observed." He did not elaborate on the mechanism. The floor debate prior to Senate approval did not include discussion of the provision. The focus was on the proposed CEQ, according to Flippen:
With the proposal for CEQ dominating the debate over NEPA, few legislators realized the importance of the bill's impact statement requirement. Even Train, who had argued since his days at the Conservation Foundation for some established program to weigh environmental considerations, did not grasp its ramifications. [Conservative Conservationist, p. 85]
The Senate approved the bill the same day.
In July, the House Committee on Merchant Marine and Fisheries introduced a companion bill to create a CEQ as an amendment to the Fish and Wildlife Coordination Act. It did not contain a provision comparable to the action-forcing mechanism in the Senate bill. The House adopted the bill on September 23, 1969.
Senator Jackson returned to the Senate floor on October 8 to lay the House bill before the Senate, ask his colleagues to reject it, and agree to a Conference Committee to work out differences between the two bills. He submitted a formal statement, a report on differences between the Senate and House bills, the history of the legislation, and other material to be inserted into the Congressional Record. In discussing the origins of the bills, he cited the "inadequacy of present knowledge, policies, and institutions" related to a subject that "touches every aspect of man's existence." He said:
We see increasing evidence of this inadequacy all around us: haphazard urban and suburban growth; crowding, congestion, and conditions within our central cities which result in civil unrest and detract from man's social and psychological well-being; the loss of valuable open spaces; inconsistent and, often, incoherent rural and urban land-use policies; critical air and water pollution problems; diminishing recreational opportunity; continuing soil erosion; the degradation of unique ecosystems; needless deforestation; the decline and extinction of fish and wildlife species; faltering and poorly designed transportation systems; poor architectural design and ugliness in public and private structures; rising levels of noise; the continued proliferation of pesticides and chemicals without adequate consideration of the consequences; radiation hazards; thermal pollution; an increasingly ugly landscape cluttered with billboards, powerlines, and junkyards; growing scarcity of essential resources; and many, many other environmental quality problems.
(This same list of environmental problems had appeared in the committee's July 9 report and would be repeated during deliberations on December 20.)
Several of these items related to transportation issues, but the report did not refer directly to highways or the Interstate System except in one instance. The Senator's report on legislative history concluded with a statement that the committee had reviewed and drawn on "many measures related to various aspects of environmental management." A footnote added:
In the closing days of the 90th Cong. [which ended October 14, 1968], the Legislative Reference Service tabulated over 100 bills concerned with environmental issues, covering a broad area of interest-cleaning up the Nation's rivers and better approaches to smog control, improving the use of open space and prevention of disorderly encroachment by superhighways, factories and other developments, improved protection of areas of high fertility, wiser application of pesticides, whose residues affect both man and wildlife, and the control of urban sprawl, unsightly junkyards, billboards, and power facilities that lower the amenities of landscape. [115 Cong. Rec., 91st Cong., 1st Sess., 29067-29068 (1969)]
This footnote is the only direct reference to highway construction in the material Senator Jackson presented on this occasion.
Congress Approves NEPA
After working out differences in the approved bills, the Conference Committee of the two Houses released its report on December 17, 1969. Section 101 was a "Declaration of National Environmental Policy":
The Congress, recognizing the profound impact of man's activity on the interrelations of all components of the natural environment, particularly the profound influences of population growth, high-density urbanization, industrial expansion, resource exploitation, and new and expanding technological advances and recognizing further the critical importance of restoring and maintaining environmental quality to the overall welfare and development of man, declares that it is the continuing policy of the Federal Government, in cooperation with State and local governments, and other concerned public and private organizations, to use all practical means and measures, including financial and technical assistance, in a manner calculated to foster and promote the general welfare, to create and maintain conditions under which man and nature can exist in productive harmony, and fulfill the social, economic, and other requirements of present and future generations of Americans.
The Federal Government was "to use all practicable means, consistent with other essential considerations of national policy, to improve and coordinate Federal plans, functions, programs, and resources to the end that the Nation may" fulfill this policy.
As Flippen explained, the final version of NEPA included "almost all the stringent provisions" of the Senate version, including the action-forcing mechanism now in Section 102:
Under pressure from the House conferees, the report added the qualifying phrase "to the fullest extent possible" to its impact statement requirement, and it mandated each agency to "consult" with CEQ, not receive its approval. In all other respects, however, the report was as forceful a statement of environmental policy as supporters had hoped. [Flippen, J. Brooks, Nixon and the Environment, University of New Mexico Press, 2000, p. 48]
On December 20, Senator Jackson brought the bill before the Senate. He said that, "there is a new kind of revolutionary movement underway in this country." He continued:
This movement is concerned with the integrity of man's life support system-the human environment. The stage for this movement is shifting from what had once been the exclusive province of a few conservation organizations to the campus, to the urban ghettos, and to the suburbs.
In recent months, the Nation's youth, in high schools, colleges, and universities across the country, have been taking up the banner of environmental awareness and have been seeking measures designed to control technology, and to develop new environmental policies which reflect the full range of diverse values and amenities which man seeks from his environment.
The bill was "a response by the Congress to the concerns the Nation's youth are expressing." He saw NEPA not as a panacea, "but as a starting point" in addressing the consequences of "the exhaustive and impersonal technology modern science has created."
Senator Jackson explained the rationale behind the action-forcing mechanism:
To insure that the policies and goals defined in this act are infused into the ongoing programs and actions of the Federal Government, the act also establishes some important "action-forcing" procedures. Section 102 authorizes and directs all Federal agencies, to the fullest extent possible, to administer their existing laws, regulations, and policies in conformance with the policies set forth in this act. It also directs all agencies to assure consideration of the environmental impact of their actions in decisionmaking. It requires agencies which propose actions to consult with appropriate Federal and State agencies having jurisdiction or expertise in environmental matters and to include any comments made by those agencies which outline the environmental considerations involved with such proposals.
Taken together, the provisions of section 102 directs [sic] any Federal agency which takes action that it must take into account environmental management and environmental quality considerations.
Neither the impact of highways, including construction of the Interstate System, nor the action-forcing mechanism was uppermost during the debate. The focus was on CEQ. However, Senator Muskie stated that Section 102 would "apply strong pressures on those agencies that have an impact on the environment-the Bureau of Public Roads, for example, the Atomic Energy Commission, and others." He continued:
This strong language in that section is intended to bring pressure on those agencies to become environment [sic] conscious, to bring pressure upon them to respond to the needs of environmental quality, to bring pressure upon them to develop legislation to deal with these cases where their legislative authority does not enable them to respond to these values effectively, and to reorient them toward a consciousness of and sensitivity to the environment.
Senator Muskie did not fully understand the provision ("I understand that the nature and extent of environmental impact will be determined by the environmental control agencies"), but most of his colleagues took little or no notice of it.
Senator Jennings Randolph addressed his colleagues during the floor debate, but did not comment on how Section 102 might impact the roadbuilding program. Like many committee Chairmen, he was concerned about the jurisdiction issue that arose because the bill emerged from Senator Jackson's committee but spanned the activities assigned to many other committees, including his Committee on Public Works.
In addition, he acknowledged the need for NEPA while pointing out the tradeoff that, "as we put down a mile of highway, no matter what type of road it is, we are not only placing cement or asphalt on the earth, but we are enabling people to move from one point to another." In a reflection of the uncertainty at this stage of what constituted the "environment," he cited the requirement for negotiation with those whose homes or businesses would be taken for a highway project as an example "to indicate that we are moving more broadly and more sufficiently to improve environmental quality." Senator Randolph did not mention the Urban Impact Amendment of the 1968 Act that had nearly torn the Federal-State partnership apart.
The Senate approved the bill.
The House of Representatives took up the bill, introduced by Representative John D. Dingell (D-MI) of Detroit on December 23. Senator Randolph's House counterpart did not participate in the floor debate. Like Randolph, Representative George H. Fallon (D-Md.), Chairman of the Committee on Public Works, was a longtime supporter of roads. Fallon also was one of the chief authors of the Federal-Aid Highway Act of 1956. Through his long congressional career (1945-1971), he rarely addressed the House on any subject other than roads. On December 20, after reviewing the conference report, he submitted questions to Representative Dingell, who incorporated them, with answers, into the record.
One of Fallon's questions related to the jurisdictional issue that Senator Randolph had expressed: which committee would have jurisdiction over the annual report of the President required by Section 201? (The President's report and its recommendations would be shared with the appropriate committees.)
Fallon also asked about potential conflicts between CEQ and the proposed Office of Environmental Quality included in the Water Quality Improvement Act of 1969, then in conference. (The new office would mesh with the CEQ to assist in implementing environmental policy and legislation. The Office of Environmental Quality was authorized by Public Law 91-224.)
Finally, he asked:
Is it intended that the Council become involved in the day to day operation of the Federal agencies, specific project [sic], or in inter-agency conflicts which arise from time to time?
The question suggested that Chairman Fallon was concerned that CEQ might block highway or other projects, or that it might add costs for environmental mitigation, but Fallon's letter did not explain what was behind the question.
The answer was that the conferees did not view NEPA "as implying a project-by-project review and commentary on Federal programs" for CEQ:
Rather, it is intended that the Council will periodically examine the general direction and impact of Federal programs in relation to environmental trends and problems and recommend general changes in direction or supplementation of such programs when they appear to be appropriate. It is not the Conferees' intent that the Council be involved in the day-to-day decision-making processes of the Federal Government or that it be involved in the resolution of particular conflicts between agencies and departments. These functions can best be performed by the Bureau of the Budget, the President's Interagency Cabinet-level Council on the Environment, or by the President himself.
NEPA Becomes Law
After the House and Senate approved NEPA in a groundswell of environmental enthusiasm, the bill went to President Nixon. Flippen explained how the President viewed the bill:
The committee report sailed through both houses of Congress, reaching Nixon's desk just after Christmas... Surprisingly, no one in the White House recognized the significance of the impact-statement requirement, the only true coercive portion of the bill and the one in which environmentalists placed so much faith. No executive agency recommended against approval, despite potential conflicts with the new CEQ. In the years to come, Nixon would come to regret this oversight, but at the end of his first year in office, the bill appeared only a minor nuisance...In any event...to veto the bill was to court political disaster, for the environmental "bandwagon" ensured a congressional override...If he were to stage properly the signing ceremony, choose his words wisely, and follow with credible appointments, NEPA could work in the administration's favor. Coupled with his coming environmental message to congress, it would finally win the political initiative that the White House had so long sought. [Nixon and the Environment, p. 48-49]
Nixon decided that New Year's Day, a typically slow news day, would be perfect.
Few developments competed for the nation's attention, and, with opponents on vacation and the ceremony three thousand miles from the focus of national debate, Nixon could turn coverage to his advantage, away from the true Democratic genesis of the bill. In addition, signing NEPA on the first day of the new decade offered symbolic significance. If he were to highlight properly the signing as only the first action of a new era in which the government would protect America's environmental heritage, the press would focus on the future, in which the administration planned an environmental offensive, and not on the past, in which the White House had encountered little but environmental criticism. [Nixon and the Environment, p. 50]
On January 1, 1970, at around 10 am, President Nixon signed NEPA (Public Law 91-190) during a holiday stay at his home, known as the "Western White House," in San Clemente, California. It was the morning of New Year's Day, so he could not hold an elaborate signing ceremony with the congressional authors of the bill who might have distracted from the President's attempt to dominate the environmental issue. Photographers and a few reporters showed up for the event.
John Osborne, who wrote the weekly "Nixon Watch" column for The New Republic magazine, saw an additional purpose in the signing:
Two events during his stay in San Clemente at the end of his first year in office suggested that this very private President was trying, at the start of his second year, to correct the impression that he is so closely guarded, by himself and by his staff, because he is afraid to show himself in ways and situations that may expose to general view the man within the shell. On New Year's morning, at the signing of a bill requiring him to substitute a statutory environmental council for the one he created on his own authority, he appeared to the reporters whom he joshed and allowed to josh him, just a little, to be wholly at ease, really enjoying the occasion and the exposure that went with it. [The other event was allowing reporters to watch him golf, badly, at a Los Angeles country club.] [Osborne, John, The Nixon Watch, Liveright, 1970, p. 199]
His prepared remarks stated that the country would have to work in a bipartisan fashion on the environment "because it is now or never." Looking ahead 10 years, he said, if we do not start now, "we will not have an opportunity to do it later." The Nation will have "millions more automobiles," and water will be less pure, so it will be "much harder to turn it around." A major goal for the next 10 years "must be to restore the cleanliness of the air, the water, and that, of course, means moving also on the broader problems of population, congestion, transport and the like."
Nixon explained that all industrial societies have similar problems:
What we really confront here is that in the highly industrialized, richest countries, we have the greatest danger. Because of our wealth we can afford the automobiles, we can afford all the things that pollute the air, pollute the water, and make this really a poisonous world in which to live.
Flippen pointed out that while "Nixon had played no role in the passage of NEPA," he was now portraying it as a reflection of his concern about the environment. The President reinforced that idea after signing the bill:
Chatting with reporters after signing the bill, Nixon told how he had recently taken a friend, Charles "Bebe" Rebozo, on a drive through the countryside of Orange County outside Los Angeles. In ten years, they had agreed, development would scar forever the beauty of the land, an occurrence not unique to southern California. With NEPA and a slew of legislation planned in the near future, Nixon promised, his administration would not let such a tragedy unfold. [Nixon and the Environment, p. 51]
The White House also issued a Statement by the President on NEPA that concluded:
The Act I have signed gives us an adequate organization and a good statement of direction. We are determined that the decade of the ‘70's will be known as the time when this country regained a productive harmony between man and nature.
The statement also referred to Senator Muskie's proposal to establish an Office of Environmental Quality to staff CEQ. "I believe this would be a mistake," the President said. He added:
No matter how pressing the problem, to overorganize, to overstaff or to compound the levels of review and advice seldom brings earlier or better results.
In addition to the President's remarks and statement, the White House issued a press release focused on CEQ. None of these documents mentioned the environmental reviews that individual Federal Agencies would have to conduct on a project-by-project basis.
The New York Times covered the signing on its front page and reprinted the text of the President's statement on page 12 along with continuation of the article. A photograph on page 12 showed Nixon "giving reporters pens he used to sign" the bill. Nixon would not reveal his appointees to CEQ. "But he said that the council would be assisted by a 'compact staff,' and would function with the same close advisory relation to the President that the Council of Economic Advisors does in fiscal and monetary affairs."
A companion article on page 12 titled "Challenge by Democrats" discussed concerns expressed by Senators Jackson and Muskie. They agreed with the President's statements, but had "some residual doubt about how much effort and money the Administration was prepared to devote to carrying out the policy proclaimed in the new law." Senator Jackson said that implementation of NEPA "will require a real commitment of funds and a re-ordering of our national priorities."
Senator Muskie objected to the President's comments about staffing. In addition to rejecting the Senator's staffing proposal, the President told reporters he thought that NEPA provided an "adequate organization and a good statement of direction." The Senator said:
There is no surplus of staff involved. If the council is to do the substantive job contemplated by the Congress, it will have to have the Office of Environmental Quality.
Senator Jackson, the article noted, had disputed the capability of the President's EQC. He and Congressman Dingell "felt that the President had created his Cabinet council to forestall Congressional action and to give the impression that the Administration was more active than in fact it was." They also believed that EQC had too many responsibilities and too few employees for the task.
The newspaper also printed a three-column "Man in the News" story about the "Sponsor of Pollution Control Bill," calling Senator Jackson "one of the most powerful members of the United States Senate." It described his chief concerns as "the extension of America's nuclear and military powers" and his "staunch support of American involvement in Vietnam." Referring to his success in maneuvering NEPA to passage, the article said, "The last time he maneuvered so diligently for a piece of legislation was in support of the antiballistic missile." His support for the supersonic transport plane and other military investments earned him the nickname "Senator from Boeing." (Senator Jackson, who had been in the House of Representatives from 1941 to 1953, served in the Senate from January 3, 1953, until his death on September 1, 1983.)
The two Times articles about NEPA focused on CEQ. In the final sentence of the next-to-last paragraph of "Challenge by Democrats," the article referred to the action-forcing mechanism. "It also directs that all Federal agencies must include in their legislative recommendations and proposed actions a statement on the environmental impact of the proposals." ["Nixon Promises an Urgent Fight to End Pollution," Kenworthy, E. W., "Challenge by Democrats," and "Sponsor of Pollution Control Bill," The New York Times, January 2, 1970]
Critics had to give the President credit for signing NEPA and saying the right things, but they assumed, as Flippen put it, that "when the glare of publicity dimmed, Nixon would show his true colors and appoint weak members" to the CEQ, possibly even members hostile to Federal regulation or unwilling to stand up to industry. Instead, on January 29, Nixon appointed Train as chairman of CEQ. "Everyone knew where Train stood on the environment; he was, as [Deputy Assistant to the President for Domestic Affairs John] Whitaker later recalled, ‘for the environment first, Nixon second.'"
The President appointed two other distinguished members to CEQ along with Train:
Joining Train were Gordon MacDonald and Robert Cahn. MacDonald was a geophysicist and member of the Environmental Studies Board of the National Academy of Science, then serving on the faculty of the University of California at Santa Barbara. Cahn was a Pulitzer Prize-winning conservation reporter for the Christian Science Monitor. Together the appointees stood as a formidable trio, not one a lackey to industry. They were to "carry the ball," Nixon instructed them in the Oval Office, to "get the administration out front on the environment." [Nixon and the Environment, p. 52]
Nixon pointed out to reporters that Train and Cahn lived in Washington, while MacDonald would be moving from California, "the smog-free part-Santa Barbara." The President added that Dr. MacDonald "is an expert, incidentally, on the Santa Barbara oil problem. That is where I first became acquainted with him."
The President explained the CEQ's purpose to the assembled reporters:
This Council...is parallel in responsibility to the Council of Economic Advisers. For example, it will prepare for the President a report that will be made annually, the first one on July 1, on the environment.
The Council will also have responsibility for examining the facts on the environment, for setting up an early warning system with regard to how we can avoid some of the problems which may come back to haunt us, 5, 10, 15, even 20 years from now, and setting up programs for legislation as well as programs for the Federal agencies which may not require legislation, to deal with environmental problems.
In a separate statement, the President outlined the CEQ's role, adding that the EQC would be renamed the Cabinet Committee on the Environment "and will be used as a forum in which the President and appropriate Cabinet officers can discuss environmental issues." The statement concluded:
Environmental problems occur today because we were not alert enough, informed enough, or farseeing enough yesterday. The new Council on Environmental Quality will work to remedy these deficiencies and will thus contribute, in a most significant way, to the quality of American life for all.
Senator Jackson's committee confirmed the three promptly.
Professor Caldwell, in a retrospective article, said:
NEPA implies a major modification and even a reversal of long established priorities in the political economy of the Nation. The disruptive effects of the Act on the business-as-usual economy do not appear to have been foreseen by the Congress or by those interests most likely to have been affected. However, the weekly news magazine Time observed, in its issue of August 1, 1969, that if NEPA became law, its impact might be felt by ". . . every imaginable special interest-airlines, highway builders, mining companies, real estate developers, . . ." and all federal policies with environmental implications would be open to challenge. [Caldwell, Lynton K., "The National Environmental Policy Act: Retrospect and Prospect," Environmental Law Reporter, March 1976, 6 ELR 50036]
In computer network engineering, an Internet Standard (STD) is a normative specification of a technology or methodology applicable to the Internet. Internet Standards are created and published by the Internet Engineering Task Force (IETF).
An Internet Standard is characterized by technical maturity and usefulness. The IETF also defines a Proposed Standard as a less mature but stable and well-reviewed specification. A Draft Standard was an intermediate maturity level between the two; this third classification was discontinued in 2011.
An Internet Standard is a Request for Comments (RFC) or a set of RFCs. An RFC that is to become a Standard or part of a Standard begins as an Internet Draft, and is later, usually after several revisions, accepted and published by the RFC Editor as an RFC and labeled a Proposed Standard. Later, an RFC is elevated to Internet Standard, with an additional sequence number, when it has reached an acceptable level of maturity. Collectively, these stages are known as the Standards Track, and are defined in RFC 2026 and RFC 6410. The label Historic is applied to deprecated Standards Track documents or obsolete RFCs that were published before the Standards Track was established.
Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve Standards Track RFCs. The definitive list of Internet Standards is maintained in Internet Standards document STD 1: Internet Official Protocol Standards.
Becoming a standard is a two-step process within the IETF, the steps being Proposed Standard and Internet Standard. If an RFC is part of a proposal that is on the Standards Track, then at the first stage the standard is proposed, and organizations subsequently decide whether to implement this Proposed Standard. After the criteria in RFC 6410 are met (two separate implementations, widespread use, no errata, etc.), the RFC can advance to Internet Standard.
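The two-step progression can be sketched as a small state model. This is an illustrative sketch only, not IETF tooling; the class name, field names, and the paraphrased advancement criteria are assumptions made for the example.

```python
# Sketch of the two-step Standards Track (per RFC 6410): a Proposed
# Standard advances to Internet Standard once the advancement criteria
# are judged to be met. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class StandardsTrackRFC:
    number: int
    maturity: str = "Proposed Standard"
    independent_implementations: int = 0
    widespread_use: bool = False
    unresolved_errata: bool = False

    def can_advance(self) -> bool:
        # RFC 6410 criteria, paraphrased: at least two independent
        # interoperable implementations, significant deployment, and
        # no unresolved errata affecting interoperability.
        return (self.independent_implementations >= 2
                and self.widespread_use
                and not self.unresolved_errata)

    def advance(self) -> None:
        if self.maturity == "Proposed Standard" and self.can_advance():
            self.maturity = "Internet Standard"

rfc = StandardsTrackRFC(number=5234, independent_implementations=2,
                        widespread_use=True)
rfc.advance()
print(rfc.maturity)  # Internet Standard
```

In practice the IESG, not an automated rule, makes the judgment; the sketch only captures the shape of the process.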
The Internet Standards Process is defined in several "Best Current Practice" documents, notably BCP 9 (currently RFC 2026 and RFC 6410). There were previously three standard maturity levels: Proposed Standard, Draft Standard and Internet Standard. RFC 6410 reduced this to two maturity levels.
A Proposed Standard specification is stable, has resolved known design choices, has received significant community review, and appears to enjoy enough community interest to be considered valuable. Usually, neither implementation nor operational experience is required for the designation of a specification as a Proposed Standard.
Proposed Standards are of such quality that implementations can be deployed in the Internet. However, as with all technical specifications, Proposed Standards may be revised if problems are found or better solutions are identified, as experience with deploying implementations of such technologies at scale is gathered.
Many Proposed Standards are actually deployed on the Internet and used extensively, as stable protocols. Actual practice has been that full progression through the sequence of standards levels is typically quite rare, and most popular IETF protocols remain at Proposed Standard.
In October 2011, RFC 6410 in essence merged the second and third maturity levels into a single Internet Standard level for future standards. Existing older Draft Standards retain that classification. The IESG can reclassify an old Draft Standard as Proposed Standard after two years (October 2013).
An Internet Standard is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community. Generally Internet Standards cover interoperability of systems on the Internet through defining protocols, message formats, schemas, and languages. The most fundamental of the Internet Standards are the ones defining the Internet Protocol.
An Internet Standard ensures that hardware and software produced by different vendors can work together. Having a standard makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time. Normally, the standards used in data communication are called protocols.
Documents submitted to the IETF editor and accepted as an RFC are not revised; if the document has to be changed, it is submitted again and assigned a new RFC number. When an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. When an Internet Standard is updated, its number is unchanged but refers to a different RFC or set of RFCs. For example, in 2007 RFC 3700 was an Internet Standard (STD 1) and in May 2008 it was replaced with RFC 5000. RFC 3700 received Historic status, and RFC 5000 became STD 1.
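The stable-STD-number, changing-RFC-number relationship described above can be illustrated with a tiny registry sketch. The function and variable names are hypothetical; the STD 1 history (RFC 3700 replaced by RFC 5000 in May 2008, with RFC 3700 becoming Historic) is taken from the text.

```python
# Illustrative sketch: an STD number is stable, while the RFC (or set of
# RFCs) it points to can change; superseded RFCs are marked Historic.
std_registry = {1: {3700}}   # in 2007, STD 1 -> RFC 3700
historic = set()

def update_std(registry, std_number, new_rfcs, retired):
    """Repoint an STD number at a new RFC set; old RFCs become Historic."""
    retired.update(registry.get(std_number, set()))
    registry[std_number] = set(new_rfcs)

update_std(std_registry, 1, {5000}, historic)   # May 2008 replacement
print(std_registry[1], historic)                # {5000} {3700}
```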
The list of Internet standards in RFC 5000 ends with STD 68 (RFC 5234, ABNF) published in 2008. It does not cover STD 69 (a set of five EPP RFCs), STD 70 (RFC 5652, CMS) published in 2009, STD 71 (RFC 6152, 8BITMIME), and STD 72 (RFC 6409, Mail Submission) published in 2011.
| Standard Type | Associated Protocols |
| --- | --- |
| Web | HTTP, CGI, HTML/XML/VRML/SGML |
| Internet Directory | X.500, LDAP |
| Application | HTTP, FTP, Telnet, Gopher, WAIS |
| Videoconferencing | H.320, H.323, MPEG-1, MPEG-2 |
- "Internet Official Protocol Standards (STD 1)" (plain text). RFC Editor. May 2008. Retrieved May 25, 2008.
- "Characterization of Specifications". Characterization of Proposed Standards. IETF. January 2014. sec. 3. RFC 7127. https://tools.ietf.org/html/rfc7127#section-3. Retrieved March 11, 2016.
- "IETF Review of Proposed Standards". Characterization of Proposed Standards. IETF. January 2014. sec. 2. RFC 7127. https://tools.ietf.org/html/rfc7127#section-2. Retrieved March 11, 2016.
- "Standards ordered by STD". Official Internet Protocol Standards. RFC Editor. Archived July 19, 2011, at the Wayback Machine.
Ontario's Coal-Free Plan Progresses
Early this year, I wrote an article on the conversion of Ontario’s Atikokan Generating Station to wood pellets. With that conversion and the October closing of the 2,000-MW Lambton Generating Station, two coal-fired power stations were left operating in the province—Nanticoke Generating Station, the largest coal-fired power plant in North America at 2,760 MW, and the 300-MW Thunder Bay Generating Station.
Nanticoke is scheduled to be fully shut down by the end of the year (though the government has said in the future, refueling some of its eight generators with biomass and/or natural gas may be an option), and just recently it was officially announced that the Thunder Bay Generating Station will be converted to biomass by 2015. An initial plan to repower it with natural gas was cancelled.
The Thunder Bay Generating Station is operated by Ontario Power Generation, which says it will be the first advanced biomass station in the world that was formerly a coal plant. I wasn't exactly sure what "advanced biomass station" meant and thought it meant torrefied material, but I was wrong. I checked in with Chris Fralick, plant manager of Thunder Bay, and he said it meant steam-exploded technology. Thunder Bay will be issuing an RFP for a fuel supply later in the year, according to Fralick, and it will define the fuel required.
A 100-percent advanced fuel test burn was done at the plant in September, and OPG deemed it successful.
Once repowered, the plan for Thunder Bay is to operate at half its generating capacity of 300 MW under a five-year contract. Some groups argue that this won't be enough power to meet the region's demands (which is hard not to believe, considering the loss of nearly 5,000 MW of coal capacity), and that at the end of the contract the plant should not be retired, but converted to natural gas or a mix of both fuels.
Modifications to Thunder Bay are to begin in 2014, and the plant is expected to be running on biomass in 2015.
All of this aligns with the government’s target to be coal free by the end of 2014, a goal that it will achieve early. Ontario may be just one province but it is a very large mass of land going coal free, at one-tenth the size of Canada, twice the size of Texas, and more than four times the size of the U.K.
Ontario has really raised the bar when it comes to combatting air pollution and fossil-fuel dependence. You can count on Biomass Magazine to keep you updated on its continued progress.
Penerapan Backpropagation Neural Network Untuk Peramalan Penjualan Produk Susu (Application of a Backpropagation Neural Network for Milk Product Sales Forecasting)
The milk product market is developing very rapidly nowadays. As a result, many brands have entered the market, making competition between products harder than before. Each company must therefore have an analytical plan to predict how much product should be produced, in order to avoid overproduction or shortages and to keep the distribution process running smoothly. The artificial neural network is a machine learning approach that can be used to solve many problems, primarily complex problems that cannot otherwise be modeled. Time-series forecasting is one type of problem that can be solved by this approach. The objective of this research is to forecast milk product sales at KPBS Pangalengan, Bandung. The research consists of five steps: problem identification, collection of data and field information, preprocessing, neural network modeling, and postprocessing. It applies artificial neural network forecasting to predict sales of the cup-flavor milk product variant at KPBS Pangalengan, Bandung. The network is trained, on the working principles of the human brain, with milk product sales data from January 2008 to June 2010. The model used in this research is a backpropagation neural network with two hidden layers and one output. An artificial neural network with one layer is limited in pattern recognition; this limitation can be overcome by adding one or more hidden layers between the input and output layers. Backpropagation is a multilayer artificial neural network model. Like other artificial neural networks, backpropagation trains the network to balance its ability to recognize training patterns with its ability to give appropriate responses to input patterns similar to those used in training.
After training and testing, validation was done to evaluate the performance of the artificial neural network in recognizing the actual data pattern. The results show that 4 data points (67%) matched the target (actual data) and 2 data points (33%) did not, indicating that the error in the testing process is low. The mean square error (MSE) of the validation process is 0.0286. After further testing with a larger training data set, the MSE changed to 0.083, larger than the previous result. In addition, the network in this further experiment matched the target for only 52% of the data, with 48% not matching the real data. Several factors affect this result; one is that the data used in the training and testing phases were random values between the maximum and minimum levels of each attribute. The results show that implementing an artificial neural network can help the industrial sector anticipate raw material and product stock accurately.
Physicist who contributed to discovery of Higgs boson dies
In 1964 Guralnik, along with physicists Carl Richard Hagen and Tom Kibble, wrote a paper that predicted the existence of the Higgs boson, which explains how particles acquire mass. The Higgs boson was discovered at the Large Hadron Collider in Switzerland on July 4, 2012. Peter Higgs, for whom the theory is named, and Francois Englert were awarded the Nobel Prize in Physics in 2013 for this discovery.
Guralnik was the Chancellor’s Professor of Physics at Brown University.
“Gerry forged his own path and yet always focused on fundamental issues in physics,” said Chung-I Tan, professor of physics at Brown and a longtime colleague of Guralnik’s. “He was an early advocate and important contributor to the numerical approach to quantum field theories and also in exploring the structure of strong coupling expansion — paving the way for two of the most important current research areas in theoretical particle physics.”
In recent essays, Guralnik commented on how radical his search for the Higgs boson was to his professors when his seminal papers were published in 1964. He wrote that Werner Heisenberg, a Nobel Prize-winning physicist and one of the greatest scientists of his day, told Guralnik that his theories were “junk.” Guralnik feared that was the end of his career. He was present for the announcement of the boson’s discovery in 2012, and wrote about the field’s future:
“My hope is that as the puzzle continues to be unraveled that some of the wonder and excitement that we physicists have felt for decades will continue to be felt across the world the way it was on July 4th.”
|
Dry macular degeneration is a common eye disorder among people over 65. It causes blurred or reduced central vision, due to thinning of the macula (MAK-u-luh). The macula is the part of the retina responsible for clear vision in your direct line of sight.
Dry macular degeneration may first develop in one eye and then affect both. Over time your vision worsens, which may affect your ability to do things such as read, drive and recognize faces. But this doesn't mean you'll lose all of your sight.
Early detection and self-care measures may delay vision loss due to dry macular degeneration.
Dec. 04, 2015
|
Location: Gulf of Aqaba
Area: 480 km2
Boundaries: The boundaries of this National Park extend from a point opposite the Qad Ibn Haddan lighthouse on the Gulf of Suez to the southern boundary of the Nabq Protected area on the Gulf of Suez. The area includes the all shorelines fronting the Sharm el Sheikh tourism development area.
Type: Marine Reserve
Year of establishment: 1983
Objective: Protection of marine and terrestrial wildlife
Management: The Egyptian Environmental Affairs Agency (EEAA).
Geographical aspects: Ras Mohammed is the headland at the southernmost tip of the Sinai Peninsula, overlooking the juncture of the Gulf of Suez and the Gulf of Aqaba. Littoral habitats include a mangrove community, salt marshes, inter-tidal flats, a diversity of shoreline configurations, and coral reef ecosystems that are internationally recognized as some of the world's best. There is also a diversity of desert habitats, such as mountains and "wadis", gravel plains, and sand dunes.
Flora: Sea-grass beds and mangrove trees.
Fauna: There are more than 200 species of corals, 125 of which are soft corals. There are around 1,000 species of fish, 40 species of starfish, 25 species of sea urchins, more than 100 species of molluscs, and 150 species of crustaceans.
Ras Mohammed is important as a bottleneck for migratory soaring birds. The majority of the world population of the white stork (Ciconia ciconia) passes through this area. There are also important breeding populations of the threatened and endemic white-eyed gull (Larus leucophthalmus) and the osprey (Pandion haliaetus).
Tourism in Southern Sinai is inherently linked to the natural resources of the area.
The Protected areas program seeks to establish equilibrium between development activities, tourism and the natural resource conservation measures needed to achieve sustainable economic development.
Due to Ras Mohammed's geographical position, divers find almost permanent strong currents all year long, which help attract larger fish.
Beautiful beaches, extraordinary coral reefs and exciting dive sites make Ras Mohamed National Park a worthwhile visit.
Unique Coral Reef ecosystem: Coral reef ecosystems found in the National Park are recognized internationally as among the world's best. This recognition is based primarily on the diversity of flora and fauna, clear warm water devoid of pollutants, their proximity to shorelines and their spectacular vertical profile.
The reef exists as an explosion of color and life in stark contrast to the seemingly barren desert adjacent to it. In reality, the desert is rich in fauna, mainly nocturnal. These ecosystems are intrinsically linked and thus must be managed as a single unit.
The National Park offers outstanding coral reef and nature viewing experiences to the visitor.
The Eel Garden, named for its population of garden eels at 20m, also provides excellent and calm conditions.
The Main Beach, often crowded, remains one of the best locations to see vertical coral walls. Access is restricted to the left side of the bay. The Old Quay, often calm but with more turbid water, has some of the best shallow-water reef structure.
Marsa Bareika, newly opened, offers superior corals, calm water and excellent beaches. The Mangrove Channel and Hidden Bay are the best locations to view resident and migratory birds such as herons, white storks and ospreys.
Sunday, February 27, 2005
Whence and Whither?
They demanded our names, whence we came, whither we were going, and what was our business. The last query was particularly embarrassing; since traveling in that country, or indeed anywhere, from any other motive than gain, was an idea of which they took no cognizance. (Chapter VIII)

"How are you, strangers? whar are you going and whar are you from?" said a fellow, who came trotting up with an old straw hat on his head. (Chapter XXVI)

From time immemorial strangers and wanderers have faced these same questions. In Homer's Odyssey, the following line occurs several times, first at 1.170 (tr. Richmond Lattimore):

What man are you, and whence? Where is your city? Your parents?

In Rome even friends asked some of these same questions when meeting:
- Horace, Satires 1.9.62-63: 'Whence are you coming and whither are you heading?' he asks and answers. ('Unde venis et / quo tendis?' rogat et respondet.)
- Horace, Satires 2.4.1: 'Whence and whither Catius?' ('Unde et quo Catius?')
Phil Flemming (via email) writes:
Socrates begins his conversation with Phaedrus by asking poi de kai pothen? It's a kind of scold, isn't it? Pretending to greet Phaedrus as someone who has been away on a great voyage, he's really asking, where have you been hiding yourself, Phaedrus? And why have you become a stranger?

R. Hackforth translates this passage (Plato, Phaedrus 227a) as follows:
Where are you coming from, Phaedrus my friend, and where are you going?
Ground cover plants grow low to the ground and multiply quickly. These types of plants typically grow thick and prevent weeds from emerging. Ground covers are used in areas grass will not grow or is not practical, on hills and slopes or as part of the landscaping design. Evergreen ground covers come in varieties that grow well in sunny, shady or partial sunny/shady areas. Select the appropriate type of evergreen ground cover for the area needed in both size and amount of sun required for best growing results.
Full Sun
Evergreen ground covers that thrive in areas of full sun (hot and dry) consist of varieties that grow well in sites less than 50 sq. ft. and ones ideal for spots over 50 sq. ft. Use these types of ground cover plants in all sunny locations. The first group includes: rockcress (Arabis caucasica), pineleaf penstemon (Penstemon pinifolius), goldmoss sedum (Sedum acre), Kamchatka sedum (Sedum kamtschaticum), stonecrop sedum (Sedum spurium) and houseleek/hen and chicks (Sempervivum spp.). Evergreen ground cover plants for larger spaces include: purple ice plant (Delosperma cooperi), spreading juniper (Juniperus x media) and creeping juniper (Juniperus horizontalis).
Sun to Part Shade
Use specific evergreen ground cover plants for parts of your yard or landscaping that have both sun and shade throughout the day. These divide among small (under 50 sq. ft.) and large (over 50 sq. ft.) areas. The varieties that thrive in small sections include: sea pink (Armeria maritima), candytuft (Iberis sempervirens), creeping phlox (Phlox subulata), lemon thyme (Thymus x citriodorus), woolly thyme (Thymus pseudolanuginosus) and barren strawberry (Waldsteinia fragarioides). Ground cover plants to choose for large sections include: common juniper (Juniperus communis), blue star juniper (Juniperus squamata), Hall's honeysuckle/Halliana (Lonicera japonica) and germander (Teucrium chamaedrys).
Shade Loving Evergreen
These evergreen ground cover plants will thrive in shady sections of your landscaping, such as under trees, close to structures or beneath shrubs. Select the specific type to grow depending on the size of the area (small or large). Evergreen ground covers for small shaded sections include: mountain lover (Paxistima canbyi), running mat phlox (Phlox stolonifera) and pearlwort/Irish moss (Sagina subulata). Shade-tolerant evergreen ground cover plants for large areas include: kinnikinnick (Arctostaphylos uva-ursi), purpleleaf wintercreeper/colorata (Euonymus fortunei), English ivy (Hedera helix), creeping Oregon grape (Mahonia repens), Japanese spurge (Pachysandra terminalis) and periwinkle (Vinca minor).
Earthworm casting creates maintenance nightmare
Earthworm casting on the surface of golf-course fairways rapidly is becoming one of the most challenging management issues for superintendents in the Pacific Northwest and other regions of the United States. I know firsthand that many superintendents are losing sleep over the earthworm-casting issue, and some fear losing their jobs as a result of their inability to control casting. Many golfers, club members, greens committees and board members feel the superintendents aren't doing enough to combat the problem.
Casting occurs when earthworms ingest soil and leaf tissue to extract nutrients, then emerge from their burrows to deposit the fecal matter, or casts, as mounds of soil on the turf surface (see top photo, page Golf 4). Extensive earthworm casting on fairways interferes with proper maintenance practices, the playability of the turfgrass and the overall appearance of the fairways. Affected turf can become thin and the playing surface can soften (see top photo, page Golf 6).
Superintendents urgently need management strategies that discourage earthworm casting and population growth, while at the same time maintain soil quality. At Washington State University and Oregon State University, researchers are working to develop a program that golf-course superintendents can use to reduce casting. The aim is to find methods that will reduce initial earthworm populations to a threshold level where the amount of casting is tolerable and to prevent these populations from returning to excessive levels.
Why is earthworm casting occurring?
Earthworm populations can reach several million under golf-course fairways, with millions more in the roughs going unnoticed due to higher mowing heights. Unfortunately for superintendents, the basic cultural practices that produce excellent fairway surfaces also create optimal living conditions for earthworms. Earthworms feed on fairway clippings returned after mowing and on organic matter in the soil. They love the consistent, moist, fertile conditions that typically are present in fairway turf. In other words, earthworm casting is not an indication of a superintendent's inability to grow healthy turf. In fact, the opposite is true.
Populations can reach damaging levels on golf courses for several reasons:
* Fairways often are planted on fine-textured soils that receive regular irrigation and nitrogen-fertilizer applications, which provide optimal growing conditions and a continuous food supply for earthworms.
* Earthworm activity is greatest when the soil moisture is near field capacity, which just happens to be the ideal moisture level for healthy turfgrass.
* The turfgrass canopy also helps provide favorable temperatures by insulating the soil from extreme weather conditions.
Earthworms cast on the surface for two primary reasons. First, after they ingest organic matter, decaying leaf tissue and mineral soil, they must excrete the leftover material. Second, earthworms live in relatively permanent burrows. When soil fills the burrows (often after heavy rains), earthworms ingest the soil and move it up to the surface to perform "house cleaning." Researchers working with earthworms have estimated that they may bring 20 to 25 tons of soil per acre to the surface each year.
"Who" is responsible for the casting problem? More than 200 species of earthworms with varying behaviors and habitat preferences exist in North America. On Northwest golf courses, the primary species include Lumbricus terrestris, Aporrectodea calignosa and A. longa. Most species do not actually deposit casts on the surface. Many excrete material within the soil profile or not at all. Based on field observations, L. terrestris, familiarly known as the night crawler (see photo, page Golf 1), is the earthworm species causing severe casting damage on golf courses throughout the Pacific Northwest and in many other locations across the United States.
Night crawlers typically build permanent vertical burrows that vary in diameter from about 0.125 to 0.5 inch. In certain situations, these can extend up to 12 feet deep in the soil. However, due to regular irrigation and constant food supplies (clippings and other organic matter) on fairways, night crawlers tend to remain closer to the surface, migrating up and down in the soil with fluctuations in moisture content, soil temperature and atmospheric pressure. Peak earthworm activity and casting occurs during the cooler, wetter weather in the spring and late fall through early winter. Earthworms are generally intolerant of drought and frost conditions; they retreat to the bottom of their burrows during extremes in temperature and soil moisture, returning to the surface when conditions improve.
Night crawlers have long lives, with a reported average life span of 6 to 9 years (up to 20). They breed, on average, once every 2 weeks, producing up to 20 offspring with each cycle. Breeding activity is greatest in the cool, moist conditions of spring and late fall, which also results in increased casting activity as they surface to find a mate. Researchers report that one mature night crawler can produce several hundred offspring in one year. The young night crawlers can grow several inches per year and are sexually mature after the first year. Because fairways provide optimal living conditions, unlimited space and a constant food supply, it is easy to see how earthworm populations and casting can get out of control.
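The reproductive arithmetic above is easy to check. The sketch below uses the per-cycle figures reported here; the assumption that peak breeding spans roughly six months of cool, moist spring and fall conditions is ours:

```python
# Rough estimate of annual offspring from one mature night crawler,
# using the figures reported above: up to 20 offspring per breeding
# cycle, one cycle roughly every 2 weeks. We assume (our assumption)
# about 6 months of peak spring/fall breeding rather than a full year.
OFFSPRING_PER_CYCLE = 20
WEEKS_OF_PEAK_BREEDING = 26          # ~6 months of favorable conditions
CYCLES = WEEKS_OF_PEAK_BREEDING // 2  # one breeding cycle per 2 weeks

annual_offspring = CYCLES * OFFSPRING_PER_CYCLE
print(annual_offspring)  # 260
```

The result, a few hundred offspring per worm per year, matches the "several hundred" figure researchers report, and shows why a fairway with millions of worms can keep casting levels high even after aggressive removal.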
Control options for managing casts
Currently, no pesticides are registered for controlling earthworms in the United States, which severely limits the ability of superintendents to manage this highly destructive problem.
At Washington State University (Puyallup, Wash.) and Oregon State University, we initiated a series of short- and long-term field studies in the spring of 1998 designed to reevaluate several cultural practices that other research has shown to be detrimental to earthworm activity in other regions of the world. These studies are evaluating soil-chemistry effects, the effects of clipping removal (alone and in combination with spring, and spring + fall, aerification), and sand topdressing at a rate of 0.625 inch of sand per year (alone and in combination with fertilizer treatments) to determine if sand effectively reduces casting.
Several environmental and cultural factors affect earthworm activity, populations, soil distribution and species. The most critical include an adequate food supply, moisture, temperature, soil texture and pH. Researchers have shown that certain cultural factors alter these critical properties so they are less conducive to earthworm activity.
* pH. Many researchers have reported declines in earthworm populations directly related to declining soil pH (increasing acidity). Consistent with this is the fact that earthworms are scarce in soils with a pH of 5.0 or less, and plentiful between 6.5 and 7.0. Thus, ammonium-sulfate fertilizers may be beneficial in this regard by increasing acidity. Similarly, you should avoid using excessive lime.
In the Pacific Northwest, one researcher reported no earthworm activity on plots of creeping bentgrass treated with sulfur. However, it is possible that instead of population reduction, acidity merely shifts species composition to non-casting earthworms that are more tolerant of acid conditions.
* Food. Earthworms feed on organic matter in the soil and decaying clippings returned to fairways after mowing, which provide a practically unlimited food supply. In turf, they often pull leaf clippings down into the mouth of the burrow, where the tissue softens for later consumption. The amount of food in the soil and on the surface can influence earthworm populations. One researcher found that nightcrawlers do not burrow deep into the soil profile if adequate food is available near the surface. Other investigators have reported that rates of casting were reduced when clippings were removed. Thus, it is possible that collecting clippings could reduce earthworm populations.
* Soil texture. Earthworm populations are highest in light- and medium-textured loam soils. Smaller populations occur in both heavy, poorly drained clay soils and coarse, abrasive, sandy soils. Researchers believe that the susceptibility of such soils to drought and the abrasiveness of sand particles can influence both species composition and overall earthworm numbers in the soil.
One researcher has observed that earthworm populations are low in soils that are compacted, puddled, overgrazed or contain heavy clays. Fairways often are established on heavy soils that are more prone to compaction, which forces Lumbricus terrestris to expel the majority of their castings on the surface instead of in the voids within the soil. This probably is one reason we often see more casting in the clean-up areas on the edges of the fairways.
In my own research, after a full year of evaluating soil acidity, clipping removal and sand topdressing, I have not observed a significant reduction in earthworm casting due to variations in these factors. Some authors have reported that the response to various soil factors is not the same for all earthworm species. From my observations, Lumbricus terrestris is a highly adaptable survivor able to persist in a wide range of soil conditions. Thus, I feel it is probable that the species of earthworms present in many of the research projects I mentioned above were not L. terrestris. In addition, most of the above studies required several years of treatment applications before the earthworm reductions were evident, and in most cases the reductions reported typically were only in the range of 20 to 50 percent.
New strategies for reducing earthworm populations
To explore novel methods of reducing earthworm populations, we are working with several earthworm-harvesting companies in Washington and Oregon. A large demand exists around the world for the type of earthworm located under golf-course fairways, primarily for use as fishing bait. This demand makes earthworms a fairly valuable commodity. Currently, harvesting the worms involves bringing them to the surface by applying an irritant to the site (see bottom photo, facing page) and handpicking the earthworms when they emerge from their burrows. Obviously, this process is very labor intensive. Thus, we also are attempting to develop efficient methods for superintendents to bring the worms to the surface and remove them by mechanical or handpicked methods. We have currently assessed a core harvester, several turf vacuums and triplex brush units, but with limited success so far.
In evaluating the short- and long-term effects of physical removal, we have found that it significantly reduces casting in the short term, but it does not completely eliminate casts. A methodical, long-term approach of repeated harvests is necessary to continue to see measurable reductions. For example, an 18-hole country club in western Oregon hired a company to harvest all of their fairways in the spring of 1998. The final removal count over a 4-week period was 2.1 million worms. A second complete harvest in the fall produced about 750,000 additional worms. Although this seems like a lot of worms, this site still needs additional harvests to reduce casting to an acceptable level.
Tolerance and education
The issue of earthworm casting is one that surfaces over and over again (no pun intended) in pro shops, clubhouses and board and green-committee meetings. Unfortunately, the issue is usually addressed without a complete understanding of the problem. At the heart of the issue is the quality of turf and how casting affects the type of lie a golfer has on the fairways (see bottom photo, page Golf 4). Fairway casting, when severe, can affect the implementation of summer and winter rules, with earlier winter rules initiated in the fall and a later start on summer rules in the spring. The golfers and club members who prefer to "play the ball down" year round tend to be the most vocal about earthworm casting.
For the record, I am an avid golfer. I understand the frustrations associated with soft fairways and thin turf caused by the soil deposits of casting. In addition, as an assistant superintendent at Everett Golf and Country Club in western Washington, I experienced firsthand the challenges of managing earthworm casting.
However, after studying earthworms for the last year and a half, I have gained a great deal of respect for their ability to adapt, survive and persist, even in less-than-optimal environments. Thus, earthworm casting is an issue that inevitably will require some tolerance on the part of golfers. It is unfortunate and unfair that there are superintendents around the country that face so much pressure over earthworm casting that they fear losing their jobs. After all, earthworms prefer the same conditions required to maintain healthy turfgrass.
I cannot over-emphasize that no products or pesticides are specifically labeled to control earthworms. This severely limits the ability of superintendents to manage this problem. Also, in most (not all) regions of the United States that experience heavy earthworm casting, the most severe casting tends to occur in late fall and winter when the recuperative ability of the turf is minimal. Fortunately, the number of rounds during these months typically is lower.
We must remember that earthworms provide far more benefit than harm to the soil/turf environment. The earthworm's burrowing and feeding activity initiates thatch decomposition, stimulates microbial activity, makes certain plant nutrients more available, increases soil aeration and, in general, improves overall soil quality.
Our research project will continue until we are able to develop an Integrated Management System for reducing casting. This is not an easy task. With time, we should have a better picture of how soil acidity, clipping removal, sand topdressing and other strategies affect earthworm casting after multiple years of treatment applications. We also will continue to look at new, untested strategies for earthworm-casting reduction. In the meantime, we must accept that few chemical options exist for earthworm control, and cultural strategies are not well understood. Golfer education and tolerance of casting are necessary for the time being.
Paul Backman is a research associate in turf management at Washington State University (Puyallup, Wash.).
No chemical is registered specifically for earthworm control in turf. However, several frequently used turf chemicals are fairly effective against them. If you incorporate such products into your maintenance regime for registered uses, you can obtain a measure of earthworm control as an added benefit.
Dr. Daniel Potter, an entomologist with the University of Kentucky, has investigated pesticidal effects on earthworms and found that two fungicides provide significant control: benomyl and thiophanate-methyl. Benomyl is no longer available for turfgrass use. However, thiophanate-methyl (available as Cleary's 3336, LESCO's Cavalier, Regal's SysTec and from Scotts under several brand names) is a fungicide that many golf-course operations already routinely use.
In addition, Potter identified two currently available insecticides that are effective against earthworms: bendiocarb (AgrEvo's Turcam) and, to a lesser (but still significant) extent, carbaryl (Sevin). Both of these are widely used products for turf pests. Thus, like thiophanate-methyl, they are easy to incorporate into a turf-management program.
Remember, you must never exceed label rates or apply a chemical to a site or crop not listed on the label. Further, some state regulations may be more restrictive than federal rules, so be sure to read any supplemental state labeling.
When extracting using a tangential extractor:
1. Place each frame against the wall of the basket, with each frame pointing in the same direction. The correct number of frames and positioning must be used to ensure that the extractor is balanced.
2. Remember that the cells are not formed perpendicular to the frame's foundation, but slightly slanted. Bees form the cells this way to counter the effects of gravity and minimize the amount of nectar that escapes the cells as the bees place it in them. Therefore, you must spin the extractor in the right direction to take advantage of this fact. If you spin in the wrong direction, some of the honey will remain in the cells, and some cell damage would occur.
3. The first spin should remove approximately half of the honey from one side of the frame. If you try to remove all the honey from one side in a single spin, or spin too rapidly, the weight of the honey remaining on the full side can press against the empty side and damage the comb. This is especially true with wireless comb, or with wired comb that is newly drawn and not yet hardened.
4. After a proper first spinning, flip the frames and do the second spin in reverse, but this time completely emptying the honey from that side of the frame.
5. Do a final flip and spin in the opposite direction of the second spin, to remove the remaining honey from the side of the frame that was partially extracted on the first spin.
Saint-Dié (săN-dyā), city (1990 pop. 23,670), Vosges dept., E France, in Lorraine, on the Meurthe River. It is an industrial center where foundry products, chemical products, and machinery are manufactured. The printing industry is also important. The city grew around a monastery founded in the 7th cent. In World War II the Germans destroyed many of Saint-Dié's landmarks and deported much of the population. The Cosmographiae introductio by Martin Waldseemüller, a geographic work that for the first time referred to the newly discovered continent as America, was printed in Saint-Dié in 1507.
Bluegill live in lakes and ponds all over Kentucky, and they are often the first fish caught by young fishermen. Plentiful and quick to bite, these scrappy panfish put up a surprisingly hard fight on light tackle. They typically run about 6 inches, but bluegills approaching 12 inches and well over a pound are not unheard of, especially in Kentucky's large reservoirs.
Habits & Habitat
Bluegill relate to aquatic vegetation and other similar cover throughout most of their lives. Large bluegill move into shallow water to spawn in May and June, when water temperatures approach the 70-degree mark. This is perhaps the most likely time of year to catch the biggest fish. Some bluegill stay in and around shallow vegetation throughout summer, but the biggest gills typically move to the deeper edges of vegetation, moving shallow to hunt around sunrise and sunset. Bluegill prefer warm, slightly stained water to cold, clear water; and they generally avoid currents.
Tips & Tactics
Use live baits such as nightcrawlers, red worms, crickets and maggots to tempt bluegill, but they will also fall for small jigs and soft plastic baits. Fly fishing also brings in its share of these fish. Bluegill are rarely found in open water, so fish around underwater structures such as weeds, brush, sunken timber, docks and rock piles. Suspend baits under a bobber, employ a cast-and-retrieve method, or fish on the bottom in areas where snagging is not too much of a problem.
So plentiful are bluegill that it is easier to list waters in which they don't exist than those in which they do. Still, certain waters in Kentucky are proven producers of large and plentiful bluegill. Large manmade impoundments such as Dale Hollow Lake, Herrington Lake, Kentucky Lake, Lake Cumberland, Lake Barkley and Taylorsville Lake are top bluegill fisheries; these locations also offer other species such as crappie, catfish, largemouth and smallmouth bass. Bluegill also thrive in small ponds all over the Kentucky landscape.
Licenses & Regulations
No regulations govern bluegill fishing in Kentucky; you may keep them at any size and in any number, at any time of the year. You do need a current valid Kentucky fishing license to fish in any of the state's waters. As of the 2011 fishing season, an annual fishing license costs $30 for Kentucky residents or $50 for nonresidents. More information on purchasing licenses is available through the Kentucky Department of Fish and Wildlife website (see Resources).
- Kentucky Department of Fish and Wildlife Resources; Kentucky Fish; Benjy T. Kinman; 1993
- Kentucky Department of Fish and Wildlife Resources: Kentucky Fishing & Boating Guide 2011
- All About Fishing: Kentucky Panfish Fishing
- Outdoors Kentucky; Bluegill; Staff Report; February 26, 2009
- OhioValleyfishing.com: The Bluegill
MATH 300: Mathematical Foundations
Prerequisites: MATH 142 or ITEC 122 or permission of instructor, and MATH 152 and any MATH course numbered 200 or above
Credit Hours: (3)
A first course in the foundations of modern mathematics. The topics covered include propositional and predicate logic, set theory, the number system, induction and recursion, functions and relations, and computation. The methods of proof and problem solving needed for upper-division coursework and the axiomatic basis of modern mathematics are emphasized throughout the course. The level of the course is challenging but appropriate for students with a minimum of 3 semesters of college mathematics. Students who have earned credit for MATH 200 may not subsequently earn credit for MATH 300.
Detailed Description of Course
Course content includes:
The propositional calculus:
- Propositional variables and logical connectives.
- The use of truth tables to test for truth conditions.
- Tautologies, and contradictions.
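The truth-table topics above can be illustrated in a few lines of code. The following is a minimal sketch, not course material; the function `is_tautology` and the example formulas are our own:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Evaluate `formula` (a function taking `num_vars` booleans) over
    every row of its truth table; True iff it holds in all rows."""
    return all(formula(*row) for row in product([False, True], repeat=num_vars))

# Material implication: p -> q is definable as (not p) or q.
implies = lambda p, q: (not p) or q

# (p -> q) <-> ((not p) or q) holds in every row: a tautology.
print(is_tautology(lambda p, q: implies(p, q) == ((not p) or q), 2))  # True

# p or q is satisfiable but fails when both are False: not a tautology.
print(is_tautology(lambda p, q: p or q, 2))  # False
```

Enumerating all 2^n rows this way is exactly the truth-table method; it also previews why the method does not extend to the predicate calculus, where the domain of quantification may be infinite.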
The predicate calculus:
- Predicate functions, variables, and logical connectives.
- The universal and existential quantifiers and their standard interpretations.
- Validity and satisfiability.
- Soundness and completeness.
- Using the language of predicate calculus in mathematical proofs.
Naïve and formal set theory:
- Standard set notation.
- The set operations of union, intersection, symmetric difference, and power set.
- The Zermelo/Frankel axioms and the axiom of choice.
- Finite and transfinite sets, Cantor’s theorem.
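Cantor's theorem, listed above, has a proof short enough to sketch here; this is the standard diagonal argument, not necessarily the course's exact presentation:

```latex
\textbf{Theorem (Cantor).} For any set $A$, there is no surjection
$f\colon A \to \mathcal{P}(A)$.

\textbf{Proof sketch.} Given any $f\colon A \to \mathcal{P}(A)$, form the
``diagonal'' set
\[
  D = \{\, a \in A : a \notin f(a) \,\}.
\]
If $f$ were surjective, some $d \in A$ would satisfy $f(d) = D$. But then
\[
  d \in D \iff d \notin f(d) = D,
\]
a contradiction. Hence no $f$ is onto, so $|A| < |\mathcal{P}(A)|$. $\square$
```

Applied to the natural numbers, this argument yields a strictly increasing tower of transfinite cardinalities, which connects directly to the non-denumerability of the reals covered later in the course.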
Functions and Relations:
- Relations on sets, including transitive, symmetric, and reflexive relations.
- Partial orders, equivalence relations, and partitions.
- Functions on sets, including compound functions.
The Number System:
- The sets of Natural Numbers and Integers; well-foundedness and proofs by induction, ordinality and cardinality, countability, the Peano axioms.
- The Rational Numbers; rational number arithmetic and the field axioms.
- The Real Numbers; irrationality, algebraic and transcendental numbers, Dedekind cuts, and the non-denumerability of the reals.
- Other number systems; algebraic versus geometric closure of a field, extension by radicals (e.g., the Gaussian integers), transfinite ordinals and cardinals.
Computation:
- Turing machines.
- Computation and primitive functions.
- Computable functions, recursion, recursively enumerable and non-recursively enumerable sets.
- The Halting Problem.
- Church’s Thesis.
Description of Conduct of Course
This is a traditional lecture course, but with a significant degree of classroom interaction encouraged and collaborative (group-learning) projects and assignments will be frequent. Students will use computers in and out of class to write their own computable functions and apply these programming techniques to solve problems in other topics in the course.
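As one illustration of such an exercise (a hypothetical example, not drawn from the course materials), the computation unit's primitive-recursion scheme can be mimicked directly in code, building arithmetic from zero and successor:

```python
def add(m, n):
    """Addition by primitive recursion on n:
         add(m, 0)   = m
         add(m, n+1) = successor(add(m, n))"""
    if n == 0:
        return m
    return add(m, n - 1) + 1  # +1 plays the role of the successor function

def mul(m, n):
    """Multiplication by primitive recursion on n, using add:
         mul(m, 0)   = 0
         mul(m, n+1) = add(mul(m, n), m)"""
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)

print(add(3, 4))  # 7
print(mul(3, 4))  # 12
```

Writing functions this way, with only zero, successor, and recursion, makes concrete why the primitive recursive functions form a natural subclass of the computable functions.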
Student Goals and Objectives of the Course
The primary objective of the course is to prepare students for upper-division coursework in mathematics. Students will be able to
- Comprehend and express themselves clearly in the language of modern mathematics, including first-order logic and formal set theory.
- Employ the most common problem solving techniques and methods of proof needed in advanced coursework.
- Understand the axiomatic foundations of the mathematics they have previously learned, and be able to approach the study of new topics such as modern algebra, number theory, and analysis using an axiomatic framework and the expository cycle of “definition-theorem-proof.”
Assessment Measures
Graded tasks will include individual homework, quizzes, and written exams, including a cumulative final. Additional assessment measures may include collaborative projects or homework.
Other Course Information
Review and Approval Date
Revised: April 13, 2012
The United States is trying to convince Burma to cut its military ties with North Korea. In exchange, the U.S. will provide military and economic aid. Apparently the biggest obstacle is that North Korea pays bribes to key officials and the Americans won't. There would, however, be opportunities to plunder the American aid, though this is not something U.S. diplomats can use during their negotiations.
Violence between the Moslem Rohingya and police continues along the west coast. Buddhist clerics continue to call for the expulsion of all Moslems, describing Moslems as a constant threat to all Burmese. But there was never a problem with Islamic radicalism in Burma, thanks largely to the decades of army rule that kept Saudi missionaries and money for Wahhabi (the flavor of Islam al Qaeda likes) mosques and religious schools out. The military rule also relied on large doses of nationalistic propaganda, which extolled the importance of being Burmese. This meant the ethnic Burmese majority in the south who are largely Buddhist. The tribal peoples of the north and the Moslem and Christian minorities in the south were barely tolerated guests who had to keep their heads down. The Rohingya Moslems, living near the Bangladesh border, were not considered Burmese citizens but rather illegal migrants. The years of dictatorship suppressed all sorts of disruptive attitudes, but with the military rule gone, people are allowed to express themselves and the Buddhist radicals went after the Moslem minority first. Now there are a growing number of Burmese Moslems who see Islamic radicalism as a viable defensive tactic. It isn’t, but it makes sense to the young, determined, and stubborn.
Since last June over 250 people (mostly Rohingya) have died in ethnic and religious violence. Most of the unrest has been in Rakhine State, which has a population of 3.8 million, about 800,000 of them Moslems, mostly Rohingyas. These are Bengalis, or people from Bengal (now Bangladesh), who began migrating to Burma during the 19th century. At that time the British colonial government ran Bangladesh and Burma and allowed this movement, even though the Buddhist Burmese opposed it. Britain recognized the problem too late, and the Bengali Moslems were still in Burma when Britain gave up its South Asian colonies after World War II (1939-45). The current violence has caused over 140,000 people (mostly Rohingya, with a growing number of non-Rohingya Moslems) to flee their homes, many of them seeking shelter in Thailand, Bangladesh, and Malaysia. The Rohingya say the government is starving those in refugee camps and not punishing local Buddhists who attack Moslems. In the last few months there has been more anti-Moslem violence in other parts of the country, where Moslems are a smaller minority. Despite government orders to crack down on the Buddhist mobs, the local police are Buddhist and reluctant to go after fellow Buddhists on this issue. Years of news about Islamic terrorist violence around the world have left many Burmese agreeing with the radical Buddhist clerics who preach that more violence against Moslems in Burma is a national security issue, not an outburst of paranoid fear.
August 31, 2013: In the north (Kachin state) the ceasefire was broken as soldiers and a pro-government militia fought to drive Kachin rebels out of a teak forest in the northern part of the state. At least two soldiers were killed. A wealthy (and very corrupt) businessman has bribed government and military officials to allow him to illegally cut down the teak forest and smuggle out the valuable lumber. China is a prime market for teak, as are many Western countries, despite sanctions against this sort of thing. Some of the Burmese involved in teak smuggling have been declared criminals by the United States, but in Burma you can stay out of trouble if you have the right people on your payroll.
August 28, 2013: In the north a local military commander decided to deal with a long simmering border dispute by ordering the construction of a new border post five kilometers inside India. When confronted by Indian troops the Burmese insisted this was actually their territory. Only after days of negotiations, and some threats, did the Burmese agree to withdraw. The border has never been precisely marked in this remote area but as population grew, residents from both countries moved closer to each other and there arose disputes as to exactly where the border was. In this area the Indian villagers find themselves closer to a source of consumer goods across the unmarked border in Burma rather than in India. This ended up in a situation where Burmese troops were telling some Indians they were living in what local Burmese officials believed was Burmese territory. India is alarmed at the fact that the Burmese border claim would mean dozens of Indian families, in 18 border villages, would lose some of their land. The Indian villagers don’t seem to mind.
August 25, 2013: In central Burma (Sagaing) nearly a thousand Buddhists rampaged through a Moslem neighborhood, damaging and destroying homes and businesses and sending over 300 Moslems fleeing for their lives. Buddhist clerics first led a smaller crowd to demand that the police hand over a Moslem man under arrest and accused of attacking a Buddhist woman. Things escalated from there. This area had never seen anti-Moslem violence before.
August 24, 2013: In the north (Shan state) a rebel militia (TNLA or Taang National Liberation Army) clashed with soldiers. This is the third such incident this month, and this sort of violence is preventing TNLA leaders from meeting with government officials to negotiate a peace deal.
August 22, 2013: In the south all police are on the alert to find three Moslem men who are believed to have entered illegally from Thailand and are seeking to carry out terrorist attacks in Burma. Police and intelligence officials have been particularly alert to a Moslem terror threat after a major Buddhist shrine in India was bombed last month.
August 21, 2013: The government and the UN are disputing accusations by a UN official that his car was attacked as he recently visited an area that had suffered anti-Moslem violence. The government pointed out that the UN official was unharmed and his car undamaged. The UN is under a lot of pressure from Moslem states to do something about the anti-Moslem violence in Burma.
Code of Conduct
The BAYS Code of Conduct should be taken very seriously. Your teams should know it, follow it, and conduct themselves as young athletes during practice, official team activities, and official game play.
Follow the Code
- Play to win.
- Play fair.
- Observe the laws of the game.
- Respect opponents, teammates, referees, officials and spectators.
- Accept defeat with dignity.
- Promote the interests of football.
- Reject corruption, drugs, racism, violence and other dangers to our sport. Football's huge popularity sometimes makes it vulnerable to negative outside interests.
- Help others to resist corrupting pressures.
- Denounce those who attempt to discredit our sport.
- Honor those who defend football's good reputation.
The following seven points were composed by the children of the Collège Henri Matisse in Choisy-le-Roi, France, and are addressed to the players of the World Cup in France. This Charter was read for the first time at the opening ceremony of France 98.
"You, who will play for the World Cup make us dream by playing fair and let the game rhyme with peace. Play strictly by the rules and make your supporters happy. Do not contest the referee and show us your fairness. Respect your adversaries as well as your partners and leave your bad temper outside the football pitch. Do not behave violently. Make sure you never lose your self-control. Admit you are defeated after doing your best and celebrate your victory fairly. Make us dream by showing us your solidarity so that our dreams will come true."
Why Passwords are Required
Authentication plays an important role wherever the identification of a person is required. There are several ways in which a person can be authenticated, including biometric authentication and password verification. Since authentication ensures that the person who needs access to a computer or device is genuine, it is highly important that anyone who tries to access a device or computer provide his or her credentials (username and password) for authentication. The same is true of Windows 8, where, in most cases, a user is authenticated by the password he has been given by the administrators to log on to his account.
Whenever a user account is created in Windows 8, a password is assigned to the account by the administrator of the operating system. However, users can change their passwords according to their needs if they wish. Whether users can change their passwords on their own or must request the administrators to do so is a setting that can be configured only by the administrator(s).
Whatever the case, if as a Windows 8 user you want to change your password, follow the steps below:
1. Log on to the computer on which the Microsoft Windows 8 operating system is installed.
2. Click the Start button.
3. From the available options on the screen, click Control Panel.
4. In the opened window, in the left pane, click the Users option.
5. In the right pane, click the Change your password option available under the Your account label.
6. On the Change your password page, type your current password in the available field and click the Next button.
7. On the next window, type and re-type your new password along with the password hint in the New password, Retype password, and Password hint fields respectively, and click the Next button.
8. On the next window, click the Finish button to save the changes.
9 October 2012 Almost 870 million people, or one in eight, are suffering from chronic malnutrition, according to a new United Nations report released today, which shows a sharp decline in the number of undernourished people over the past two decades, but warns that immediate action is still needed to tackle hunger particularly in developing countries.
The State of Food Insecurity in the World 2012 (SOFI), which was jointly published by the Food and Agriculture Organization (FAO), the International Fund for Agricultural Development (IFAD) and the World Food Programme (WFP), reveals that the number of hungry declined more sharply between 1990 and 2007 than previously believed. The new estimates are based on an improved methodology and data for the last two decades, the agencies said in a news release.
Between the periods of 1990-92 and 2010-2012, the number of hungry people declined by 132 million, or from 18.6 per cent to 12.5 per cent of the world’s population. However, since 2007 global progress in reducing hunger has slowed and leveled off, which requires countries to take appropriate measures if they are to meet the Millennium Development Goal (MDG) of reducing the proportion of people who suffer from hunger by half by 2015, the report says.
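As a rough sanity check on these figures, the percentages and the 132-million decline are consistent with each other. The world-population numbers below are approximate outside estimates, not taken from the SOFI report:

```python
# Back-of-the-envelope check of the reported decline, using rough world
# population estimates (~5.4 billion in 1990-92, ~7.0 billion in 2010-12).
# These population figures are assumptions, not from the report itself.

pop_1990, share_1990 = 5.4e9, 0.186   # 18.6 per cent undernourished
pop_2012, share_2012 = 7.0e9, 0.125   # 12.5 per cent undernourished

hungry_1990 = pop_1990 * share_1990 / 1e6   # in millions
hungry_2012 = pop_2012 * share_2012 / 1e6

print(round(hungry_2012))                 # roughly 870-880 million
print(round(hungry_1990 - hungry_2012))   # roughly 130 million
```

The result lands close to the report's "almost 870 million" hungry and 132 million decline, with the gap attributable to the rounded population assumptions.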
“If the average annual hunger reduction of the past 20 years continues through to 2015, the percentage of undernourishment in the developing countries would reach 12.5 per cent – still above the MDG target of 11.6 per cent, but much closer to it than previously estimated,” the report says.
The revised numbers of hunger released today use updated information on population, food supply, food losses, dietary energy requirements and other factors. The numbers also reflect a better estimation of food distribution within countries.
“In today’s world of unprecedented technical and economic opportunities, we find it entirely unacceptable that more than 100 million children under five are underweight, and therefore unable to realize their full human and socio-economic potential, and that childhood malnutrition is a cause of death for more than 2.5 million children every year,” says the report’s foreword, written by FAO Director-General José Graziano da Silva, IFAD President Kanayo F. Nwanze and WFP Executive Director Ertharin Cousin.
“We note with particular concern that the recovery of the world economy from the recent global financial crisis remains fragile. We nonetheless appeal to the international community to make extra efforts to assist the poorest in realizing their basic human right to adequate food. The world has the knowledge and the means to eliminate all forms of food insecurity and malnutrition,” they said.
The new estimates suggest that the increase in hunger during 2007-2010 was less severe than previously thought, and that the 2008-2009 economic crisis did not cause an immediate economic slowdown in many developing countries as was feared could happen. Many governments also succeeded in cushioning the shock and protecting vulnerable populations from the effects of rising food prices.
The new report notes that the methodology does not capture the short-term effects of food price surges and other economic shocks and adds that FAO is working to develop a wider set of indicators to better capture dietary quality and other dimensions of food security.
The vast majority of the hungry – 852 million – live in developing countries in Asia and Africa. While the number of malnourished people declined by almost 30 per cent in Asia and the Pacific over the past two decades, Africa experienced an increase from 175 million to 239 million people during the same period.
The report suggests adopting a twin-track approach based on support for economic growth, including agriculture growth involving smallholders, and safety nets for the most vulnerable. In addition, higher priority must be given to getting quality nutrition to prevent malnutrition co-existing with obesity and non-communicable diseases.
News Tracker: past stories on this issue
Fishing line pollution poses a real threat to wildlife, not to mention a hazard to boaters and divers. The most common type of fishing line, nylon mono-filament fishing line, is made from various types of polymers which take a very long time to break down. Discarded fishing line can last for hundreds of years in the environment. Wildlife can become entangled or ingest discarded fishing line, injuring or killing the animal. There are numerous cases of birds, ducks, turtles, dolphins, seals, sea lions, fish, coral, whales, and many other animals being entangled in discarded fishing line.
This Laughing Gull is entangled in monofilament fishing line. Trying to eat is difficult with the fishing line wrapped around his beak.
A sea turtle is completely tangled in used fishing line. He was found, washed up on a beach and saved. A local veterinarian untangled the fishing line and successfully released the turtle back into the wild. Not all animals are this lucky when it comes to fishing line.
This Albatross chick was found dead near a lake. The contents of his stomach included plastic and fishing line.
It Takes Time to Decompose
Plastic bags - 10-20 years
Glass bottle - 1 million years
Plastic beverage bottle - possibly 500+ years
Cotton rags - 1-5 months
Paper - 2-5 months
Rope (natural fiber) - 3-14 months
Orange peels - 6 months
Wool socks - 1-5 years
Cigarette filters - 3-12 years
Milk cartons - 5 years
Leather shoes - 25-40 years
Nylon fabric - 30-40 years
Plastic 6-pack holder rings - 450 years
Styrofoam cup - 100 years
Banana peels - 2-10 days
Monofilament fishing line - 600 years
If you ever wish to recycle some fishing line on your own, the address below is the main plant used to recycle fishing line.
1900 18th Street
Spirit Lake, Iowa 51360
Dec. 8, 2010
FOR IMMEDIATE RELEASE
IOM Report Recommends Changes in How U.S. Tracks Health Trends And Measures Health Outcomes
WASHINGTON — Social and environmental factors are the most powerful shapers of life expectancy and health-related quality of life, yet the United States lacks a cohesive national strategy and appropriate measurement tools to track and respond to these critical influences, says a new report from the Institute of Medicine.
Deficiencies in the completeness, timeliness, and relevance of health information being collected and lack of agreement on the best indicators to measure progress are hindering efforts to improve the health of Americans, whose life expectancy ranks 49th among all nations. Moreover, the absence of a benchmark report on nonmedical care-related factors that influence health leaves the public in the dark about the true state of the nation's well-being and the types of efforts that are most likely to improve health outcomes.
"Although the United States invests over 17 percent of its Gross Domestic Product on medical care — far more than any other nation — we lag behind other countries in several measures of health," said Marthe Gold, chair of the committee that wrote the report and Arthur C. Logan Professor and Chair of Community Health and Social Medicine, Sophie Davis School of Biomedical Education, City College of New York, New York City. "Our understanding of more effective and efficient strategies for improving health is hampered by inadequacies in the current system."
The U.S. Department of Health and Human Services should provide greater leadership, coordination, and guidance to the population health information and statistics system, the report says. Strengthened capabilities and coordination within HHS could facilitate efforts to harmonize and integrate population health data collection, analysis, and reporting; provide guidance on developing and selecting health indicators; and analyze the effects of various determinants of health over time.
HHS also should lead development of a standardized, core set of indicators focused on priority health outcomes to improve the relevance and usefulness of data collection and reporting. The numerous health indicator sets developed in recent years and deployed in different contexts make assessment and comparison difficult for policymakers and other decision makers by highlighting similar information in different ways. A standardized set of indicators should reflect and support a convergence of national, state, and local priorities and enable realistic comparisons of jurisdictions.
The nation also should adopt a single summary measure of population health to serve as the GDP equivalent for the health sector. The United States and many other nations have long used death rates as the standard measure of population health, but life expectancy by itself is too blunt an indicator to capture information about the health-related quality of life associated with chronic illnesses and injuries. Summary measures of population health, such as health-adjusted life expectancy, encapsulate an overall picture of the well-being of communities and countries and support monitoring of health status, forecasting, and priority-setting.
HHS also should issue an annual report on the social and environmental factors that influence the population's health as a means of helping Americans better understand what shapes their well-being at the local, state, and national levels. A report of this kind would also potentially galvanize action that leads to better outcomes.
The report, sponsored by the Robert Wood Johnson Foundation, is the first of a series on public health strategies to improve health. Established in 1970 under the charter of the National Academy of Sciences, the Institute of Medicine provides independent, objective, evidence-based advice to policymakers, health professionals, the private sector, and the public. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies. For more information, visit http://national-academies.org. A committee roster follows.
Christine Stencel, Senior Media Relations Officer
Christopher White, Media Relations Assistant
Office of News and Public Information
202-334-2138; e-mail firstname.lastname@example.org
Report in Brief
Pre-publication copies of For the Public’s Health: The Role of Measurement in Action and Accountability are available from the National Academies Press; tel. 202-334-3313 or 1-800-624-6242 or on the Internet at http://www.nap.edu. Reporters may obtain a copy from the Office of News and Public Information (contacts listed above).
# # #
INSTITUTE OF MEDICINE
Board on Population Health and Public Health Practice
Committee on Public Health Strategies to Improve Health
Marthe R. Gold, M.D., M.P.H. (chair)
Professor and Chair
Department of Community Health and Social Medicine
Sophie Davis School of Biomedical Education
New York City
Steven M. Teutsch, Ph.D., M.D. (vice chair)
Chief Science Officer
Los Angeles County Public Health
Leslie Beitsch, M.D., J.D.
Associate Dean for Health Affairs and Director
Center for Medicine and Public Health
College of Medicine
Florida State University
Joyce D.K. Essien, M.D., M.B.A.
Center for Public Health Practice
Rollins School of Public Health
David W. Fleming, M.D.
Director and Health Officer
Department of Public Health - Seattle King County
Thomas Getzen, Ph.D.
Professor of Risk, Insurance, and Health Management
Fox School of Business
Temple University, and
International Health Economics Association
Lawrence O. Gostin, J.D.
Linda and Timothy O'Neill Professor of Global Health Law and
O’Neill Institute for National and Global Health Law
George J. Isham, M.D.
Medical Director and Chief Health Officer
Robert M. Kaplan, Ph.D.
Distinguished Professor of Health Services and Medicine
David Geffen School of Medicine
University of California
Wilfredo Lopez, J.D.
General Counsel Emeritus
New York City Department of Health
New York City
Glen P. Mays, Ph.D., M.P.H.
Professor and Chairman
Department of Health Policy and Management
Fay W. Boozman College of Public Health
University of Arkansas for Medical Sciences
Phyllis D. Meadows, Ph.D., M.S.N., R.N.
Associate Dean for Practice
Office of Public Health Practice, and
Health Management and Policy
University of Michigan School of Public Health
Mary Mincer Hansen, R.N., Ph.D.
Master of Public Health Program
Adjunct Associate Professor and Department of Global Health
Des Moines University
Des Moines, Iowa
Poki S. Namkung, M.D., M.P.H.
Chief Medical Officer
Santa Cruz County Health Services Agency
Santa Cruz, Calif.
Margaret E. O’Kane, M.H.S.
National Committee for Quality Assurance
David A. Ross, Sc.D.
Public Health Informatics Institute
Martin Jose Sepulveda, M.D., F.A.C.P.
IBM Fellow and Vice President Integrated Health Services
Steven H. Woolf, M.D., M.P.H.
Departments of Family Medicine, Epidemiology, and
Virginia Commonwealth University
Alina Baciu, Ph.D., M.P.H.
Security Choices, Part 1: The Software Firewall
This is the first in a series of introductory articles intended for less-experienced users who wish to learn more about the security product options available to them today. Others may also find these articles interesting as a concise summary, update and review of what is frequently a disparate collection of information. The goal of the series is to provide a balanced overview of currently-available categories of security solution, citing their main uses and capabilities as well as their limitations and drawbacks.
This first article focuses on software firewalls, which, along with anti-virus software, are considered an essential part of computer security. We'll be looking at anti-virus in the next article.
The Software Firewall
The firewall’s main task is to prevent malicious or unwanted connections between your computer and the network (usually the internet). Firewalls act like entrance guards – allowing authorized people (network traffic) in and out, and blocking less well-intentioned individuals (malicious or unauthorized connections) from entering or leaving, as determined by the boss (the PC user), and awaiting further instructions whenever it detects unknown activity (visitors with unknown IDs).
The firewall is considered a primary security element because it helps block unknown threats by denying them network access. Firewalls are proactive in their approach – they stop unknown connections, ask the user how these connection requests should be treated, and grant access only to those connections defined by the user as trusted. By blocking network access, firewalls block malware’s main propagation route – the Internet. Most of today’s threats – Trojans, botnets, worms and other malware – use the Internet to spread themselves and transmit stolen personal data to unauthorized individuals or entities.
Firewalls can hide a computer’s presence on the Internet so hackers can’t locate and exploit vulnerable machines. Some advanced firewalls also incorporate a list of known attacks and intrusions, automatically preventing those from reaching the PC. Firewalls can also be used to control the exchange of data in internal networks (such as a home network or office LAN), making sure data is sent to the designated recipient, preventing internal hacks and man-in-the middle attacks.
Firewalls monitor and control traffic in both directions. Data received from the network is referred to as inbound, while data that is sent out is called outbound. Although the majority of today’s threats constitute breaches of outbound security, it’s imperative that both directions are monitored. Some of the more basic firewalls, including those supplied with Windows Vista and XP, don’t monitor outbound connections by default; they must be specifically configured to provide this protection.
Unlike typical anti-malware applications, firewalls are not signature-based, meaning they don’t need to identify a threat according to a known sample of that threat in order to block it. Instead, they ask the user whether a particular program should be allowed to connect to the network or not. This is the most difficult part of firewall operation for users because, understandably, most people are not equipped with the specialist knowledge needed to make this determination. They are not familiar with the specifics of networking or operating systems’ internal functions and cannot provide an informed answer to the firewall’s question.
So, to a certain extent, the firewall is only as secure as the user’s ability to answer these questions; if it turns out that the user responded incorrectly and inadvertently allowed access to a Trojan, the firewall was simply doing what it was told by granting access to this particular malicious program. In an attempt to alleviate this situation, the majority of firewalls now include a “white list” of known good applications and system services that are automatically granted network access without asking the user. To enhance the user’s understanding of individual activities and help in making the right decision when configuring new access permissions, some firewalls now incorporate a system of context-sensitive advice and live hints in this process.
In order to correctly handle network activity for the majority of internet-enabled applications not covered by the firewall's existing white list, some sophisticated firewalls (including Outpost Firewall Pro and ZoneAlarm Pro) are supported by a continuously-updated online database of known-good and known-malicious programs that is regularly downloaded to users to minimize the number of questions users need to answer to keep their protection up to strength. But of course, no system is perfect, and not every software application will be included in any vendor's list, so there will always be a few questions users need to answer for themselves.
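The decision flow described above — consult a black list, then a white list, then fall back to asking the user — can be sketched in a few lines. This is an illustrative toy, not any vendor's actual logic, and the program names are hypothetical:

```python
# Toy sketch of a firewall's white-list decision flow for an outbound
# connection request. Program names below are hypothetical examples.

WHITELIST = {"svchost.exe", "firefox.exe"}   # known-good programs
BLACKLIST = {"trojan_dropper.exe"}           # known-malicious programs

def decide(program, ask_user=None):
    """Return 'allow', 'deny', or the user's answer for unknown programs."""
    if program in BLACKLIST:
        return "deny"
    if program in WHITELIST:
        return "allow"
    # Unknown program: here the firewall is only as secure as the
    # user's ability to answer correctly.
    return ask_user(program) if ask_user else "ask"

print(decide("firefox.exe"))                    # allow
print(decide("trojan_dropper.exe"))             # deny
print(decide("newapp.exe", lambda p: "allow"))  # allow (user decided)
```

The fallback branch is exactly where the user-knowledge problem discussed above lives: everything not covered by the lists becomes a question for the user.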
As we can see, firewalls are rarely just traffic filters. Many now include additional functionality such as Host Intrusion Prevention systems (HIPS) to control local interactions and application activity, parental control features, safe surfing controls, advanced connection monitoring and logging systems, and other approaches that will be discussed in future articles.
What firewalls can do:
- Guard network and internet connections against malicious or unwanted content.
- Block known internal or external attacks and protect the integrity and privacy of intra-network data.
- Prevent malicious code from accessing the network and transmitting personal data to cyber criminals.
- Filter network data according to user-defined criteria.
- Hide the presence of a PC on the internet, protecting it against network probes and botnets looking for vulnerabilities.
What firewalls cannot do:
- Remove malware from a system that has already become infected.
- Provide automatic protection against unknown connection attempts; user input is required for these decisions.
Potential drawbacks of firewalls:
- Because the firewall is a mutually exclusive tool, two firewalls cannot peacefully coexist on one system. Firewalls operate at a low level, communicating directly with networking hardware, and only one such set of communications can take place at one time.
- Firewalls may slow data transfer speeds and use additional processor resources when monitoring large volumes of data being sent over high-speed connections.
- Most firewalls also include some additional, secondary functionality such as parental controls or website content filtering which may cause interoperability issues with other security software offering similar functionality.
While this has been a brief overview/refresher on what firewalls can and cannot do, it’s clear that the firewall is a must-have element in any computer security product portfolio. Our next article will address the strengths and weaknesses of anti-virus, but if you have any questions in the meantime, please don’t hesitate to contact us through the Security Teacher comments space and we’ll do our best to help.
Posted in Security Insight
This course surveys the development of landscape art in Japan from the 8th to 18th centuries. The seminar will focus on three main bodies of material: the polychrome landscape tradition (such as poetic evocations of famous places and medieval paintings of sacred sites), the monochrome tradition (especially Zen art and literati painting), and early modern landscapes (including woodblock prints and Western-style painting). We will also consider supplementary materials including Chinese and Korean landscape painting precedents, and “quasi-landscapes,” such as maps and non-painted representations of Japan. Throughout the course, we will examine inherited notions of “landscape,” as well as constructions of social identity, national community, and sacred space through visual means.
Category for Concentration Distributions: C. Asia (includes China, Japan, India, Southeast Asia), 2. Medieval, 3. Early Modern
All are welcome to attend.
As in the last tutorial, the password input tag does not have a closing </input> tag. The element name is still input.
Here we must set the value of the type attribute to password to tell the web browser to hide the password as it is typed into the input field.
As with the text input tag, you will set the value of the name attribute to something that represents the purpose of this input tag. I have set the value to password.
It is good practice for the password input field to be blank initially, so here the value attribute is set to an empty string. This is done by putting two quotation marks after the equal sign.
It is a common practice to set the value of the size attribute to the same number of characters as used for the username input field. A good size is 30 characters.
Usually you are asked to choose a password when you sign up for a new account and you are given a maximum number of characters that you can have in that password. You will want to set the value of the maxlength attribute to this maximum number of characters. This will help to prevent new members/customers from creating a password that is too long.
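Putting the attributes above together gives a tag like the following (the maxlength value of 20 here is just an example maximum; use whatever limit your site imposes):

```html
<!-- Password field: input is hidden as it is typed, the initial value is
     blank, the field is 30 characters wide, and entry is capped at 20
     characters (an example maximum) -->
<input type="password" name="password" value="" size="30" maxlength="20">
```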
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created an algorithm that can predict how memorable or forgettable an image is almost as accurately as humans — and they plan to turn it into an app that subtly tweaks photos to make them more memorable.
For each photo, the “MemNet” algorithm — which you can try out online by uploading your own photos — also creates a heat map that identifies exactly which parts of the image are most memorable.
“Understanding memorability can help us make systems to capture the most important information, or, conversely, to store information that humans will most likely forget,” says CSAIL graduate student Aditya Khosla, who was lead author on a related paper. “It’s like having an instant focus group that tells you how likely it is that someone will remember a visual message.”
Team members picture a variety of potential applications, from improving the content of ads and social media posts, to developing more effective teaching resources, to creating your own personal “health-assistant” device to help you remember things.
As part of the project, the team has also published the world's largest image-memorability dataset, LaMem. With 60,000 images, each annotated with detailed metadata about qualities such as popularity and emotional impact, LaMem is the team's effort to spur further research on what they say has often been an under-studied topic in computer vision.
The paper was co-written by CSAIL graduate student Akhil Raju, Professor Antonio Torralba, and principal research scientist Aude Oliva, who serves as senior investigator of the work. Khosla will present the paper in Chile this week at the International Conference on Computer Vision.
How it works
The team previously developed a similar algorithm for facial memorability. What's notable about the new one, besides the fact that it can now perform at near-human levels, is that it uses techniques from "deep learning," a field of artificial intelligence that uses systems called "neural networks" to teach computers to sift through massive amounts of data to find patterns all on their own.
Such techniques are what drive Apple’s Siri, Google’s auto-complete, and Facebook’s photo-tagging, and what have spurred these tech giants to spend hundreds of millions of dollars on deep-learning startups.
“While deep-learning has propelled much progress in object recognition and scene understanding, predicting human memory has often been viewed as a higher-level cognitive process that computer scientists will never be able to tackle,” Oliva says. “Well, we can, and we did!”
Neural networks work to correlate data without any human guidance on what the underlying causes or correlations might be. They are organized in layers of processing units that each perform random computations on the data in succession. As the network receives more data, it readjusts to produce more accurate predictions.
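The layered structure described above can be sketched as a tiny feed-forward pass. This is purely illustrative (the real MemNet is a much larger convolutional network with learned weights); the layer sizes and random weights here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer of processing units: linear transform plus nonlinearity."""
    return np.maximum(0.0, x @ w + b)  # ReLU activation

# A toy 3-layer network mapping a 100-dimensional image feature vector
# to a single memorability score in (0, 1).
w1, b1 = rng.normal(size=(100, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def predict(features):
    h = layer(layer(features, w1, b1), w2, b2)
    logit = (h @ w3 + b3)[0]
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes to (0, 1)

score = predict(rng.normal(size=100))
```

Training, which the sketch omits, is the process of repeatedly adjusting the `w` and `b` arrays so that predicted scores move closer to the human-derived memorability scores.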
The team fed its algorithm tens of thousands of images from several different datasets, including LaMem and the scene-oriented SUN and Places (all of which were developed at CSAIL). The images had each received a “memorability score” based on the ability of human subjects to remember them in online experiments.
The team then pitted its algorithm against human subjects by having the model predict how memorable a group of people would find a new, never-before-seen image. It performed 30 percent better than existing algorithms and came within a few percentage points of average human performance.
For each image, the algorithm produces a heat map showing which parts of the image are most memorable. By emphasizing different regions, they can potentially increase the image’s memorability.
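One common way to build such a heat map (not necessarily MemNet's exact method; this sketch assumes a generic scoring function) is occlusion: mask each region of the image in turn, re-score it, and record how much the score drops. Regions whose removal hurts the score most are the most important.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Score drop when each patch x patch region is zeroed out.

    Larger heat values mean the region contributed more to the score."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one region
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Toy scorer: pretend "memorability" is just mean brightness.
img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0  # one bright quadrant
heat = occlusion_heatmap(img, lambda x: x.mean(), patch=4)
# heat.argmax() is 0: occluding the bright top-left quadrant hurts most.
```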
“CSAIL researchers have done such manipulations with faces, but I’m impressed that they have been able to extend it to generic images,” says Alexei Efros, an associate professor of computer science at the University of California at Berkeley. “While you can somewhat easily change the appearance of a face by, say, making it more ‘smiley,’ it is significantly harder to generalize about all image types.”
The research also unexpectedly shed light on the nature of human memory. Khosla says he had wondered whether human subjects would remember everything if they were shown only the most memorable images.
“You might expect that people will acclimate and forget as many things as they did before, but our research suggests otherwise,” he says. “This means that we could potentially improve people’s memory if we present them with memorable images.”
The team next plans to try to update the system to be able to predict the memory of a specific person, as well as to better tailor it for individual “expert industries” such as retail clothing and logo design.
“This sort of research gives us a better understanding of the visual information that people pay attention to,” Efros says. “For marketers, movie-makers and other content creators, being able to model your mental state as you look at something is an exciting new direction to explore.”
The work is supported by grants from the National Science Foundation, as well as the McGovern Institute Neurotechnology Program, the MIT Big Data Initiative at CSAIL, research awards from Google and Xerox, and a hardware donation from Nvidia.
“GREEN” TOOTHBRUSHES: TAKING A BITE OUT OF DENTAL PRODUCT WASTE
Did you know that approximately 450 million plastic toothbrushes, mostly non-biodegradable, are dumped into landfills nationally? It's a little-known fact that landfills are designed to inhibit degradation, so they will not break down toothbrushes, which will most likely remain there for decades.
To combat these rising numbers, World Centric, a company known for its line of compostable products made from plants, has recently launched its first line of compostable toothbrushes and travel cases, released exclusively in the United States. These "green" toothbrushes are made from a plant-based resin called Ingeo, instead of the petroleum-based plastics used by leading brands, and they are designed to fully compost within 3 to 6 months when sent to a commercial composting facility.
As an added plus, World Centric will send customers prepaid envelopes to return their outdated toothbrushes and cases. A handy notch between the head and the handle of the toothbrush is designed to make it easy to break off prior to sending it to the designated composting facility.
The 7 in. World Centric toothbrush and case come in blue, green and orange and are available at natural grocery stores, some Whole Foods stores nationwide, and online at www.worldcentric.org.
Addressing Diversity's Gray Areas
On the very last day of its 2007 term, the U.S. Supreme Court handed down a decision that will affect schools, school board members, administrators, students, and parents for years to come. The Court struck down the voluntary integration plans of the Seattle and Louisville school districts in Parents Involved in Community Schools v. Seattle School District #1 and Meredith v. Jefferson County Board of Education.
The ruling, perhaps the most important since Brown v. Board of Education in 1954, has left a lot of questions unanswered. The Seattle and Jefferson County public schools both took the race of students into account, although in a slightly different way, when determining which schools they would attend. The Court's ruling has forced school districts to reexamine how they assign students to schools while still maintaining a diverse learning environment. The following questions and answers will help school district administrators begin to address their concerns.
What does the plan mean for school leaders working on assignment plans?
We know that when children bring different histories to the classroom, everyone benefits. Research shows that students who receive their education in a diverse setting get a better education than those in a segregated setting. The dissenting justices agreed that the educational benefits of a diverse learning environment are undeniable. Justice Kennedy said in his concurring opinion, "A compelling interest exists in avoiding racial isolation. Likewise, a district may consider it a compelling interest to achieve a diverse student population."
While the Court's decision makes clear that race cannot be the deciding factor in whether a student is admitted to a particular school, the Court did leave room for districts to take race into account.
How quickly does my school district need to act to ensure legal methods of student assignment are in place?
No timeline has been established for compliance. School districts will find it difficult to make changes in student assignments for the current school year. What school districts can do easily is consult with their school attorneys before making a decision on a hasty policy. A deliberate, methodical review of student assignment plans can help to minimize the racial aspects of the plan and ensure that long-term goals are in line with the Court's decision.
What are some other ways that school districts can ensure diverse classrooms without using race as a deciding factor?
The Court encourages school districts to use "serious, good faith considerations of workable race-neutral alternatives," but it offers no explanation for what those are. NSBA recommends that school districts consider a race-neutral plan, using commissioned studies and research to inform the process, and involve the community.
By tying diversity to educational goals, school districts ensure that the maximum educational benefit is achieved, rather than simply ensuring a demographic solution.
How will this ruling affect the reporting of annual yearly progress in racial subgroups that is required under the No Child Left Behind act?
NCLB's reporting requirements are not the same type of classification as addressed in the Court's decision. The reporting is used to ensure that all groups within a district are making academic gains, and the law does not require that schools make student assignment decisions based on racial or ethnic classifications.
Are school districts finding success using race-neutral strategies to ensure diversity?
There are a number of examples that NSBA has followed closely, which are detailed in our FAQ document for school boards, An Educated Guess: Initial Guidance on Diversity in Public Schools after PICS v. Seattle School District. The guide offers practical policy applications for school districts. An Educated Guess presents five different examples of successful school district programs that are already in place. They include making student assignments based on socio-economic status, a lottery, attendance zones, parental choice, and a hybrid of parental choice, lottery, and socio-economic status.
In addition to An Educated Guess, NSBA also has published, in conjunction with the College Board, Not Black and White: Making Sense of the United States Supreme Court Decisions Regarding Race-Conscious Student Assignment Plans, which can be a valuable resource for school districts as they seek to improve educational outcomes and ensure equal opportunity by enhancing school-based student diversity. Both are available on the NSBA Web site at www.NSBA.org.
Anne L. Bryant is executive director of the National School Boards Association.
A new study reveals that a common underlying mechanism is shared by a group of previously unrelated disorders which all cause complex defects in brain development and function. Rett syndrome (RTT), Cornelia de Lange syndrome (CdLS) and Alpha-Thalassemia mental Retardation, X-linked syndrome (ATR-X) have each been linked with distinct abnormalities in chromatin, the spools of proteins and DNA that make up chromosomes and control how genetic information is read in a cell. Now, research, published by Cell Press in the February 16th issue of the journal Developmental Cell, helps to explain why these different chromatin abnormalities all interfere with proper gene expression patterns necessary for normal development and mature brain function.
"Although clearly distinct from one another, human developmental disorders that are linked with chromatin dysfunction often share similar cognitive clinical features," explains senior study author, Dr. Nathalie Bérubé from the University of Western Ontario. "Whether the overlapping cognitive symptoms are due to underlying interlinked molecular mechanisms is still poorly understood." Her work now demonstrates that chromatin proteins defective in RTT, CdLS, and ATR-X syndromes are all associated with each other - and are required for one another's function - at certain "imprinted genes" in the developing mouse brain. Imprinted genes are a relatively rare type of gene that carries different information depending on whether it is inherited from the mother or the father. The results support the conclusion that ATRX (the chromatin protein that is defective in ATR-X syndrome) and its binding partners regulate expression of imprinted genes, and likely other genes required for normal brain development, by controlling chromatin structure.
"Our findings provide the first glimpse of the cooperation between ATRX and multiple other disease proteins in the regulation of common gene targets, perhaps explaining similarities between the associated human syndromes," says Dr. Bérubé. "The failure to properly suppress genes that are essential during embryonic development, but potentially detrimental in the mature brain, might contribute to cognitive deficiencies characteristic of RTT, CdLS and ATR-X syndromes. Further studies are needed to gain a better understanding of the specific role of these chromatin proteins and the molecular pathogenesis of the associated human disorders."
The researchers include Kristin D. Kernohan, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada; Yan Jiang, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada; Deanna C. Tremblay, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada; Anne C. Bonvissuto, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada; James H. Eubanks, Toronto Western Research Institute, Toronto, Canada; Mellissa R.W. Mann, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada; and Nathalie G. Bérubé, University of Western Ontario, Victoria Research Laboratories, London, Ontario, Canada.
What is the single biggest misconception people have about renewable energy in the U.S.? And why do you think they have this misconception?
CHRISTINE TODD WHITMAN: The biggest misconception is that renewable energy sources are always working. Solar power only works when the sun is shining and wind-generated power only works when the wind is blowing; it’s not a constant energy source.
Christine Todd Whitman was governor of New Jersey from 1994 to 2001 and administrator of the Environmental Protection Agency from 2001 to 2003. She is currently president of Whitman Strategy Group, a consulting firm that specializes in helping companies find solutions to environmental challenges.
Read the related article.
Read the latest Energy Report.
Rates of HIV Infections, Deaths Declining Across Africa
HIV infections are down 50% in 25 countries around the world, including many in sub-Saharan Africa, according to an annual report from UNAIDS.
A week before World AIDS Day, the report by the Joint United Nations Programme on HIV/AIDS (UNAIDS) shows that rates of infection have been cut by 73% in Malawi since 2001, 71% in Botswana, and 68% in Namibia. Additionally, AIDS-related deaths in the region have fallen by a third, while the number of people receiving antiretroviral treatment has increased by 59%.
"The pace of progress is quickening--what used to take a decade is now being achieved in 24 months," Michel Sidibé, executive director of UNAIDS, said in a statement Tuesday. "We are scaling up faster and smarter than ever before. It is proof that with political will and follow-through we can reach our shared goals by 2015."
In just two years, South Africa has increased its rate of HIV treatment by 75%, ensuring treatment for 1.7 million people.
Half of the global reductions in new HIV infections since 2010 have been among newborn children.
“It is becoming evident that achieving zero new HIV infections in children is possible,” Sidibé said. “I am excited that far fewer babies are being born with HIV. We are moving from despair to hope.”
UNAIDS estimates that there are still 34 million people around the world living with HIV, and half of them do not know their status.
The First Time Tech Ruined the Music Business
The music business was in turmoil at the turn of the century. Technological innovation had made songs much easier to copy, and established artists foresaw their sales plummeting.
The companies that had once dominated the industry were rapidly losing ground to upstarts who produced new devices for playing music. The established companies urged Congress to tighten up the law to prohibit copying, while the innovators argued that any such change would only harm consumers.
This sounds like a story about how Apple Inc. (AAPL) disrupted the music business in the past decade, but it actually describes the years around 1900. The technological change was the invention of sound recording.
Before sound could be stored and reproduced, the music business was built on the sale of sheet music. Stephen Foster sold more than 130,000 copies of “Old Folks at Home” in the early 1850s. “On the Banks of the Wabash,” a hit in the 1890s for the composer Paul Dresser, sold more than half a million. To the extent there was much income in composing, it came from publishing sheet music.
Then everything changed. Thomas Edison built the first phonograph in 1877. By the 1890s, phonograph makers and companies selling cylinders and disks containing recorded music were proliferating. Record sales skyrocketed, from about 500,000 in 1897 to 2.8 million only two years later, and they kept rising thereafter. The player piano, capable of reproducing music from rolls of perforated paper, came on the market in the first decade of the 20th century. Millions of them were sold between 1900 and 1930.
Sound had once been a fleeting sensation, but now it was a commodity capable of being sold -- it could become property. But who owned it?
The music publishers, and the composers they represented, insisted they did. “I myself and every other popular composer are victims of a serious infringement on our clear moral rights,” declared John Philip Sousa, perhaps the most commercially successful composer of the era. The new record companies were hiring musicians to record Sousa’s marches, and they were selling the cylinders and disks in huge numbers, but they weren’t paying Sousa anything.
All the profits went to the record companies and to popular performers like Enrico Caruso. “They pay Mr. Caruso $3,000 for each song,” complained Victor Herbert, another well-known composer. “He might be singing Mr. Sousa’s song, or my song, and the composer would not receive a cent.” If customers bought records and piano rolls instead of sheet music, the composers worried, their primary source of income would dry up.
On the other side of the debate were the rapidly growing manufacturers of player pianos, piano rolls, phonographs and records. Allowing composers to control the use of songs, they argued, would destroy this new industry just as it was getting off the ground. Records and piano rolls wouldn’t cut into sheet music sales, they contended. Instead, recordings might actually promote sheet-music sales, by serving as a form of free advertising: Customers would be eager to play for themselves the songs they heard on record.
Allowing composers and music publishers to exact a toll would only make recordings more expensive, the new companies added. The purpose of the copyright law was to encourage the production of creative work, but the composers were asking for a reform that would choke off the supply of music, one that would benefit themselves at the expense of their listeners.
And why should the law be especially solicitous of composers? They weren’t the only creators of recorded music, the industry pointed out. There were several steps between printed notes and a record, each of which required just as much talent and hard work, and each of which was just as essential to the finished product. “It takes the genius of a Sousa to play into the horn,” argued a lawyer for the American Graphophone Co. “It takes the voice of the magnificent singer to sing into the horn; and it takes the skill of the mechanician who is operating the graphophone.”
Caught between these two powerful forces, Congress did nothing but hold hearings for several years. The impasse was eventually broken by a compromise enacted as part of the Copyright Act of 1909. The composers were granted the right to forbid recordings of their compositions, but once a composer permitted one recording to be made, anyone else could make one, upon payment to the composer of a royalty set at 2 cents per copy. Composers thus received a share of the revenue, but probably not as much as the most famous ones could have commanded had they been able to negotiate a price. This arrangement, with some tinkering, has been with us ever since.
The larger story is the role of technological change in the creation of property rights. When sound could be stored and reproduced it could be sold in new ways. For the first time, there were gains to be had from establishing a system of property rights in sound. So we established one. But whose rights would they be? And where would those boundaries be located?
The answers to these questions would determine how the spoils of technological advance were divided. And they would help define the relationship between creators, consumers and commercial intermediaries. They’ve never been fully resolved.
(Stuart Banner is the Norman Abrams Professor of Law at the University of California, Los Angeles, and the author of “American Property: A History of How, Why, and What We Own.” The opinions expressed are his own.)
Read more Echoes columns online.
To contact the writer of this post: Stuart Banner at email@example.com.
To contact the editor responsible for this post: Timothy Lavin at firstname.lastname@example.org.
by Staff Writers
Santiago (AFP) Nov 30, 2012
Astronomers are reporting a find that challenges traditional theories as to how rocky planets -- such as Earth -- are formed.
Besides Earth, our solar system has three other rocky planets: Mercury, Venus and Mars. They have a solid surface and core of heavy metals, and differ from planets that are large spinning bodies of gas, like Jupiter or Saturn.
The new findings suggest rocky planets may be even more common in the universe than previously thought. The research was presented Friday in the Astrophysical Journal Letters.
The astronomers used a cutting-edge telescope called ALMA, on a mountaintop 5,000 meters (16,400 feet) high in the remote desert of northern Chile.
They peered out into space at a brown dwarf named ISO-Oph 102. A brown dwarf is an object that is like a star but too small to shine as brightly.
Traditional theory holds that rocky planets form through the random collision of microscopic particles in the disc of material that surrounds a star. The particles, like fine soot, stick together and grow.
Scientists thought the outer reaches of discs around brown dwarfs were different. They believed the grains there could not cling together because the discs were too sparse, and that particles would be moving too fast to stick together after colliding.
But lo and behold, in the disc around ISO-Oph 102, the astronomers found things that, for them at least, were big -- millimeter-sized grains.
"Solid grains of that size shouldn't be able to form in the cold outer regions of a disc around a brown dwarf, but it appears that they do," said Luca Ricci of the California Institute of Technology, who led a team of astronomers based in the United States, Europe and Chile.
"We can't be sure if a whole rocky planet could develop there, or already has, but we're seeing the first steps. So we're going to have to change our assumptions about conditions required for solids to grow."