b.0364 Bourgogne, France
Facts and Events
Not much is known of Theudemeres. According to Gregory of Tours, a war broke out between the Franks and the Romans at some unknown time after the fall of the usurping Emperor Jovinus (411-413), who had been supported by the Franks. Around 422, a Roman army entered Gaul. King Theudemeres and his mother Ascyla were executed by the sword. Theudemeres' reign is supposed to have been before that of King Chlodio, and the Chronicle of Fredegar makes Chlodio his son.
Theudemeres must have been a cousin of Arbogastes.
Commentary on the Bible, by Adam Clarke, at sacred-texts.com
Jehoahaz made king on the death of his father Josiah, and reigns only three months, Ch2 36:1, Ch2 36:2. He is dethroned by the king of Egypt, and Jehoiakim his brother made king in his stead, who reigns wickedly eleven years, and is dethroned and led captive to Babylon by Nebuchadnezzar, Ch2 36:3-8. Jehoiachin is made king in his stead, and reigns wickedly three months and ten days, and is also led captive to Babylon, Ch2 36:9, Ch2 36:10. Zedekiah begins to reign, and reigns wickedly eleven years, Ch2 36:11, Ch2 36:12. He rebels against Nebuchadnezzar, and he and his people cast all the fear of God behind their backs; the wrath of God comes upon them to the uttermost; their temple is destroyed; and the whole nation is subjugated, and led into captivity, Ch2 36:13-21. Cyrus, king of Persia, makes a proclamation to rebuild the temple of the Lord, Ch2 36:22, Ch2 36:23.
2 Chronicles 36:1
Took Jehoahaz - It seems that after Necho had discomfited Josiah, he proceeded immediately against Charchemish, and in the interim, Josiah dying of his wounds, the people made his son king.
2 Chronicles 36:3
The king of Egypt put him down - He now considered Judah to be conquered and tributary to him; and because the people had set up Jehoahaz without his consent, he dethroned him, and put his brother in his place, perhaps for no other reason but to show his supremacy. For other particulars, see the notes on Kg2 23:31-35 (note).
2 Chronicles 36:6
Came up Nebuchadnezzar - See the notes on Kg2 24:1.
Archbishop Usher believes that Jehoiakim remained three years after this tributary to the Chaldeans, and that it is from this period that the seventy years' captivity, predicted by Jeremiah, is to be reckoned.
2 Chronicles 36:9
Jehoiachin was eight - See on Kg2 24:6-15 (note).
2 Chronicles 36:10
Made Zedekiah - king - His name was at first Mattaniah, but the king of Babylon changed it to Zedekiah. See Kg2 24:17 (note), and the notes there.
2 Chronicles 36:12
Did that which was evil - Was there ever such a set of weak, infatuated men as the Jewish kings in general? They had the fullest evidence that they were only deputies to God Almighty, and that they could not expect to retain the throne any longer than they were faithful to their Lord; and yet with all this conviction they lived wickedly, and endeavored to establish idolatry in the place of the worship of their Maker! After bearing with them long, the Divine mercy gave them up, as their case was utterly hopeless. They sinned till there was no remedy.
2 Chronicles 36:19
They burnt the house of God - Here was an end to the temple; the most superb and costly edifice ever erected by man.
Brake down the wall of Jerusalem - So it ceased to be a fortified city.
Burnt all the palaces - So it was no longer a dwelling-place for kings or great men.
Destroyed all the goodly vessels - Beat up all the silver and gold into masses, keeping only a few of the finest in their own shape. See Ch2 36:18.
2 Chronicles 36:21
To fulfill the word of the Lord - See Jer 25:9, Jer 25:12; Jer 26:6, Jer 26:7; Jer 29:12. For the miserable death of Zedekiah, see Kg2 25:4, etc.
2 Chronicles 36:22
Now in the first year of Cyrus - This and the following verse are supposed to have been written by mistake from the book of Ezra, which begins in the same way. The book of the Chronicles, properly speaking, does close with the twenty-first verse, as then the Babylonish captivity commences, and these two verses speak of the transactions of a period seventy years after. This was in the first year of the reign of Cyrus over the empire of the East which is reckoned to be A.M. 3468. But he was king of Persia from the year 3444 or 3445. See Calmet and Usher.
2 Chronicles 36:23
The Lord his God be with him - "Let the Word of the Lord be his helper, and let him go up." - Targum. See the notes on the beginning of Ezra.
Thus ends the history of a people the most fickle, the most ungrateful, and perhaps on the whole the most sinful, that ever existed on the face of the earth. But what a display does all this give of the power, justice, mercy, and long-suffering of the Lord! There was no people like this people, and no God like their God.
3D Application at the Center for Advanced Spatial Technologies, University of Arkansas, Fayetteville

The ability to quickly and accurately map a wide range of surfaces with high levels of precision and accuracy creates new opportunities for surveyors, researchers and engineers who perform as-built surveys, monitor deformations in large structures and reverse-engineer mechanical structures. When integrated with traditional three-dimensional (3D) measuring technologies such as photogrammetry, laser scanning systems enable rapid and effective visualizations of 3D structures.
The ability to analyze the 3D surface elevation and characteristics of large-scale (small area) features is also a critical part of university research in many disciplines. Until recently, however, acquiring these data could only be accomplished with time-consuming surveying or close-range photogrammetric methods. The development of highly accurate terrestrial laser scanning systems such as the Optech ILRIS 3D scanner is revolutionizing research in fields from A (archaeology) to Z (zoology). Researchers affiliated with the Center for Advanced Spatial Technologies (CAST) at the University of Arkansas will be using an Optech system to map detailed rattlesnake habitats, to model the water flow across farm fields, to inventory archaeological mounds, to create architectural records of historic buildings and to provide detailed infrastructure data to support modeling of airborne aerosol distributions in urban environments to predict the effects of possible terrorist attacks.
"In minutes the Optech ILRIS system can produce millimeter-accurate point cloud measurements of surfaces at distances that range from 3 meters to more than 500 meters," says Jack Cothren, project manager for the scanning research at the university. "This provides researchers with a totally new set of data for many different studies."
The ability of the ILRIS system to both measure large areas and still maintain highly accurate details was important to Angie Smith, a graduate student at CAST. Smith used the system to scan a large cliff face that held a small rock shelter (a shallow cave) that had prehistoric rock art on its walls. "I was able to scan the entire surface of the cliff at centimeter level spacing in five scans, each taking less than 15 minutes. I then scanned the area with the rock art at 0.5 mm spacing at the rock surface. This revealed subtle surface differences created by the prehistoric artists, especially after color digital photographs were draped over the surface." The data was merged in PolyWorks, processed and then used as the basis for complex animation and visualization studies. "I cannot even imagine beginning to do the project with traditional survey techniques," Smith says. "Field data was acquired by the Optech [scanner] in less than one full day and the processing tools in PolyWorks provided automated tools to splice together the different scans, at different resolutions, into one product that could be easily moved into our visualization software, SoftImage."
"The Optech [scanner] is one piece of a series of instruments and software," Cothren says. "While fast and accurate, the data cloud from the scanner provides only relative coordinates. To link these to geographic systems, CAST also acquired a number of Trimble surveying instruments. We can either locate the Optech [scanner] with the Trimble 5700/5800 GPS, or locate targets within the scanning field that are tied back to the proper local reference system with our Trimble 5600." According to Cothren, "determining the scanner location with these instruments allows us to output the data from PolyWorks with precise geographic coordinates."
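The workflow Cothren describes (tying the scanner's relative point cloud to a geographic reference system through surveyed targets) reduces, for a leveled scanner, to a least-squares rigid transform: a rotation about the vertical axis plus a horizontal shift fitted to the control points. The sketch below illustrates that general idea only; it is not the PolyWorks or Trimble routine, and the target coordinates are invented.

```python
import math

def georeference_2d(scan_pts, control_pts):
    """Least-squares rigid transform (rotation about the vertical axis
    plus translation) taking leveled-scanner (x, y) coordinates onto
    surveyed control coordinates. The two lists must correspond pairwise."""
    n = len(scan_pts)
    ax = sum(p[0] for p in scan_pts) / n
    ay = sum(p[1] for p in scan_pts) / n
    bx = sum(p[0] for p in control_pts) / n
    by = sum(p[1] for p in control_pts) / n
    # Accumulate cross- and dot-products of the centred coordinates;
    # the optimal rotation angle is atan2(cross, dot).
    s_cross = s_dot = 0.0
    for (x, y), (u, v) in zip(scan_pts, control_pts):
        x, y, u, v = x - ax, y - ay, u - bx, v - by
        s_cross += x * v - y * u
        s_dot += x * u + y * v
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = bx - (c * ax - s * ay)
    ty = by - (s * ax + c * ay)
    def apply(p):
        return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
    return apply

# Three hypothetical targets as seen by the scanner...
scan = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0)]
# ...and the same targets in (made-up) grid coordinates: the scan is
# rotated 90 degrees and shifted relative to the grid.
ctrl = [(100.0, 200.0), (100.0, 210.0), (95.0, 200.0)]
to_world = georeference_2d(scan, ctrl)
print(to_world((10.0, 5.0)))  # approximately (95.0, 210.0)
```

With more than the minimum number of well-spread targets, the fit averages out measurement noise, which is one reason locating several targets with a total station pays off.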
Acknowledgement: The CAST research was made possible with support from the National Science Foundation Award BCS-0321286.
Manufacturer Information: Optech Incorporated, www.optech.on.ca
InnovMetric Software, www.innovmetric.com
Most scientists I know are map-geeks. What’s not to love about a 2-dimensional abstraction that captures gobs of information in an economical way? For those of us who love biogeography–the study of the distribution of life across the planet–how one renders the globe is vital to understanding where and why the diversity is. And the Mercator projection, the view of the world one sees from most North American classrooms, leaves, let us say, a little bit to be desired in that department. In the Mercator, the area of the continents around the equator–where most of the diversity of life can be found–is shrunk relative to the poles. The story goes that Mercator, a German, devised a map that made Germany look as big as possible (but, in a karmic backfire, made Russia look even bigger, and let’s not even get started about Greenland).
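The shrinkage is easy to quantify: the Mercator's linear scale factor at latitude φ is sec φ, so apparent areas are inflated by sec²φ. A quick back-of-the-envelope sketch (mine, not anything official):

```python
import math

def mercator_area_inflation(lat_degrees):
    """How many times too large an area looks on a Mercator map at a
    given latitude: linear scale is sec(lat), so area scales as sec^2."""
    return 1.0 / math.cos(math.radians(lat_degrees)) ** 2

print(round(mercator_area_inflation(0), 2))   # equator: no distortion
print(round(mercator_area_inflation(72), 1))  # mid-Greenland: ~10x too big
```

Which is why Greenland, at high latitude, looks comparable to Africa despite being a fraction of its size.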
So enter the Peirce Quincuncial, where the equator is a square. Sheer beauty.
Big tip of the hat to Victoria Johnson at The Awl.
Water is a polar molecule. This means it has a positive side and a negative side. This polarity allows it to be attracted to differently charged particles in the blood. Salts and other polar molecules should dissolve in the blood because blood is mostly water. Conversely, fats and other non-polar substances will not dissolve in the blood. Non-polar molecules get carried through the blood by attaching themselves to carrier proteins.
It’s true that even drab birds are interesting. Sparrows, flycatchers, and the like have fascinating behavior and life histories, and they reward careful observation. But let’s face it: bright colors give a bird extra appeal. So it’s no surprise that one of the most familiar and popular of our songbirds is the northern cardinal.
The bright red males of this species are unmistakable (even the robust beak is orange). They’re equipped with a black mask on the face and a dramatic crest on their heads, which stands erect when the bird is excited or agitated. Female cardinals are a bit less striking; much of the male’s red is replaced by gray or grayish green. But even females are eye-catching and a cinch to identify.
While these birds are firmly established here — indeed, quite abundant in their favorite habitats — it’s noteworthy that the cardinal is a relatively recent addition to the bird life of the Vineyard, and that of southern New England generally. Older ornithological accounts consider the cardinal a bird of the South and especially the Southeast (the range of the species extends west into Arizona). But by the early 20th century, observers were noting a gradual northward expansion in the cardinal’s distribution, first up the Mississippi River watershed, then north along the East Coast.
By the middle of the 20th century, cardinals were turning up quite regularly in the Bay State, and the state’s first confirmed nesting record for this bird occurred around 1960. Today, cardinals range north throughout New England (though, to the north, they’re confined to lower elevations), and up into the Maritime Provinces. Especially because it frequents settled areas, the cardinal is now so familiar that most people can’t imagine a time when this bird was absent.
The expansion of cardinals north to the Vineyard’s latitude was a slow process, largely because the species is essentially non-migratory. Many cardinals spend their entire lives within a short distance of where they hatched, and records of banded cardinals moving more than a hundred miles or so are rare enough to be notable. Yet there is a tendency for cardinals (young ones especially) to disperse from their breeding territories in late summer and fall, and, oddly, dispersing birds tend to move north rather than south. A number of factors — a warming climate, gradual regrowth of dense cover on a landscape of abandoned farms, and especially an increase in the human habit of putting seed out for birds — gradually increased the odds that birds on the northern fringe of the cardinal’s range could survive the winter. And so, slowly but steadily, the species marched north.
Especially in the northern part of its range, the cardinal remains very much a bird of towns and neighborhoods; on the Vineyard, the species is scarce (though not absent) in expanses of natural habitat. But in settled areas, cardinals are numerous, and their habit of forming small flocks in winter makes them seem even more abundant. Cardinals are much less gregarious during the nesting season; in fact, they’re aggressively territorial, mating for life and then defending the area they’ve chosen against all comers. But the odds are good that a pair of cardinals nests within a short distance of your house.
Highly vocal except for a period in the late fall and early winter, cardinals are obvious even if their bright plumage isn’t in sight. The song (which, oddly for a songbird, is given by both males and females) consists of loud, clear, whistled notes: “Peter! Peter!” or “Cheer! Cheer! Cheer!” Like clockwork, the species starts singing around mid-February on the Island, and each day, it is one of the earliest birds to get started in the morning. Males often choose tree tops or other prominent spots to sing from, making them hard to miss.
For all their vigor and conspicuousness, though, cardinals are remarkably secretive about their nests, which are generally built in shrubs or vines within a few feet of the ground. Most of the species that nest near my house reveal at least the approximate location of their nests by making frequent trips in to feed their young. But cardinals approach their nests with great care, and while they always raise young in our yard, I never have a clear idea of where, exactly, their nest might be. This habit must surely be a secret of their success in suburbanized areas, since it helps hide eggs and young from roaming house cats.
A cardinal’s massive bill is adapted for crushing seeds, and a flock of these birds can empty a bird feeder of sunflower seeds in record time. But especially when feeding young, cardinals eat large quantities of insects as well, especially beetles. This bird, then, is a useful one to have around, helping keep insect populations in check.
Colorful, talkative, and comfortable around human activity, this bird surely deserves its popularity. We’re fortunate to have benefited from the success of this species.
Classical Chinese/Lesson 1
- 子曰: "The master says" or "The master said".
- 子: (zǐ) honorific used to address a teacher or master. 子 is a respectful form of address to a man, here used to address 孔子 (Confucius). Other similar uses include 孟子 (mèng zǐ) for Mencius and 孫子 (sūn zǐ) for Sun Tzu. In this case, it is assumed by the author that the learned reader will know who spoke the following quote, so it is not necessary to give the exact identity of the speaker.
- 曰: (yuē) verb meaning "to say". 曰 is one of the words most frequently used for "to say" in Classical Chinese, but it is not the only one.
- 學而時習之: Learn and practice often [what you have learned]
- 不亦說乎: Isn't it pleasant?
- 有朋自遠方來: Friends have come from distant places. (Or: A friend has come from a distant place.)
- 不亦樂乎: Isn't it enjoyable?
- 人不知而不慍: [When] other people don't understand [him], but [he] is not angry
- 不亦君子乎: Isn't that (also) how a gentleman should act?
This grammar section reveals that Classical Chinese is in many respects close to English:
- The subject precedes the verb: 朋來 (péng lái) 'friend(s) come'
- The object comes after the verb: 習之 (xí zhī) 'practice it'
- Adjectives used attributively precede nouns: 遠方: (yuǎn fāng) distant place
However, there are notable differences:
- Chinese does not inflect for tense or number. In this example, 朋 can be read as either 'friend' or 'friends', and 來 as 'come', 'came', or 'has come'.
- Questions are formed by adding a marker at the end (usually it's 乎 (hū), but other markers also exist)
- No linking verb is used with adjectives: 說乎 (yuè hū) 'is it pleasant?'; 遠方 (yuǎn fāng) 'distant place'
If you looked up words in the dictionary, you may have noticed that the part of speech marked there sometimes doesn't match the word's use in the text:
- 君子 (jūn zǐ) is given as the noun ('gentleman'), not as an adjective ('gentlemanly', 'like a gentleman should act')
This is because of a process called conversion: one part of speech can become another one. This process can also occur in English: "I love her" (a verb) versus "my love" (a noun).
The Colville Tribes' forebears subsisted along the eastern half of the Columbia River's tributaries. They communicated with similar Salishan languages and were nomadic until the mid-19th century, when fundamental changes to their way of life took hold.
Before the advent of Europeans in the early 19th century, the Colville tribes differentiated among themselves according to traditional river valleys, language, and villages. During the cold months, families stayed warm in communal mat lodges and sturdy pit dwellings. During warmer months they camped in mat or hide tents. Through the seasons, families trekked to promising locales to harvest salmon, their dietary mainstay; gather berries and roots, and hunt game. They believed foods possessed spiritual power; thanksgiving feasts were held in their honor. Wintertime dances and song served to acknowledge the spirits that sustained the land and water that yielded such generous gifts. To promote social cohesion, each band had a headman who consulted with a group of advisors about everyday concerns.
The first change to have an impact on the traditional lifeways of the aborigines was the advent of the horse in the middle of the 18th century, traceable to 15th century European explorers on the other side of the continent. The animal increased their mobility and range. The next big change became permanent in the first quarter of the 19th century with the beginning of trade with Europeans. British and American fur traders erected several posts in the region. They bartered with the Indians for coveted pelts in exchange for new technology and other attractive goods. For numerous natives, exchanging furs and other Indian items for the white man's goods and services became a permanent alternative to traditional ways of subsistence.
The middle of the 1800s ushered in a great and relentless wave of westward pioneers of various sorts, along such famous routes as the Oregon Trail. Their land-hungry encroachment would wreak a decisive change in native lifeways. The outsiders also obliviously introduced diseases against which the natives had no natural immunity. The river drainages became scenes of a drastic withering of indigenous populations.
In 1855, agents of the American government induced numerous Washington tribes to sign land-ceding treaties in exchange for smaller parcels reserved for them, but the forebears of the modern Colville tribes did not become signatories and move onto a reservation. Nevertheless, in 1872 President Ulysses S. Grant established the Colville Indian Reservation by Executive Order. The tribes were compelled to subsist on a parcel in the Washington Territory. The famed Chief Joseph and the remnant of his Wallowa Nez Percé band joined the original tribes on the Colville Reservation in 1885.*
In 1887, the Congress passed the General Allotment Act that granted small parcels of acreage to Indian individuals, including some of the Colville. Allotments were created with tribal lands, including the Colville Reservation.
Over the next several decades, various societal and governmental pressures would chip away at the size of the Colville Reservation. In the late 19th century, encroachment by gold miners and other prospectors began to swallow up Colville lands. In 1892, a huge segment of northern acreage was removed. In the 1930s, dams along the Columbia and increased American settlement further compromised Colville jurisdiction.
In 1934, Congress commenced to close down the government's allotment policy that began in 1887. A year later, the Secretary of the Interior signed a directive to terminate the withdrawal status of Colville reservation lands.
On February 26, 1938, the American government endorsed the Confederated Tribes of the Colville Reservation’s new constitution and bylaws. From this document, a governing unit and four voting districts were established.
In 1995, each member of Washington’s Colville Confederated Tribes received a federal check in the amount of $5,989 to compensate for acreage confiscated to construct the Grand Coulee Dam in 1933.
*Chief Joseph and his band were supposed to rejoin the other Nez Percé bands, but local whites, leery of the notorious chief, prevented it.
Citizen Science and Wild Birds
Citizen science is a collaboration between scientists and volunteers for the purpose of collecting and analyzing data. Researchers have been depending on citizen scientists for data on birds for many years, and therefore, these volunteers are very important to the scientific community.
Birding on a Mission: Three Projects for Citizen Scientists
While outdoorspeople have enjoyed a variety of hobbies for the sheer pleasure of them, few outdoor pastimes have as much potential value to scientific research as birding does. For those who know about citizen science, birding is more than just a passion—it’s a step towards a greater understanding of environmental developments, a better awareness of bird populations and behaviors, and much more.
Despite the heady purposes of this research, the footwork is hardly more than what birdwatchers do for pleasure. Here are three programs that are blazing new scientific ground with the help of citizen scientists. Through these, birders from around the world can contribute to one of the world’s biggest research communities.
A joint project between The Cornell Lab and Bird Studies Canada that began in Canada during the 1970s, Project FeederWatch is a survey that takes place every winter in North America. As the name suggests, individuals who join the program simply watch their bird feeders from November through the beginning of April and record their sightings. Birders of any skill level can participate, as often or as infrequently as they please, provided that each submitted count covers two consecutive days.
An important caveat about this program is that birders are only meant to count birds that appear specifically for something they provided, such as seed feeders, plantings, and waterers. This makes FeederWatch an ideal annual tradition for birders who enjoy their watching from home.
To join up, participants simply need to apply on the FeederWatch site and receive the research kit that they provide. There is a participation fee of $15, which covers the kit and information that participants receive. In addition, FeederWatchers are automatically subscribed to the Lab of Ornithology’s news publication. With over 20,000 participants in 2013, this program is making a huge impact in the birding world.
Another site on Cornell’s Citizen Scientist Network, NestWatch, enables participants to locate and monitor nests throughout the year. This program is a favorite among more dedicated birding hobbyists, as it educates participants on how to locate bird nests and requires a more regular observation schedule. While NestWatch is usually preferred for birders who prefer to venture outside to find nests, stay-at-home birders can still enjoy participation in this program by creating a nest box. Since this program involves looking for nests in particular, the way participants help differs hugely from FeederWatch.
Instead of merely counting birds to discover population trends, watchers are meant to record metrics that help researchers understand birds’ reproductive biology such as:
- When nesting takes place
- How many eggs are laid and hatched
- The number of hatchlings who survive
NestWatch is a perfect option for those who want to engage in research year-round as opposed to short-term annual projects. And instead of merely counting birds, the observations that birders take in this project provide a greater insight into their behaviors. For birders who want to step up their dedication, NestWatch is an educational and engaging program.
Great Backyard Bird Count
With the support of Cornell and Audubon, the Great Backyard Bird Count lives up to its name as one of the biggest annual events for birders in the world. Although only a relatively brief 4-day event in February, this program allows participants from around the world to count birds and submit their findings to create a snapshot of birdlife. While enthusiasts are encouraged to record as much as they can, simply taking fifteen minutes on one day to record and submit can make a difference for researchers.
In addition to recording checklists of birds that have been positively identified and tallied, the GBBC is also a unique opportunity for photographers. Participants can enter a photo contest, in which photography is judged according to the artistry and technical skill of each shot. This includes categories such as composition, most interesting behavior, group shots, and habitats. Winners are eligible for prizes and exposure on their website.
Given that this event can take place anywhere in the world, the GBBC is a favorite among birders with a passion for camping and family outings. And although the event only takes place once a year, more avid birdwatchers can still continue to record and submit their checklists on Cornell’s eBird site anytime they get the notion to watch.
There are a lot of ways that birders can participate in the scientific community. These are the 3 largest projects, but there are many more out there, hosted by specific chapters of Audubon, local wildlife refuge centers, and university ornithology programs.
Posted by Pauline Lejeune on December 10, 2009
On September 27, German voters elected the members of their 17th Bundestag, the lower house of their federal parliament. As expected, they handed conservative Chancellor Angela Merkel a second term—the Christian Democrats (CDU/CSU) won 33.8% of the vote—and a chance to create a center-right government with the economically libertarian Free Democratic Party (FDP), which scored its best result ever with 14.6%.
This parliamentary election may be part of a lasting realignment in German politics. The two main parties performed miserably, winning their fewest seats in the Bundesrepublik’s history. The abysmal showing was partly because the “grand coalition” had blurred the parties’ identities. The Social Democrats (SPD) suffered a huge loss with only 23% of the votes, an 11.2% drop-off from the last election. Despite Merkel’s success, however, her CDU/CSU alliance won a record low of 33.8% of the party vote. In contrast, the three smaller contending parties (the FDP, the Greens and the Left Party) did very well, gathering between 10.7% and 14.6% of the votes. More surprisingly, from an American point of view, all of them won representation in the Bundestag that almost perfectly mirrors their support within the electorate, despite the fact that the Greens and Free Democrats collectively won only one of the single-member seats up for election while earning nearly a fifth of all votes cast.
From the perspective of reforming the U.S. electoral system to ensure both fair and proportional representation, the German election is especially remarkable. The highly representative outcome of the German election is the product of its mixed-member proportional system, in which each voter casts two ballots: the first directly elects a Bundestag member in the voter’s local single-member district, and the second expresses a preference for a party at an “at-large” regional level (the Land). About half of the seats are filled from U.S.-style single-member districts and half from regional party lists, and the second ballot determines each party’s overall share of seats in proportion to its Land-wide vote, with list seats added in to correct the typical distortions of the district elections.
This system gave voters the opportunity to elect one candidate from a party on their single-winner ballot and to choose another party in their second at-large vote. For the single-member district votes, voters mainly elected candidates from the CDU/CSU and the SPD because of the winner-take-all rules governing those elections. They are more likely to express their more sincere political preferences with their second votes, enabling smaller parties to secure seats through proportional representation.
The CDU/CSU in fact won more single-member district seats than its share of second votes would allow. As a result, 24 extra seats were created, enlarging the Bundestag from 598 to 622 seats. That fact, combined with the fact that some votes went to parties below the victory threshold, resulted in Merkel’s Christian Democratic alliance and its partner, the FDP, securing a clear majority in the Bundestag (53.37% of the seats) with 48.4% of the nationwide vote.
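The mechanics behind those extra seats can be sketched in a few lines of code. The vote shares and district counts below are illustrative rather than the actual 2009 returns, and the largest-remainder rounding is a simplification of the seat-allocation formulas actually used:

```python
# Sketch of how overhang seats arise in a mixed-member proportional
# system: a party keeps every district seat it wins, even when that
# exceeds its proportional entitlement from the party-list vote, and
# the chamber grows by the difference. Vote shares and district counts
# are illustrative, NOT the real 2009 results.

BASE_SEATS = 598  # nominal size of the Bundestag

def allocate(parties):
    """parties maps name -> (list-vote share, district seats won).
    Returns final seat totals, including any overhang seats."""
    entitled = {p: share * BASE_SEATS for p, (share, _) in parties.items()}
    floors = {p: int(e) for p, e in entitled.items()}
    leftover = BASE_SEATS - sum(floors.values())
    # Hand the remaining seats to the largest fractional remainders.
    for p in sorted(entitled, key=lambda q: entitled[q] - floors[q],
                    reverse=True)[:leftover]:
        floors[p] += 1
    # A party never returns district seats it won directly.
    return {p: max(floors[p], districts)
            for p, (_, districts) in parties.items()}

votes = {"CDU/CSU": (0.384, 240), "SPD": (0.262, 64), "FDP": (0.163, 0),
         "Left": (0.135, 16), "Greens": (0.056, 1)}
seats = allocate(votes)
overhang = sum(seats.values()) - BASE_SEATS
print(f"Chamber size: {sum(seats.values())} ({overhang} overhang seats)")
```

With these made-up inputs the chamber grows past 598 seats because the largest party's district wins exceed its list entitlement — exactly the mechanism that produced the 24 extra seats in the real election.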
The five percent threshold for parties allows representation for smaller parties, such as the FDP, the Greens and the Left Party, while excluding parties with less-than-substantial support (including the new Pirate Party that won 2% of votes overall while winning more than 10% of votes from young men). For example, the FDP received 9.4% of the single-member district ballots without being able to win any single-winner seats, but the proportional representation seats enabled it to secure 93 seats (15% of the Bundestag’s seats) that reflected their greater share of the party list votes. Due to the mixed-member proportional system, the FDP now constitutes 30% of the coalition-to-be and has become the third most powerful party in the Bundestag.
Without the party list votes and seats, the CDU/CSU would have earned an overwhelming 73% of all seats and been able to govern on its own despite taking less than 40% of the district votes and barely a third of the party list votes. Due to its strong regional support in the former East Germany, the Left party would have won more than 5% of seats even as the FDP would have earned no seats despite its higher level of national support; the Greens would have won only a single seat despite 11% party list support.
Germany’s elections served as a good illustration of how a proportional system can better reflect the various constituencies of a given electorate than our single-member district system can.
The majority of men with androgen deficiency may not be receiving treatment despite having sufficient access to care, according to a report in the May 26 issue of Archives of Internal Medicine, one of the JAMA/Archives journals.
Androgen deficiency in men means the body has lower than normal amounts of male hormones, including testosterone, according to background information in the article. Although prescriptions for testosterone therapy for aging men have increased in recent years, treatment patterns for androgen deficiency are not clearly understood in community-dwelling U.S. males.
Susan A. Hall, Ph.D., of New England Research Institutes, Watertown, Mass., and colleagues examined data collected from 1,486 Boston-area men (average age 46.4) from April 2002 to June 2005 to estimate the number of men receiving treatment for androgen deficiency, to explain how treated and untreated men varied in seeking care and to understand potential barriers to health care. Specific symptoms of androgen deficiency include low libido, erectile dysfunction and osteoporosis and less-specific symptoms include sleep disturbance, depressed mood and tiredness.
A total of 97 men met the criteria for having androgen deficiency. Eighty-six men were symptomatic and untreated, and 11 were prescribed testosterone treatment. "Men were using the following: testosterone gel (n=1), testosterone patch (n=3), testosterone cream (n=1), testosterone cypionate [an injectable form of testosterone] (n=1) or unspecified formulations of testosterone (n=5)," the authors write. "All of the unspecified forms of testosterone used were self-reported as administered in intervals defined in weeks, which suggests that these were injectable formulations."
"Men with untreated androgen deficiency were the most likely of the three groups to have low socioeconomic status, to have no health insurance and to receive primary care in an emergency department or hospital outpatient clinic," the authors write. However, all men with treated and untreated androgen deficiency were more likely to report receiving regular care than those without the condition and reported visiting their doctor more often throughout the year (with averages of 15.1 visits for those with untreated androgen deficiency, 6.7 visits for those without the condition and 12 visits for those with treated androgen deficiency).
"Under our assumptions, a large majority (87.8 percent) of 97 men in our groups with androgen deficiency were not receiving treatment despite adequate access to care," the authors conclude. "The reasons for this are unknown but could be due to unrecognized androgen deficiency or unwillingness to prescribe testosterone therapy."
(Arch Intern Med. 2008;168:1070-1076. Available pre-embargo to the media at www.jamamedia.org.)
Editor's Note: This study was supported by an unrestricted educational grant to New England Research Institutes (NERI) from GlaxoSmithKline (GSK). The Boston Area Community Health Study is supported by a grant from the National Institute of Diabetes and Digestive and Kidney Diseases. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc. | <urn:uuid:0380c4b7-e77c-4733-b1f6-7df0731951bd> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2008-05/jaaj-mmw052208.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00128-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.952889 | 644 | 2.546875 | 3 |
Community Guide to Development Impact Analysis by Mary Edwards
As Wisconsin communities continue to grow, local officials and community members are constantly challenged by the need to balance fiscal, social, economic, and environmental goals. One aspect of this challenge is deciding how much and what types of new development the community can accommodate without compromising the day-to-day quality of life for residents. Socio-economic impact assessment is designed to assist communities in making decisions that promote long-term sustainability, including economic prosperity, a healthy community, and social well-being.
Assessing socio-economic impacts requires both quantitative and qualitative measurements of the impact of a proposed development. For example, a proposed development may increase employment in the community and create demand for more affordable housing. Both effects are easily quantifiable. Also of importance, however, are the perceptions of community members about whether the proposed development is consistent with a commitment to preserving the rural character of the community. Assessing community perceptions about development requires the use of methods capable of revealing often complex and unpredictable community values.
This chapter provides an overview of socio-economic impact assessment, including what it is, why it is important and guidance on how to conduct a socio-economic impact assessment.
A socio-economic impact assessment examines how a proposed development will change the lives of current and future residents of a community. The indicators used to measure the potential socio-economic impacts of a development include changes in community demographics, housing, public services, retail and housing markets, employment and income, and aesthetic quality.
Quantitative measurement of such factors is an important component of the socio-economic impact assessment. At the same time, the perceptions of community members about how a proposed development will affect their lives is a critical part of the assessment and should contribute to any decision to move ahead with a project. In fact, gaining an understanding of community values and concerns is an important first step in conducting a socio-economic impact assessment.
The socio-economic impacts of a proposed development on a community may actually begin the day the project is proposed. Changes in social structure and interactions among community members may occur once the new development is proposed to the community. In addition, real, measurable and often significant effects on the human environment can begin to take place as soon as there are changes in social or economic conditions. From the time of the earliest announcement of a pending policy change or development project, attitudes toward the project are formed, interest groups and other coalitions prepare strategies, speculators may lock up potentially important properties, and politicians can maneuver for position.
Because socio-economic impact assessment is designed to estimate the effects of a proposed development on a community’s social and economic welfare, the process should rely heavily on involving community members who may be affected by the development. Others who should be involved in the process include community leaders and others who represent diverse interests in the community such as community service organizations, development and real estate interests, minority and low income groups, and local environmental groups. In addition, local agencies or officials should provide input into the process of assessing changes in the social environment that may occur as a result of the proposed development (e.g., providing estimates and information on demographics, employment and service needs).
Conducting a social impact assessment is important for several reasons. In general, it is used to alert the community, including residents and local officials, of the impact and magnitude of the proposed development on the community’s social and economic well-being. The assessment can help communities avoid creating inequities among community groups as well as encourage the positive impacts associated with the development.
The impact assessment provides estimates of expected changes in demographics, housing, public services, and even the aesthetic quality of the community that will result from the development. Equally important, the assessment provides an opportunity for diverse community values to be integrated into the decision-making process. Together, these components of the assessment provide a foundation on which decisions about whether to alter or change a proposed development can be made.
Development constitutes a significant change in the type and intensity of use on a parcel of land. In Wisconsin, development often means conversion of productive agricultural land. Development may occur in the form of a residential subdivision, industrial park, or commercial center. Depending on the location chosen for the new construction and the type of development, the social impact on the community may affect one group of residents more significantly than another (e.g., farmers, the elderly, low income or minority groups).
It is critically important to devote attention to the potential impacts of development on vulnerable segments of the human population. Hopefully, the proposed development will not require investigation into such possibilities, yet the staff conducting the socio-economic impact assessment should be aware of social equity concerns. Other demographic groups that may be disproportionately affected by a proposed development include adolescents, the unemployed, and women; members of groups that are racially, ethnically or culturally distinctive; or occupational, cultural, political or value based groups for whom a given community, region or use of the biophysical environment is particularly important. No category of persons, particularly those that might be considered more sensitive or vulnerable as a result of age, gender, ethnicity, race, occupation or other factors, should have to bear the cost of adverse social impacts. Socio-economic impact assessment can help avoid future inequities associated with new development by pre-emptively considering the potential impacts of a project.
In thinking about vulnerable populations, it is also useful to examine the consequences of a no-development option. For example, if the proposed development is a residential care facility for senior citizens, what are the consequences for the community if the facility is not built?
Socio-economic impact assessment also provides a foundation for assessing the cumulative impacts of development on a community’s social and economic resources. For example, a community may not recognize a change in their quality of life if a small strip mall goes up on the edge of town. In fact, their quality of life may improve if the businesses located in the strip mall provide services which would otherwise not be available to residents. However, if the construction of a small strip mall on the edge of town sets a precedent for constructing additional commercial establishments on the outskirts of town, the socio-economic impacts on a community may become significant indeed. Small, family-owned businesses located downtown may begin to close as competition lures consumers to the outskirts, where accessibility to more diverse commercial establishments is greater. The result may be a loss in the sense of community and cohesion among residents that existed prior to development because the focal point or “common meeting place” for residents has shifted to a new location. The change is subtle, yet may have a profound impact on the long-term sustainability of the community.
It is necessary to conduct the socio-economic impact assessment in the context of the other impact assessment components (i.e., fiscal, environmental, transportation). The relationship between the socio-economic impacts and other impacts of a pro-posed development is a close one. For example, changes in the physical environment or fiscal expenditures required of the community as a result of the development may directly influence community perceptions about whether to proceed with the project.
Unfortunately, socio-economic impact assessment often takes a backseat to other types of impact assessment such as fiscal and environmental impact analysis because the impacts are often more difficult to measure, and the social impacts associated with a development are generally more subtle than impacts on a community’s fiscal balance sheet or local natural resources. However, it is important to consider, as early in the planning process as possible, whether the proposed development will have a significant effect on the social and economic welfare of the community.
The following section provides a two-step process for conducting a socio-economic impact analysis. The process is designed to establish a framework for evaluating cur-rent and future proposed developments in a community.
Carefully defining the socio-economic assessment can save considerable and scarce resources (i.e., time and money). Since it is often impossible to assess every socio-economic impact associated with a proposed development, local officials are encouraged to refine the scope of the assessment based on the most important social and economic priorities of the community. The most reliable sources of information about community concerns and needs are residents and community leaders. Surveys and interviews are two excellent methods for identifying priority social and economic goals of the community. If time permits, a survey of community members can guide the design of an assessment for a single proposed development. Such surveys can also provide a foundation for local officials in designing and conducting future assessments, provided that the survey is representative of the diverse community values, concerns, and interests. Box 4.1 provides a sample of the types of survey questions that may be used to gauge community perceptions. Questions that are specific to community perceptions about a particular proposed development are provided later in this chapter. Interviews with community leaders (e.g., civic group representatives, religious leaders, citizen action groups) can also provide valuable information about what social, economic and other issues are important to community members.
The design of the impact assessment also needs to reflect the specific characteristics of the proposed project. The development impacts associated with a new development will vary depending on the proposed project’s type, size, and location, and on the socio-economic characteristics of the community. As such, it is important to be familiar with both the project characteristics and the social and economic resources of the community. The better one understands the proposed project, the more accurate the assessment will be in estimating potential impacts. If you have the time to complete a general survey, you may use the answers to the above questions to define the scope of the assessment. What are the most significant issues facing the community? If you do not have the resources for such a comprehensive survey, you may refine the scope of the analysis based on the specifics of the project.
SAMPLE SURVEY QUESTIONS FOR USE IN DESIGNING A SOCIO-ECONOMIC ASSESSMENT
Explicit in the introductory sections of this chapter is the need to assess impacts in terms of both quantitative and qualitative measures of community socio-economic well-being. Measuring community perceptions about development is just as important as estimating the number of new jobs created by a proposed development.
Thus this section is divided into two parts: estimating quantitative changes in the socio-economic characteristics of the community, and measuring community perceptions about a particular development. Each part describes the types of information that may be useful, available resources, and questions to facilitate the data collection process. Please note that this discussion is not exhaustive, since methods for social impact assessment are plentiful. It does, however, provide a starting point for gathering information that will be useful in assessing the socio-economic impacts of a proposed development. Additional references are provided at the end of the chapter. Worksheets are provided in the Appendix to assist with the analysis.
Development can cause changes in several community characteristics including demographics, housing, public services, markets, employment and income, and aesthetic quality. Methods for measuring each of these factors are discussed in the following section.
Demographic impacts include the number of new permanent residents or seasonal residents associated with the development, the density and distribution of people and any changes in the composition of the population, (e.g., age, gender, ethnicity, wealth, income, occupational characteristics, educational level, health status).
Development invites growth in new jobs in a community and draws new workers and their families into the community, either as permanent or temporary residents. When this occurs, the incoming population affects the social environment in various ways including increased demand for housing and social services (e.g., health care, day care, education, recreational facilities). Because residents’ needs depend on a wide range of variables (e.g., age, gender, employment status, income level and health status), the diversity of service needs are determined not only by the absolute size of the incoming population but also by the old and new populations’ demographic and employment profiles. As a result, a proposed development may have a significant impact on the community’s ability to accommodate new residents and adapt to changes in the social environment for existing residents. Assessing the magnitude and rate of population change has important implications for community infrastructure and service requirements and can play a major role in determining social impacts associated with the proposed development.
ASSESSING THE DEMOGRAPHIC IMPACTS
There are numerous modeling techniques available to aid in assessing population impacts. The models range in complexity, and depending on the resources available for your assessment, a particular model may be more appropriate than another. Specific models are not described in this guide, but are referred to at the end of the chapter as part of the various social impact assessment guidance documents reviewed during development of this guide. The questions listed below are designed to help you begin the impact assessment process for a proposed development. Data collected during the Fiscal analysis (i.e., estimation of the number of new residents) will help answer the questions. For each of these questions, estimate and analyze the significance of how the population change will impact the social environment of the community (e.g., will the number of new school-aged children require additional public education facilities? Will an increase in the number of elderly residents require additional health care facilities?)
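The demographic questions can be turned into rough numbers with a few lines of arithmetic. A minimal sketch follows, assuming hypothetical ratios (age mix, students per classroom, park acres per 1,000 residents) that should be replaced with locally adopted planning standards:

```python
# Rough translation of a projected population increase into service
# demands. Every ratio below is a hypothetical placeholder; substitute
# locally adopted planning standards before using the results.

new_residents = 450            # e.g., taken from the fiscal analysis
share_school_age = 0.22        # assumed fraction of newcomers aged 5-17
students_per_classroom = 25    # assumed local standard
park_acres_per_1000 = 10       # assumed local standard

new_students = round(new_residents * share_school_age)
classrooms_needed = -(-new_students // students_per_classroom)  # ceiling
park_acres_needed = new_residents / 1000 * park_acres_per_1000

print(f"{new_students} new students -> {classrooms_needed} classrooms; "
      f"{park_acres_needed:.1f} acres of parkland")
```

Even a back-of-the-envelope calculation like this helps frame the discussion of whether existing schools, parks and other facilities can absorb the projected growth.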
HOUSING MARKET IMPACTS
A housing market analysis helps determine whether the proposed development will be beneficial to your community in terms of its effect on your housing market needs. In the case of a residential development, the market study assists in ascertaining whether there is sufficient demand for the type of housing proposed and whether a sufficient number of households in the area can afford to purchase or rent the proposed type of housing. The analysis also assists in the examination of the connections between the housing market and employment. For example, if the proposed development is a manufacturing plant expected to generate a specified number of low-wage jobs, can the community’s current housing market absorb the new workers or is there a need for more affordable housing?
To understand the impact of a new residential development or a new employment center on your housing market (or on the regional market), the initial step of the analysis is to complete an inventory and analysis of existing and projected housing needs—a supply and demand analysis. To better understand whether your community is meeting the needs of residents and workers in terms of affordability, an analysis of housing affordability which includes an examination of typical rents and mortgage payments compared to what households at various income levels can afford is necessary.
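The affordability side of this comparison can be sketched with the common 30%-of-income guideline and the standard mortgage amortization formula. The income figure, interest rate and loan term below are assumptions for illustration only:

```python
# Affordability check: what can a household at a given income afford,
# compared with typical market rents and prices? Uses the common
# 30%-of-income rule and standard mortgage amortization. The income,
# rate, and term below are illustrative assumptions.

def affordable_rent(annual_income, share=0.30):
    """Maximum monthly housing payment under the income-share rule."""
    return annual_income * share / 12

def affordable_price(annual_income, rate=0.07, years=30, share=0.30):
    """Largest loan principal whose monthly payment fits the budget."""
    payment = affordable_rent(annual_income, share)
    r, n = rate / 12, years * 12
    return payment * (1 - (1 + r) ** -n) / r

median_income = 52000  # hypothetical area median household income
print(f"Affordable rent:  ${affordable_rent(median_income):,.0f}/month")
print(f"Affordable price: ${affordable_price(median_income):,.0f}")
```

Comparing these figures against typical rents and listing prices in the community, bracket by income bracket, reveals where affordability gaps exist.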
Once these analyses are complete, the proposed development can be placed in a context in which a number of important questions specific to the development can be addressed. The Guidebook does not provide the steps for the housing market needs and affordability analysis, as these are provided in another recent publication, “Housing Wisconsin: A Guide to preparing the Housing Element of a Comprehensive Plan,” available soon from UW–Extension. This publication provides a practical guide on how to organize and analyze the data necessary for a housing needs assessment. Having conducted this analysis, you can examine the proposed development within a broader framework, using the questions provided below as a guide.
ASSESSING LOCAL HOUSING MARKET EFFECTS
Beyond consideration of the need for new housing, it is also important to consider the location of the proposed housing development and the impacts of that particular choice of location on the community. Housing is strongly linked to a community’s employment centers, land use and transportation system. The location of housing affects commuting patterns. Separation and segregation of residential areas from other areas, including retail, service and office centers, generates more commuting trips and eventually requires more investment in roads and other transportation-related facilities. The location of housing in relation to other public facilities also affects overall energy use, lifestyles and personal costs for transportation. Furthermore, if there is a lack of affordable housing in the area, people may be forced to commute longer distances to work, because the affordable homes are far away from employment centers. The location of housing is also important if historical development patterns in the community have resulted in large areas of all one type of housing or housing that serves a majority of one income group. When one type of housing is over-concentrated in an area, the impacts on land utilization, infrastructure and public service needs may become distorted. Over-concentration of single-family housing, for example, becomes an issue in terms of the infrastructure needs of education services. The housing needs assessment will also assist in the identification of concentrations of housing and diversity of housing patterns.
RETAIL MARKET IMPACTS
Growing communities often attract a variety of new commercial developments including both free-standing stores and neighborhood or community shopping centers. These developments provide a community with products, services and conveniences important to the quality of life of local residents. The challenge to accommodating these types of new developments becomes one of minimizing losses to existing retailers in the area, such as those downtown, while allowing the market to respond to the wishes of the increasingly demanding consumer.
To respond to this challenge, community leaders can conduct an assessment of the retail market with a focus on anticipated market supply and demand by retail category. The intent is to anticipate how well the market will respond to changes in the number, type and location of retail businesses and to provide community leaders with information to guide future business expansion and recruitment efforts. This section provides guidelines on how to conduct such an analysis. The Appendix includes several worksheets to facilitate the retail analysis in your community.
Worksheet 4.1: Analysis of Your Community's Retail Mix
Table 4.2: Wisconsin Retail Demand in Square Feet (SF) Per Household (HH)
ASSESSING MARKET FOR RETAIL DEVELOPMENT
Before an analysis of a particular development can be conducted, the economic health of the local retail community must be assessed. This requires a close look at retail activity, particularly in the central business district. Key indicators of economic health in the retail sector include vacancy levels, property values, store turnover, retail mix, employment, tax revenues, new business incubation, critical mass/concentration of retail, and the availability of goods and services demanded by the community. See the following web address for more information: http://www.uwex.edu/ces/cced/lets/lets798.html
Second, changes in trade area demographics should be estimated. The trade area is generally defined as the geographic area in which three-fourths of current customers reside. A significant increase in population could signal new opportunities for retail expansion or development. The profile of these new or anticipated residents can help you assess future market demand for various types of products or services. See the following web address for more information. http://www.uwex.edu/ces/cced/lets/0599ltb.pdf
Third, regional retail competition must be assessed. New retail concepts are threatening traditional retail stores. These concepts include large non-mall stores offering assortment and low prices for selected types of goods like electronics, off-price apparel stores, food/drug stores and neighborhood drug stores that offer convenience, outlet centers, warehouse clubs and the internet. By recognizing the changes in competition, both locally and regionally, your assessment of proposed retail developments can offer valuable insight into the changing market and risk facing the traditional retailers in the community. See the following web address for more information. http://www.uwex.edu/ces/cced/lets/letsrt.html/
Finally, with an understanding of general retail trends, changes in trade area demographics, and regional competition you can use secondary data to measure market gaps in the community and assess the impacts of the proposed development. Two techniques can be used: retail mix analysis and retail space analysis.
Retail Mix Analysis—The retail mix in “comparison” communities can be used to measure how many and what type of retail stores might be supported in your community. Comparison communities might include those with similar population, household incomes and distances from major metropolitan areas. If your community is growing in population, comparison communities with a larger population can be used.
Once the comparison communities are identified, the retail mix in each community is inventoried by specific retail category. The average number of stores in each retail category in the comparison communities is then compared with the number in your community to identify any significant differences that might suggest business expansion or development opportunities. See the Appendix for data and a retail mix worksheet that can be used in your community.
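The comparison calculation might look like the following sketch. All store counts here are hypothetical and would come from the inventory worksheet in practice:

```python
# Retail mix comparison: average store counts by category in comparison
# communities versus your own. A large positive gap may signal an
# expansion or recruitment opportunity. All counts are hypothetical.

comparison = {                      # counts in three comparison towns
    "Grocery":     [3, 4, 3],
    "Hardware":    [2, 1, 2],
    "Restaurants": [9, 12, 10],
    "Apparel":     [4, 5, 3],
}
ours = {"Grocery": 3, "Hardware": 0, "Restaurants": 8, "Apparel": 1}

gaps = {}
for category, counts in comparison.items():
    average = sum(counts) / len(counts)
    gaps[category] = round(average - ours[category], 1)  # + means shortfall

for category, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{category:12s} gap vs. comparison average: {gap:+.1f} stores")
```

Sorting by the size of the gap surfaces the categories most worth investigating first; a gap near zero suggests the local mix is already in line with comparable communities.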
Retail Space Analysis—The amount of additional retail space that can be supported by a growing community can be projected using two types of data: Household Consumer Expenditure Estimates and Sales per Square Foot of Existing Retailers.
Table 4.2 in the Appendix provides a rough approximation of how many square feet of retail space can be supported per additional household in a “typical” Wisconsin community (last column). These estimates were based on state and national data and do not reflect local supply and demand conditions. Nevertheless, they provide a starting point in determining potential market opportunities.
This analysis can be refined by using “household consumer expenditure data” or “median store sales per square foot” that more accurately reflect the socio-economic conditions of your community. Data can be purchased through private data firms that describe spending of consumers or store sales in your particular community or other representative areas. By using more reflective data, your calculations will more accurately determine the additional retail space necessary to serve the market area.
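Putting the two data types together, the core calculation is simply household spending in a category divided by typical sales per square foot. A sketch with illustrative figures (not actual expenditure or sales data):

```python
# Retail space analysis: supportable square footage equals household
# spending in a category divided by typical sales per square foot.
# Expenditure and sales figures are illustrative placeholders --
# substitute purchased data that reflects your own trade area.

new_households = 300  # projected household growth in the trade area

categories = {
    # category: (annual expenditure per household, sales per sq ft)
    "Grocery":     (4200, 400),
    "Restaurants": (2600, 250),
    "Apparel":     (1400, 175),
}

supportable = {name: new_households * spend / sales
               for name, (spend, sales) in categories.items()}

for name, sqft in supportable.items():
    print(f"{name:12s} ~{sqft:,.0f} additional square feet supportable")
```

The result can then be compared with the square footage of a proposed retail development to judge whether the market can absorb it without drawing sales away from existing stores.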
These steps can help you to anticipate how well the market will respond to changes in the number and type of retail businesses. Assessing the impact of community growth in the retail sector is important to ensure a successful and sustainable business community. In addition, it helps ensure that necessary goods and services will be available to a growing population. A guidebook on how to conduct a comprehensive Business District Market Analysis is available through the University of Wisconsin, Center for Community Economic Development. The Center provides information through their web address http://www.uwex.edu/ces/cced/. They also offer educational programs and technical assistance to business districts in Wisconsin interested in analyzing their local economy, including market opportunities.
ASSESSING THE IMPACTS ON EMPLOYMENT AND INCOME
EMPLOYMENT AND INCOME
Development directly influences changes in employment and income opportunities in communities. Such changes may be more or less temporary (e.g., construction projects, or seasonal employment) or may constitute a permanent change in the employment and income profile of the community should the development project bring long-term job opportunities for community residents (e.g., establishment of a light industrial, manufacturing, or commercial establishment). Assessing these types of changes is an important component of social impact analysis because growth in employment places additional demands on community services and resources. For example, a development that brings lower-wage jobs to a community may generate the need for different types of housing in the area. Changes in income also influence the social environment in a number of ways such as raising or lowering the average standard of living for residents.
Data sources for analysis of the local economy, employment and income trends include the University of Wisconsin’s Center for Community Economic Development, which provides information and data sources for local economic analysis. The U.S. Census Bureau also provides information on employment and income. To retrieve community data from the 1990 Census, go to http://venus.census.gov/cdrom/lookup/. The Bureau of Labor Statistics provides information on employment and wages. To view Metropolitan Area Occupational Employment and Wage Estimates, go to http://www.bls.gov/oes/msa/oessrch1.htm.
The new residents and their associated activities will require a variety of services provided by the area's public and private institutions. A social impact assessment must determine the quantity and variety of anticipated needs. The goods and services most commonly included in a social evaluation are open space and parks; cultural and recreation facilities; education; health care; special care for the elderly, the disabled, the indigent, and preschool-age children; police and fire protection; and a variety of administrative support functions. The optimum amount of resources required to satisfy these needs is based either on planning standards, which are guidelines established by professional organizations and government agencies, or on service levels, which are observed national (or regional) average amounts of resources expended per capita or per some unit of size.
Service resources are objective indicators of the level of resources available for the satisfaction of society’s needs. For example, the number of physicians, dentists, acute-care hospital beds, and psychiatric care hospital beds are indicators of the level of health care resources. Square feet of parkland, picnic areas, tot lots, etc., are indicators of facilities for recreation needs.
The Appendix includes worksheets designed to assist you in assessing the specific current and future needs for a variety of public services based on commonly applied planning standards. Once the tables are completed with information about the community's current service level and its current and future needs, you can begin to determine the feasibility of the proposed development and how it may affect the quality of services provided to residents.
Worksheet 4.4: Public Safety
Worksheet 4.5: Education and Libraries
Worksheet 4.6: Health and Recreation
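The needs-versus-resources comparison these worksheets walk through can be sketched as follows. The planning standards and resource counts shown are illustrative assumptions only; real standards come from the professional organizations and agencies mentioned above.

```python
# Worksheet logic: required resources = standard (units per 1,000
# residents) × population / 1,000; a positive gap is a shortfall.
standards_per_1000 = {              # hypothetical planning standards
    "police officers": 2.0,
    "acres of parkland": 10.0,
    "acute-care hospital beds": 3.0,
}
current_resources = {               # hypothetical current service levels
    "police officers": 18,
    "acres of parkland": 95,
    "acute-care hospital beds": 20,
}

def service_gaps(population, standards, current):
    """Compare current resources against standards for a given population."""
    gaps = {}
    for service, per_1000 in standards.items():
        required = per_1000 * population / 1000
        gaps[service] = required - current.get(service, 0)
    return gaps

for service, gap in service_gaps(12_000, standards_per_1000,
                                 current_resources).items():
    status = "shortfall" if gap > 0 else "surplus"
    print(f"{service:26s} {status} of {abs(gap):.1f}")
```

Running the function once with the current population and once with the projected post-development population shows how much of each gap is attributable to the proposed development.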
ASSESSING THE CURRENT ACCESSIBILITY OF PUBLIC SERVICES
Impacts on the aesthetic quality of a community are often the most obvious sign of development, yet they are too often left out of the development impact assessment. Shopping malls and subdivisions in the rural landscape are one example of the impact development has on the aesthetic quality of a community. In many cases, community members perceive themselves as powerless to guide “the way development looks” in their community and thus do not participate in decisions that protect the visual and aesthetic qualities of the natural and built environment. While aesthetic impacts are often associated with environmental impacts, they also significantly affect the social well-being of the community and resident perceptions of local quality of life.
There are several methods available to local communities for assessing the potential impact of a proposed development on the aesthetic quality of a community. These include: design review, geographical information technology, image processing technology, multi-media technology, and communications technology.
Design review is an effective tool for identifying urban and rural community aesthetic preferences and integrating such preferences into comprehensive plans and zoning ordinances. In fact, many Wisconsin communities have adopted design review processes that involve the review of individual development proposals by a special body such as the planning commission, an architectural review board, a design review committee, or a historic preservation commission (Ohm 1999). Citizen surveys and photographs depicting desirable as well as undesirable types of development are often used to formulate and document community preferences, which can then be translated into a formal zoning ordinance or integrated into the comprehensive plan. In particular, design review provides an opportunity for community members to influence the layout and appearance of buildings or to express a preference for open space preservation as an area develops. The other technologies listed above (geographic information, image processing, multimedia, and communications technologies) can supplement this process. The elements for conducting a design review for a proposed development are outlined below.
ELEMENTS OF DESIGN REVIEW
Socio-economic impact assessment is also important for assessing changes in a community’s social well-being that result from development. This type of social change is more difficult to quantify than changes in the social environment because the assessment relies on the perceptions of current and new residents about how a proposed development may affect their quality of life. Social impact assessment of this nature is important because it can help local officials, planners, developers and the public identify and address potential conflicts of interest that may accompany development. In addition to quality of life issues, it is important to assess how a proposed development may influence neighborhood cohesion or cultural differences among members of the community.
QUALITY OF LIFE
The attitudes community residents have toward development and the specific actions being proposed as well as their perceptions of community and personal well-being are important determinants of the social effects of a proposed action. Such attitudes are a reflection of the quality of life residents seek to enjoy and preserve, whether it be limiting growth in order to maintain the rural image of a small community; expanding the boundaries of the village; or providing a variety of housing choices to new, diverse residents and businesses. Changes in a community’s social well-being can be determined by asking the individuals and representatives of groups or neighborhoods in the area to make explicit their perceptions and attitudes about the anticipated changes in the social environment.
ASSESSING ATTITUDES TOWARD DEVELOPMENT
Information about attitudes and perceptions should be gathered from community leaders because their attitudes are important and may lend insight into the overall attitudes of residents if community leaders are perceptive and sensitive to community concerns and interests. However, it is perhaps more important, though generally more time-consuming and costly, to profile the attitudes of the residents living and working in the community and each of the distinguishable social groups because they represent the population in the community most affected by changes in social well-being. In assessing resident attitudes, consider the questions on page 46. The responses may provide an indication of what additional information is necessary and in what detail it should be gathered for a particular proposed development.
Some of the methodologies and techniques for assessing changes to the social environment are quantitative in nature, and existing sources of data such as the Census Bureau provide a useful starting point for estimating social impacts. Other techniques such as surveys, focus groups, charrettes, public hearings, and meetings with community residents may be appropriate for collecting data that is more qualitative in nature and useful for assessing the perceptions of community members. A summary of techniques that may be used to elicit community perceptions about development, including features of each technique and the advantages and disadvantages of its use, is provided on page 50.
As should be evident from the preceding discussion, socio-economic impact assessment is a complex yet important aspect of development impact analysis. The various changes in the social environment and social well-being of a community that result from development may be significant, yet they are often subtle and not easy to quantify. These difficulties, however, do not make socio-economic impact assessment any less essential a component of the development impact assessment process.
It is important to bear in mind that while certain individuals or community groups may be active and forthcoming with input into the planning process, other community groups (e.g., low income or minority groups) that may be equally or even disproportionately affected by the proposed development may be less vocal in expressing concerns and interests. In situations where traditionally disempowered groups may be impacted by a development, it is important to make a concerted effort to involve them in the social impact assessment process.
Depending on the resources available to conduct the socio-economic impact assessment and the specific objectives of the analysis, some methods may be more appropriate than others. At any rate, a list of references is provided at the end of this chapter to guide further efforts in conducting a socio-economic impact assessment.
Finally, it is important to note that a socio-economic impact assessment not only forecasts impacts, but should also identify means to mitigate adverse impacts. Mitigation should include efforts to avoid an impact by not taking or modifying an action; minimizing, rectifying or reducing the impacts through the design or operation of the project or policy; or compensating for the impact by providing substitute facilities, resources or opportunities.
Branch, K., D.A. Hooper, J. Thompson, and J. Creighton. 1984. Guide to Social Assessment: A Framework for Assessing Social Change. Westview Press: Boulder.
Burdge, R.J. 1995. A Community Guide to Social Impact Assessment. University of Illinois: Urbana.
Burdge, R.J., P. Fricke, K. Finsterbusch, W.R. Freudenberg, R. Gramling, A. Holden, L. Llewellyn, J.S. Petterson, J. Thompson, and G. Williams. 1995. Guidelines and Principles for Social Impact Assessment. Environmental Impact Assessment Review. 15:11–43. Elsevier Science, Inc.: New York.
Canter, L. W. 1985. Socio-economic Factors Used in Environmental Impact Studies. In Canter L.W., Impact of Growth: A Guide for Socio-economic Impact Assessment and Planning, pp. 328–394. Lewis Publishers: Chelsea, MI.
Chadwick, A. 1995. Socio-economic Impacts 2: Social Impacts. In Morris, P. and R. Therivel, Methods of Environmental Impact Assessment, pp. 29–49. University of British Columbia Press: Vancouver.
Chenoweth, R. 1999. Integrating information technologies for citizen-based land use decision-making. College of Agricultural and Life Sciences, University of Wisconsin.
Christensen, K. Social Impacts of Land Development: An Initial Approach for Estimating Impacts on Neighborhood Usages and Perceptions. The Urban Institute: Washington D.C.
Environmental Protection Agency, Office of Wastewater Management and Region V. 1990. Urban Runoff Management Information/Education Products. OWEC (EN–366). Washington, D.C.
Freudenberg, W.R. 1986. Social Impact Assessment. Annual Review of Sociology. 12:451–78.
Hustedde, R.J., R. Shaffer, and G. Pulver. 1993. Community Economic Analysis: A How-to Manual. North Central Regional Center for Rural Development: Ames, Iowa.
Ohm, B.W. 1999. Guide to Community Planning in Wisconsin. Department of Urban and Regional Planning, University of Wisconsin–Madison.
Ryan, B., J. Braatz, and A. Brault. 1998. Retail Mix in Wisconsin’s Small Downtowns: An Analysis of Cities and Villages with Populations of 2,500–15,000. Center for Community Economic Development, University of Wisconsin-Extension.
Tlusty, W. 1999. Land use planning, design, and design review: essential components for maintaining countryside character. Prepared for the Planning Committee and Town Board of Lyons, Walworth County. January 6.
Urban Land Institute. Development Impact Assessment. Chapter 6: Social Impact Analysis.
Kattenhorn, S.A., Hurford, T.A. (2009)
Tectonics of Europa.
In: Europa, Pappalardo, R.T., McKinnon, W.B., Khurana, K., eds, University of Arizona Press, 199-236.
Europa has experienced significant tectonic disruption over its visible history. The description, interpretation, and modeling of tectonic features imaged by the Voyager and Galileo missions has resulted in significant developments in four key areas addressed in this chapter: (1) The characteristics and formation mechanisms of the various types of tectonic features; (2) The driving force behind the tectonics; (3) The geological evolution of its surface; and (4) The question of ongoing tectonics. We elaborate upon these themes, focusing on the following elements: (1) The prevalence of global tension, combined with the inherent weakness of ice, has resulted in a wealth of extensional tectonic features. Crustal convergence features are less obvious but are seemingly necessary for a balanced surface area budget in light of the large amount of extension. Strike-slip faults are relatively common but may not imply primary compressive shear failure, as the constantly changing nature of the tidal stress field likely promotes shearing reactivation of preexisting cracks. Frictional shearing and heating thus contributed to the morphologic and mechanical evolution of tectonic features. (2) Many fracture patterns can be correlated with theoretical stress fields induced by diurnal tidal forcing and long-term effects of nonsynchronous rotation of the icy shell; however, these driving mechanisms alone probably cannot explain all fracturing. Additional sources of stress may have been associated with orbital evolution, polar wander, finite obliquity, ice shell thickening, endogenic forcing by convection and diapirism, and secondary effects driven by strike-slip faulting and plate flexure. (3) Tectonic resurfacing has dominated the ~40-90 Myr of visible geological history. A gradual decrease in tectonic activity through time coincided with an increase in cryomagmatism and thermal convection in the icy shell, implying shell thickening. 
Hence, tectonic resurfacing gave way to cryomagmatic resurfacing through the development of broad areas of crustal disruption called chaos. (4) There is no definitive evidence for active tectonics; however, some tectonic features have been noted to postdate chaos. A thickening icy shell equates to a decreased tidal response in the underlying ocean, but stresses associated with icy shell expansion may still sufficiently augment the contemporary tidal stress state to allow active tectonics. | <urn:uuid:a6f8987d-4df6-41e7-b757-45980fb97fdb> | CC-MAIN-2016-26 | http://webpages.uidaho.edu/~simkat/papers/europachapter.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00016-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.91526 | 549 | 2.6875 | 3 |
Friedrich Max Mueller:
Appropriation of the Vedic Past
The German romantic attachment to India's ancient past, being exempted from possible colonial interests, has been widely accepted as self-evident. In this paper I shall seek to demonstrate that this interest was part of a cultural politics seeking to establish a new basis for the German national tradition—a complex, often contradictory process, coherent primarily in the framework of an internal European dialogue. Three figures stand out as conceptually significant for the scholarly activity which was to unfold during the course of the nineteenth century: Johann Gottfried Herder (1744-1803), Friedrich Schlegel (1772-1829) and finally Friedrich Max Mueller (1823-1900), who occupies a remarkable position between a scholar and a poet and whose works provide, as it were, a climax of concern for India's ancient heritage.1 These three also demonstrate the shift in perspective during the course of the century.
I shall confine myself to an analysis of an early influential work of each author in some detail: the three works under discussion having been conditioned by the quality and quantity of the source material available to each author. A discussion of the respective personal relationship to India, which as in the case of Schlegel underwent many stages, would be outside the scope of the present paper.
In the seventies of the eighteenth century a variety of travel literature on the Orient had become available, written by missionaries, tradesmen and the civil servants of the East India Company. There was evidently an eager readership for these works, for immediately upon publication they were translated into other European languages. Primary textual material was scarce and consisted of preliminary translations of maxims, Puranic legends, and moral-philosophic dialogues of dubious origin.2 In spite of this paucity of first-hand knowledge, Voltaire had not hesitated to locate the place of the origin of the human race on the banks of the Ganges, expressly against the biblical tradition. Voltaire's readiness to use any supportive evidence for his thesis had in fact been used by the Jesuits, who in 1760 had allowed a manuscript of the Ezourvedam to fall into his hands. A mixture of Puranic and
Apparently consumers don't have as much control as they thought over avoiding products with bisphenol A (BPA), a hormone-disrupting chemical. What are two items that consumers cannot avoid? Dollar bills and receipts. And that's exactly where BPA is being found, according to a new study conducted by nonprofit groups Safer Chemicals, Healthy Families and the Washington Toxics Coalition.
The research found large amounts of unbound BPA on half of thermal paper receipts tested. The data suggests that the toxic chemical is easily transferred to our skin. And no, you can't just gingerly grab the receipt and toss it away really fast. In just ten seconds, 2.5 micrograms of BPA are transferred to your fingers.
As for dollar bills, BPA was found on 95 percent of the bills tested. BPA levels were much lower than those found on the receipts, but that doesn't help us rest any easier. Recent research has explored the connection between BPA exposure and various health problems like cancer, and Canada recently became the first country to officially list BPA as a toxic chemical.
Safer Chemicals, Healthy Families is pushing Congress to adopt a chemical policy with greater regulation of chemicals like BPA. Meanwhile, the Washington Toxics Coalition is offering a list of ways to avoid BPA exposure. Their tips include refusing a receipt, washing hands frequently, and choosing bottles that are not polycarbonate plastic.
Suddenly online shopping seems a bit more appealing this holiday season.
How long has it been since you thought about the American Revolution? In 2014 not that many of us spend our time thinking about the catalysts that caused that revolution over 200 years ago. Who has the time? Between jobs, families, bills, a bit of TV and some recreation and socializing, there are only so many hours in the day. That’s the beauty of things being settled… you can get on with other things. If we had to spend our days thinking about what kind of government we wanted, how would we get anything else done? It would be impossible. Think about election season every 4 years. From July through November we can’t escape it: campaign advertisements on TV and radio, signs on the side of the street, wall to wall press coverage and endless water cooler chatter. All that and we have a government that’s been largely settled for 150 years.
Sometimes however it makes sense to go back and look at why things happen. In the case of American independence and the Revolutionary War, it was not any single event that caused the war to begin, rather it was a series of things that took place over years.
Among these things were well known events such as the Stamp Act, the Boston Massacre, the Intolerable Acts and the conflict at Lexington & Concord. Then there were other lesser-known but equally important events such as the Quartering Act of 1765 and the Declaratory Act. In all, the Declaration of Independence lists 27 grievances against the King. Among them are:
- He has refused his Assent to Laws, the most wholesome and necessary for the public good.
- He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and eat out their substance.
- For imposing Taxes on us without our Consent:
Importantly, the most significant lines in the Declaration of Independence are not among those 27, but rather in the 2nd paragraph.
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it…
… all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.
Americans today are acting as Jefferson suggested most people do… they are suffering evils rather than throwing off the yoke of a despotic government. Jefferson suggests, however, that when the abuses become too much, the people will indeed revolt. So the questions in 2014 are two: Are we close to the point of abuse that sparks a revolt, and is it too late to turn the tables on the usurpers?
On the first question, the answer is a resounding yes. While 2014 may not quite yet be George Orwell’s 1984 with an all-knowing, all-seeing – and listening – all-powerful government… We’re close. We have a government that uses the police power of the taxing authority to silence its critics. We have a government that has decided that children can be punished for their parents’ mistakes, even if it was the government who made them. We have a government that spies on its citizens with impunity. We have heavily militarized local police forces across the country, and every federal agency seems to come with its own SWAT team, regardless of its mission. We have an executive branch that brazenly ignores the clear language of the Constitution. And then of course we have the shrinking pool of citizens who are forced to pay stifling taxes in order to support a growing horde of those suckling at the government teat.
On the second question, is it too late, the answer is less clear. Human nature suggests that at some point those who shoulder the burden of financing the government’s largesse will revolt. Today we don’t see taxpayers manning the barricades and storming America’s Bastilles so we clearly have some time to reverse course and strip the usurpers of their power.
Settled can be good, but as Jefferson notes, it can turn to complacency. And complacency can lead to rot. Whether it’s the five year reign of Usurper in Chief Barack Obama or the last 40 years of creeping comforts and expanding government, Americans have assumed that we’ll get through everything just because politicians tell us we will. That’s not how things work. Freedom and prosperity demand vigilance. Neither can survive where a government decides that they are its to withhold or grant. In 1776 fifty six men risked their lives and decided that they could no longer abide the complacency of accepting tyranny. And they started a revolution. How many Americans will come to the same conclusion in 2014 and risk far less when they turn off the television and enlist as foot soldiers in an electoral revolution in 2016? Only time will tell. | <urn:uuid:4eab2b49-8021-442c-bc71-6cc51c49d2ef> | CC-MAIN-2016-26 | http://www.redstate.com/diary/imperfectamerica/2014/04/14/1776-2014-complacency-revolution/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00046-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.969317 | 1,117 | 3.109375 | 3 |
In situations where medication, psychosocial treatment, and the combination of these interventions prove ineffective, or work too slowly to relieve severe symptoms such as psychosis or suicidality, electroconvulsive therapy (ECT) may be considered. ECT may also be considered to treat acute episodes when medical conditions, including pregnancy, make the use of medications too risky. ECT is a highly effective treatment for severe depressive, manic, and/or mixed episodes. The possibility of long-lasting memory problems, although a concern in the past, has been significantly reduced with modern ECT techniques. However, the potential benefits and risks of ECT, and of available alternative interventions, should be carefully reviewed and discussed with individuals considering this treatment and, where appropriate, with family or friends.
Herbal or natural supplements, such as St. John's wort (Hypericum perforatum), have not been well studied, and little is known about their effects on bipolar disorder. Because the FDA does not regulate their production, different brands of these supplements can contain different amounts of active ingredient. Before trying herbal or natural supplements, it is important to discuss them with your doctor. There is evidence that St. John's wort can reduce the effectiveness of certain medications.20 In addition, like prescription antidepressants, St. John's wort may cause a switch into mania in some individuals with bipolar disorder, especially if no mood stabilizer is being taken.
Omega-3 fatty acids found in fish oil are being studied to determine their usefulness, alone and when added to conventional medications, for long-term treatment of bipolar disorder.
Even though episodes of mania and depression naturally come and go, it is important to understand that bipolar disorder is a long-term illness that currently has no cure. Staying on treatment, even during well times, can help keep the disease under control and reduce the chance of having recurrent, worsening episodes.
Alcohol and drug abuse are very common among people with bipolar disorder. Research findings suggest that many factors may contribute to these substance abuse problems, including self-medication of symptoms, mood symptoms either brought on or perpetuated by substance abuse, and risk factors that may influence the occurrence of both bipolar disorder and substance use disorders. Treatment for co-occurring substance abuse, when present, is an important part of the overall treatment plan.
Anxiety disorders, such as post-traumatic stress disorder and obsessive-compulsive disorder, also may be common in people with bipolar disorder. Co-occurring anxiety disorders may respond to the treatments used for bipolar disorder, or they may require separate treatment.
Get professional help for bipolar disorder
Anyone with bipolar disorder should be under the care of a psychiatrist skilled in the diagnosis and treatment of this disease. Other mental health professionals, such as psychologists, psychiatric social workers, and psychiatric nurses, can assist in providing the person and family with additional approaches to treatment.
Help for bipolar disorder can be found at:
If you have a loved one with bipolar disorder, you may need to be the one to get her to a professional.
About this article: The National Institute of Mental Health (NIMH) is part of the National Institutes of Health (NIH), a component of the U.S. Department of Health and Human Services. January 2008
The recent international military effort to find missing flight MH370 revealed a serious deficiency in Chinese naval strategy. In short, without access to foreign ports for resupply the Chinese Navy cannot sustain large numbers of ships far from China. Chinese naval planners are aware of this and the political leaders have been listening. That has resulted in more supply ships being built for the navy. Those orders may now be increased.
China sent two dozen warships and support vessels into the southern Indian Ocean, and it was obvious that without access to nearby Australian ports the Chinese ships would not have been able to remain in the area for long. The classic solution is a large fleet of support (“sustainment”) ships to constantly deliver food, fuel and other supplies to ships at sea. China is rapidly building such ships, but not enough of them to maintain a large force for an extended period. China is unlikely to obtain the overseas ports it needs to support its current expansion plans because those plans have angered nearly all the nations in the area. China does have a few allies, such as Pakistan, Cambodia and Burma. This is not enough, and in the event of outright hostilities any or all of these three could have their port access blocked by neighboring countries that are at odds with China.
This logistical weakness is no secret, but the Chinese have played it down. After the April MH370 operation it is a much more visible issue. Chinese naval threats are now a bit less intimidating, at least until there are reports that China is building even more sustainment ships than it already is.
In 2013 China commissioned its third and fourth Type 903 replenishment ships. Thus in less than two years China built and put into the water two more Type 903s. The first two of these 23,000-ton tanker/cargo ships appeared in 2004. Beginning in 2008 these ships were heavily used, supporting the 13 task forces sent to the anti-piracy patrol off Somalia. Usually one Type 903 accompanied two warships (typically a frigate and a destroyer), supplying fuel, water, food, and other supplies as needed, and going to local ports to restock its own depleted stores. China needs more Type 903s to support the growing number of long-distance training operations into the Western Pacific.
The Type 903 is similar to the twelve American T-AKE replenishment ships in service. These 40,000 ton ships service a much larger fleet than the four Chinese Type 903s and are part of a larger replenishment fleet required by American warships operating worldwide.
Meanwhile China has, over the last two decades, trained more and more of its sailors to resupply ships at sea. It’s now common to see a Chinese supply ship in the Western Pacific refueling two warships at once. This is a tricky maneuver and the Chinese did not learn to do it overnight. They have been doing this more and more over the last decade, first refueling one ship at a time with the receiving ship behind the supply ship and then the trickier side-by-side method. This enables skilled supply ship crews to refuel two ships at once.
This is all part of a Chinese navy effort to enable its most modern ships to carry out long duration operations. In addition to the ships sent to Somalia, the Chinese have been sending flotillas (containing landing ships, destroyers, and frigates) on 10-20 day cruises into the East China Sea and beyond. The MH370 search off west Australia was the largest Chinese fleet deployment in modern times.
The Chinese have been working hard on how to use their new classes of supply ships, which are built to efficiently supply ships at sea. This is called underway replenishment, and it means transferring fuel and other supplies to moving ships. It requires skill and practice, and the Chinese are out there obtaining both, so much so that it has become a regular practice. The crews have also learned how to keep all the needed supplies in good shape and stocked in the required quantities, which requires the procurement officers to learn how to arrange resupply at local ports on a timely basis. This was particularly important off Somalia, where warships often had to speed up (burning a lot of fuel in the process) or use their helicopters to deal with the pirates.
Modern at-sea replenishment methods were developed out of necessity by the United States during World War II because of a lack of sufficient forward bases in the vast Pacific. The resulting service squadrons (Servrons) became a permanent fixture in the U.S. Navy after the war. Ships frequently stay at sea for up to six months at a time, being resupplied at sea by a Servron. New technologies were developed to support the effective use of the seagoing supply service. Few other navies have been able to match this capability, mainly because of the expense of the Servron ships and the training required for at-sea replenishment. China is buying into this capability, which makes its fleet more effective because warships can remain at sea for longer periods.
A combination of lower birth rates and longer life expectancies has conspired to create new geometric shapes. These two major demographic shifts are so significant that Peter Drucker predicted that historians, looking back at the 20th Century, will view the demographic changes as the most important events of the century (more so than technology, industrialization, globalization and so on).
The first is substantially lower birth, or fertility, rates. Rates are falling below the replacement level of 2.1 children per woman, stabilizing at about 1.85 children per woman in many parts of the world. Western Europe, the U.S., China and Japan are all under replacement level today. China fell from 5.8 children per woman in 1950 to 2.3 in 1980 (before the start of the One Child policy). Africa fell from about 5 children per woman in 1950 to just over 4 by 2000. Children have shifted on the "great balance sheet of life" — from assets in an agrarian society to liabilities in an industrial society — and people are choosing to have fewer.
The second big change is longer life expectancies. Human life expectancy averaged about 35 years for most of the last 1,000 years of human history, but has more than doubled to 75-80 years today. We are experiencing, for the first time, a new life stage: people have never before had a period of non-child-rearing, healthy, active adulthood. In China, the number of 20-24-year-olds and those 65 and older is about equal today; in just 20 years, by 2030, the old will outnumber the young by 150 million.
For companies, the new geometry requires rethinking many aspects of our organizations. Many of today's organizational designs and talent management practices are based on the idea that the population, and specifically the workforce, is shaped like a pyramid. As we prepare for a workforce in which older workers outnumber the young, we need to redesign many of our standard approaches.
Here's an initial list of practices that are derived from the old assumption of a population pyramid, along with the questions you should begin asking:
- Mandatory retirement — Will there be enough young people to replace those who leave? For many skill sets, the answer is increasingly "no." How can you make your workplace more attractive to older workers, to encourage talented employees to stay on longer?
- Linear careers — Do people always want to take on "more?" Career paths today assume that taking on more responsibility is the only logical move. One way to make your organization more attractive to older workers is to offer options to do less. Many people would like to stay active, but few want to work as hard at age 70 as they did at age 50. How can you create bell-shaped-curve career options that allow people to decelerate toward the end of their work lives?
- Headcount-based metrics — Are your metrics limiting your ability to tap the widest possible pool of talent? Are you able to job-share and use part-time and cyclic workers?
- Recruiting initiatives aimed primarily at young hires — If your business model depends on an influx of young talent, recognize that you're going to be challenged to hire a disproportionate share of the available hires. Are you really good at recruiting? If you can use talent at multiple levels, make sure your recruiting investments are geared to seek out people of all ages, from a variety of sources.
- Career paths that always move "up" — Promotion has become a standard expectation: a primary form of reward and the key source of variety. It's how we get to do new things and make more money. This won't be a feasible approach in the new geometry. How can you give people variety without moving them up? How should you provide additional compensation opportunities? Is it appropriate to pay for breadth (people who can fill multiple roles)?
- Prestige-based titles — Titles can lock organizations into an "always up" career path design. People are reluctant to take on a different role if the title associated with it isn't as prestigious as the one they currently hold. How can you begin to move toward task-based (Leader of xyz Initiative), rather than prestige-based (Vice President), titles?
The behavior engaged in by bullies: bullying
Researchers generally accept that bullying contains three essential elements: "(1) the behavior is aggressive and negative; (2) the behavior is carried out repeatedly; and (3) the behavior occurs in a relationship where there is an imbalance of power between the parties involved."
Bullying is broken into two categories: 1) direct bullying, and 2) indirect bullying, also known as social aggression. Direct bullying is the form most common to male bullies. Social aggression or indirect bullying is most common to female bullies and young children, and is characterized by forcing the victim into social isolation. This isolation is achieved through a wide variety of techniques, including: spreading gossip, refusing to socialize with the victim, bullying other people who wish to socialize with the victim, and criticizing the victim's manner of dress and other socially-significant markers (including the victim's race, religion, disability, etc).
Bullying can occur in many situations, including school or college/university, the workplace, between neighbours, and between countries (see jingoism). Whatever the situation, a power imbalance is typically evident between bully and victim. To those outside the relationship it may seem that the bully's power depends only upon the victim's perception, the victim being too intimidated to put up effective resistance. However, the victim usually has just cause to be afraid of the bully, due to the threat, or actual carrying out, of physical or sexual violence, or the loss of livelihood. Bullying (in addition to ignorance) is behind most claims of discrimination in the workplace.
Types of bullying
- Main article: Types of bullying
Bullying is when someone repeatedly acts or says things to have power over another person. Bullies mainly use a combination of intimidation and humiliation to torment others. The following are some examples of bullying techniques:
- Calling the victim names; accusing the victim of uselessness in all of his or her pursuits
- Spreading gossip and rumours about the victim
- Theft of minor belongings of the victim's
- Demoting the victim without just cause
- Making the victim do what he or she does not want to do, using threats to ensure that the victim follows orders
- Cyberbullying through the use of various information technologies
- Repeated physical assault on a person, be it to his or her body or property
- Getting a victim into trouble with an authority figure, or incurring disciplinary action against the victim, for an indiscretion either not committed by the victim or for one that is exaggerated by the bully
- Making derogatory remarks about a person's family (particularly their mother), home, personal appearance, sexual orientation, religion, race, income level, or nationality
Locations of bullying
Bullying can occur in schools, universities, families, between neighbours, and in workplaces.
Schools: In schools, bullying usually occurs in areas with minimal or no adult supervision. Common places include the school bus, cafeteria, hallways between classes, bathrooms, and the school-yard during recess.
An extreme case of school-yard bullying is that of an eighth grader named Curtis Taylor at a middle school in Iowa, who had been the victim of continuous bullying for three years, including name-calling, being bashed into a locker, having chocolate milk poured down his sweatshirt, and vandalism of his belongings. This drove him to suicide on March 21, 1993. Some bullying experts have termed this extreme reaction "bullycide".
In the 1990s, the United States saw an epidemic of school shootings (of which the most notorious was the Columbine High School massacre). Many of the children behind these shootings claimed that they were the victims of bullies and that they resorted to violence only after the school administration repeatedly failed to intervene. In many of these cases, the victims of the shooters sued both the shooters' families and the schools.
As a result of these trends, schools in many countries strongly discourage bullying, with programs designed to teach students cooperation, as well as training peer moderators in intervention and dispute resolution techniques, as a form of peer support.
Since media coverage has exposed just how widespread bullying is, juries are now more likely to sympathize with victims. In recent years, many victims have been suing bullies directly for intentional infliction of emotional distress, and including their school as a defendant under the principle of joint and several liability. American victims and their families have other legal recourse, such as suing a school or teacher for failure to adequately supervise, or for racial or gender discrimination, harassment, or other civil rights violations. Special education students who are victimized may sue a school or school board under the ADA or Section 504.
Bullying in schools (or other institutions of higher education) may also take the form of reduced grading, non-return of assignments, or segregation of competent students by incompetent or non-performing teachers, for example to protect the reputation of a college. This ensures that their programmes and internal code of conduct are never questioned, and that parents (usually the ones paying the fees) are made to believe that their children are unable to cope with the course. Typically, these attitudes serve to create the unwritten policy of "if you're stupid, you don't deserve feedback; if you're good, you don't need it". Frequently, such institutions (usually in Asian countries) run a franchise programme with foreign (usually Western) institutions, with the clause that the foreign partners have no say in local grading or in the codes of conduct of staff on the local end. It serves to create a class of "educated fools": people with degrees who have not learned to adapt to situations and create solutions by asking the right questions and solving problems.
- Main article: Bullying in schools
Workplace: In the workplace, bullying is now one of the most contentious issues in the occupational health and safety arena.
However, with respect to workplaces, there are few localities that are governed by legislation which specifically targets workplace bullying. This is because lawmakers fear that those rules could be used as leverage in other industrial or interpersonal matters. Therefore most bullying claims are conducted under discrimination laws. In the United Kingdom bullying in the workplace is against the law under The Health and Safety at Work Act 1974.
- Main article: Workplace bullying
Cyberspace: Cyberbullying occurs in electronic space. It "involves the use of information and communication technologies such as e-mail, cell phone and pager text messages, instant messaging, defamatory personal Web sites, blogs, and defamatory online personal polling Web sites, to support deliberate, repeated, and hostile behaviour by an individual or group, that is intended to harm others." - Bill Belsey
- Main article: Cyber-bullying
Familial: Bullying in the family is normally ignored by society unless it includes a form of physical or sexual abuse. Once it does, outside parties such as the police and social services can get involved if the victim speaks up, or if the abuse has gone too far.
Neighborhood: Between neighbours, bullying normally takes the form of intimidation through nuisance behaviour, such as excessive noise to disturb sleep and normal living patterns, and reports to authorities such as the police for minor or made-up incidents. The purpose of this behaviour is to make victims so uncomfortable that they move from their property. Note that not all nuisance behaviour is bullying, as some individuals are unaware of other people's feelings and of the havoc they are causing.
Military: Bullying in the military may occur when a superior persists in negative behavior toward his or her subordinates. Some argue that this behavior should be allowed because the military is not subject to normal civilian laws. Because military bullying is shielded from open investigation, subordinates may commit suicide for lack of legal recourse; Deepcut Barracks in the UK is one example, where the government has refused to conduct a full public enquiry into possible military bullying. In some countries, ritual hazing among recruits has been tolerated and even lauded as a "rite of passage" that builds character and toughness, while in others systematic bullying of lower-ranking, young or physically slight recruits may in fact be encouraged by military policy, either tacitly or overtly (see dedovschina). In the Russian army, for example, older or more experienced soldiers commonly abuse (kick or punch) younger or less experienced ones.
- Main article: Bullying in the military
Effects of bullying
- Main article: Effects of bullying
Persistent bullying may have a number of effects on an individual, and in the environment where bullying takes place.
Effects on the individual include:
- Reactive Depression, a form of clinical depression caused by exogenous events
- Post-traumatic stress disorder
- Gastric problems
- Unspecified aches and pains
- Loss of self esteem
- Relationship problems
- Drug and alcohol abuse
- Suicide (also known as bullycide)
Effects on a school include:
- High levels of truancy
- High staff turnover
- Disrespect for teachers
- High level of absence for minor ailments
- Weapon-carrying by children for protection
- Legal action
- Against the school or education authority
- Against the bully's family
- See Only Wayne, a racist bullying case study in wiki format that illustrates some of the unfortunate effects of bullying on a particular school community.
Effects on the organisation such as a workplace:
- Loss of morale
- High level of sick leave absence for depression, anxiety and backache
- Decreased productivity and profit
- High level of staff turnover
- Loss of customers
- Bad reputation in industry
- Negative media attention
- Legal action
- Against the organisation for personal injury
- Against the organisation and individual bully under discrimination laws
Ways to prevent/stop bullying and strategic methods
A multitude of methods can be deployed to deal with or stop the effects of this behaviour on the individual being abused. However, many of these are unsuccessful and may require fairly ingenious and/or devious solutions, which often have to change as the bully learns ways around each tactic.
- Telling other people: This is where the victim reports the incidents of abuse against them; however, there are many problems with this method. There are often so many incidents that one cannot easily report the tremendous backlog of events without people reaching the point of disbelief (for instance, 1,460 cases of assault, roughly 3 times a day for approximately 12-15 months). Secondly, the person who is supposed to help can be a problem themselves, through incompetence or by refusing to listen.
However, telling other people may help, and telling authorities such as the police, certain charities (including the NSPCC), or parents and the head teacher can be helpful (though it may be advisable to protect one's identity by remaining anonymous when reporting). If one authority fails to take action, there are procedures for complaining against that authority, such as using inspectors or independent bodies. Some people, however, can be ineffective advice-givers, saying things such as "Punch his lights out" or "Ignore it" (thereby denying any responsibility for tackling the issue). Certain websites carry procedures on how to tackle this abuse, for instance the Government of Canada's advice and information at http://www.gov.mb.ca/stopbullying/listen.html, and some websites list contacts that can give advice on what to do.
- Fighting back: This can be a natural response (fight or flight) or a forced one. The abuser will often try to get the individual to fight back, possibly to intimidate them or to make it appear that the abuser is the one being victimised and merely defending themselves. Of course, one cannot simply allow oneself to be assaulted, and reasonable force may be the only way to avoid injury; this does not normally involve weapons. Self-defence is a controversial issue, and while in many situations it may be appropriate, like any fight it can go pear-shaped. Fighting can also lead to more severe injuries, because individuals fighting hard often do not notice pain (common on the battlefield) and so carry on despite horrific injuries. That same lack of pain can, however, be useful, as it allows a person to continue fighting without being repelled by it.
The use of weapons by both sides leads to more horrific injuries in most cases. It is generally accepted that, once under attack, an individual may fight back to defend themselves, as this is the only means of overpowering the attacker once the fight has commenced, in the same way that a cornered animal will fight when threatened because its only means of escape is straight through you. Self-defence courses are available, and instructors are often keen to teach self-defence, but they usually make clear the consequences of abusing these skills against others. Such self-defence training is known as martial arts. However, most people will not recommend fighting as the way to tackle this issue. Usually, the larger and stronger the opponent, the more likely they are to overcome you, but this is not always the case if you fight tactically (the general idea of martial arts). One must also take into consideration that the opponent (particularly an older one) may know more tactical methods of counter-attacking.
- Tactical management
There are other ways that people can cope with this abuse. For example, a person may decide to carry a dictaphone (small tape recorder) to gather evidence of the bullying, though this is illegal in most countries (although most people would probably see it as being worth the cost).
Other methods include putting school work into a briefcase rather than a school bag, to prevent the offender from vandalising school work and paperwork. This, however, may come at a price (the bully may take the briefcase from you and use it as a weapon to hit you with). Such objects may also be deemed unacceptable in a school for safety reasons, although there is no evidence of that. Money may be stored in the briefcase, but this only works if they do not force you to open the case to give them the money.
Other methods may be to lure them into a trap in which witnesses may be waiting. Witnesses may take pictures with mobile phone cameras or ordinary cameras as evidence and so forth.
Walking about with other friends as protection may scare the bully into avoiding you.
Make a record of the events as this may be able to be used to track down what is happening as evidence for making a complaint. Also record any actions from the people you talk to about this abuse, if there are reactions that you do not want such as being told off for complaining; write down who these people are and what they have done too.
Finding other victims may also act as evidence. Ask them to back you up!
Changing school or class-rooms are another way to avoid contact with these people.
- Legal action
Using your evidence, it may be possible to take legal action against the offender, possibly by suing, claiming compensation, pressing charges, or going to the media. In the United Kingdom, such actions would be discussed with a legal adviser, solicitor or the Citizens Advice Bureau. It is an offence to assault people, sexually attack them, use threatening behaviour, inflict psychological abuse such as pestering and insulting, make death threats, blackmail, defame someone's character, and so forth. People who are responsible, such as teachers, can be fired for not doing their jobs if they are found to have allowed abuse to go on without proper investigation.
- The Fight That Never Ends by Tim Brown
- Bullying at Work: How to Confront and Overcome It by Andrea Adams
- The Bully at Work: What You Can Do...by Gary Namie and Ruth Namie
- Bully in sight: How to predict, resist, challenge and combat workplace bullying by Tim Field
- Bullycide, Death at Playtime by Neil Marr and Tim Field
- A Journey Out of Bullying: From Despair to Hope by Patricia L. Scott
- Peer Abuse Know More! Bullying From A Psychological Perspective by Elizabeth Bennett
- Aggressive behavior
- Antisocial behavior
- Bullying in academia
- Bullying in teaching
- Emotional abuse
- Physical abuse
- School bullying
- School violence
- Excellent set of links to bullying resources
- Bullying Online A UK charity
- Bullying and emotional intelligence
- ACAS Information on Bullying and Harassment at Work (go to "Our publications" and search for "Bullying")
- www.bullying.org "Where you are NOT alone!"
- Bullying in schools (Australia - schools)
- Bullying in schools (UK - schools)
- Just Fight On! (Bullying in workplaces)
- How to Stop Bullies
- Peer Abuse Know More!
- Canadian anti-bullying safety database
This page uses Creative Commons Licensed content from Wikipedia.
Electoral systems and voting
As the UK prepared to go to the polls, there were calls to lower the voting age to 16 to involve young people more in elections. Would a lower voting age have helped improve voter turnout?
Answer a quiz on legal ages then look at arguments for and against changing the age.
- Some minimum legal ages
- Arguments for and against change
How do students feel about having to wait until they are 18 before they can vote?
Try a quick quiz on legal ages; there's an online version, or a worksheet version of the quiz that can be printed on two sides of A4.
- 1 - C
- 2 - B
- 3 - B
- 4 - C
- 5 - C
- 6 - A
- 7 - A
- 8 - C
- 9 - A
- 10 - B
Ask students if there are any issues in the news they would like to be able to vote on.
If they were Prime Minister, what issues would be their top priority?
Print out a voting worksheet for each student.
Students put an X next to the issue they think is the most important.
The votes for the class can be totalled, like ballot papers at a polling station.
Which issue does the class as a whole think is the most important?
Ask one or two students, who voted for this issue, to explain why it is their top priority.
Use these two Press Pack reports to get the discussion started.
Ask the group to look at the two opposing arguments and decide on their own individual views.
In small groups storyboard a 30 second TV advert explaining what they believe and why. They should include the benefits of doing things their way, but also how they would overcome any likely problems. If they favour the status quo then they will need to find other ways of avoiding a low turnout.
Discuss how 16-year-olds having the vote might change the way politicians treat young people.
How old should you be to stand for election? (it's 21 at present)
If we didn't use age, how else could we decide when someone was ready to vote?
What does that tell us about using ages to decide rights? Is it fair?
What happens in other countries?
- All the countries in the EU and most of the other 190 countries in the world have their voting age at 18. There are about 6 countries where people vote at 15 and 16. Approximately 12 countries set the voting age at 20 or 21.
- In the UK, a general election must be held at least every five years but the Prime Minister can call one whenever she or he likes.
- You have to be at least 18-years-old to vote.
- Some other people aren't allowed to vote such as the mentally ill, criminals and peers (the nobility).
- Members of Parliament (MPs) represent everyone in their constituency, even the ones who didn't vote for them.
Archiving and compressing files is a very common task. Whether it is for backup purposes or exchanging files over the Internet, being able to handle the various archive and compression formats available is an important skill. Linux has tools to access all common archive and compression file formats.
Zip is the most popular compression format on any platform. The Zip format was originally created by PKWare in the 1980s and is now supported by hundreds of programs and used primarily to distribute software over the Internet. Linux uses Info-Zip (www.info-zip.org) to access Zip files. Tools to archive, compress and decompress files are included.
To create a Zip archive containing several files, use the following command:
$ zip archive.zip [files to zip separated by spaces]
To decompress a zip archive, use the following command:
$ unzip archive.zip
RAR is a very popular format for distributing large files on the Internet because it supports multi-volume archiving. This allows a single large archive to be broken into several RAR files of an identical size, commonly referred to as ‘disks’. This makes it much easier to transfer the archive using a relatively slow Internet connection such as a modem.
RAR files are created in sequential order, commencing with the suffix .rar for the first file, followed by .r00, .r01, etc. for each subsequent archive.
To create a multi-volume RAR archive, use the following command:
$ rar a -v<size> archive.rar [files to archive separated by spaces]

The ‘a’ command tells RAR to create an archive. The ‘-v<size>’ switch splits the archive into disks of the specified size (for example, -v100m produces 100MB volumes).
To decompress an RAR archive, place all of the disks in the current directory and type:
$ rar x archive.rar
The ‘x’ command tells RAR to extract the archive with full paths intact. RAR will automatically decompress the disks in the correct order.
Under UNIX, archiving and compressing tasks are given to different programs.
Tape ARchiver or tar, is the standard archiving tool for UNIX, and was originally designed to be used with tape backups. The tar program takes a list of files and combines them into a single file. No compression is applied so the file will take up approximately the same amount of space as the original files.
To create a tar archive, use the following command:
$ tar cvf archive.tar [files to tar separated by spaces]
The ‘c’ switch tells tar to create a new archive. The ‘v’ switch turns on verbose mode to make diagnosing errors easier. The ‘f’ switch specifies that the archive filename follows.
To extract a tar archive into the original list of files, use the following command:
$ tar xvf archive.tar
The ‘x’ switch tells tar to extract files from an archive. The other switches have the same functionality as above.
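Before extracting an archive from an unfamiliar source, it is often useful to list its contents first. tar's 't' switch does exactly that; a short sketch with placeholder file names:

```shell
# Create two placeholder files and archive them.
echo "one" > f1.txt
echo "two" > f2.txt
tar cvf archive.tar f1.txt f2.txt

# 't' lists the contents of the archive; adding 'v' shows sizes and dates.
tar tvf archive.tar
```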
GZip is a popular compression tool, and files compressed this way end in the suffix .gz. It is most commonly applied to tar archives, resulting in files ending with the suffix .tar.gz.
To create a GZip compressed archive, first create a tar archive of the files you wish to compress and then use the following command:
$ gzip archive.tar
To extract a GZip compressed file, use the following command:
$ gunzip archive.tar.gz
You can also take advantage of UNIX pipes to use gunzip and tar together:
$ gunzip -c archive.tar.gz | tar xvf -
The ‘-c’ switch outputs the contents of the decompressed archive.tar.gz to the tar command, which then extracts the individual files from the archive.
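The pipe can also run in the other direction when creating an archive, and most modern tar builds fold the gzip step into tar itself via the ‘z’ switch; this is a convenience of GNU tar (also supported by BSD tar) rather than part of the original tool. A minimal sketch with made-up file names:

```shell
# Scratch directory with sample files
cd "$(mktemp -d)"
echo "first"  > file1
echo "second" > file2

# One step: the 'z' switch filters the archive through gzip
tar czvf archive.tar.gz file1 file2

# The equivalent explicit pipeline using standard UNIX pipes
# ('f -' sends the archive to standard output)
tar cvf - file1 file2 | gzip > archive2.tar.gz

# Extract in one step; -C chooses the output directory
mkdir extracted
tar xzvf archive.tar.gz -C extracted
```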
BZip2 offers significantly better compression than GZip, but requires a lot more CPU power. BZip2 is best used to compress large files where the space savings are worth the extra time taken to compress the file.
To create a BZip2 compressed archive, first create a tar archive of the files you wish to compress and then use the following command:
$ bzip2 archive.tar
To extract a BZip2 compressed archive, use the following command:
$ bunzip2 archive.tar.bz2
You can also take advantage of UNIX pipes to use BZip2 and tar together:
$ bunzip2 -c archive.tar.bz2 | tar xvf -
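As with gzip, GNU tar can drive bzip2 itself using the ‘j’ switch, collapsing the two steps into one command (assuming a tar build with bzip2 support; file names are invented for the example):

```shell
# Scratch directory with a sample file
cd "$(mktemp -d)"
echo "large data" > notes.txt

# 'j' filters the archive through bzip2, just as 'z' does for gzip
tar cjvf archive.tar.bz2 notes.txt

# Extraction mirrors creation: swap 'c' for 'x'
mkdir out
tar xjvf archive.tar.bz2 -C out
```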
McClellan’s Map of Windham County, Vermont (Introduction to the 1983 Reprint)
An Historical Sketch of the Map
For a short period of time in the decade preceding the Civil War, an extraordinary series of detailed county maps was produced, which recorded the names and locations of homes and businesses throughout the Northeastern United States. In 1856, early in this period of county mapmaking, McClellan’s Map of Windham County, Vermont was published. This was the first comprehensive map of Windham County.

Little is known about the Philadelphian firm of C. McClellan & Co.
(the producers of the map). Of the hundreds of county maps
produced in the 1850s, the name C. McClellan appears on only
this one. In contrast, the name of the surveyor, J. Chace,
Jr., is associated with at least eighteen other county maps
in the period 1854—1860. Chace was surveyor for maps of
Rutland, Windsor and Windham counties in Vermont, as well as
for county maps in Maine, New York and New Hampshire. Like
many of the map publishers, Chace was based in Philadelphia,
although the Windham County map lists his address as Troy. Records of the survey methods used and of the sales techniques have been hard to find. Based on research done into other county
maps, however, it is possible to speculate with some
confidence. Surveyor Chace traveled the roads of Windham
County, carrying notebooks and measuring devices. He used a
wheeled odometer, probably horse-drawn, to measure distances. In populated areas he may have used a hand-propelled odometer
similar to the one pictured here. He used a compass to
determine the bearings of roads and the locations of hills
and other geographic features. As he recorded the
cartographic data, the surveyor would also note the
locations of the principal buildings, and, more importantly,
the names of the owners. This was the point of printing the
map, a new map which, for the first time, would record the
locations of all the cultural sites in the county, with the
owner’s name beside each. As he noted the homeowner’s name,
Chace, who was probably as much salesman as geographer,
would explain the significance of the new map, and invite an
advance order for it. The map was probably available only by
advance subscription. The price was about five dollars, a
substantial sum in the 1850s.
Once the field work for the map was done, Chace prepared a draft
which was no doubt shown around Brattleboro as he, or
perhaps a sales agent, attempted to sell advertising space.
Mapmakers increased their revenue by adding engraved views
of local businesses—for a fee—to the margins of their maps.
The price might be as high as fifty dollars per view. Note
that four of the seven views on the Windham County map are
of Brattleboro businesses: a melodeon factory, a furniture
factory, and Brattleboro’s two water cure establishments.
The advertising has long outlived the businesses, none of
which exist today.
Drawings, or perhaps photographs of the buildings were
prepared and sent to Philadelphia along with the draft map.
In Philadelphia, the map’s designers assembled the various
parts into a composite map, added a decorative border and
engraved the whole onto large stone printing plates. The map
was printed on four separate sheets which were then fitted
and glued to a canvas backing. Each map was then hand
colored. Different hues were applied to each town; slightly
darker shades defined the town lines. The finished maps were
varnished and mounted on wooden rollers.
The original Windham County map is not commonly seen today,
which suggests that only a few may have been printed. It is
among the rarest of the Vermont county maps. Its scarcity
may be due to a poor sales effort by the map’s publishers.
Searches in old newspapers have failed to uncover
advertisements for the map of the sort used in nearby
Cheshire County, New Hampshire, where a similar—and more
commonly seen—map was produced in 1858.
Several features of the Windham County map suggest the relative
inexperience of its publishers. The large lettering gives
certain areas of the map a cluttered appearance. In places
names are hard to read, due both to the handwriting of the
engraver, and to flaws in the actual printing. Additionally,
the map’s accuracy in some instances is questionable. Road
layouts, for example, are in many areas only generally
Variations in name spelling attract special attention. Some
are clearly wrong, like “Birce Street” (Birge Street) in
Brattleboro and “B.S. Paulding” (B. Spaulding) in
Londonderry. Errors like these probably crept into the map
when the Philadelphia engravers interpreted the surveyor’s
notes. Other odd spellings are less easily explained. Do the
entries “Parkkus” in Townshend and “Edwads” in Wardsboro
reflect misengravings? Or do they perhaps give us clues as
to how some names were pronounced in 1856? We don’t know
whether the surveyor actually checked the spelling of each
property owner’s name, or whether he simply wrote down what
he heard. Throughout the map can be seen different spellings
of nearly identical names. A good example of this is seen on
the maps of Brattleboro and Guilford. There, within a few
miles of each other, are seen the family names “Akley,”
“Acley” and “Ackley.” Is this variance due to mapmaker
error, or did these neighboring, and possibly related
families actually spell their names three different ways?
The publication of McClellan’s Map of Windham County in a
convenient atlas form should encourage the use and study of
this pioneer document of Windham County history. Answers to
the questions raised by this map may become evident with
time and further study. Comments by readers on the map’s
accuracy and insights into how this important old document
was prepared are welcomed by the publisher.
Map of Hillsboro County, New Hampshire (Introduction to the 1982 Reprint)
An Historical Sketch of The Map
The Map of Hillsboro County,
1858 is a singular historical document. The result of
the most comprehensive survey yet made of these towns, the
map pinpoints the names and locations of every residence,
workplace, church and school. The geographic features which
give our region its charm and character are carefully
displayed. The map, like later gazetteers, presents
important demographic data: population and agricultural
statistics, and substantial city directories. The
birthplaces of prominent Americans—Franklin Pierce and
Horace Greeley among others— are given special treatment.
The publishers, Smith, Mason & Co. of Philadelphia, published
similar maps of other New Hampshire counties. Publication
was announced in local newspapers during the winter of
1856-57. Offices were set up in Manchester and Nashua where
prospective customers could view preliminary plans for the
work. Advance orders were taken for the map, at five dollars
per copy. Prominent citizens allowed their names to be used
in the map’s advertisements, testifying to the merits of the
map, and no doubt assuring it of financial success.
The map was printed on four separate sheets (probably on large
stone printing plates) and assembled and glued together onto
a cloth backing. Each copy was then hand-colored in several
different hues, varnished, and mounted on wooden rollers.
The large size—five feet on an edge—has often proved an
impediment to display. Copies have commonly been consigned
to storage, usually in attics, where they have suffered the
adverse effects of heat and leaky roofs. Originals in good
condition today are rare items.
Maps and plans made prior to the 1850s were simple affairs,
usually commissioned by government, showing only political
boundaries, major roadways, and an occasional mill or
tavern. With few exceptions (Nashua, Milford and
Manchester), no detailed town maps preceded the 1858 map.
Thus it becomes the first “road map” for most of the
Hillsborough County towns.
Roads were measured with a wheel odometer, similar to the
wheelbarrow-like device pictured here. Some odometers may
have been drawn by horse and buggy. The surveyor would ask
the names of farmstead owners as he passed by, and would
surely add a brief sales pitch for the new map; after all, the map would carry the name of the resident engraved upon it.

It is questionable whether surveyor J. Chace, Jr. personally
measured all these roads. Perambulating them all would have
required many months. As Chace is surveyor of record for no
fewer than 21 different county maps during the period
1854-1860, it is likely that assistants did most of
the hard work. The original road surveys for this
privately-produced map were the most comprehensive yet made;
this map served as the basis for later maps until the end of the century.

Hillsboro County: Differences among Several States of the Map
While preparing the Map of Hillsboro County for reproduction last year, I noticed that there are differences among the several copies of the original wall maps which I examined.

At this point I have been able to identify two different
editions of the map, and have identified a single variant
map which was probably the publisher’s proof copy.
The two editions are distinguished by the
arrangement of the map's elements. Some of the maps are
oriented to magnetic north (the first edition) while others,
more numerous in my research, are oriented to true north.
The magnetic north maps have the cartouche at the top center
of the map, slightly to the left. On the true north
copies the entire county section is pivoted to the left
(counterclockwise), and the cartouche is placed in the upper
right. The village inset maps are arranged in entirely
different locations (?). NEED PHOTO.
I found several magnetic north (“original”)
editions in a finished state: the maps were assembled onto
cloth backing, varnished, and mounted onto wooden rollers.
This was the conventional finished format. The copies
I examined were owned by private individuals.
But interestingly, there is a single
magnetic north copy in the library of the New Hampshire
Historical Society which is quite different from the other
magnetic north copies – many sites have different names.
This copy, unlike the others observed, is unvarnished, and
was never assembled and mounted. The NHHS copy is 4 pieces
of paper, nicely printed, with coloring only on the town
My comparison of the maps was not
comprehensive, but I did notice several dozen differences.
On the Brookline village map the NHHS map shows a “W.
Gilson” on the right side of a road; the more finished maps
label this site “Gilson & French”. The Mont Vernon
finished maps show a “Ruby Hill” while no such feature
appears on the NHHS copy. There are at least 6 amendments to
the map of Amherst village.
These differences strongly suggest that the
NHHS copy might be a proof copy which was brought to New
Hampshire for marketing and accuracy-checking purposes.
Advertisements which ran in local newspapers in February stated that some
drafts of the map were already in existence (they had been
shown to prominent citizens who gave testimonials in the ad)
and that more finished prints (?) would soon be available
for “…examining the work before its final compilation, in
order to make it entirely satisfactory as to accuracy, &
etc.” The changes manifested on the
varnished maps no doubt reflect corrections and amendments
suggested by the public. The entries on the varnished and
mounted maps are clearly changes (in some cases faint
lettering of the preceding entry can be discerned in the
later editions). My cursory examination indicates that most
of the changes are in the populated areas. This corresponds
with the record left in advertisements by the publishers.
They went to the larger towns to sell their maps, and would
presumably have gotten the most corrections from those
areas. Residents of outlying towns may not have had an
opportunity to examine the demonstration maps. If that is
so, then errors may be more likely in those towns if we
assume that errors would be randomly spread throughout the county.

D. Allen 1983
Oral cancer includes cancers of the mouth, lips, and oropharynx (the part of the throat at the back of the mouth). Most cases are designated as squamous cell carcinomas because they begin in the flat cells (squamous cells) that cover the surfaces of the mouth, tongue, and lips. About 42,000 individuals in the United States are diagnosed with oral cancer each year. Most cases occur in people over age 40. Men are twice as likely to be affected as women. The use of alcohol and/or tobacco is associated with approximately 75 percent of oral cancers. Other risk factors include HPV, a sexually transmitted disease; increasing age; sun exposure; and a poor diet. Treatment for oral cancer may include surgery, radiation therapy, chemotherapy, or targeted therapy.
Last updated: 6/21/2016
- What You Need to Know About Oral Cancer. National Cancer Institute. 2009; http://www.cancer.gov/publications/patient-education/wyntk-oral.pdf.
- Oral Cancer. National Institute of Dental and Craniofacial Research. August 2014; http://www.nidcr.nih.gov/OralHealth/Topics/OralCancer/OralCancer.htm.
- You can obtain information on this topic from the Centers for Disease Control and Prevention (CDC). The CDC is recognized as the lead federal agency for developing and applying disease prevention and control, environmental health, and health promotion and education activities designed to improve the health of the people of the United States.
- MedlinePlus was designed by the National Library of Medicine to help you research your health questions, and it provides more information about this topic.
- The Merck Manuals Online Medical Library provides information on this condition for patients and caregivers.
- The National Cancer Institute provides the most current information on cancer for patients, health professionals, and the general public.
- The National Institute of Dental and Craniofacial Research (NIDCR) works to improve oral, dental and craniofacial health through research, research training, and the dissemination of health information. Click on the link to view information on this topic.
- Cancer.Net, a resource from the American Society of Clinical Oncology, provides information about oral cancer. Click on the above link to access this information.
- The American Cancer Society provides information about oral cancer. Click on the link to access this information.
- Kademani D. Oral cancer. Mayo Clinic Proceedings. 2007 Jul;82(7):878-887. | <urn:uuid:16683058-3f24-4936-a25a-91805df92ee6> | CC-MAIN-2016-26 | https://rarediseases.info.nih.gov/gard/9360/oral-cancer/resources/1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00108-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.890633 | 555 | 3.71875 | 4 |
MEngM Required Course Descriptions
During the fall term, students take four courses—a total of 48 units. The following are required unless similar prior classes can be demonstrated:
2.810 Manufacturing Processes and Systems
Introduction to manufacturing systems and manufacturing processes including assembly, machining, injection molding, casting, thermoforming, and more. Emphasis on the relationship between physics and randomness to quality, rate, cost, and flexibility. Attention to the relationship between the process and the system, and the process and part design. Project (in small groups) requires fabrication (and some design) of a product using several different processes (as listed above).
2.854 Introduction to Manufacturing Systems
Provides ways to analyze manufacturing systems in terms of material flow and storage, information flow, capacities, and times and durations of events. Fundamental topics include probability, inventory and queuing models, forecasting, optimization, process analysis, and linear and dynamic systems. Factory planning and scheduling topics include flow planning, bottleneck characterization, buffer and batch-size tactics, seasonal planning, and dynamic behavior of production systems.
2.961 Management for Engineers
Provides an overview of management issues for graduate engineers. Topics approached in terms of career options as engineering practitioner, manager, and entrepreneur. Specific topics include semantics, finance, starting a company, and people management. Through selected readings from texts and cases, focus is on the development of individual skills and management tools. Requires student participation and discussion, term paper.
2.S981 (1-2-3) Additive Manufacturing Processes
Lecture and lab subject with emphasis on novel additive manufacturing (AM) processes, including those for polymers, metals, and composites. Lectures will address materials, process physics, equipment design, and control, broader issues (standards, geometry representations, etc.), and emerging applications. Lab exercises will use state of the art equipment, and will explore process capabilities and quality. The course will culminate in a team project involving design, prototyping, and characterization of a customized object and/or innovative concept for improving an AM method.
2.S982 (0-3-3) New Process Development - "Bench to Money“
A full-term project subject where students in groups of 2-3 join a research lab in the LMP. Spending ~6 hours per week, they will learn about a new process or a hardware concept and develop an understanding of the steps and hurdles necessary to bring such an idea to commercial reality. Each group will produce a report at the end of the term.
January Term (IAP)
In January, MST students begin their Group Projects in Industry. They also participate in other activities during this Independent Study Time.
During the spring term students take three courses and a seminar (a total of 39 units), and work on their Group Projects.
2.830J* Control of Manufacturing Processes
Statistical modeling and control in manufacturing processes. Use of experimental design and response surface modeling to understand manufacturing process physics. Defect and parametric yield modeling and optimization. Forms of process control, including statistical process control, run-by-run and adaptive control, and real-time feedback control. Application contexts include semiconductor manufacturing, conventional metal and polymer processing, and emerging micro-nano manufacturing processes.
ESD 267/268J Manufacturing System and Supply Chain Design
Focuses on decision making for system design, as it arises in manufacturing systems and supply chains. Students exposed to frameworks and models for structuring the key issues and trade-offs. Presents and discusses new opportunities, issues and concepts introduced by the internet and e-commerce. Introduces various models, methods and software tools for logistics network design, capacity planning and flexibility, make-buy, and integration with product development. Industry applications and cases illustrate concepts and challenges.
2.739J Product Design and Development
Covers modern tools and methods for product design and development. The cornerstone is a project in which teams of management, engineering, and industrial design students conceive, design, and prototype a physical product. Class sessions employ cases and hands-on exercises to reinforce the key ideas. Topics include: product planning, identifying customer needs, concept generation, product architecture, industrial design, concept design, and design-for-manufacturing.
2.888 Professional Seminar in Global Manufacturing and Entrepreneurship
Students also begin their thesis project in the spring. This thesis project continues through the summer term, when students participate in industry-based group projects. This full-time project gives students a chance to apply their understanding of manufacturing fundamentals to real problems and make real-world improvements in process, material flow and logistics.
The key activity of the summer is the Group Project. The full time work at the companies ends in the middle of August. The MIT Project Thesis is completed in late August. | <urn:uuid:ce218861-eb21-4f0d-be8f-5a6fa7af1df5> | CC-MAIN-2016-26 | http://web.mit.edu/meng-manufacturing/academics/courses.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00141-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.911724 | 996 | 2.640625 | 3 |
Over the last year, one million people have fled their homes in Afghanistan to escape the brutal terrorism of the Taliban regime and to seek refuge from the worst drought to hit the country in 20 years. As many as 800,000 internal refugees seek relief within Afghanistan; thousands have crossed the border to neighboring Pakistan in search of relief, only to find death and starvation in this unsympathetic country. The government of Pakistan is using one refugee camp, Jalozai—where 80,000 starving people watch at least one child die daily—as a warning to the throngs of refugees still trying to cross the border, that Pakistan will not support them.
United Nations officials assert that millions of dollars are set aside to aid new refugees from Afghanistan, and that refusing to accept UN aid makes Pakistan complicit in the death and starvation Afghan refugees endure every day. The UN World Food Programme (WFP) estimates that if the drought in Afghanistan ended tomorrow, food assistance to refugees would need to continue until July 2002.
A selection of articles related to elements.
Original articles from our library related to the Elements. See Table of Contents for further available material (downloadable resources) on Elements.
- Working with the elements
- Four is the number of the physical universe. Both the square and the cross are symbols of material life in the three-dimensional world. The horizontal line of the cross symbolizes time, and the vertical one depicts space. The corners of the square and the...
- Everyday Earth
- When you think of "Earth" what comes to mind? Perhaps you feel the stable element of solidity and grounding. Or maybe you see Earth as the third planet from the Sun. Or for you, is Earth the rich brown soil in your own backyard? Earth is all these...
- What are the elements, and how many of them are there?
- In the context of magick and ritual, you will find that the four elements of Air, Fire, Water, and Earth are often used. These are powerful representations of natural magick. On the pentacle, our sacred symbol, each point of the star represents one element....
- Balancing Deficient Elements
- Analgous to mineral deficency a person lacking the presence of natal Planets in a particular element will also need to establish practical methods in order to balance the element in question. An example of a deficiency would be: Earth 1 Air 4 Fire 3 Water 4...
- The Elements of Color Magick
- "Color is the place where our mind and the universe meet each other." Paul Cezanne In magick colors represent a certain energy, goal, person (someone you're working for) or a non-physical being (deity, spiritual force). The magickal color meanings...
- The Perpetual Raising: Part 2
- (Part 2 of 5) Every cubic inch of the body constantly radiates our internal state of being to the environment. In return, each one also unceasingly receives psychic inputs, (feelings, thoughts, Intentions, the general state of consciousness of others, etc.)...
The Elements are described in multiple online sources. In addition to our editors' articles, see the section below for printable documents, Elements books, and related discussion.
Suggested News Resources
- NASA Infrared Observatory: "2013 Supernova Released Elements Needed to Create
- Observations made with NASA's flying observatory, the Stratospheric Observatory for Infrared Astronomy (SOFIA) indicate that nova eruptions create elements that can form rocky planets, much like Earth.
- The Learning Network | Analyzing the Elements of Art | Five Ways to Think
- Welcome to the fourth piece in our Seven Elements of Art series, in which Kristin Farr of KQED Art School helps students make connections between formal art instruction and our daily visual culture. So far we have published pieces on shape, form and line.
- Market Currents: Six elements influencing the crude oil market today
- On the 84th birthday of the most wonderful Keely Smith (no relation, by the way), the oil market is reversing yesterday's losses and going 'zooma zooma' to the upside, despite an impending solid build to crude stocks from the weekly EIA inventory report.
- Tottenham possess many elements of German football - Dortmund boss Tuchel
- "We can see a lot of elements from German football - the early ball recoveries, the compactness, the way they defend, and also the way their outstanding attacking players are included in that defending. "That is unique in English football.
- Microsoft Bing Now Displays Periodic Table And Periodic Elements In Its Search
- In addition to the Solar System we reported last week, Microsoft Bing now offers a periodic table and info about periodic elements directly in its search results page. Bing's version of the periodic table is interactive and useful.
PepsiCo To Use Potato Water To Help Run Its Chip Factories
Food giant PepsiCo is taking new initiatives to achieve more sustainability in its factories.
PepsiCo has come up with a novel plan to reduce the water consumption in its factories. It aims to recycle the water extracted from potatoes to run its potato chip plants in the UK. 80% of a potato is water and PepsiCo uses around 350,000 tons of potatoes annually.
Walter Todd, PepsiCo’s vice president of sustainability for Europe, said in an interview with The Guardian that in the process of cooking the potatoes, water is boiled off and lost, so the challenge for the company is to find out how to capture that water and use it in its operations, thus conserving the water from the mains. He says that within 10 years, they plan to source all the water for their four crisp factories in the UK from these potatoes, taking the factories off the water mains completely.
According to recent studies, vegetarians seem to eat more healthily than their meat-eating peers. “Vegan nutrition” contains more fruits and vegetables and less fat, especially saturated fat. Research suggests that eating more plant-based foods and less meat and animal products might help prevent heart problems, diabetes, and certain kinds of cancer. “The vegan recipes” truly seem to promote a healthier lifestyle.
The concern with the vegetarian lifestyle is making certain that you get the appropriate vitamins and nutrients. If a young person chooses “the vegan recipes”, then it’s vital that the young person gets enough vitamin B12, iron, zinc, and calcium. Cutting out meat without finding appropriate substitutes can deprive growing bodies of these valuable nutrients. Variety is the key to a successful “vegan nutrition” diet. This diet will include plenty of leafy greens, whole-grain products, nuts, seeds and legumes. The non-vegan vegetarian will add dairy products or eggs. “The vegan recipes”, however, restrict those items.
The “vegan nutrition” element is important if you are an athlete. Being an athlete requires the consumption of protein for the muscles. A protein bar or a shake will satisfy that requirement, but usually the choice of beans, nuts and soy is better. Many of us have a need to chew foods that crunch.
Some of the foods available for “the vegan recipes” are green leafy vegetables and orange juice, for obtaining calcium. Nerve function requires vitamin B12, which can be found in fortified breakfast cereals. To obtain iron for red blood cells, try whole grains, iron-fortified breakfast cereals, legumes such as chickpeas, lentils, and baked beans, soybeans, tofu, dried fruits such as raisins, and pumpkin seeds. To obtain zinc for growth, development, and immune function, try nuts, legumes, seeds and whole grains. For tissue and muscle growth, protein can be found in beans, grains, nuts, nut butters, seeds, soy products, tofu, and veggie burgers.
Variety is the key to being a successful, healthy vegetarian. The choice is yours to make. The internet is one source, but many of the mainstream magazines carry “the vegan recipes” that are healthy and tasty. Don’t forget the other magazines: Vegetarian Times and Yoga Journal are two magazines available with lots of “the vegan recipes”.
The State Museum of Prehistory in Halle is one of the most important archaeological museums in central Europe.
A cornerstone of archaeological preservation in Saxony-Anhalt, it houses one of the oldest, most comprehensive and most important archaeological collections in Germany. This stock of more than 15 million discoveries includes the famous Nebra Sky Disk. These exceptionally interesting artefacts are displayed in chronological order – from the beginning of the Stone Age through to the early Iron Age.
Tuesday-Friday 9am-5pm, Saturday, Sunday & public holidays 10am-6pm
Wednesday, September 07, 2005
Fifth graders take a document-based social studies test in November, and understanding political cartoons is one of its components. Here's a DBQ slide show on cartoons. The first four slides give some tips on how to understand them, but I don't think that alone will work: you just have to be aware of the contexts that cartoons deal with, and that involves teaching content. The one above is some semi-original work that could have been a New Orleans Daily News city headline.
These women first set up their own businesses after the French Revolution, in the 19th century. Most of them were former house cooks for Lyon's rich and affluent families. They started treating the canuts, silk weaving workers, to popular meals. Later, when their reputation reached beyond the edges of Lyon, the most famous of them even welcomed General de Gaulle as a VIP at their tables.
Their recipe? Simplicity and subtlety, in contrast to Parisian sophistication. They gave a feminine definition of gastronomy: honest cooking with taste and spirit, but most of all with local and seasonal top-quality ingredients.
Les mères lyonnaises: French women chefs and their guests
Their customers were not coming for a fancy experience. Most of these women chefs came from rural families and didn’t get much education. Their reception might have been surly, their characters sturdy. Little choice was given to their guests, but the quality of their meals was perfect.
For more than a century, les mères lyonnaises were more famous than all the personalities of Lyon. Édouard Herriot, mayor of Lyon and a French government minister, called Eugénie Brazier, the first woman to be awarded three Michelin stars (in 1933), "the second mayor". Les mères lyonnaises revolutionized the kitchens of Lyon and beyond.
In today's bouchons, restaurants serving traditional cuisine lyonnaise, male chefs now work in the mères' kitchens in Lyon. Mathieu Viannay, for example, has been successfully reviving Mère Brazier's bouchon for a few years: "Artichauts et foie gras", "Poularde de bresse demi-deuil" and "Paris-Brest et Pralin" are still à la carte. Paul Bocuse began his career at this same stove before becoming a worldwide star. These days, bouchons have adopted a more modern spirit, sometimes with uneven quality.
Les mères lyonnaises: French women chefs and their menu
When you are raised in Lyon in the mères' atmosphere, you learn first that the availability of ingredients designs the menu: you might not know what you will cook for lunch until you go to the open market in the morning, and you might have to change your menu every day. In fact, you don't have the mères' spirit if you are not an open-minded and creative cook. In your kitchen, you might read a recipe before preparing a meal, but as soon as your ingredients are on the countertop, put the recipe away. Just look, smell, taste and… cook the best you can. "Simply but subtly" is a French leitmotiv (a recurring melodic phrase used to suggest an idea, character, or thing).
Les mères de France: women chefs and their food
Simple, straightforward French cuisine has come from the kitchens of these exceptional women chefs. They have influenced generations of great French men and women chefs with recettes, recipes such as Salade Niçoise, Pissaladière (a famous onion tart, similar to pizza), Soupe à l'Oignon Gratinée (onion soup), Poulet au vin blanc (chicken in white wine sauce), Canard aux olives (roasted duck with olives), and Gratin dauphinois (potato gratin, usually made with gruyère cheese).
Book recommendation: Les secrets de la mère Brazier
Les secrets de la mère Brazier, by Roger Moreau, Jacotte Brazier, Roger Garnier and Paul Bocuse. Édition SOLAR (2009).
This is a biography of Mère Brazier with 400 of her recipes and some of her famous menus with comments.
Vocabulary: French to English translations
À la carte: Ordering items listed individually on a menu.
Artichauts et foie gras: Artichokes with foie gras.
Bouchons: Name for the mères' restaurants. (Today, the name for a little restaurant with traditional cuisine lyonnaise. The name "bouchon" is used only in Lyon; its English translation is "cork".)
Canard aux olives: Roasted duck with olives.
Canuts: Silk factory workers.
Gastronomy: Study of the relationship between food and culture.
Gratin dauphinois: Potato gratin, usually made with gruyère cheese.
Leitmotiv: (From the German Leitmotiv “leading motif”.) A recurring melodic phrase used to suggest a character, thing, or idea.
Les mères lyonnaises: Women chefs in the region of Lyon, France, dating back to 1759 and widely known up to the 1930s.
Michelin Guide: Reviews and rates top restaurants and world chefs with a ratings system of one to three stars. The highest rating is three stars.
Paris-Brest et Pralin: A ring-shaped choux pastry filled with praline cream.
Pissaladière: Onion pizza.
Poularde de bresse demi-deuil: Fatted chicken raised in Bresse with a stuffing of foie gras, truffles, etc…
Poulet au vin blanc: Chicken in white wine sauce.
Salade Niçoise: Mixed salad of various vegetables topped with tuna and anchovy.
Soupe à l'Oignon Gratinée: Onion soup.
Laurence Haxaire received her master's degree in Science and Technology for the Food Industry. She became a journalist and writer specializing in food and flavors after working for the flavor extraction industry in Grasse (the perfume capital of France). Laurence was born in Romans-sur-Isère, a bustling town in the southeast of France famed for its longstanding tradition of shoemaking. She was raised in Lyon, the food capital of Europe, in a family where food is part of a smart education. Her family lives in Bordeaux, France. Website.
You may also enjoy A Woman’s Paris® post, French Cuisine: Cooking schools in Paris founded by women, by Barbara Redmond who writes about extraordinary women who cook: from Anne Willan, Marthe Distel and Elisabeth Brassart, to "Les trois gourmandes" Julia Child, Simone Beck and Louisette Bertholle. Including a directory of cooking schools in Paris.
The Veuve Barbe-Nicole Clicquot and other Widowed women entrepreneurs, by Canadian writer Philippa Campsie who tells about the fast track to business independence — or indeed, any kind of independence — two hundred years ago or so, for many women, seems to have been widowhood. The story of Barbe-Nicole Clicquot, better known as Veuve (Widow) Clicquot; a story that also happened to Louise Pommery, Lily Bollinger, and Mathilde Laurent-Perrier, and a few others.
Smell and Taste, Sensation and Pleasure, by French writer Laurence Haxaire who explains the “smart” education of the French child who is taught to recognize and describe the flavours, the feeling of taste, and most importantly, why they liked it or disliked it. Her introduction to the world of flavour is all about sensations and pleasure. She urges to “tell what you feel.”
French Onion Soup – a Paris meal to remember, by Michelle Hum who recalls the aroma of sweet, caramelized onions, dry wine, and rich broth carried with the steam rising from her bowl. With the first taste — serendipity. Recipe included for Julia Child’s Soupe à l’oignon (French onion soup), from her cookbook, The Way to Cook.
Text copyright ©2011 Laurence Haxaire. All rights reserved.
Illustrations copyright ©Barbara Redmond. All rights reserved. | <urn:uuid:fd903e71-dfe0-41b3-abd6-67fdb33fb175> | CC-MAIN-2016-26 | https://awomansparis.wordpress.com/2010/11/18/women-chefs-les-meres-lyonnaises/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00196-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.917748 | 1,715 | 2.53125 | 3 |
Tom Siegfried - Science Editor, The Dallas Morning News
November 5, 2003
Antimatter, black holes and the expansion of the universe were all 'discovered' by physicists studying squiggles on paper. Now, predictions of strange quark matter, invisible stars and new dimensions of space and time set the stage for the biggest science headlines of the 21st century.
Grizzly (Ursus arctos)
Known as the second largest terrestrial carnivore in North America. The average male weighs 250 to 350 kilograms, females half that.
A full life for a grizzly in the wild is 25 years. Females have their first litter when they are between five and seven and their last at 20.
The ideal meal is a bellyful of berries, which are critical for building fat deposits to carry grizzlies through the denning period. In spring, they will prey on newborn moose, deer, elk and caribou, but 80 to 90 percent of their overall diet is vegetation.
Slumber time begins for females in mid-November, while males den up to a month later. They aren't true hibernators, but their body temperature drops and they become lethargic, yet they can remain semi-active all winter.
Polar bear (Ursus maritimus)
Known as the largest land carnivore in North America. Adult males can weigh as much as a small car: 800 kilograms. Females are about half as large.
A full life for a male polar bear is 25 years. Females often live into their late 20s. Both genders become sexually mature at four or five, but many males don't breed until eight years old or later.
The ideal meal is a ringed seal, but they will dine on marine mammals as large as a beluga whale. They eat mainly the fat and skin, leaving the meat for scavengers.
Slumber time begins in mid-October, when pregnant females den. Like grizzlies, they aren't true hibernators, but if they have not eaten for a week, polar bears are able to slow down their metabolism to conserve energy.
To learn how apps work, start with App Fundamentals.
To begin coding right away, read Building Your First App.
Android provides a rich application framework that allows you to build innovative apps and games for mobile devices in a Java language environment. The documents listed in the left navigation provide details about how to build apps using Android's various APIs.
If you're new to Android development, it's important that you understand the following fundamental concepts about the Android app framework:
Apps provide multiple entry points
Android apps are built as a combination of distinct components that can be invoked individually. For instance, an individual activity provides a single screen for a user interface, and a service independently performs work in the background.
From one component you can start another component using an intent. You can even start a component in a different app, such as an activity in a maps app to show an address. This model provides multiple entry points for a single app and allows any app to behave as a user's "default" for an action that other apps may invoke.
Apps adapt to different devices
Android provides an adaptive app framework that allows you to provide unique resources for different device configurations. For example, you can create different XML layout files for different screen sizes and the system determines which layout to apply based on the current device's screen size.
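As a minimal sketch of how configuration-qualified resources work (the file name `activity_main.xml` here is only illustrative), the same layout name can be provided under several qualified `res/` directories, and the system automatically selects the file that matches the current device:

```
res/layout/activity_main.xml          default layout
res/layout-large/activity_main.xml    chosen automatically on large screens
res/layout-land/activity_main.xml     chosen automatically in landscape orientation
```

Code that inflates `R.layout.activity_main` does not change; only the resource the system resolves it to does.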
You can query the availability of device features at runtime if any app features require specific hardware such as a camera. If necessary, you can also declare features your app requires so app markets such as Google Play Store do not allow installation on devices that do not support that feature. | <urn:uuid:8e5d4ee7-aa14-4661-aa69-62f0006a981f> | CC-MAIN-2016-26 | https://developer.android.com/guide/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00087-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.899206 | 324 | 3.6875 | 4 |
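For instance, a required hardware feature can be declared in the app manifest; the camera feature below is only an illustration, but the `<uses-feature>` element is the standard mechanism app markets use for this filtering:

```xml
<!-- In AndroidManifest.xml: declares that the app needs a camera, so
     stores such as Google Play hide it from devices without one. -->
<uses-feature
    android:name="android.hardware.camera"
    android:required="true" />
```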
Innovation and Technology News
Gene 'treasure trove' linked to puberty onset
Over 100 regions of the genome play a role in the timing of a girl's first period, a new international study has found.
Many of these regions were not previously thought to be involved in reproduction at all, say researchers reporting today in the journal Nature.
Understanding the factors involved in puberty timing is important because girls who go into puberty early are at higher risk of diseases such as type 2 diabetes, breast cancer and cardiovascular disease later in life, says Dr John Perry of the University of Cambridge, who led the research.
The study is the largest to date, combining data from 166 institutions world-wide and analysing DNA from over 182,000 women and girls.
The finding that so many genetic variants are involved in puberty timing is amazing, says Perry.
"We've identified over 100 regions of the genome that are associated with puberty timing but the analyses we are doing suggest that...possibly thousands of genes are involved. These implicate a broad range of biological pathways and processes.
"It really demonstrates that puberty timing is a much more complex biological process than we originally thought.
"The biological complexity is quite staggering really. The study represents a treasure trove of new genes for reproductive biology.
"The vast majority of genes that we identified had no evidence linking them to puberty timing at all before the study," says Perry.
However, the study did confirm the involvement of genes that previous studies had suggested contribute to puberty timing. These genes were discovered in mouse studies and in very rare human disorders where puberty is very early or very late.
Professor Grant Montgomery of the QIMR Berghofer Medical Research Institute in Brisbane also contributed to the research.
Unlike diseases such as cystic fibrosis, where one defective gene is the cause, the genetic component in puberty timing is the aggregate effect of a very large number of genes with very small effects, says Montgomery.
He says the power of combining 57 international studies, looking at over 182,000 women in total, was essential to detect such small effects.
The earliest records of puberty date back to the 1840s in Scandinavian studies, says Professor George Patton, of the Murdoch Children's Research Institute in Melbourne, who was not involved in the study.
"Puberty in the early 19th century was at around 17," he says. "The mean age in countries like Australia is now about 12.5 years."
However, he says, the timing of a girl's first period has remained fairly stable since the 1960s.
A number of environmental factors are known to be linked to early onset of puberty in girls, including premature birth, being a small baby and being obese in childhood, he adds.
"Two factors have probably been important [over time]", he says. "One is better nutrition in childhood and the other is getting rid of so many of the infections of childhood and their consequences."
Better nutrition, sanitation and vaccinations may allow girls to be fit enough to be able to reproduce younger, he suggests. | <urn:uuid:bbcc3eed-60ae-47b1-9ee4-e2c7644fae4e> | CC-MAIN-2016-26 | http://www.abc.net.au/science/articles/2014/07/24/4052003.htm?site=tv/newinventors&topic=tech | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00071-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967424 | 625 | 2.875 | 3 |
• The clouds appear every day, are widespread and are highly variable on hourly to daily time scales.
• PMC brightness varies over horizontal scales of a few kilometers, and because of AIM's high horizontal resolution, we now know that over small regions the clouds are ten times brighter than measured by previous space-based instruments.
• A previously suspected, but never before seen, population of very small ice particles was measured that is believed to be responsible for strong radar echoes from the summertime polar mesosphere.
• Mesospheric ice occurs in one continuous layer extending from below the main peak at 83 km up to around 90 km.
• Mesospheric cloud structures, resolved for the first time by the CIPS imager, exhibit complex features present in normal tropospheric clouds.
Practical strategies for teachers who share classroom teaching responsibilities
Filled with down-to-earth ideas, suggestions, strategies, and techniques, The Co-Teaching Book of Lists provides educators with a hands-on resource for making the co-teaching experience a success. Written by educator and popular teacher trainer Kathy Perez, this book gives educators a classroom-tested and user-friendly reference for the co-taught classroom.
Topics covered include: roles and responsibilities; setting up the classroom; establishing classroom climate; effective accommodations and modifications for students; goal-setting; negotiating conflicts; scheduling issues; and more.
- Author Katherine Perez is a popular presenter and workshop leader for Bureau of Education and Research and Staff Development for Educators
- Offers best practices and helpful strategies for making co-teaching a success
- Includes a wealth of ideas that are both practical and easy to implement
This easily accessible reference presents numerous positive and ready-to-use tips, strategies, and resources for collaborative teaching and student success. | <urn:uuid:24c44118-8cf7-41a1-b320-2971d9a14439> | CC-MAIN-2016-26 | http://www.ebookmall.com/ebook/the-co-teaching-book-of-lists/katherine-perez/9781118017449 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00179-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923704 | 207 | 2.921875 | 3 |
Exotic applications of optical thin films: (Left) Scanning electron microscope image of a columnar microcavity with quantum dots produced from a planar structure by a complex etching procedure. The reflecting stacks were deposited in an ion-beam sputtering system. (Right) Microcavity organic light emitting diodes with double sided light emission of different colors. J. Vac. Sci. Technol. A 22, 764-7 (2004).
This equipment is now able to deposit layers with intermediate refractive indices within homogeneous multilayer systems. Such layers are of particular importance in systems designed for use at oblique angles of incidence of light. The reproducibility and degree of control of thickness and refractive index achieved with this apparatus is now sufficient for the deposition of rugate filters.32
The ion beam sputtering system has proved to be an excellent research tool. The advantage of using in situ ion beam etching to remove thickness overshoots during the manufacture of multilayers was convincingly demonstrated in 2003.33 This system has been used with great success to produce a variety of coatings for colleagues in the Institute. Some examples of the special coatings produced in this way are facet coatings for lasers,34 mirrors for VCSELs and micro-cavity devices,35 coatings to increase the contrast and efficiency of OLEDs, and microcavity OLEDs with double-sided light emission of different colors36 and coatings for the Gemini North telescope (Altair).
In 1968, the Bank of Canada was looking for ideas about how to enhance the security of the nation’s currency. The group offered a proposal that would take advantage of the iridescent behavior of thin films.37,38 Normally this is an undesirable property that designers strive to minimize. However, when optical coatings are incorporated into a bank note, they change color with the angle of viewing in a way that cannot be reproduced by photography or xerography. Identicard Ltd., a company producing identification documents and drivers’ licenses, also expressed an interest in the invention. This proposal was destined to become the group’s largest project.39,40
Unfortunately, at the time, no equipment manufacturers were willing or able to supply coating machines that would produce the required large areas of accurate multilayer coatings at a low enough per-unit-area cost. A period of intense engineering efforts followed as the group successfully designed and developed two different processes.
Optical security devices: (Left) First- and (right) second-generation optical security devices on Canadian banknotes. The colors of the thin film systems change from gold to green to blue with increasing angle of viewing.
We developed the system for the Bank of Canada in conjunction with the Canadian Vacuum Corporation Ltd., Gastops Ltd., Lembo Corporation of Canada Ltd. and Vadeco International. The system was a semi-continuous roll-coater based on electron beam evaporation onto a Mylar web.41 The group developed a batch-type RF- magnetron sputtering system for Identicard Ltd., in conjunction with Corona Vacuum Coaters Inc. In it, a large sheet of Mylar was draped over a 1.8 m-diameter, 1.8 m-high cylindrical drum that was rotating about a horizontal axis. Currently, Canadian banknotes carry the second generation of optical thin film security devices. The group has also suggested that optical thin films be used to protect optical media.
Other special products and collaborations
Over the years, the Thin Film Group has developed a number of special multilayer systems that have been of interest to industry. For example, in the late 1980s, the group introduced thin metal layers into dielectric stacks to reduce the reflectance of various generic filter types.42,43 This technology is used in the construction of high-contrast TFEL and OLED displays,44 as well as black layer coatings for the artificial vision systems of the CanadArm for use in space shuttles and the space station.
The group has written many scientific articles about antireflection coatings, polarizing and non-polarizing beam splitters and cutoff filters. One outstanding recent development has been the Li Li polarizing beam splitter, which offers an unprecedented performance over a broad spectral region and a wide range of angles.45
The group has also had many other significant institutional interactions. Some of the companies or institutions we have worked with include Luxell, Environment Canada, FISO, KAO, the Institute of Optics in Quebec, JDS-Fitel, Lumonics Optical, Nortel and, last but not least, its own spin-off, Iridian Spectral Technologies.
NRC Thin Films Group in 2006. Left to right: Frances Lin, Li Li, George Dobrowolski, Yanen Guo, Kamil Mroz, Pierre Verly, Daniel Poitras, Xiaoshu Tong, Bob Simpson, Dan Dalacu and Penghui Ma.
Through the years
During the past 50 years, the Thin Films Group has been involved in a number of academic and commercial projects of national and international significance. It has published about 125 refereed papers and has about 25 U.S. patents to its credit. There can be no doubt that the successes of the group were due to its excellent staff and the stimulating working conditions at the NRC.
NRC scientists who belonged to the group or collaborated closely with it during the past 50 years include K.M. Baird, P.D. Carman, G. Clarke, D. Dalacu, J.A. Dobrowolski, P.D. Grant, G.R. Hanes, F. Ho, M. Laubitz, L. Li, P. Ma, N. Osborne, D. Poitras, B.T. Sullivan and P.G. Verly.
Technical staff frequently made all the difference between success and failure. They include T. Cassidy, D.G. Charbonneau, Y. Guo, L. Howe, G. Laframboise, S.H. Lewis, F. Lin, G.E. Marsh, G. Marshall, L.M. Plante, T. Quance, M. Ranger, J.D. Sankey, R.H. Simpson, X. Tong, H.T. Tran, C.J. Van der Hoeven, A. Waldorf and R.L. Wilkinson.
The group also hosted a number of guest workers who helped to expand its horizons. These include T. Akiyama, J. Ciosek, C. Holm, D. Menagh, C. Montcalm, Z. Pang, J. Shao, A.V. Tikhonravov and M.K. Trubetskov.
Last but not least, the group benefited from the influx of “young blood”—summer students who, with their enthusiasm, contributed to the projects and were frequently invited to be co-authors of scientific publications. All of these people contributed to the successes of the group and the present staff express their appreciation to them.
J.A. Dobrowolski, Dan Dalacu, Li Li, Penghui Ma, Daniel Poitras and Pierre G. Verly are with the Institute for Microstructural Sciences, National Research Council of Canada, Ontario, Canada. | <urn:uuid:dd398751-2e0b-472e-b178-d2e9d25463da> | CC-MAIN-2016-26 | http://www.osa-opn.org/home/articles/volume_18/issue_6/features/fifty_years_of_optical_interference_coatings_at_th/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00200-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.928633 | 1,540 | 2.96875 | 3 |
One of the most effective ways to jump-start our economy is to attract skilled workers and entrepreneurs from around the globe to spur innovation and create jobs here at home. As Congress debates comprehensive immigration reform, revamping our legal immigration policies must be a priority.
The immense contribution that immigrants have made to our nation's economic development is well documented: American history is rife with stories of immigrants who have built businesses from the ground up, and created thousands of jobs for native-born citizens and enthusiastically embraced America. Immigrants helped found such corporate giants as Pfizer, Kraft Foods, Intel and Google.
According to the Partnership for a New American Economy, an immigrant or a child of an immigrant is responsible for starting more than 40 percent of the American companies on the Fortune 500 list. In 2009, researcher Vivek Wadhwa noted that immigrants started 52 percent of Silicon Valley's technology companies.
U.S. immigration law has not kept pace with economic realities. The number of visas and green cards for highly skilled workers is far too low. Red tape discourages many of these workers from fulfilling their dream of U.S. citizenship. And while the U.S. leads the world in attracting international students to its outstanding universities, there is no program for the most promising of these students to seek permanent residency.
Global competition for human capital is intense. Nations like China and India understand that the growth of their economies depends on attracting the best talent in science, technology, engineering and mathematics (STEM). They are pursuing policies to achieve this goal.
One important step the U.S. can take is to revamp the H-1B temporary visa program. Each year, the H-1B program allows 85,000 skilled immigrant workers to enter the United States for employment. Of the visa slots, 65,000 are reserved for scientists, engineers and computer programmers, while another 20,000 are allocated for those with advanced degrees. However, the demand for the H-1B visa is so high, far exceeding available slots, that the U.S. government resorts to a lottery to pick winners.
Using a lottery to determine whether the world's best and brightest can enter the United States works against our national interest. According to a study by the Technology Policy Institute, if no green card or H-1B visa constraints had existed between 2003 and 2007, an additional 182,000 foreign graduates in science and technology fields would have remained in the U.S. making significant economic contributions.
The immigration reform bill recently passed by the U.S. Senate, as well as the SKILLS Visa Act endorsed earlier this year by the House Judiciary Committee, move us forward by significantly raising the annual H-1B visa program cap and setting aside additional visas for individuals with advanced STEM degrees from a U.S. school. Both bills also establish new visa and green card programs for foreign entrepreneurs seeking to establish companies in the U.S. These changes, if adopted, will help make our immigration system more focused on promoting economic growth.
We cannot forget, however, the importance of low-skilled immigrant labor, particularly those who have been living in the shadows but working in jobs that are critical to our economic growth. We also must afford these individuals the opportunity to fully participate in today's economy and become American citizens. | <urn:uuid:7f2bd73f-6e73-45c7-9329-485f13986686> | CC-MAIN-2016-26 | http://www.ocregister.com/articles/visa-248775-ocprint-economic-skilled.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00096-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.953528 | 669 | 2.578125 | 3 |
Changing Perspectives in Education
Considering Waldorf: Changing Perspectives in Education from Eugene Schwartz on Vimeo.
As Waldorf education approaches its 100th anniversary, its schools and methodologies are more widely known and practiced than ever. Waldorf methods are practiced not only in the independent schools in which they originated, but also in public school and home school settings. This is the first true documentary ever filmed about Waldorf education. We examine its successes and its achievements, but also look squarely at the controversies that swirl around Waldorf schools.
To arrange a screening of this film,
contact Eugene Schwartz
This film is available on DVD (PAL).
For more information contact:
Rudolf Steiner College Bookstore | <urn:uuid:f9c2f68e-66e8-441d-9bbe-6c1e43cfd670> | CC-MAIN-2016-26 | http://millennialchild.com/film.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00149-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932286 | 147 | 2.578125 | 3 |
This man is badass.
- Badass is defined as something or someone cool or something or someone tough, rebellious or aggressive.
When a person gets a really cool new motorcycle, it might be an example of something described as badass.
- The definition of a badass is someone who is tough, mean, violent or a bit aggressive.
The outlaw Jesse James is an example of a badass.
(comparative more badass, superlative most badass)
- (US, slang) Having extreme appearance, attitude, or behavior that is considered admirable.
- That tough guy looks badass. | <urn:uuid:b0ebe01e-d506-42ec-a634-913200a232f5> | CC-MAIN-2016-26 | http://www.yourdictionary.com/badass | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00103-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.872341 | 123 | 2.6875 | 3 |
Methacrylates in Sealant Applications
Producers and converters of goods are constantly looking for process improvements that allow them to increase productivity. Ultraviolet/electron beam (UV/EB) curing can increase the speed of production while decreasing the overall environmental impact. While UV/EB curing is well-known to producers and converters of essentially two-dimensional surfaces—such as flooring, optical media, and printed surfaces—the technology can enable process improvements in less obvious applications, like sealants and potting compounds. Industrial sealants and potting compounds, such as those used in equipment, automotive, and electric grid construction, have thus far been considered niche applications.
These applications have a different set of requirements than other UV/EB areas. In traditional UV/EB applications, the coating, adhesive, or ink is typically applied at a very thin coat weight and provides surface modification or protection. In the case of sealants, the applied chemistry is expected to bridge the gap between solid surfaces and possibly tie them together. The sealant should have a certain surface hardness that will allow compression during assembly, and it should be able to withstand mechanical stresses. Potting compounds fill in pre-determined areas to protect the contents or users of the goods.
Sealants and potting compounds are expected to perform in more adverse environments than standard UV-cured systems. These systems could be exposed to high temperature and high humidity for extended periods of time. There is also potential exposure from liquids, such as water, cooling fluids or lubricating oils. It is therefore necessary to understand the resistance of the UV-cured system to all of these potential exposures. As with any system, changes in mechanical or adhesion properties over the service life are undesirable. Understanding how certain monomers and oligomers can hold up under these adverse conditions becomes critical in assembling the best solution possible.
LEARNING WHAT WORKS
A study was undertaken to understand the key differences in monomer and oligomer backbone structure as heat aging is applied to the UV-cured sealant. The focus of the oligomer evaluations was on aliphatic urethane acrylates, as these backbones allow for differentiation of the molecular weight, backbone chemistry and functionality. By varying these oligomer design elements, applications chemists can achieve a wide range of tensile, viscoelastic, solvent resistance and liquid properties. A plethora of monomer structures could potentially be used in these applications. Table 1 shows a list of the prospective products.
All of the monomers selected for the final parts of this study were monoacrylates. Earlier research revealed that any higher functional monomer, such as diacrylates and triacrylates, introduced too much hardness to the systems. The higher functional acrylates could be useful at additive levels to tweak in properties, but they will not be the main diluents.
Different monomer and oligomer chemistries were examined by adopting a standardized formulation and changing compositions. The main evaluation techniques for this study were Shore A hardness and tensile testing. These two methods allow for an evaluation of the surface and bulk stiffness of the UV-cured systems. Taken individually, they tell only a portion of the story of what is happening to the properties. Taken together, they paint a clear picture of how the system is changing over exposure time.
All of the samples were cured with a Fusion UV Systems 600 W/in D lamp at 25 ft per minute for a total UV irradiation of 3.2 J/cm2. Samples for tensile testing were prepared by drawing down the samples on Q-Panel A-612 6 x 12-in. mill-finished aluminum panels. Tensile testing was performed according to ASTM D 882 on an Instron 5543 tensile tester equipped with Bluehill analysis software. Samples for Shore A hardness were prepared by creating a 7-mm-thick well on an aluminum panel using Frost King vinyl foam weatherseal. Shore A hardness was measured using a tester from The Shore Instrument & Manufacturing Co., Inc. according to ASTM D 2240. All of the samples were exposed at 85°C, with either 25% humidity (room) or 85% humidity. The 85°C/85 RH exposures were performed in a Hotpack constant temperature/constant humidity chamber, model 434304. The 85°C/25 RH exposures were performed using a LabLine Imperial V oven.
To understand what chemistries can withstand the exposure testing with only minor changes, the study began by looking at different monomers that have shown some utility. All of the evaluations were based on the formulation from Table 2, with PEtUA1 used as the oligomer. The type and ratio of the monomers used at 50% in the formulation was changed. The two monomers used in the formulation were BOEAEA and CTFA. The formulations were all applied and UV-cured at room temperature using the prescribed lamp and energy. The samples were then tested for hardness and tensile properties. Separate samples were put in the Hotpack and LabLine ovens to expose them to the different conditions for 168 hours.
The results reveal several insights. First, as the amount of CTFA is increased in the formulation, the Shore A hardness of the formulation increases. This is mainly due to the glass-transition temperature (Tg) of CTFA vs. that of BOEAEA. When UV-cured, BOEAEA yields a photopolymer with a very low Tg of -73°C (by DSC), whereas CTFA yields a Tg of 32°C (by DSC). Simply changing the Tg of the monomer affects the overall Tg of the cured system. In this case, increasing the Tg moves the cured sealant formulation into a more glassy state with higher modulus. As the formulation that contains only CTFA as the monomer shows, the modulus and Shore A increased dramatically.
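The trend described here, in which a low-Tg monomer pulls down the overall Tg of the cured blend, can be roughly estimated with the Fox equation (the reciprocal of the blend Tg equals the weight-fraction-weighted sum of the reciprocals of the component Tg values, with temperatures in kelvin). The sketch below is illustrative only: crosslinking and oligomer contributions shift the real Tg of a cured sealant, and the helper name is ours, not from the study.

```python
def fox_tg(components):
    """Estimate blend Tg (in deg C) via the Fox equation: 1/Tg = sum(w_i / Tg_i),
    with temperatures in kelvin. `components` maps each component's Tg in
    deg C to its weight fraction. A first-order estimate only; crosslinking
    in a cured system shifts the measured Tg."""
    inv_tg = sum(w / (tg_c + 273.15) for tg_c, w in components.items())
    return 1.0 / inv_tg - 273.15

# 50/50 BOEAEA (Tg -73 deg C) / CTFA (Tg 32 deg C) blend
print(round(fox_tg({-73.0: 0.5, 32.0: 0.5}), 1))  # roughly -31 deg C
```

This agrees with the qualitative observation above: even at a 50/50 ratio, the low-Tg monomer keeps the estimated blend Tg well below room temperature.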
Second, increasing the amount of CTFA in the formulation actually decreases the heat stability of the formulation. When CTFA was used at up to 30% in the formulation, the Shore A hardness and modulus of the dry and moist exposed polymers did not vary by more than 10% from the unexposed sample. Formulations with greater than 40% of CTFA increased dramatically in hardness and modulus when exposed to either dry or moist heat. The percentage of CTFA in a formulation would therefore be limited to avoid stiffening of the UV sealant during exposure. To further confirm that CTFA causes stability issues, PEA was substituted for the monomer and evaluated.
Comparing Figure 3 to Figure 1 and Figure 4 to Figure 2, one can see how PEA performs vs. CTFA. Substituting PEA for BOEAEA in the formulation gave an even, stepwise increase in the Shore A hardness of the UV-cured formulations. After dry and moist heat exposure, the hardness remained within 10% of the unexposed sample. Interestingly, increasing the amount of PEA in the formulation did not dramatically increase the modulus of the system. After aging, the modulus of the samples stayed within 10% of the unexposed samples. Quite simply, the evidence suggests that formulations based on PEA offer superior dry and moist heat aging vs. those based on CTFA.
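The stability criterion applied throughout this study, aged properties staying within 10% of the unexposed sample, is simple to state in code. The numeric values below are hypothetical, not measured data from the article:

```python
def within_tolerance(unexposed, aged, tol=0.10):
    """Return True if an aged property (Shore A, modulus, ...) stayed
    within a fractional tolerance `tol` of the unexposed value; this is
    the stability criterion used throughout the study."""
    return abs(aged - unexposed) <= tol * unexposed

# e.g. an unexposed Shore A of 60 tolerates 54-66 after 168 h of aging
print(within_tolerance(60, 64), within_tolerance(60, 75))  # True False
```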
As those who have experience with UV- or EB-cured formulations know, the oligomer is the major contributor to the overall properties of the system. The monomer utilization and properties described previously can be used to modify the base properties of the oligomers themselves. In this work, urethane diacrylate oligomers showed the best utility for sealant and potting compound applications. Use of these oligomers allows for changes in backbone structure and molecular weight, as well as good analysis of key properties. The initial evaluation was to look at the properties of a polyether, polyester or polycarbonate backbone aliphatic urethane diacrylate of similar molecular weights. In this, the contribution of the specific backbones to the overall properties of the system was isolated.
Figure 6 shows that at CTFA levels of less than 40%, all the formulations showed similar modulus. By contrast, Figure 5 shows some differentiation in the oligomer backbones. The polycarbonate-based oligomer has a significantly higher Shore A hardness than the polyester- or polyether-based ones. The polyether-based oligomer, PEtUA1, actually showed higher Shore A and modulus than the standard polyester- and hydrophobic polyester- backboned PEsUA1 and PEsUA4.
After exposure to the 85°C/85 RH conditions for 168 hours, all of the sealants showed changes in their Shore A hardness. PEtUA1 showed the best stability, as the Shore A for the formulations with greater than 40% CTFA maintained their hardness within 10% of the unexposed sealant. The good stability is probably due to its balance of hydrophobicity and hydrophilicity, which allows water vapor to pass through the UV-cured film with minimal effect. Also, the lack of ester linkages within the oligomer backbone eliminates one possible area of failure under these conditions.
The polycarbonate backbone showed a slight decrease in hardness. The standard and hydrophobic polyesters both showed a dramatic increase in hardness after the exposure. For these two products, note that the formulation does not contain organic or inorganic acids. Had the formulation contained those compounds, the polyesters might have hydrolyzed under the high heat and humidity conditions. Therefore, when the cured sealants were exposed to high temperature and humidity, the polyether backbone maintained its properties better than the other backbone chemistries.
As with any study of urethane acrylate oligomers, it is interesting to examine the effect of molecular weight on the properties of the system. The core of the urethane acrylate oligomer itself is changing in molecular weight and is effectively increasing the length between crosslinks. Of the PEsUA1 and PEsUA2 products, the PEsUA2 has a significantly higher molecular weight, yet only shows a large decrease in Shore A hardness once high levels of PEA are used in the formulation. At lower levels, the molecular weight differences have little effect. The PEsUA3 is dramatically higher in molecular weight, as demonstrated by how much softer the sealant is when based on that oligomer. In addition, the PEsUA3-based formulation has twice the viscosity of the other two. In essence, increasing the molecular weight of the oligomer is an effective tool to manipulate the overall hardness of the system.
It is one thing to be able to withstand temperature and humidity conditions for a period of time, but many of the targeted applications could expose the photopolymer to liquid chemicals. Therefore, it is important to understand how well the UV-cured system can withstand long-term chemical exposure. To that end, we changed the formulation based upon what we learned from earlier work.
Of all of the testing so far in the project, this test yielded the most surprising results. Acrylate coatings soaked in hot water for extended periods of time will typically exhibit some changes in their physical properties. In the case of these samples, the high film thickness would be expected to produce a much more pronounced effect.
The samples were tested immediately after removal from the water so any water trapped in the photopolymer could be seen through changes in its properties. In fact, after 168 hours (one week) in 85°C water, the effect on the hardness of the photopolymer was negligible. All of the Shore A hardness levels were within 10% of their original values. The cured formulations have a good amount of polarity built into them and would be expected to draw in a significant amount of water during the test. That clearly did not happen and lends to the good utility of UV- or EB-cured chemistries in applications where chemical resistance is needed.
This study shows that UV- or EB-cured acrylate chemistries have the necessary performance characteristics for sealant and potting compound applications. Requirements such as hardness, elongation, resistance to dry and humid heat, and chemical resistance can be met when the correct monomers and oligomers are used. Cured photopolymer properties can be predictably manipulated by changing the backbone structures and molecular weight of the various components. Polyether-backboned monomers and oligomers showed the best overall performance and maintained that performance when exposed to both dry and humid conditions.
Editor’s note: The author presented a version of this article at RadTech 2012. | <urn:uuid:5e556599-c816-4f95-8bf9-e17791cd7f34> | CC-MAIN-2016-26 | http://www.adhesivesmag.com/articles/91629-methacrylates-in-sealant-applications?WT.rss_f=Curing&WT.rss_a=Methacrylates+in+Sealant+Applications&WT.rss_ev=a | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00050-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949702 | 2,618 | 2.9375 | 3 |
The software performs an operation at a privilege level that is higher than the minimum level required, which creates new weaknesses or amplifies the consequences of other weaknesses.
New weaknesses can be exposed because running with extra privileges, such as root or Administrator, can disable the normal security checks being performed by the operating system or surrounding environment. Other pre-existing weaknesses can turn into security vulnerabilities if they occur while operating at raised privileges.
Privilege management functions can behave in some less-than-obvious ways, and they have different quirks on different platforms. These inconsistencies are particularly pronounced if you are transitioning from one non-root user to another. Signal handlers and spawned processes run at the privilege of the owning process, so if a process is running as root when a signal fires or a sub-process is executed, the signal handler or sub-process will operate with root privileges.
Time of Introduction: Architecture and Design
Modes of Introduction
If an application has this design problem, then it can be easier for the developer to make implementation-related errors such as CWE-271 (Privilege Dropping / Lowering Errors). In addition, the consequences of Privilege Chaining (CWE-268) can become more severe.
An attacker will be able to gain access to any resources that are allowed by the extra privileges. Common results include executing code, disabling services, and reading restricted data.
Likelihood of Exploit
This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session. These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules.
Use monitoring tools that examine the software's process as it interacts with the operating system and the network. This technique is useful in cases when source code is unavailable, if the software was not developed by you, or if you want to verify that the build phase did not introduce any new weaknesses. Examples include debuggers that directly attach to the running process; system-call tracing utilities such as truss (Solaris) and strace (Linux); system activity monitors such as FileMon, RegMon, Process Monitor, and other Sysinternals utilities (Windows); and sniffers and protocol analyzers that monitor network traffic.

Attach the monitor to the process and perform a login. Look for library functions and system calls that indicate when privileges are being raised or dropped. Look for accesses of resources that are restricted to normal users.

Note that this technique is only useful for privilege issues related to system resources. It is not likely to detect application-level business rules that are related to privileges, such as if a blog system allows a user to delete a blog entry without first checking that the user has administrator privileges.
Automated Static Analysis - Binary / Bytecode
According to SOAR, the following detection techniques may be useful:
Highly cost effective: Compare binary / bytecode to application permission manifest.
In the following Python fragment, the program raises its privileges to create a new user's home directory:

try:
    raisePrivileges()
    os.mkdir('/home/' + username)
    lowerPrivileges()
except OSError:
    print('Unable to create new user directory for user:' + username)
    return False
While the program only raises its privilege level to create the folder and immediately lowers it again, if the call to os.mkdir() throws an exception, the call to lowerPrivileges() will not occur. As a result, the program is indefinitely operating in a raised privilege state, possibly allowing further exploitation to occur.
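A sketch of the corrected pattern: moving the privilege drop into a finally block guarantees it runs even when the directory creation fails (avoiding CWE-271). The raise_privileges() and lower_privileges() helpers here are stand-ins that toggle a flag so the behavior can be exercised; a real implementation would call platform privilege APIs.

```python
import os

privileged = False  # stand-in for the process's effective privilege state

def raise_privileges():
    global privileged
    privileged = True

def lower_privileges():
    global privileged
    privileged = False

def make_user_dir(username, mkdir=os.mkdir):
    """Corrected pattern: the finally block lowers privileges even when
    mkdir raises, so the program never stays in a raised state."""
    raise_privileges()
    try:
        mkdir('/home/' + username)
    finally:
        lower_privileges()
```

The mkdir parameter is injectable purely so the failure path can be tested without touching the real filesystem.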
The following code calls chroot() to restrict the application to a subset of the filesystem below APP_HOME in order to prevent an attacker from using the program to gain unauthorized access to files located elsewhere. The code then opens a file specified by the user and processes the contents of the file.
chroot(APP_HOME);
chdir("/");
FILE* data = fopen(argv[1], "r+");
Constraining the process inside the application's home directory before opening any files is a valuable security measure. However, the absence of a call to setuid() with some non-zero value means the application is continuing to operate with unnecessary root privileges. Any successful exploit carried out by an attacker against the application can now result in a privilege escalation attack because any malicious operations will be performed with the privileges of the superuser. If the application drops to the privilege level of a non-root user, the potential for damage is substantially reduced.
This application intends to use a user's location to determine the timezone the user is in:
locationClient = new LocationClient(this, this, this);
This is unnecessary use of the location API, as this information is already available using the Android Time API. Always be sure there is not another way to obtain needed information before resorting to using the location API.
This code uses location to determine the user's current US state. First the application must declare that it requires the ACCESS_FINE_LOCATION permission in the application's manifest.xml:

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
During execution, a call to getLastLocation() will return a location based on the application's location permissions. In this case the application has permission for the most accurate location possible:
locationClient = new LocationClient(this, this, this);
While the application needs this information, it does not need to use the ACCESS_FINE_LOCATION permission, as the ACCESS_COARSE_LOCATION permission will be sufficient to identify which US state the user is in.
Installation script installs some programs as setuid when they shouldn't be.
Phases: Architecture and Design; Operation
Strategy: Environment Hardening
Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.250.2]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.
Phase: Architecture and Design
Strategies: Separation of Privilege; Identify and Reduce Attack Surface
Identify the functionality that requires additional privileges, such as access to privileged operating system resources. Wrap and centralize this functionality if possible, and isolate the privileged code as much as possible from other code [R.250.2]. Raise privileges as late as possible, and drop them as soon as possible to avoid CWE-271. Avoid weaknesses such as CWE-288 and CWE-420 by protecting all possible communication channels that could interact with the privileged code, such as a secondary socket that is only intended to be accessed by administrators.
Perform extensive input validation for any privileged code that must be exposed to the user and reject anything that does not fit your strict requirements.
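As a sketch of such strict allowlist validation, the check below accepts only conservative Unix-style usernames before they reach privileged filesystem code. The exact pattern is an assumption for illustration, not part of the CWE text.

```python
import re

# Strict allowlist: lowercase letter or underscore first, then up to 31
# characters drawn from lowercase letters, digits, underscore, or hyphen.
_USERNAME_RE = re.compile(r'[a-z_][a-z0-9_-]{0,31}')

def is_valid_username(name):
    """Reject anything that does not fully match the allowlist pattern."""
    return bool(_USERNAME_RE.fullmatch(name))

print(is_valid_username('alice'))        # True
print(is_valid_username('../etc/cron'))  # False: traversal input rejected
```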
When dropping privileges, ensure that they have been dropped successfully to avoid CWE-273. As protection mechanisms in the environment get stronger, privilege-dropping calls may fail even if it seems like they would always succeed.
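One way to make that verification concrete is to attempt to regain root after the drop and treat success as a failure of the drop. This Python sketch is illustrative: the function name is ours, and the injectable _os parameter exists only so the logic can be exercised without actually running as root.

```python
import os

def drop_privileges_permanently(uid, gid, _os=os):
    """Drop supplementary groups and gid before uid (the reverse order
    would leave us unable to change gid), then verify per CWE-273 by
    trying to regain root."""
    _os.setgroups([])
    _os.setgid(gid)
    _os.setuid(uid)
    try:
        _os.setuid(0)
    except OSError:
        return True  # cannot regain root: the drop really took effect
    raise RuntimeError('privilege drop failed: process regained root')
```

Run for real, this requires starting as root; the explicit check matters because, as noted above, privilege-dropping calls may fail even when they look like they should always succeed.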
If circumstances force you to run with extra privileges, then determine the minimum access level necessary. First identify the different permissions that the software and its users will need to perform their actions, such as file read and write permissions, network socket permissions, and so forth. Then explicitly allow those actions while denying all else [R.250.2]. Perform extensive input validation and canonicalization to minimize the chances of introducing a separate vulnerability. This mitigation is much more prone to error than dropping the privileges in the first place.
Phases: Operation; System Configuration
Strategy: Environment Hardening
Ensure that the software runs properly under the Federal Desktop Core Configuration (FDCC) [R.250.4] or an equivalent hardening configuration guide, which many organizations use to limit the attack surface and potential risk of deployed software.
There is a close association with CWE-653 (Insufficient Separation of Privileges). CWE-653 is about providing separate components for each privilege; CWE-250 is about ensuring that each component has the least amount of privileges possible.
Mapped Taxonomy Name | Mapped Node Name
7 Pernicious Kingdoms | Often Misused: Privilege Management
CERT Java Secure Coding | Minimize privileges before deserializing from a privileged context
[R.250.5] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 16: Executing Code With Too Much Privilege," page 243. McGraw-Hill, 2010.
[R.250.6] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 9, "Privilege Vulnerabilities", page 477. 1st Edition. Addison Wesley, 2006.
CWE-271, CWE-272, and CWE-250 are all closely related and possibly overlapping. CWE-271 is probably better suited as a category. Both CWE-272 and CWE-250 are in active use by the community. The "least privilege" phrase has multiple interpretations. | <urn:uuid:d099c943-daf4-4cd7-864c-a474c43e95c1> | CC-MAIN-2016-26 | http://cwe.mitre.org/data/definitions/250.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00188-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.885959 | 1,910 | 3.296875 | 3 |
In its annual survey, “Minority Religious Communities At Risk,” the First Freedom Center of Virginia observed intensified contention over the right to freedom of religious expression in both Canada and the United States. As evidence, the editors highlighted a major Canadian Supreme Court decision as well as public criticism of the conservative government’s creation of an Office of Religious Freedom; for the United States, the editors cited the litigation over the 2011 Patient Protection and Affordable Healthcare Act. The contention in both countries seemed to pit conservative religious-freedom advocates against a progressive secular establishment. However, as I argue here with the Canadian case, the situation is more complicated.
The contention centers on what it means to protect religious expression when doing so may challenge or even violate values that the government or majority upholds. For human rights advocates, this question is of pressing concern for members of resilient pluralistic democracies, which should be able to sustain the risks as well as the rewards of freedom. When we cannot endure diverse expressions of religious freedom, then we reduce the potential of freedom to mean more than the personal preferences of acceptably normal citizens or consumers.
In February 2012, the Supreme Court of Canada handed down a concise decision in S.L v. Commission scolaire des Chênes (2012 SCC 7). Catholic parents had requested an exemption for their children from a mandatory Ethics and Religious Culture program in the public school curriculum in Quebec. They argued that their sincerely held religious beliefs required them to provide moral and religious education in the context of family and church. They anticipated that the public school curriculum was aimed at introducing a variety of mores and religions. They argued that this would be confusing to young children and that it would introduce a philosophy of relativism, which is contrary to the parents’ religious beliefs. Therefore, the parents requested that their children be exempt from the program. The Board of Education refused to give an exemption. On appeal from the Court of Appeal of Quebec, the Supreme Court of Canada upheld the Board’s refusal on the following grounds:
“The suggestion that exposing children to a variety of religious facts in itself infringes their religious freedom or that of their parents amounts to a rejection of the multicultural reality of Canadian society and ignores the Quebec government’s obligation with regard to public education.”
Further, because the content of the mandatory program had not yet been developed, the Court held that there was no evidence to show that the program did in fact infringe on the parents’ or children’s right to freedom of religious expression:
“…It is not enough for a person to say that his or her rights have been infringed. The person must prove the infringement on a balance of probabilities.”
One of the first things one notices is how slim are the facts in the case. The parties were grappling over a curriculum that had not yet been designed; yet the court construed the parents’ request for exemption as tantamount to rejection of multiculturalism, escalating the stakes on all sides.
In a commentary on the case, law professor Diana Ginn observes that the majority’s approach in S. L. narrows the right to freedom of religion safeguarded in the Canadian Charter of Rights and Freedoms. Ginn recommends that it would have been better to test if the parents had a religious freedom claim at stake, and then inquire if the government’s action would infringe that right. If so, then the onus would shift to the government to prove that infringement was reasonably justified to sustain a free and democratic multicultural society. The parents, Ginn notes, were not demanding that the government present Catholicism as the one true faith; they were asking for an exemption to excuse their children from being taught that it is not. Ginn shows that by assessing the rights claim in opposition to the protection of multiculturalism, the court construed the parents’ request for exemption as a “rejection of the multicultural reality of Canadian society”. There was, however, no evidence that the parents rejected that reality, or that the exemption would foster such rejection by their children.
Professor Lori G. Beaman has previously examined the means by which Canadian courts limit religious freedom through interlocking discursive appeals to responsibility and common sense. Underlying that appeal, she intuits, is a fear of the other whose freedom might exceed or even oppose the figure of the moderate and, in Beaman’s words, legally approved “responsibilized self.” In the context of litigation, experts and governments are called upon to delimit the proper scope of religion and multiculturalism. In this way, protected rights are interpreted to fit the responsibilized self for the context of acceptable multiculturalism.
We might counter this trend with a vision of multiculturalism that does not aim at containment or accommodation of religious others. Rather, a more robust vision draws on a long history of difficult and deliberate practices of pluralism to sustain the ethos of public life. Such pluralism is not relativism, as the Catholic parents may fear; rather, it is a practice of reciprocal yet agonistic respect for various traditions of formation and family that foster individuals who become equal citizens. In such a vision, multicultural communities are not represented as commensurable options amenable to a common denominator of personal preference. They are, rather, interwoven and contesting social processes in which the texture of human freedom becomes capable, diverse, and resilient. Such a vision would oppose the anxiety that drives governments, courts, and school boards to protect Canada’s multicultural heritage by curtailing the religious freedom that has historically generated the value of that multicultural heritage for many Canadian citizens.
The Catholic parents’ request for an exemption resonates with a history of “separate” educational approaches by Catholics and Protestants in Quebec and the rest of Canada. Ann Pellegrini has observed that in the United States the ability of Protestant cultures to position their norms and affects as the norms and affects of the public per se is one of the tacit themes of secularization in America. This is true in Canada, too, although these themes arise from different genealogies. Notably, what was at stake in the rejection of the Catholic parents’ request for their children’s exemption from a mandatory course in Ethics and Religious Culture was mutual skepticism about the religious education of children, a chronically contested issue. In this regard, the religious-freedom argument is both conceptually prior to, and historically confluent with, the agonistic dynamics of Canada’s multicultural heritage. There is no evidence that Catholic education prevents Catholics from participating in Canada’s multicultural society; indeed, Catholic families, schools and hospitals have been crucial sectors for the formation of citizens who fostered Canada’s multicultural heritage with commitments to social justice and the public good.
The Supreme Court’s tough approach to the families, on the basis of the slim facts in the case, is worrisome. As Beaman observes, the Canadian courts have developed an accommodation approach to religious freedom. She argues in her book: “The language of accommodation rests on an assumption of a normal or mainstream and a benevolent dispensing of special consideration for those on the margins. It builds in inequality and maintains it.” Further, as the First Freedom Center editors observe, infringement on and violations of religious conscience are ways in which majority governments signal to minority citizens that the dominant vision will be enforced. This approach legitimates anxiety in response to diverse formations of freedom, precisely at those seams where any formations seem most fugitive from the government’s or the majority’s vision. To reduce protection of rights to accommodation and neutrality—where everyone is equally the same—reduces the value of multiculturalism because it reduces the diversities and capacities of freedom.
Religious traditions, not only Catholic traditions, encompass rich conceptions of freedom. For example, in the Augustinian Catholic tradition, freedom might be best imagined as a will that is determined by God’s beauty and love. The doctrine of original sin gives an account of the deficiencies and laxities of desire—desire that is distracted by a multitude of options, perfectly exemplified in the wanton attention of virtual or consumer choice. Freedom, theologically understood, requires education of the will through habits of patience, attention, and enjoyment of the good. It aims at the reduction of our inclination to trivialize the will as a spontaneous selector. This runs athwart the dominant secular ideal of freedom as unrestrained personal choice. Yet to engage and protect alternative practices of freedom is crucial in a multicultural society in which members aspire to embody different and dynamic capabilities for the common good.
A robust multicultural society must venture robust conceptions of freedom. Complex capabilities, such as moral imagination, discretion, and perseverance through uncertain challenges, or fidelity in the face of many options, are not built from “believing whatever you want”. Believing or doing whatever you want is frequently banal; it is the freedom of the wanton, the consumer, or the gamer, all of whom can just fail and start again. Meaningful freedom of religious expression is made out of sustained patterns of devotion, attention, creativity, and self-restraint. The right to freedom of religion epitomizes the paradox of freedom: the more meaningful an individual’s freedom, the more one reduces one’s options in expressing it. In this respect, Michael Lambek’s positioning of religion as “unfreedom” sidesteps the challenge of creating across generations the robust freedom of faithfulness and hope. Faith is itself a complex capability that requires spheres and times of unrestraint and uncertainty in order to distinguish it from coercion, compulsion, or default. Yet when religious freedom is conceived as the option to believe whatever you prefer rather than the process of practicing faith across generations, the complex capabilities of religious freedom are rendered shallow by the law. For these reasons, the government’s obligation to promote a multicultural society should not be used to curtail diverse practices of religious education. Religious individuals can only be reciprocally engaged with others if they can also be immersed in opportunities to gather, build, and transform their traditions. The common admonition—whether moral or political—to engage in mutual dialogue and adjustment with others implies this.
In S.L., the Court used the Charter protection of Canada’s multicultural heritage as a constitutional bolster against the parents’ claim rather than as a further warrant to protect it. But there are many good historical and ethical reasons to treat the constitutional mandate for multiculturalism as a warrant rather than as a limit on the right to freedom of religious expression. This would mean advancing a judicial approach and civic practice that did not merely accommodate but confidently engaged claims of religious freedom. The Supreme Court of Canada in S.L. treated the Catholic parents’ appeal for exemption from the Ethics and Religious Culture program as a reproach to a multicultural society. From another perspective, the parents’ appeal was an expression of and claim upon the depth of that heritage. Diana Ginn’s more generous response asks the court to judge whether a religious freedom claim is at stake and, if so, only then would the government’s interest in advancing a specific curriculum be weighed against the harm of any infringement on that freedom. Her approach aims to reduce the zero-sum analysis—religious freedom or multicultural reality—in a case where such opposition is contradictory. A confident and resilient democracy can support a more robust approach in which the right to freedom of religious expression is construed as fundamental to a multicultural society.
ANCIENT ARCHITECTURE IN VIRTUAL REALITY: DOES IMMERSION REALLY AID LEARNING?
This study explored whether students benefited from an immersive panoramic display while studying subject matter that is visually complex and information-rich. Specifically, middle-school students learned about ancient Egyptian art and society using an educational learning game, Gates of Horus, which is based on a simplified three-dimensional computer model of an Egyptian temple. First, we demonstrated that the game is an effective learning tool by comparing written post-test results from students who played the game and students in a no-treatment control group. Next, we compared the learning results of two groups of students who used the same mechanical controls to navigate through the computer model of the temple and to interact with its features. One of the groups saw the temple on a standard desktop computer monitor, while the other saw it in a visually immersive display (a partial dome). The major difference in the test results between the two groups appeared when the students gave a verbal show-and-tell presentation about the temple and the facts and concepts related to it. During that exercise, the students had no cognitive scaffolding other than the Virtual Egyptian Temple, which was projected on a wall. Each student navigated through the temple and described its major features. Students who had used the visually immersive display volunteered notably more than those who had used a computer monitor. The other major tests were questionnaires, which by their nature provide a great deal of scaffolding for the task of recalling the required information. For these tests we believe that this scaffolding aided students' recall to the point where it overwhelmed the differences produced by any difference in the display. We conclude that the immersive display provides better support for the student's learning activities for this material.
To our knowledge, this is the first formal study to show concrete evidence that visual immersion can improve learning for a non-science topic.
Advisor: Lowry Burgess; Stephen Hirtle; Michael Lewis; Kurt VanLehn; Peter Brusilovsky
School: University of Pittsburgh
School Location: USA - Pennsylvania
Source Type: Master's Thesis
Date of Publication: 07/07/2008
A report making recommendations for a new, international security agenda for the Amazon, says the region’s nations should link-up and circulate data on water, energy, health and food security to ensure sustainable development and tackle challenges posed by changes in climate and land use.
A failure to do so, it says, could lead to far greater economic and social disruption in the mid-term and create unprecedented challenges for South America’s political leaders.
“The data exist, but are very fragmented,” says Andy Jarvis, a programme leader at the International Center for Tropical Agriculture (CIAT), and an author of the report released by CIAT and think-tank the Global Canopy Programme last month (17 December). The report was developed with input from science experts and political leaders from Bolivia, Brazil, Colombia, Ecuador and Peru.
According to Jarvis, existing data on matters crucial for the region’s future security are out of date, and there is a lack of consistent monitoring of issues such as access to water, energy and health.
Where there are data, the links between these issues, or between data collected at a regional, national or international level, are missing, he adds.
Jarvis also laments the disjointed approach to development in the region, with resource extraction currently dominating the development agenda, instead of efforts to deliver sustainable and holistic progress.
For example, he says, few of the plans for new public hydroelectric plants in the region consider how climate change may affect river flow and so energy generation in the mid-term.
They also ignore the fact that deforestation can cut the amount of local rain available for energy production, Jarvis warns.
To end such an incoherent approach and deal with the lack of data, he says the “Amazon needs a strong commitment from every government in the region”.
Otherwise, he says, the region may see a repeat of the water security rows seen in South Asia, where Bangladesh, India and Pakistan are unwilling to share water from the Ganges river.
“If this happens in the Amazon, it will be disastrous for everyone,” he says.
The report includes two key policy recommendations to governments. One is to identify areas where water, energy, food or health security are most vulnerable in Amazonia, and to quantify the social and financial costs of that vulnerability. And the second is to establish national ‘nexus groups’ of senior experts to help inform decision-making across sectors.
“We’re talking with ministers and high authorities of the involved countries so they know and apply our recommendations,” Jarvis says.
Public Health - Degree Programs
While it is possible to gain experience in the field without an advanced degree, most public health professionals need at least a Master’s degree (MSPH, MPH) for career advancement. In general, the MPH degree will include coursework in a number of public health disciplines, such as administration, epidemiology, environmental health, and behavioral health. Public health (MSPH, MPH) programs typically take about two years. MPH programs typically require a few years of experience or another professional degree. Those without the required professional degree or experience are encouraged to consider the MSPH or MS. Dual degrees are available to those who are pursuing degrees or have degrees in fields such as nursing, law, social work, public policy, business, medicine, dentistry, and veterinary medicine.
Using the new kVA method, you only need one calculation to determine the short-circuit values at every point within an entire electrical power system.
In this day of high fault currents, it's more important than ever to protect electrical equipment from extremely high current levels. Otherwise, the equipment will explode as it attempts to interrupt the fault. But fault current calculations have always been difficult to get a handle on, until now.
The new "Easy Way" kVA approach is taking the place of the abstract "Per Unit" method of short-circuit calculations from the past. With the kVA method, you can easily visualize what currents will flow where. And you can calculate them using an inexpensive handheld calculator in moments, regardless of the complexity of the electrical power system.
This method is simple because there are no awkward "base" changes to make, since kVAs are the same on both the primary and secondary sides of every transformer. Perhaps best of all, you only need one calculation to determine the short-circuit values at every point within the entire electrical power system. With the old Per Unit method, you needed a separate calculation for each point in the system.
You can obtain short-circuit kVA values from the electrical utility company, but short-circuit power is also produced by generators and motors. The short-circuit kVA produced by a motor equals its starting inrush kVA, and the kVA produced by a generator equals its kVA nameplate rating divided by its nameplate subtransient reactance rating "Xd."
For example, a 1000kVA generator with a subtransient rating of 0.15 instantaneously produces 1000/0.15, or 6666kVA. A 100hp motor (roughly 100kVA) instantaneously produces 100/0.17, or 588kVA. If this motor and generator connect to the same bus, then the short-circuit power available at that bus is the sum (6666 + 588), or 7254kVA. If the electrical utility is rated to deliver 100,000kVA to this same bus, then the total short-circuit power available at that bus is 107,254kVA.
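The parallel-source arithmetic above can be sketched in a few lines of Python (the variable names are mine, not from the article):

```python
# Short-circuit kVA contributed by each source at the bus.
generator_kva = 1000 / 0.15   # 1000 kVA generator, subtransient Xd" of 0.15 -> ~6666 kVA
motor_kva = 100 / 0.17        # 100 hp motor (roughly 100 kVA), inrush factor 0.17 -> ~588 kVA
utility_kva = 100_000         # utility short-circuit rating at the bus, in kVA

# Parallel short-circuit sources simply add. The article truncates the
# intermediate values (6666 + 588 + 100,000) to arrive at 107,254 kVA.
bus_kva = generator_kva + motor_kva + utility_kva
```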
Using the kVA method also greatly simplifies the short-circuit power attenuation (or holdback) provided by reactors, transformers, and conductors. For example, a 2000kVA 7% impedance transformer will pass through its windings a maximum of 2000/.07, or 28571kVA of power, if infinite power flows to one side of its windings. If instead of an infinite current source, the above bus connects to this transformer, then the amount of power that will be "let through" the transformer is the reciprocal of the sum of the reciprocals of the two, or 1/(1/107254 + 1/28571), or 22561kVA. You can determine transformer impedance, reactor impedance, or cable size with the kVA method quickly enough to make "what if" calculations.
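The series attenuation rule, taking the reciprocal of the sum of the reciprocals, can be wrapped in a small helper for "what if" calculations. This is a sketch of the method as the article describes it; the function name is my own:

```python
def let_through(source_kva, *series_kvas):
    """kVA let through one or more series impedances (transformers,
    reactors, cables): the reciprocal of the sum of the reciprocals."""
    return 1.0 / sum(1.0 / k for k in (source_kva, *series_kvas))

transformer_kva = 2000 / 0.07                          # 2000 kVA, 7% Z -> ~28,571 kVA
bus_kva = 1000 / 0.15 + 100 / 0.17 + 100_000           # ~107,255 kVA at the primary bus
secondary_kva = let_through(bus_kva, transformer_kva)  # ~22,561 kVA on the secondary
```

Swapping in a different transformer impedance, or chaining a cable's let-through kVA as another argument, is then a one-line change, which is what makes quick "what if" comparisons practical.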
Comparisons over several years have found results of the kVA method to be accurate within 3% of computer calculations using expensive software, so you can even use the kVA method as a "check" on the input and output of a computer calculation. This is an excellent benefit because standard engineering procedure requires you to check calculations using a different method from the one originally used.
Editors Note: EC&M's book, "Short-Circuit Calculations The Easy Way," explains the entire "Easy Way" kVA method in a step-by-step format. Available from EC&M Books, call (800) 543-7771 to order. | <urn:uuid:2ce7aac7-98a4-4a00-92de-e7e41e27100c> | CC-MAIN-2016-26 | http://ecmweb.com/design/making-short-circuit-calculations-easy?quicktabs_11=0 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00041-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.871107 | 741 | 3.34375 | 3 |
Before Green Valley became a village, a mining camp almost took the honors in 1881 three miles to the west. A cluster of miners’ tents mushroomed beside the military road established more than a decade earlier by the army. The road from Tonto Basin followed Wild Rye Creek to its headwaters, and then continued to the mouth of Pine Creek. From there it led northward toward the Mogollon Rim. However, not far from the mining camp, a trail branched to the east and led into Green Valley. The prospectors chose this location because many mines were close by and at the end of a nearby gulch there was a spring of fresh water.
On May 1, 1881, Emer and Margaret Chilson came from Globe to the mining camp and opened a mercantile store. The store had a wooden platform with tent sidings, as did the other buildings. The Chilson store soon became the camp’s social center and primary supply center for the surrounding ranches and mines. Emer Chilson’s journal of transactions, found at the Arizona State archives in Phoenix, records the names of early Rim Country settlers, including notes on their purchases of tobacco and liquor. Burch, Cole, Craig, Gowan, McDonald, Middleton, Nance, Nash, Pyeatt, Vaughn, Vogel and Sieber are some of the names that appear. Since the camp did not have a name, the Chilsons took the prerogative of naming it Marysville, after their daughter, Margaret Mary Chilson.
The Chilson family history is so imbedded with Payson history we need to look at it more closely.
Margaret and Emer Chilson would play an important role in the development of the Payson community. Margaret Ann Birchett was born in Burleson County, Texas, Feb. 18, 1851, and while very young she was orphaned. Her maternal grandparents Cole raised her and her siblings. Living in dangerous Comanche Indian country, the children carried a gun as they walked daily to school. At the close of the Civil War the family moved to Downey, Calif., and there, Margaret Ann met Emer L. Chilson. She was 15 and he was 25 when they were married Sept. 6, 1866. They had five children, John C., Lillie Dale, Charles E., Margaret Mary, and Napoleon W. Chilson (later nicknamed “Boss”).
The family decided to return to Texas in 1875, but when they reached Globe/Miami in Arizona Territory, the prospects for mining were so good they decided to settle there. They built a house from adobe and bear grass, and proceeded to put up the first ore mill in Miami. Their sixth child, Guy, was born, and Margaret later claimed he was “the first white child born in Miami.” The morning after the baby was born, her bachelor neighbors brought breakfast to the family, consisting, she said, “of fresh fish from the Salt River, hot rolls, beefsteak, rabbit, quail and such things.”
In Miami the Chilsons organized the community’s first school and housed the teacher. The family opened the first mercantile store in old Miami with Margaret’s brother, Joe Birchett. They realized there was more money in selling to the miners than in being a miner. In 1881 the excitement of expansion wooed the family to Marysville, and they left the Miami store in the hands of brother Joe.
On Feb. 10, 1881, several months before the Chilsons arrived in Marysville, a 39-year-old Frenchman named Paul Vogel arrived there with his friend William Craig. The partners established a claim called The Single Standard Mine. Vogel had immigrated to America with his father from Alsace, France, and settled in Illinois in 1861, just in time to join the army for the Civil War.
Paul saw much action with the 24th Illinois Volunteer Infantry, and after he was mustered out in August 1864, he became a muleskinner for government contractors. While caring for mule teams he met William Craig, a wagon master, and the two engaged in various freighting enterprises.
When they heard about the gold rush in Arizona’s central mountains, they decided to try their luck as prospectors. Their Single Standard mining claim did not pay off, and after they had sunk into it several thousand dollars of borrowed money they sold the mine and began working as builders for the growing number of settlers in the Rim Country.
Their work included log fences around the ranches at Indian Gardens, just west of today’s Kohl’s Ranch on Tonto Creek, as well as the famous “mud house” that still stands on Payson’s Main Street. In later years Paul Vogel told Ernest Pieper, “You know, when I built that place it took me 30 days and I got 30 dollars.”
Vogel and Craig are known primarily for their fruit ranch on Webber Creek, where today’s Geronimo Estates subdivision is located. They called it the Spade Ranch, probably because they had to spade so much dirt in order to plant more than 1,200 fruit trees, imported from Alabama in the spring of 1884. They had a large drying oven for the fruit, and the remnants of it are there to this day. For many years they supplied Payson homeowners with luscious peaches and other fresh fruit in season as well as dried fruit out of season.
Paul Vogel died a bachelor one month short of his 88th birthday, and he is buried in the Payson Pioneer Cemetery.
The Marysville camp was short lived because the gold was running too thin to sustain it. When the area received word of a major Apache war party headed that way in 1882, Emer Chilson decided to take his family to Globe for safety. While there Margaret gave birth to their seventh child, Irene.
When they returned to Marysville they found their store had been looted of all its merchandise, and Emer decided to give it up. He traded the store to L.P. Nash of Strawberry for a cut-rate price on a nearby mine called The Golden Wonder.
It soon became evident the mine would not feed his growing family (their next child Jesse was born in 1884), and when Chilson could not make the final payments, a distant relative from California, John Robbins, rescued them by purchasing a share of the mine.
Emer and his older sons worked in the mines as far away as Bisbee to support his family, and they began raising cattle in Green Valley. That was where the future lay for this pioneer family, and Payson soon became their base of operations. Emer died in 1891, leaving Margaret with six children. His children Lillie and Guy had preceded him in death.
The Chilson men went on to develop extensive cattle ranches, while their mother remarried and came to be known affectionately as Grandma Platt.
Margaret and her sons parlayed their land holdings, selling their NB Ranch at the mouth of Pine Creek to Guy Barkdoll, a son-in-law. Boss and Charlie Chilson traded cattle to Bill McDonald for the old Burch ranch, which included today’s Payson Golf Course.
Margaret later traded that ranch to Guy Barkdoll for property on Main Street and cash. Jesse built a house for his mother and himself at 703 W. Main, saying he would never marry. But he did, falling in love with a local teacher named Lena Chapman. He built a house next door to his mother’s where he lived with his wife until his death from cancer. Margaret Ann Chilson Platt died in 1941 at the age of 90.
So the scene shifted from Marysville to Green Valley, a settlement that was soon to be named Payson.
The spring, later called Grimes Spring, was tapped with a system of pipes that brought the water into the mining camp. Some decades later, during prohibition, that canyon with its spring became a site of bootlegging operations.
Eventually the mine yielded many thousands of dollars in gold for a series of owners, but in 1980 the mine became the center of a scandal. An Arizona mining company together with a Canadian corporation sold $40,000 worth of unregistered securities under false claims about the mine’s value. The investor’s money was used to pay off the company debts. | <urn:uuid:9b3f92c4-04ca-4543-a61a-dcf7c07ace2e> | CC-MAIN-2016-26 | http://www.paysonroundup.com/news/2008/oct/01/story_payson_arizona/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00187-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.983712 | 1,783 | 2.59375 | 3 |
In the first flush of the Arab Spring many attempts were made to draw historical parallels with previous revolutions. Some looked back to the spring of 1848, when a wildfire of revolt swept through Europe, starting in Sicily but leaving few countries untouched. Those of a more optimistic disposition invoked the more successful velvet revolutions of 1989, when the countries of Eastern Europe shook off communist rule and began their faltering journey to a free market and Western-style democracy. The most anxious commentators, however, have implicitly looked to 1917, with al-Qa'ida in place of the Bolsheviks, and a second, "October Revolution" somewhere on the horizon.
The Paris Commune of 1871 offers another revolutionary precedent that deserves consideration, and not only for the accident of its 140th anniversary this year. The Commune represented for many in France the realisation of those ideals of liberty, equality and fraternity that had been repeatedly disappointed during the century since the Revolution of 1789. As with the Middle East today, the origins of discontent lay in a long period of autocratic rule, in which political cronyism had prevented the government from addressing the impact of economic problems on the working population.
In the previous summer of 1870, a failed French invasion of Germany had been swiftly driven back, leaving Paris under siege by the Prussian army. The French government was a new one. Twenty years after Louis-Napoleon Bonaparte had seized power in a coup, proclaiming himself Napoleon III of the Second Empire, defeat by the Prussians had brought an end to his dictatorial rule and he had fled into exile. The declaration of a Third Republic had seen widespread jubilation.
Yet a winter under siege, experiencing harsh privations, tempered this joy for the citizens of Paris. Frustrated in their demands for the government to raise a citizen army, the more radical socialist groups demanded a more profound social revolution. In January 1871, angry meetings spilled over into armed resistance by the citizens' National Guard. Then, at dawn on 18 March, a confrontation over the control of the city's artillery saw the wholesale flight of the government and its army to nearby Versailles. What followed in Paris were ten weeks of popular rule: a first truly socialist Republic.
So, how do the two revolutions reflect each other? The use of Twitter during recent insurrections prompted me to commemorate the Commune by tweeting the voices of the participants on both sides. And hearing the echoes across time has led me to wonder whether the present cannot inform our understanding of the past, as much as the other way around.
It was a thought that struck me first during the demonstrations in Tahrir Square when, in the wake of soldiers fraternising with the people on the street, the Mubarak regime belatedly ordered the army's withdrawal, and was then obliged to accept its return as a neutral force. One area of historical speculation regarding the Commune has long been whether the attempt by the French government to seize the National Guard's artillery in Montmartre using a regiment prone to desertion was intended to provide a pretext to decisively confront the revolutionary elements. Twenty years earlier Adolphe Thiers, the head of the government, had proposed just such a strategy of withdrawal from Paris in order to crush an earlier revolution. Was he now carrying out a long-cherished plan? Watching Mubarak's desperate attempts to hold on to power, I felt more inclined to heed the words of one of Thiers's generals that "had we spent another 24 hours in Paris we would not have been able to bring a single regiment out", and accord greater weight to cock-up than conspiracy.
By the same token, the extreme disorganisation that has characterised the insurrection in Libya continues to shed light on the situation that pertained in Paris after 18 March 1871, of which one leading socialist would remark: "Never had a revolution taken the revolutionaries so much by surprise." The likely leader of an uprising, Auguste Blanqui, had been imprisoned only days earlier, and Victor Hugo, the novelist, a possible figurehead, only recently returned from a long exile, could only huff from the sidelines – "Paris governed by nobodies, it's impossible!" – while a series of ineffectual stand-ins failed to provide firm direction.
The impossible challenge of leading the National Guard, a force fighting for the cause of equality, was dramatised for me by news scenes from eastern Libya in the early days of the insurrection, where the armed men who had joined in ousting Gaddafi seemed incapable of agreeing on much else. The result of their squabbling has even since been played out along the route of the "Highway of Death" on which they stood arguing; every advance attempted along it has been driven back. Likewise around the perimeter defences of Communard Paris, fighting fervour was constantly undermined by poor discipline. More than once, attempts to improve matters saw National Guard officers punished for their perceived reactionary inclinations.
At least, in Libya, the foreign intervention has so far been for the protection of civilians. In Paris in the spring of 1871, the situation was more akin to that in Yemen or Bahrain. Like the Saudi or Pakistani forces invited in to reassert order, the Prussians, still encamped around Paris, readily lent their gun emplacements to Thiers's army and freed French prisoners of war to join his forces. Meanwhile, Otto von Bismarck, the Prussian leader, suggested that, unless Paris's example of revolution was speedily quelled, he would have to enforce compliance.
To understand what so terrified the international guardians of established order, one need only bring to mind those Libyans interviewed in Misrata who, only just rid of Gaddafi's rule, express their love of freedom in the same breath as acknowledging that if his men returned they and their families would be killed. A similar incaution infected the Communards who lived as if hoarding memories for a bleaker future. They drank in the cafés and attended charity concerts in the theatres of Paris even as shells fell around them.
Meanwhile, the Commune's elected representatives introduced a programme of new legislation. Education and the rights of workers and women featured prominently: a ban on night bakeries; the introduction of technical schools; nursery provision to allow women to work. Most of the policies are the familiar stuff of 21st-century social democracy; indeed, opportunist echoes of their language of Proudhonist anarchism may even be detected in the rhetoric of Cameron's Big Society. To the autocratic rulers of 19th-century Europe, however, they could hardly have been more threatening: papal encyclicals would not far fall short of calling such socialism, diabolical. Today's Middle East despots may have rather different sensitivities, but they are no more tractable.
When, from late April 1871 onwards, the defences of Paris began to fail, the retribution was terrible. During the final "Week of Blood", from 20 May, upwards of 20,000 Communards were killed, some on the barricades but many more mown down by machine guns in the liquidation centres around the city. If the battle for Paris had left much of the city looking like Grozny or Fallujah, the massacres far surpassed the horror of Srebrenica. Bodies lay in mass graves, for example in the chalk mines of the Parc Monceau: few who look at Impressionist scenes of blossom, painted there only a few years later, realise the wilful act of forgetfulness they represent. To draw such approximate comparisons with the fate of Grozny or Fallujah is not comfortable but nor is it idle.
In my exploration of revolutionary terrorism, The World That Never Was, the story of the Commune occupies only the first two of 24 chapters. What follows shows how the consequences of its brutal defeat shaped the 50 years following 1871: how it inspired revolutionary movements elsewhere, haunted a younger generation who turned to terrorism, and prompted some, Lenin most notably, to draw the ruthless lessons that would allow the Bolsheviks to succeed where the Communards failed.
The radicalisation caused by the destruction of Grozny and Fallujah more than a decade ago is already being felt and its impact may increase in years to come. Western intervention in Libya has so far prevented the massacres that might otherwise have occurred, but there, in the Gulf States and elsewhere, we may only have witnessed the beginning of the civil strife.
Alex Butterworth is the author of 'The World That Never Was: A True Story of Dreamers, Schemers, Anarchists and Secret Agents'. You can see the story of the Commune unfold by following his tweets @TheCommunards @TheVersaillais and @Communehistory | <urn:uuid:bb2f0b58-6937-4617-90c7-bd06b53c8792> | CC-MAIN-2016-26 | http://www.independent.co.uk/news/world/politics/the-long-march-to-freedom-2274656.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00109-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965082 | 1,785 | 2.796875 | 3 |
Henry Harpending (1944-2016) died this past Sunday. He had a stroke a year ago, and then a second one three weeks ago, but apparently he died of a lung infection. This is one of the risks of getting older: you dodge one bullet only to get hit by another.
The cemeteries are full of people who die before their time, but this is one case where I really wish death had held off a while longer, so that he could see more of the fruits of his labors, particularly in the area of gene-culture coevolution.
No, he wasn’t the only academic to show that culture and genes have coevolved in our species. In fact, the idea probably originated with Claude Lévi-Strauss in the early 1970s:
When cultures specialize, they consolidate and favor other traits, like resistance to cold or heat for societies that have willingly or unwillingly had to adapt to extreme climates, like dispositions to aggressiveness or contemplation, like technical ingenuity, and so on. [...] each culture selects for genetic aptitudes that, via a feedback loop, influence the culture that had initially helped to strengthen them. (Lévi-Strauss, 1971)
This idea of gene-culture coevolution became popular in the 1980s through papers by L.L. Cavalli-Sforza, Robert Boyd, Peter Richerson, and Pierre van den Berghe. It then fell out of fashion because ... well, because. When Paul Ehrlich wrote Human Natures (2000), he returned to the conventional wisdom that cultural evolution had largely replaced genetic evolution in our species. As one became more important, the other became less so.
In 2007, Henry Harpending turned this thinking on its head with a study on changes to the human genome over the past 80,000 years. With four other researchers, he found that these changes actually sped up more than a hundred-fold some 10,000 years ago, when hunting and gathering gave way to farming, which in turn led to population growth and larger, more complex societies. Our ancestors were no longer adapting to relatively static natural environments but rather to faster-changing cultural ones of their own making. They created new ways of life, which in turn influenced who would survive and who wouldn't.
As Henry and his co-authors pointed out, this estimate of a hundred-fold acceleration is actually conservative:
It is sometimes claimed that the pace of human evolution should have slowed as cultural adaptation supplanted genetic adaptation. The high empirical number of recent adaptive variants would seem sufficient to refute this claim. It is important to note that the peak ages of new selected variants in our data do not reflect the highest intensity of selection, but merely our ability to detect selection. Due to the recent acceleration, many more new adaptive mutations should exist than have yet been ascertained, occurring at a faster and faster rate during historic times. (Hawks et al., 2007)
Few ideas belong solely to one person, but Henry deserves credit for perseverance. Most of the others, like L.L. Cavalli-Sforza, eventually found it expedient to focus on other ideas. Henry pushed on, not only by co-writing a book with Greg Cochran, but also by continuing to do original research.
I would like to say that Henry was allowed to work in peace. That's how things are in a free society, no? Unfortunately, he was repeatedly warned to stop, subtly at first and then not so subtly. Last year, the Southern Poverty Law Center added his name to its list of "extremists"—a list that, curiously, omits people whose skin is darker than peaches and cream.
In its "Extremist File" the SPLC describes him as follows:
Harpending is most famous for his book, co-authored with frequent collaborator Gregory Cochran, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, which argues that humans are evolving at an accelerating rate, and that this began when the ancestors of modern Europeans and Asians left Africa. Harpending believes that this accelerated evolution is most visible in differences between racial groups, which he claims are growing more distinct and different from one another. The evolution of these racial differences are, in Harpending's account, the driving force behind all of modern human history. He is also a eugenicist who believes that medieval Europeans intuitively adopted eugenic policies, and that we should recognize the importance of eugenics in our own society. (Southern Poverty Law Center, 2015)
I would give that summary a D+.
- The book's argument was that genetic evolution slowly accelerated as modern humans spread outward from a relatively small area in Africa, beginning some 80,000 years ago. Much later, this acceleration greatly increased when farming began to replace hunting and gathering some 10,000 years ago. The actual Out of Africa event—when modern humans spread out of Africa some 50,000 years ago—was tangential to this process of accelerating genetic evolution, yet the SPLC summary makes it seem pivotal (perhaps to show that Henry was obsessed with black people?).
- The book's argument was that culture and genes coevolve: culture drives genetic evolution just as much as genes drive cultural evolution. And this process can take place within groups that are not normally thought to be “racial.”
- The last sentence is way off the mark. Yes, a culture will make it harder for some individuals to survive and reproduce, thereby removing certain predispositions and personality types from the gene pool, but this process is no more a "eugenic policy" than is natural selection itself. It's silly to use words like "eugenics" and "policy" for something that happens unconsciously in any culture, even in small bands of hunter-gatherers.
I don't mind people making unfounded criticisms. That's par for the course in academia. But was the SPLC interested in academic debate when it listed Henry as an "extremist"?
Indeed, what's the point of that list? Information gathering? Or is it more like incitement to extrajudicial punishment and, yes, extrajudicial violence? "Look folks, this is a BAD PERSON, so go and do what the justice system is too cowardly to do!" Isn't that the point of the exercise? And isn't that exactly what the KKK was condemned for doing?
A strange role reversal has taken place between the long-dead KKK and the SPLC. It's now the latter that tries to enforce its notions of good behavior through intimidation, veiled threats, public shaming, and blacklisting. It's now the SPLC that is conspiring, literally, to deny people their civil rights.
Anyway, Henry Harpending seemed unfazed by the SPLC's blacklisting. He was apparently one of those rare tenured professors who put his tenure to good use and blissfully went on doing what he had always been doing. I wish he had lived longer. He was irreplaceable not so much because he knew more but because he was unafraid to say and act on what he knew. I will miss him.
Cochran, G. and H. Harpending. (2010). The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, New York: Basic Books.
Ehrlich, P. (2000). Human Natures. Genes, Cultures, and the Human Prospect, Penguin.
Harpending, H., and G. Cochran. (2002). In our genes, Proceedings of the National Academy of Sciences (USA), 99, 10-12. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC117504/

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104, 20753-20758. http://harpending.humanevo.utah.edu/Documents/accel_pnas_submit.pdf

Lévi-Strauss, C. (1971). Race et culture, lecture delivered at UNESCO, 22 March 1971. http://politproductions.com/sites/default/files/art-%C2%ABrace_et_culture%C2%BB_levi-strauss_unesco_22_3_1971.pdf

Southern Poverty Law Center (2015). Henry Harpending, Extremist Files. https://www.splcenter.org/fighting-hate/extremist-files/individual/henry-harpending
22 Oct 2009:
Food Recycling Program A Major Success in San Francisco
San Francisco’s new food recycling program — the first in the U.S. that requires all food waste from homes, apartments, businesses, and restaurants to be recycled and composted — has been enthusiastically embraced by city residents, officials say. Although the program was officially launched on Wednesday, residents have been recycling food for weeks and are already setting aside about half of the city’s 500 tons of daily food waste. The city requires residents and businesses to place food scraps in sealed buckets, and then collects the buckets and trucks them to San Francisco’s Organics Annex, where the food waste is composted. The compost is then sold as fertilizer to area farms and vineyards. Seattle was the first U.S. city to require all households to recycle food waste, but San Francisco’s law also covers businesses and apartments. Jared Blumenfeld, the city’s environmental officer, said residents have strongly backed the food recycling plan because — overwhelmed by bad environmental news — it gives them something concrete to do. “This is not rocket science,” he said. “This is putting some food scraps into a different pile and turning them into compost.”
Yale Environment 360 is a publication of the Yale School of Forestry & Environmental Studies.
languish
Pronunciation: (lang'gwish)
v.i.
1. to be or become weak or feeble; droop; fade.
2. to lose vigor and vitality.
3. to undergo neglect or experience prolonged inactivity; suffer hardship and distress: to languish in prison for ten years.
4. to be subjected to delay or disregard; be ignored: a petition that languished on the warden's desk for a year.
5. to pine with desire or longing.
6. to assume an expression of tender, sentimental melancholy.
n.
1. the act or state of languishing.
2. a tender, melancholy look or expression.
Random House Unabridged Dictionary, Copyright © 1997, by Random House, Inc., on Infoplease. | <urn:uuid:2334709b-1e97-47ba-9bd3-b6a7a30a2b94> | CC-MAIN-2016-26 | http://dictionary.infoplease.com/languish | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00046-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.865926 | 164 | 2.59375 | 3 |
The Winter’s Tale Theme of Gender
Leontes’s hateful ideas about women dominate the first three acts of The Winter’s Tale. After he convinces himself that his pregnant wife is having an affair and carrying another man’s child, Leontes reveals a crude and misogynistic attitude that seems to have been lurking beneath the surface all along. In the jealous king’s mind, all women are sexually promiscuous and dishonest (an attitude that’s all too common in Renaissance literature). Leontes also gives voice to the notion that women who are not silent and obedient to their husbands are monsters who invert socially accepted gender hierarchies. Leontes eventually repents, but his nasty attitude leaves a big mark on the play.
Questions About Gender
- How does Leontes behave when he suspects Hermione has been unfaithful? What does this reveal about Leontes's attitude toward women in general?
- What is Leontes's reaction to Paulina when she stands up for Hermione? Why does he hold Antigonus responsible for Paulina’s behavior?
- Why does Leontes say he’s glad Hermione didn’t nurse Mamillius when the young prince was an infant?
- What kind of relationship does the play forge between gender and speech?
Chew on This
Leontes believes that all women are inherently promiscuous and deceptive, but overall, the play proves this to be untrue.
Leontes gives voice to a common Renaissance attitude toward women – that is, any woman who is not silent and obedient is a monstrous hag who deserves to be punished. | <urn:uuid:0fdf8119-c836-4bb6-b3cb-11f473c490d9> | CC-MAIN-2016-26 | http://www.shmoop.com/winters-tale/gender-theme.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00124-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95256 | 343 | 3.296875 | 3 |
Many critics have suggested that Wright's southern stories are his best work, and it is clear that they have continued to be widely read and often anthologized. Despite their occasionally too obvious didacticism, the stories in Uncle Tom's Children convey an emotional power that has not been diminished by the passage of time nor the alteration of the social conditions they address.
Uncle Tom's Children shows the influence of literary realism and naturalism. Wright's prose is direct and graphic, focusing on the dark and violent aspects of life in the rural South during the 1930s. His effective use of dialect and black folk culture increases the realism of his stories. As in much literary naturalism, Wright's characters sometimes seem doomed by their social environment.
Yet, Wright's style in Uncle Tom's Children is also affected by his didactic purpose. Wright's straightforward narration emphasizes his message, and like other proletarian authors Wright breaks from the pessimistic determinism of naturalism by idealizing some characters and supporting their heroic opposition to oppression with an underlying hope for melioration.
Wright's simple narrative technique is enriched by the use of symbols and allusions. Characters' names, natural phenomena, colors, and pervasive Biblical references are used to strengthen Wright's messages. As a result, the stories take on many of the characteristics of allegory.
The Wolfram Language introduces GeoGraphics, an extension of its powerful graphical functionality to produce maps. GeoGraphics offers full automation and freedom to handle cartographic projections, choice of zoom (from the whole Earth down to meter scale), map styling (street maps, relief maps, ...), and much more. GeoGraphics introduces new geographical elements adapted to the surface of the Earth (like geodesics or rhumb lines for navigation) and has integrated access to the large corpus of geographical information in Wolfram|Alpha through the new Entity framework.
The Wolfram Language offers many convenience functions for easy access to current weather and astronomical data, and location-based knowledge for countries, states, and more—all integrated with powerful geographic visualization functions such as GeoGraphics. With comprehensive support for extended regions, recent and historical earthquake data, and Sun and Moon positions, the Wolfram Language provides you with all the tools you need to work with geographic data.
The Wolfram Language introduces a full Geographic Information System (GIS). It integrates the powerful new GeoGraphics function for map construction, the new Entity framework to access the large corpus of information in Wolfram|Alpha, and improved functionality for geodetic computations. This allows construction of a very large variety of maps—in any cartographic projection and including the representation of results of arbitrary computations—with any type of geographic data. | <urn:uuid:edb286db-9cb7-46d0-bda2-ae2af96c075a> | CC-MAIN-2016-26 | http://www.wolfram.com/mathematica/new-in-10/geographic-computation/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00118-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.833124 | 276 | 2.609375 | 3 |
Since 2008, over a hundred billion apps have been downloaded from Apple’s App Store onto users’ iPhones or iPads. Thousands of software developers have written these apps for Apple’s “iOS” mobile platform. However, the technology and tools powering the mobile “app revolution” are not themselves new, but rather have a long history spanning over thirty years, one which connects back to not only NeXT, the company Steve Jobs started in 1985, but to the beginnings of software engineering and object-oriented programming in the late 1960s.
Apple’s iOS is based on its desktop operating system, Mac OS X. More importantly, iOS’s software development kit (SDK), known as “Cocoa Touch,” is based on the same principles and foundations as Mac OS X’s desktop SDK, Cocoa. (An SDK is the set of tools and software libraries that application developers use to build their apps. Commonly these come in the form of Application Program Interfaces, or “APIs,” which are interfaces or “calls” into functions provided by the platform’s built-in libraries.) OS X and Cocoa, which first shipped in March 2001, were in turn based on the NeXTSTEP (originally capitalized as “NeXTStep”) operating and development environment. NeXT was founded by Steve Jobs upon resigning from Apple after he had been stripped of power following an attempted boardroom coup. Both NeXTSTEP and NeXT’s computers were state of the art, but the computers were too expensive for the education market NeXT targeted.
Its hardware business flagging, by 1993 NeXT was forced to close down its factory, becoming a software company focused on custom applications development for the enterprise. The NeXTSTEP development platform, renamed “OpenStep,” was ported to other hardware and other operating systems, including Intel processors and Sun workstations.
In 1996, Apple was itself in dire straits, and needed to replace its aging Mac OS with a more modern and robust operating system. Failing to produce one of its own, Apple acquired NeXT in order to make NeXTSTEP the basis for what eventually became Mac OS X. In January 1997, at the annual Macworld Expo trade show, Steve Jobs triumphantly returned onstage as an Apple employee for the first time since 1985, this time to explain what he thought Apple needed to survive and become great again, and how NeXTSTEP technology could help Apple achieve it.
Video: Jobs demonstrates OpenStep, MacWorld Expo, January 1997
In this short 20-minute presentation at MacWorld Expo in January 1997, Jobs demonstrated the technology that would become Cocoa, the software development system that would eventually be used by thousands of iOS app developers around the world. Steve Jobs was showing Apple developers what their future would look like, one which, indeed, today’s iOS developers would find remarkably similar to their everyday experience. In fact, what Jobs was showing Apple developers in 1997 was not new, but had been released by NeXT almost a decade earlier, in 1988.
Video: Steve Jobs introduces the NeXTStep object-oriented development environment to the world
Indeed, NeXTSTEP had been such a productive development environment that in 1989, just a year after the NeXT Computer was revealed, Sir Tim Berners-Lee at CERN used it to create the WorldWideWeb.
What made NeXT’s development environment so ahead of its time? At the 1997 MacWorld demo, Jobs told a little parable. By that point in time, it was well known in the computer industry that Jobs got the idea for the Macintosh’s graphical user interface when he and a team from Apple visited Xerox PARC in 1979. PARC, or “Palo Alto Research Center,” was a blue-sky computer research lab started by Xerox to create the “Office of the Future.”
PARC’s staff, led by CHM Fellow Robert Taylor, was a who’s who of leading computer scientists of the day (among them CHM Fellows Chuck Thacker, Butler Lampson, Bob Metcalfe, Lynn Conway, Charles Geschke, and John Warnock). Among these luminaries was CHM Fellow Alan Kay. Kay envisioned the “Dynabook,” a tablet-like computer that would be a dynamic medium for learning. Thacker and Lampson designed, with the technology available at the time, an “interim” Dynabook which might partially make real Kay’s ideas. The result was the Alto, a personal workstation designed for a single user, running the world’s first graphical user interface (GUI) with windows, icons, and menus, controlled using a mouse. The Xerox Alto, which visitors can see in CHM’s Revolution exhibit, and much of whose source code CHM has released to the public, was the progenitor of the way almost all desktop computer users interact with their machines today.
During the 1997 MacWorld demo, Jobs revealed that in 1979 he had actually missed a glimpse of two other PARC technologies that were critical to the future. One was pervasive networking between personal computers, which Xerox had with Ethernet, which it invented, in every one of its Alto workstations. The other was a new paradigm for programming, dubbed “object-oriented programming,” by Alan Kay. Kay, working with Dan Ingalls and Adele Goldberg, designed a new programming language and development environment that embodied this paradigm, running on an Alto. Kay called the system “Smalltalk” because he intended it to be simple enough for children to use. A program would consist of “objects” that modeled things in the real world, such as “Animal” or “Vehicle.” This differed from traditional “procedure-oriented” (or “procedural”) programming, where routines (“procedures”) operate on data inputs that are each stored separately. In Smalltalk, objects consisted of data grouped together with the routines (“methods”) that operated on that data. Kay imagined a program as a dynamic system of objects sending messages to each other. An object receiving a message would use it to select which of its many routines, or methods, to run. The same message sent to different objects would result in each receiving object executing its own routine, each different from the others. For example, a Dog object and a Cat object would respond to the “Speak” message differently; the Dog would run its “Bark” method while the Cat would run its “Meow” method.
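The Dog/Cat example above can be sketched in any object-oriented language. Here is a minimal, hypothetical illustration in Python (the class and method names simply mirror the example in the text, not any real API): the same "speak" message, sent to different objects, selects a different method in each receiver.

```python
# Smalltalk-style polymorphism: each object responds to the same
# message ("speak") with its own method.

class Animal:
    def speak(self):
        raise NotImplementedError  # each subclass supplies its own method

class Dog(Animal):
    def speak(self):
        return "Bark"  # the Dog's method for the "speak" message

class Cat(Animal):
    def speak(self):
        return "Meow"  # the Cat's method for the same message

# Sending the same message to different objects runs different code.
animals = [Dog(), Cat()]
sounds = [a.speak() for a in animals]
print(sounds)  # ['Bark', 'Meow']
```

Kay's deeper point was that the sender does not need to know which routine will run; the receiving object decides, and that late decision is what makes a system of such objects dynamic.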
Smalltalk’s development environment was graphical, with windows and menus. In fact, Smalltalk was the exact GUI that Steve Jobs saw in 1979. Smalltalk’s GUI was composed of just such a collection of interacting objects that we discussed. For example, a Window object could be sent the message “Draw,” which it would forward to all of the objects inside it, including Buttons and Sliders. Each of these objects would have its own particular method for drawing itself. During Jobs’ visit to PARC, he had been so enthralled by the surface details of the GUI that he completely missed the radical way it had been created with objects. The result was that programming graphical applications on the Macintosh would become much more difficult than doing so with Smalltalk. Said Jobs in his 1988 introduction of the NeXT Computer: “Macintosh was a revolution in making it easier for the end user. But the software developer paid the price… It is a bear to develop software… for the Macintosh… if you look at the time it takes to make [a GUI] application… the user interface takes 90% of the time.”
With the NeXT computer, Jobs planned to fix this exact shortcoming of the Macintosh. The PARC technologies missing from the Mac would become central features on the NeXT. NeXT computers, like other workstations, were designed to live in a permanently networked environment. Jobs called this “inter-personal computing,” though it was simply a renaming of what Xerox’s Thacker and Lampson called “personal distributed computing.” Likewise, dynamic object-oriented programming on the Smalltalk model provided the basis for all software development on NeXTSTEP. According to Jobs in 1988, NeXTSTEP would reduce the time for a developer to create an application’s user interface from 90% down to 10%. Instead of using Smalltalk, however, NeXT chose Objective-C as its programming language, which would provide the technical foundation for the success of NeXT and Apple’s software platforms for the next two decades and beyond. Objective-C remains in use at Apple today for iOS and OS X development, though in 2014 Apple introduced its new Swift language, which may one day replace it.
Objective-C was created in the 1980s by Brad Cox to add Smalltalk-style object-orientation to traditional, procedure-oriented C programs. It had a few significant advantages over Smalltalk. Programs written in Smalltalk could not stand alone. To run, Smalltalk programs had to be installed along with an entire Smalltalk runtime environment—a virtual machine, much like Java programs today. This meant that Smalltalk was very resource intensive, using significantly more memory, and running often slower, than comparable C programs that could run on their own. Also like Java, Smalltalk programs had their own user interface conventions, looking and feeling different than other applications on the native environment on which they were run. (Udell, 1990) By re-implementing Smalltalk’s ideas in C, Cox made it possible for Objective-C programmers to organize their program’s architecture using Smalltalk’s higher level abstractions while fine-tuning performance-critical code in procedural C, which meant that Objective-C programs could run just as fast as traditional C programs. Moreover, because they did not need to be installed alongside a Smalltalk virtual machine, their memory footprint was comparable to that of C programs, and, being fully native to the platform, would look and feel the same as all other applications on the system. (Cox, 1991) A further benefit was that Objective-C programs, being fully compatible with C, could utilize the hundreds of C libraries that had already been written for Unix and other platforms. This was particularly advantageous to NeXT, because NeXTSTEP, being based on Unix, could get a leg up on programs that could run on it. Developers could simply “wrap” an existing C code base with a new object-oriented GUI and have a fully functional application. Objective-C’s hybrid nature allowed NeXT programmers to have the best of both the Smalltalk and C worlds.
What value would this combination have for software developers? As early as the 1960s, computer professionals had been complaining of a “software crisis.” A widely distributed graph predicted that the costs of programming would eclipse the costs of hardware as software became ever more complex. (Slayton, 2013, pp. 155–157) Famously, IBM’s OS/360 project had shipped late, over-budget, and was horribly buggy. (Ensmenger, 2010, pp. 45–47, 205–206; Slayton, 2013, pp. 112–116) IBM produced a report claiming that the best programmers were anywhere up to twenty-five times more productive than the average programmer. (Ensmenger, 2010, p. 19) Programmers, frequently optimizing machine code with clever tricks to save memory or time, were said to be practitioners of a “black art” (Ensmenger, 2010, p. 40) and thus impossible to manage. Concern was so great that in 1968 NATO convened a conference of computer scientists at Garmisch, Switzerland to see if software programming could be turned into a discipline more like engineering. In the wake of the OS/360 debacle, CHM Fellow Fred Brooks, the IBM manager in charge of OS/360, wrote the seminal text in software engineering, The Mythical Man-Month. In it, Brooks famously outlined what became known as Brooks’ law—that after a software team reaches a certain size (and thus as the complexity of the software increases), adding more programmers will actually increase the cost and delay its release. Software, Brooks claimed, is best developed in small “surgical” teams led by a chief programmer, who is responsible for all architectural decisions, while subordinates do the implementation. (Brooks, 1995)
By the 1980s, the problems of cost and complexity in software remained unsolved. It appeared that the software industry might be in a perpetual state of crisis. In 1986, Brooks revisited his thesis and claimed that, despite modest gains from improved programming languages, there was no single technology, no “silver bullet” that could, by itself, increase programmer productivity by an order of magnitude—the 10x improvement that would elevate average programmers to the level of exceptional ones. (Brooks, 1987) Brad Cox begged to differ. Cox argued that object-oriented programming could be used to create libraries of software objects that developers could then buy off-the-shelf and then easily combine, like a Lego set, to create programs in a fraction of the time. Just as interchangeable parts had led to the original Industrial Revolution, a market for reusable, off-the-shelf software objects would lead to a Software Industrial Revolution. (Cox, 1990a, 1990b) Cox’s Objective-C language was a test bed for just such a vision, and Cox started a company, Stepstone, to sell libraries of objects to other developers.
Steve Jobs and his engineers at NeXT saw that Cox’s vision was largely compatible with their own, and licensed Objective-C from Stepstone. Stepstone and later NeXT engineer Steve Naroff did the heavy lifting to modify the language and compiler for NeXT’s needs. But rather than buy libraries from Stepstone, NeXT developed their own set of object libraries using Objective-C, and bundled these “Kits” with the NeXTSTEP operating system as part of its software development environment. The central graphical user interface library that all NeXT developers used to construct their applications was the ApplicationKit, or AppKit. In conjunction with AppKit, NeXT created a visual tool called “Interface Builder” that gave developers the ability to connect the objects in their programs graphically.
As part of his 1997 MacWorld presentation, Jobs demonstrated how easily one could build an app using Interface Builder, a process familiar to any iOS developer today. Jobs simply dragged a text field and a slider from a palette into a window, then dragged from the slider to the text field to make a “connection,” selecting a command for one object to send to the other. The result was that the text field was now hooked up to the slider, displaying in real-time the numerical value, between 1 and 100, which the slider’s position represented. Jobs demonstrated this all without writing a single line of code, driving home his point: “The line of code that the developer could write the fastest… maintain the cheapest… that never breaks for the user, is the line of code the developer never had to write.” Today, the AppKit is still the primary application framework on OS X, and iOS’s UIKit is heavily modeled on it. Interface Builder, too, still exists, as part of Apple’s Xcode Integrated Development Environment (IDE).
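The slider-to-text-field "connection" Jobs wired up is an instance of what Cocoa calls the target-action pattern: a control stores a reference to a target object and the name of an action to invoke on it. The sketch below is a rough, hypothetical Python analogue (class and method names are invented for illustration), not NeXT's actual implementation:

```python
# A rough sketch of target-action: the slider holds a target object and
# an action name, and "sends the message" when its value changes.

class TextField:
    def __init__(self):
        self.text = ""

    def take_value(self, value):  # the action the connection will invoke
        self.text = str(value)

class Slider:
    def __init__(self):
        self.target = None
        self.action = None

    def connect(self, target, action):  # what dragging a connection sets up
        self.target = target
        self.action = action

    def move_to(self, value):
        # Fire the action: look up the method by name on the target and
        # call it; the app developer writes no glue code for this step.
        if self.target is not None:
            getattr(self.target, self.action)(value)

field = TextField()
slider = Slider()
slider.connect(field, "take_value")  # the Interface Builder "drag"
slider.move_to(42)
print(field.text)  # prints 42
```

Because the connection is just data (a target plus an action name), Interface Builder could create it with a drag rather than a line of code, which is exactly the point Jobs was making.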
The combination of Objective-C, AppKit, and Interface Builder allowed Steve Jobs to boast that NeXTSTEP could make developers five to ten times more productive—precisely the order of magnitude improvement Brooks had claimed could not be achieved. Jobs assumed his audience at Macworld in 1997 was familiar with Brooks. “You’ve all read the Mythical Man-Month,” he told the audience. “As your software team is getting bigger, it sort of collapses under its own weight. Like a building built of wood, you can’t build a building built of wood that high.”
Using this metaphor of a building, Jobs brilliantly explained the comparative advantage of NeXTSTEP’s AppKit. Programming for DOS was the equivalent of starting at the ground floor, he argued, and an app developer might add three floors of functionality to achieve four floors of capability. The classic Mac OS, with its Toolbox APIs, effectively raised the foundation to the fifth floor, allowing a developer to reach eight floors of functionality. This, Jobs said, was what enabled the creation of killer applications like PageMaker on the Mac (1985), which, in conjunction with laser printers, did much to create the entirely new market of desktop publishing. Jobs insisted that it was this capacity for developer innovation—to enable new kinds of applications on Apple’s platform that simply could not exist elsewhere—that Apple needed to foster if it was to survive and grow.
The problem, Jobs continued, was that Microsoft Windows had caught up to the Mac. Windows NT effectively provided a seventh floor base, outcompeting the Mac. Here is where Jobs saw NeXT coming to Apple’s rescue. NeXTSTEP, with its Interface Builder tool along with the AppKit and other object-oriented libraries, provided so much rich functionality out of the box that they raised the developer up to the twentieth floor, claimed Jobs. This meant that developers could eliminate 80% of the code that all graphical applications share in common, allowing them to focus on the 20% of the code that made their app unique and provided additional value to users. The result, Jobs insisted, would be that a small team of two to ten developers could write an app as fully featured as a hundred-person team working for a large corporate software company like Microsoft.
I argue that this vision outlined by Jobs is in fact remarkably similar to Fred Brooks’ notion of programming with small surgical teams led by a “chief” programmer. The difference is that rather than having the chief programmer delegate the grunt work to subordinate coders, as Brooks described, in Jobs’ vision, that work was handled by libraries of objects. In actuality, this was Cox’s vision too, except that whereas Cox intended for objects to be purchased on an open market, NeXT bundled its object libraries as part of the NeXTSTEP operating system and development environment, which might actually inhibit the formation of an after-market for objects. In the vision of both Cox and Jobs, the grunt work of making an application was offloaded to the developers of the objects; nobody in a small team needed to be a mere “implementer,” forced to work on the program’s “foundation.” Unlike procedural code units, it was precisely the black-boxed, encapsulated nature of objects—which prevented other programmers from tampering with their code—that enforced the modularity that allowed them to be reused interchangeably. Developers standing on any given floor simply were not allowed to mess with the foundation they stood on. Freed from worrying about the internal details of objects, developers could focus on the more creatively rewarding work of design and architecture that had been the purview of the “chief” programmer in Brooks’ scheme. All team members would start at the twentieth floor and collaborate with each other as equals, and their efforts would continue to build upward, rather than be diverted to redoing the floors upon which they stood.
Was this promise of 5 to 10x improvement a pipe dream? We have already seen that NeXTSTEP had been used as a rapid prototyping tool to create the first version of the WorldWideWeb. Though NeXTSTEP had not found a large user base, it had been very well received by programmers, especially in academia. There was also a die-hard community of third-party NeXT developers backing Jobs up with their products. Small shops like OmniGroup, Lighthouse Design, and Stone Design, teams no larger than eighteen in the case of Lighthouse, and a single man in the case of Stone, had written fully featured applications such as spreadsheets, presentation software, web browsers, and graphic design tools. Moreover, NeXTSTEP had proved so productive for rapid development of “mission-critical” custom applications that Wall Street banks and national security organizations like the CIA were paying thousands of dollars per license for it.
Six months later, at a fireside chat at Apple’s May 1997 Worldwide Developer Conference, Jobs said that Lighthouse (which had since been acquired by Sun), proved that NeXTSTEP technology provided the five to ten times speed improvement for implementing existing apps. Moreover, the more compelling advantage was that NeXTSTEP would allow some innovative developer to create something entirely new, which could not have been developed on any other platform initially, and which could not be replicated on other platforms without huge effort. NeXTSTEP was what Tim Berners-Lee had used to create the WorldWideWeb, and what Dell used to create its first eCommerce website in 1996. This was how NeXTSTEP’s object-oriented development environment would power innovation on Apple platforms well into the twenty-first century.
Looking back from the perspective of 2016, Steve Jobs was remarkably prescient. After Mac OS X shipped on Macintosh personal computers, small-scale former NeXT developers and shareware Mac developers alike began to write apps using AppKit and Interface Builder, now called “Cocoa.” These developers, taking advantage of eCommerce over the Web, began to call themselves independent or “indie” software developers, as opposed to the large corporate concerns like Microsoft and Adobe, with their hundred-man teams. In 2008, Apple opened up the iPhone to third-party software developers and created the App Store, enabling developers to sell and distribute their apps directly to consumers on their mobile devices, without having to set up their own servers or payment systems. The App Store became ground zero for a new gold rush in software development, inviting legendary venture capital firm Kleiner Perkins Caufield & Byers to set up an “iFund” to fund mobile app startups. (Wortham, 2009) At the same time, indie Mac developers like Andrew Stone and Wil Shipley predicted that Cocoa Touch and the App Store would revolutionize the software industry around millions of small-scale developers.
Unfortunately, in the years since 2008, this utopian dream has slowly died: as unicorns, acquisitions, and big corporations moved in, the mobile market has matured, squeezing out the little guys who refuse investor funding. With hundreds of competitors in the App Store, it can be extremely difficult to get one’s app noticed without expensive external marketing. The reality is that a majority of mobile developers cannot sustain a living by making apps, and most profitable developers are contractors writing apps for large corporations. Nevertheless, the object-oriented technology Jobs demoed in 1997 is today the basis for every iPhone, iPad, Apple Watch and Apple TV app. Did Steve Jobs predict the future? Alan Kay famously said, “The best way to predict the future is to invent it.” Cyberpunk author William Gibson noted, “The future is already here—it’s just not evenly distributed.” NeXT had already invented the future back in 1988, but because NeXT never shipped more than 50,000 computers, only a handful were lucky enough to glimpse it in the 1990s. Steve Jobs needed to return to Apple to distribute that future to the rest of the world.
In today’s Silicon Valley, with its focus on innovation and the future, the deep histories of such technologies as Apple’s Cocoa development environments are often forgotten. However, understanding the past is vitally important for inventing the future, for chances are the future has already been invented; one just needs to do a little digging. At the Computer History Museum, our mission is to preserve this past, not just the physical hardware (we have a number of NeXT computers and peripherals), but also the software that is the soul of these machines. CHM has a small collection of NeXT software, including NeXTSTEP 3.3, OpenStep 4.2, Enterprise Object Frameworks 1.1 and WebObjects 3.0 on CD, but we are lacking earlier versions of NeXTSTEP, and we have little in the way of NeXT applications. Filling out the collection is important because software is the contextual link between computers, users, and the institutions and society they are embedded in. “Software is history, organization, and social relationships made tangible,” as computing historian Nathan Ensmenger has written. (Ensmenger, 2010, p. 227) Thus, beyond preservation, it is also vital to make meaningful the stories of computer software’s legacy and contextualize it in the culture of its time. This is my first blog post as CHM’s Curator for its new Center for Software History, and it marks the beginning of my project to collect and interpret materials, software, and oral histories related to graphical user interfaces, object-oriented programming, and software engineering, starting with a focus on NeXT, Apple, and Xerox PARC. Look forward to more stories like this in this space, and if you are a former NeXT, Apple, or Xerox PARC engineer and would like to contribute to this project, please contact me at email@example.com!
- Objective-C was not the work of Brad Cox alone, and it might have become an obscure footnote in the history of programming languages had Steve Jobs’ NeXT Computer not chosen it as the basis for programming on NeXTSTEP. Cox co-founded Stepstone (originally called “Productivity Products International” or PPI) with Tom Love to promote object-oriented solutions that could coexist with existing languages. Objective-C originated as the Object-Oriented Pre-Compiler. The first version of the language to be called Objective-C still used a separate C preprocessor to translate Obj-C code into straight C before handing it off to the compiler. However, this version of the language and compiler were insufficient for NeXT’s purposes, and it took the contributions of Steve Naroff and others to make Objective-C into the language we know today. (See note 2 below.)
To tailor Objective-C to NeXT’s needs, Stepstone engineer Steve Naroff took over development from Cox, and made significant additions to the language to support NeXT’s visual programming tool, Interface Builder. Naroff’s work was so important that he was eventually hired by Steve Jobs at NeXT and later stayed on at Apple. Naroff integrated Objective-C directly into the C compiler NeXT was using, the open source GNU C compiler, GCC, working closely with Richard Stallman. This eliminated the separate translation step. To support Interface Builder, Naroff added a key feature to the language: “categories” (known today as “class extensions”), a way to dynamically add methods to an existing class without subclassing it.
Another key feature called “protocols” was later added by NeXT engineers Bertrand Serlet (who later became Apple’s Software Vice President) and Blaine Garst (who later led the Java team at Apple). Protocols allow classes to inherit multiple interface specifications without inheriting their implementations, circumventing the conflicts that can occur with multiple class inheritance in languages like C++. The feature was later adopted by Java as “interfaces.” These two features, “categories” and “protocols,” made possible several key design patterns heavily used by NeXT’s AppKit class libraries, and it became impossible in later years to think about Objective-C without them.
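These two features can be illustrated by loose analogy in Python (the class names here are invented, and this is an analogy, not NeXT code): adding a method to an already-defined class approximates a category, and inheriting several abstract interfaces that carry no implementation approximates protocols.

```python
from abc import ABC, abstractmethod

# --- "Category"-like extension: give an existing class a new method
# without subclassing it. (In Objective-C this is declared in an
# @interface ClassName (CategoryName) block; Python allows it directly.)
class Document:
    def __init__(self, text):
        self.text = text

def word_count(self):
    return len(self.text.split())

Document.word_count = word_count  # the existing class gains a new method

# --- "Protocol"-like interfaces: a class adopts several interface
# specifications, none of which carries an implementation, so no
# multiple-inheritance conflicts over shared code can arise.
class Printable(ABC):
    @abstractmethod
    def printed(self): ...

class Archivable(ABC):
    @abstractmethod
    def archived(self): ...

class Report(Document, Printable, Archivable):
    def printed(self):
        return f"printing {self.word_count()} words"

    def archived(self):
        return "archived"

r = Report("the quick brown fox")
print(r.word_count())  # -> 4
print(r.printed())     # -> printing 4 words
```

In AppKit, patterns like delegation lean on exactly this combination: an object declares which protocols it conforms to, and categories let framework classes be extended in place.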
In addition to these, many other contributions to Objective-C by NeXT engineers were necessary, driven by the practical needs of NeXT developers in real-world use, rather than the needs of computing researchers. Kevin Enderby worked on the linker and assembler. Naroff solved a method fragility problem that affected dynamic library compatibility, added explicit declaration constructs, the #import directive, and C++ integration. Serlet added method forwarding to enable remote object proxies. Garst worked on the Objective-C runtime, and was a key advocate for reference-counted memory management. These and other modifications laid the groundwork for Objective-C’s longevity at NeXT and later Apple, providing the solid foundation that would eventually power Mac OS X and iPhone development up to the present day.
- Brooks, Frederick P. 1987. “No Silver Bullet: Essence and Accidents of Software Engineering.” Computer 20 (4): 10–19.
- ———. 1995. The Mythical Man-Month: Essays on Software Engineering. Anniversary ed. Reading, MA: Addison-Wesley Pub. Co.
- Cox, Brad J. 1983. “The Object Oriented Pre-Compiler: Programming Smalltalk 80 Methods in C Language.” SIGPLAN Not. 18 (1): 15–22.
- ———. 1990a. “There Is a Silver Bullet: A Software Industrial Revolution Based on Reusable and Interchangeable Parts Will Alter the Software Universe.” BYTE, October 1.
- ———. 1990b. “Planning the Software Industrial Revolution.” IEEE Software 7 (6): 25.
- ———. 1991. Object-Oriented Programming: An Evolutionary Approach. 2nd ed. Reading, MA: Addison-Wesley Pub. Co.
- Ensmenger, Nathan L. 2010. The “Computer Boys” Take Over: Computers, Programmers, and the Politics of Technical Expertise. Cambridge, MA: MIT Press.
- Slayton, Rebecca. 2013. Arguments That Count: Physics, Computing, and Missile Defense, 1949-2012. Cambridge, MA: MIT Press.
- Udell, Jon. 1990. “Smalltalk-80 Enters the Nineties.” BYTE, October 1.
- Wortham, Jenna. 2009. “The iPhone Gold Rush.” The New York Times, April 5. http://www.nytimes.com/2009/04/05/fashion/05iphone.html. | <urn:uuid:1e45de2b-3a69-42a6-82f3-cc5626798557> | CC-MAIN-2016-26 | http://www.computerhistory.org/atchm/the-deep-history-of-your-apps-steve-jobs-nextstep-and-early-object-oriented-programming/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00197-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959192 | 6,345 | 2.921875 | 3 |
A letter to the editor of the Wall Street Journal from William Happer, distinguished Professor of Physics at Princeton University, was published today, in which he states, "Even if we could hold CO2 levels fixed, the climate would continue to change because of other influences. In a time of serious world problems, wasteful expenditures justified by nonproblems like CO2 make no sense."
Anne Jolis's "The Other Climate Theory" (op-ed, Sept. 7) is a welcome message of realism on climate. Painful changes in the U.S. economy are being justified by the mantra that the earth's climate is dictated by CO2 in the atmosphere; elaborate computer models assert that doubling CO2 concentrations will warm the earth by an intolerable three or four degrees Celsius, or even more. This is contrary to straightforward theoretical estimates and empirical observations, indicating that the direct warming potential of CO2 is only about one degree Celsius, which would most likely be a benefit to the world. The recent European Organization for Nuclear Research (CERN) experiments, discussed by Ms. Jolis, support extensive observational evidence that cosmic rays reaching the earth's surface have a large influence on climate.
Additional important climate drivers include complicated fluctuations of major oceanic currents and volcanic eruptions. Even if we could hold CO2 levels fixed, the climate would continue to change because of other influences. In a time of serious world problems, wasteful expenditures justified by nonproblems like CO2 make no sense.
Professor of Physics
Other Letters to the Editor published in the print edition of today's WSJ:
It is important for readers to understand that the U.N.'s Intergovernmental Panel on Climate Change (IPCC), as well as many other organizations invested in the idea that humans are the major cause of late 20th-century climate change, have never been seriously interested in pursuing natural causes of climate change. Since its inception, the IPCC has been nearly totally preoccupied with trying to make the case for strong anthropogenic global warming (AGW), primarily via CO2 emissions. This has led to biases and distortions of the scientific process, and the "tribal behavior" by climate scientists that we have seen in a variety of contexts, e.g., the Climategate emails and IPCC report "errors."
The real scientific climate debate has been taken up by the so-called skeptics as they have searched to understand the underlying causes of climate change, including both natural and anthropogenic sources. In fact, there are not one but two roughly defined schools of thought. All agree that AGW is far smaller than the IPCC claims, and there exists a substantial body of empirical evidence to support this. However, one school holds that the predominant influences arise from astronomical sources, such as the cosmic-ray mechanism. The other school believes that the earth is quite capable of changing its climate quickly and significantly via its own unforced chaotic variations.
This debate among skeptics has proceeded under the media's radar screen.
Roger W. Cohen, Ph.D.
La Jolla, Calif.
Climate science has not yet established how much clouds impact climate—there is even debate about whether clouds warm or cool the earth—or to what extent clouds have fluctuated over the last, you pick it, 20, 50, 100, 1,000 years, or the specific ways in which clouds interact with solar input (i.e., how much they reflect back into space).
It is all but acknowledged (climate scientists on the global-warming government-funding bandwagon have a hard time acknowledging anything that could undermine their beliefs) that because of these uncertainties about clouds, all climate models do a terrible job modeling how clouds impact climate. Change the assumptions about the amount of clouds or how they impact global temperatures by more than 1% and you can completely explain all global warming (and cooling).
Politicians who believe that the more government can control what we do in our daily lives, the better those daily lives will be, see man-made global warming as the ultimate tool for such control. Thus, they are more than happy to fund scientists who support that viewpoint. Let's see how robust the funding for the CERN CLOUD experiment is going forward.
John P. Miller, Ph.D.
Portola Valley, Calif.
The cosmic-ray theory is also discussed in the landmark book "Heaven and Earth—Global Warming, the Missing Science" (2009) by Prof. Ian Plimer of the University of Adelaide, Australia. Prof. Plimer's work was so profound as to become the primary enabler for the recent defeat of a climate-change bill in the Australian legislature.
It is long past time for the EPA's management to follow the strong recommendation of its own National Center for Environmental Economics scientists who, in a very comprehensive internal report to management (March 2009), were highly critical of claims regarding the worth of the IPCC climate models. Their urgent, but ignored, plea was for the EPA to undertake its own independent assessment of whether or not human activity influences climate.
El Segundo, Calif. | <urn:uuid:930ffd45-0979-44dd-bf89-d791f0bf2bc0> | CC-MAIN-2016-26 | http://hockeyschtick.blogspot.com/2011/09/distinguished-physicist-william-happer.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00193-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.94858 | 1,030 | 2.53125 | 3 |
When President Franklin D. Roosevelt recognized the 150th anniversary of our Nation's Bill of Rights, he called it the "great American charter of personal liberty and human dignity." He understood that the freedoms it protects -- among them speech, worship, assembly, and due process -- are freedoms that reinforce one another. They form the bedrock of the American promise, and we cannot fully realize one without realizing them all. Today, as we work to reinforce human rights at home and around the globe, we reaffirm our belief that government of the people, by the people, and for the people inspires the stability and individual opportunity that serve as a basis for peace in our world.

In adopting the 10 Constitutional Amendments that make up the Bill of Rights, the Framers sought to balance the power and security of a new Federal Government with a guarantee of our most basic civil liberties. They acted on a conviction that rings as true today as it did two centuries ago: unlocking a nation's potential depends on empowering all its people. The Framers also called upon posterity to carry on their work -- to keep our country moving forward and bring us ever closer to a more perfect Union.

Generations of patriots have taken up that challenge. They have been defenders who stood watch at freedom's frontier, marchers who broke down barriers to full equality, dreamers who pushed America from what it was toward what it ought to be. Now it falls to us to build on their work. On Bill of Rights Day, we celebrate the liberties secured by our forebears, pay tribute to all who have fought to protect and expand our civil rights, and rededicate ourselves to driving a new century of American progress.

Even before the day was over, the airwaves and internet were filled with those who might have been happier with a less full-throated endorsement of the Bill of Rights, at least when it came to the second of the first ten amendments.
As the scope and enormity of the crime became clearer throughout the day, there were a significant number of high profile reactions from those who set aside the usual "this is not the time to discuss policy, but rather a time to grieve" protocol. Some, including E.J. Dionne, Jr. in the Washington Post, simply ignored the Constitution and the Bill of Rights in calls for strict gun control. Michael Cooper in the New York Times only referenced the Constitution in a pre-massacre quote from the president of the NRA.
Others, such as Ed Schultz of MSNBC, were more explicit in their criticism of the Second Amendment (via The Blaze):
Despite the fact that the Connecticut school shooter reportedly used guns legally owned by his mother, MSNBC host Ed Schultz said the shooting is proof that we must “come to grips with a changing society” and stop “hiding behind the Second Amendment.”
* * * *
“Tonight is… a time we as a people come to grips with a changing society,” Schultz said. “We need to be the Founding Fathers on how we deal with the sickness in our country called ‘gun violence.’ Hiding behind the Second Amendment doesn’t cut it anymore.”

He continued: “Hiding behind the Second Amendment can no longer be the shield for access. The people who wrote that document owned slaves, oppressed women, and were short on tolerance.”

The MSNBC host went on to say that lawmakers in Washington need to stop doing the bidding of the gun lobby.

Even the president in his initial statement hinted at a possible change in his previous hesitancy to address the gun control issue:
As a country, we have been through this too many times. Whether it’s an elementary school in Newtown, or a shopping mall in Oregon, or a temple in Wisconsin, or a movie theater in Aurora, or a street corner in Chicago -- these neighborhoods are our neighborhoods, and these children are our children. And we're going to have to come together and take meaningful action to prevent more tragedies like this, regardless of the politics.

His remarks at Sunday's memorial in Newtown were more direct:
We can’t tolerate this anymore. These tragedies must end. And to end them, we must change. We will be told that the causes of such violence are complex, and that is true. No single law -- no set of laws can eliminate evil from the world, or prevent every senseless act of violence in our society.

But that can’t be an excuse for inaction. Surely, we can do better than this. If there is even one step we can take to save another child, or another parent, or another town, from the grief that has visited Tucson, and Aurora, and Oak Creek, and Newtown, and communities from Columbine to Blacksburg before that -- then surely we have an obligation to try.

In the coming weeks, I will use whatever power this office holds to engage my fellow citizens -- from law enforcement to mental health professionals to parents and educators -- in an effort aimed at preventing more tragedies like this. Because what choice do we have? We can’t accept events like this as routine. Are we really prepared to say that we’re powerless in the face of such carnage, that the politics are too hard? Are we prepared to say that such violence visited on our children year after year after year is somehow the price of our freedom?

Although the president did not specifically address the Second Amendment or gun control, it is difficult to see how he could resist the pressure that will be exerted by many liberals and Democrats who have often oversimplified the gun control argument as dominated by the "gun lobby." The reach of the NRA just doesn't go that deep and the love-their-guns-more-than-children insinuation aimed at 2nd Amendment defenders is too ludicrous to be taken seriously by the general public. The American people realize that while "tragedies must end" sounds like a worthy goal, it is not a justification for undermining the basic liberties upon which this country was founded.
It remains to be seen whether the president will head down the path to weaken the 2nd Amendment in spite of his Bill of Rights Day proclamation, or whether he will focus on the other areas he mentioned ("law enforcement to mental health professionals to parents and educators"). If his positions on political speech, campaign finance laws, and the First Amendment are any guide, the former seems more likely than the latter. The country will soon have an opportunity to see whether the President believes this is another opportunity to "fundamentally [transform] the United States of America" in the way that we, as his Proclamation says, "balance the power and security of [the] Federal Government with a guarantee of our most basic civil liberties."
Way beyond the orbit of Pluto spins a newly found dwarf planet, 2012 VP113, nicknamed "Biden" by the astronomers. The dwarf planet is believed to be located within the inner Oort cloud, joining the dwarf planet Sedna as a distant member of the solar system.
According to NASA, the discovery of the distant dwarf planet helps define the outer edge of the solar system while also leading to new insights on the mysterious Oort Cloud, considered the “edge” of the solar system. This area could contain up to 2 trillion comets and other objects and is located between 5,000 and 100,000 astronomical units away from the sun, reports NASA. (Each AU is the average distance between the sun and Earth, roughly 93 million miles.) The researchers break up the Oort Cloud into two sections, the much closer inner part, which contains Sedna and the newly discovered dwarf planet, and the outer section, which is believed to be where some comets originate. The Kuiper Belt is located outside of the orbit of Neptune and contains hundreds of thousands of icy objects.
The discovery of 2012 VP113 occurred on Nov. 5, 2012, and was led by Chadwick Trujillo, from the Gemini Observatory in Hawaii, and Scott Sheppard, from the Carnegie Institution in Washington, D.C. Additional observations pinpointed 2012 VP113’s orbit and general surface properties.
Kelly Fast, discipline scientist for NASA's Planetary Astronomy Program, Science Mission Directorate at NASA, said in a statement, “This discovery adds the most distant address thus far to our solar system’s dynamic neighborhood map.” As noted by NASA, the edge of the system solar is largely unknown and the orbit of Sedna and 2012 VP113 go beyond what can be observed by telescopes, meaning there is a lot out there in deep space that has yet to be discovered.
There could be around 900 objects that are 621 miles wide with similar orbits to Sedna and 2012 VP113. The two dwarf planets in the inner Oort Cloud were discovered at their closest point to the sun, 76 AU and around 80 AU, respectively, although their orbits extend hundreds of AU outward and beyond the reach of telescopes. "Some of these inner Oort Cloud objects could rival the size of Mars or Earth. This is because most of the inner Oort Cloud objects are so distant that even very large ones would be too faint to detect with current technology," said Sheppard in a statement.
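The distances quoted above are easy to put in familiar units. A quick sketch, using the article's rounded figure of 93 million miles per astronomical unit:

```python
MILES_PER_AU = 93_000_000  # rounded average Earth-sun distance, as in the article

def au_to_miles(au):
    """Convert a distance in astronomical units to miles."""
    return au * MILES_PER_AU

# Closest approaches to the sun reported for the two dwarf planets:
for name, au in [("Sedna", 76), ("2012 VP113", 80)]:
    print(f"{name}: {au} AU is about {au_to_miles(au):,} miles")

# Inner edge of the Oort Cloud, per the figures quoted earlier:
print(f"Oort Cloud inner edge: about {au_to_miles(5_000):,} miles")
```

Even at their closest, both objects sit billions of miles from the sun, which is why their full orbits extend beyond the reach of current telescopes.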
More interestingly, there may be a massive planet out there, 10 times the size of Earth, that could be affecting the orbit of Sedna and 2012 VP113. The researchers believe that, due to the similar orbits of the two dwarf planets, there may be such a planet in the inner Oort Cloud. The research was published in the journal Nature.
Colossal hot cloud envelops colliding galaxies
A burst of star formation that lasted at least 200 million years may be responsible for the “halo” in NGC 6240.
Scientists have used NASA's Chandra X-ray Observatory to make a detailed study of an enormous cloud of hot gas enveloping two large colliding galaxies. This unusually large reservoir of gas contains as much mass as 10 billion Suns, spans about 300,000 light-years, and radiates at a temperature of more than 7 million degrees.
This giant gas cloud, which scientists call a “halo,” is located in the system NGC 6240. Astronomers have long known that NGC 6240 is the site of the merger of two large spiral galaxies similar in size to our Milky Way. Each galaxy contains a supermassive black hole at its center. The black holes are spiraling toward one another and may eventually merge to form a larger black hole.
Another consequence of the collision between the galaxies is that the gas contained in each individual galaxy has been violently stirred up. This caused a baby boom of new stars that has lasted for at least 200 million years. During this burst of stellar birth, some of the most massive stars raced through their evolution and exploded relatively quickly as supernovae.
The scientists involved with this study argue that this rush of supernova explosions dispersed relatively high amounts of important elements such as oxygen, neon, magnesium, and silicon into the hot gas of the newly combined galaxies. According to the researchers, the data suggest that this enriched gas has slowly expanded into and mixed with cooler gas that was already there.
During the extended baby boom, shorter bursts of star formation have occurred. For example, the most recent burst of star formation lasted for about 5 million years and occurred about 20 million years ago. However, the scientists do not think that the hot gas was produced just by this shorter burst.
What does the future hold for observations of NGC 6240? Most likely the two spiral galaxies will form one young elliptical galaxy over the course of millions of years. It is unclear, however, how much of the hot gas can be retained by this newly formed galaxy, rather than lost to surrounding space. Regardless, the collision offers the opportunity to witness a relatively nearby version of an event that was common in the early universe when galaxies were much closer together and merged more often.
In this new composite image of NGC 6240, the X-rays from Chandra that reveal the hot gas cloud are colored purple. These data have been combined with optical data from the Hubble Space Telescope, which shows long tidal tails from the merging galaxies, extending to the right and bottom of the image. | <urn:uuid:ec27db6d-e8a6-417f-9005-d6a4d7da6def> | CC-MAIN-2016-26 | http://www.astronomy.com/News-Observing/News/2013/05/Colossal%20hot%20cloud%20envelops%20colliding%20galaxies.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00147-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961923 | 542 | 3.65625 | 4 |
Int. J. Environ. Res. Public Health 2013, 10(11), 5671-5682; doi:10.3390/ijerph10115671
Abstract: Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge to support physicians closely in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires demanding iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To perform this guidance through telehealth, the use of formal clinical guidelines will allow the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as the mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized.
With the arrival of the Internet, the globalization of health and the increase in new opportunities for improving the care process by sharing knowledge, the paradigm of how physicians should face their daily work needs to be restated. Currently, not only is the number of patients that search for information about their illness on the Internet growing [1,2], but even junior physicians are starting to base their diagnosis and treatment decisions on information gathered on the Internet [3]. In this way, the use of the Internet for disseminating health-related knowledge in a more complete and effective way is now becoming a reality. This is one of the aims of the telehealth paradigm.
The idea of telehealth is not new. Since the nineteen nineties, the classical paradigm of clinical practice has been continuously under discussion. More formally, Nickelson defines telehealth as the use of telecommunications to provide health information and care across distance [4]. The telehealth philosophy has redesigned the framework of how physicians should face their daily work. The increase in the variability of patients that physicians can virtually visit, the possible lack of direct contact and the quantity of information available in a continuous care paradigm call for a profound change in classical physicians' daily practice. The classical paradigm, in which the physician is considered an isolated element who trusts their own experience to diagnose and apply adequate treatments to a patient, is now changing to one that makes use of well-known scientific knowledge as the basis for providing better and more effective treatments. These facts are forcing medical doctors to adapt their daily practice with new methods and technologies, moving from experience-based to evidence-based medicine (EBM) to address this problem.
EBM promotes the integration of the best biomedical evidence into physicians' daily clinical practice. EBM requires that physicians be active and continuously complement their expertise with the information available in large libraries of clinical cases. With the arrival of the digital era, the possibility of finding information about illness diagnosis and treatment on the Internet is increasing exponentially. Thanks to the current rapid and ubiquitous Internet access, it is possible to access incredibly large digital libraries over the Internet. That opportunity can be exploited by physicians, allowing them to apply very recent scientific medical studies to their current patients very shortly after publication. In the case of telehealth, where the physician may not have direct access to the patient, the use of patient-centered protocols to monitor and empower the patient in their own care process is critical. For that, the standardization of care continuity and the use of preventive patient-centered protocols will provide an efficient and effective way to benefit from the penetration of technology. In other words, the use of care protocols for standardizing health may be the solution that allows holistic control of the patient, integrated with the daily practice of the general practitioner. In fact, EBM and clinical guidelines have already been used for creating specific telehealth protocols [6].
However, although EBM aims to be patient centered, taking into account the patient's choices in the process of care, there is a growing skepticism about the way EBM and clinical guidelines have been deployed in a personal health approach [8,9]. Clinical guidelines are continuously improved by the results achieved in clinical trials. Clinical trials are based on stratification and segmentation, but not on individualized patients. In this way, clinical guideline critics argue that the characteristics of clinical trial population inclusion criteria differ critically from individual patients, who should be the target of guidelines. For the telehealth paradigm, the problem is even worse. The classical statistical approach of clinical trials is based on general probabilistic models that analyze the effect of treatments or diagnosis methods in different groups, looking for evidence that demonstrates the validity of the processes. However, these probabilistic models do not take into account characteristics such as the dynamic change of the patient's history or the iterative effect of physicians' decisions on patient behavior, depending on the patient's personality. In our vision, this information is critical in a telecare process. Continuous control of disease involves the patient and the physician in a very coupled, dynamic and iterative flow in which the decisions of physicians and the responses of patients seem to be as important as the biomedical data gathered in the care process. Therefore, to be able to construct patient-centered clinical guidelines in a holistic way, it is necessary to create probabilistic models that reflect the statistical dependencies and correlations among the variables in the care protocol of a disease, taking into account not only patient characteristics, but also the effect of general practitioners' decisions.
The use of that probabilistic model together with clinical trial statistical methods will make it possible to maximize the efficiency and accuracy of each optimization iteration of the clinical guideline.
In this paper, an interactive pattern recognition probabilistic approach based on EBM principles is formalized. This approach takes into account the whole care process, as well as the relationships among the stakeholders involved. This paper is organized as follows. First, EBM and clinical guideline concepts are defined in more detail. Second, our EBM probabilistic model is presented. Finally, a short discussion about the results concludes the paper.
2. Evidence-Based Medicine Principles and Clinical Guidelines
According to Sackett et al. [7], evidence-based medicine is the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the available external clinical evidence from systematic research.
In EBM, the clinical competence of individual physicians is integrated with the best clinical evidence available through systematic research. In this way, EBM is aimed at physicians making their diagnosis decisions and treatments based on the most up-to-date biomedical literature, reading it critically and taking into account their personal experience. EBM promotes the creation of clinical guidelines and protocols to guide clinical decisions. Those protocols and guidelines should be integrated into the professionals' daily practice. However, those protocols are not intended to be strictly followed, but to empower physicians to achieve cost-effective and high-quality care paths. In summary, EBM promotes:
Intensive use of the biomedical literature: The integration of biomedical literature with daily practice will allow the decisions of physicians to be based on statistical evidence. To allow that integration, it is necessary that this information be accessible to physicians in an easy and practical way.
Critical reading of the literature based on personal experience: Due to the high variability of human behavior and multi-pathological patients, it is very usual that patients that have the same illness have different responses to the same treatment. Therefore, the evidence taken from the biomedical literature should be used only as a valid complement to the personal experience of the physician.
Patient-centered care: The EBM advocates for patient involvement in the care process. The empowerment of patients and informal caregivers not only will allow for a more effective self-care of patients, but also allows for better understanding of their illness, allowing them to prevent disease complications.
The application of evidence-based medicine principles requires the continuous analysis of the literature and of clinical cases to support physicians' daily practice. To achieve such empowerment, the first step is to provide physicians with current biomedical knowledge in their work environment. One of the tools used by EBM to disseminate scientific evidence to the medical community is clinical guidelines. Clinical guidelines are documents whose objective is to support physicians' clinical decisions by providing them with scientifically validated evidence to diagnose, manage and treat each specific illness. More formally, in [11], clinical guidelines are defined as systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific circumstances. A more recent definition was presented in [12]: statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.
Clinical guidelines identify, summarize and evaluate medical knowledge based on scientific evidence. Those documents represent a continuously updated state of the art in the prevention, diagnosis, prognosis and treatment options that have currently demonstrated evidence of effectiveness for specific illnesses. Clinical guidelines are becoming reference documents for health professionals, supporting them in their daily decisions. Clinical guidelines have demonstrated their advantages [13,14], supporting health professionals in the continuous improvement of clinical outcomes, reducing the variability in clinical practice, forcing experts to unify criteria and providing greater cost effectiveness in daily practice.
The use of information and communication technologies (ICT) can be a way to disseminate clinical guidelines. In this line, there are different digital libraries that make clinical guidelines available over the Internet, like PubMed [15], Fisterra [16] or Cochrane [17]. These repositories have indexed a large quantity of clinical guidelines, available for use by physicians and medical teams.
These documents are growing exponentially in number and are continuously updated by biomedical researchers. This continuous improvement requires an iterative process that is currently under discussion [18,19,20,21]. To allow for the correct deployment of EBM, it is necessary that the most recent and contrasted scientific evidence be reflected in the clinical guidelines. In this iterative process, scientific discoveries are used by medical expert communities to update existing clinical guidelines, providing better hypotheses for caring for patients and converging, step by step, towards protocols that cover all the issues of an illness.
To maximize the efficiency of this iterative process, the creation of a probabilistic model allows us to work in a formal framework that ensures the theoretical correctness of our hypotheses and, thus, to obtain better results in practice. There are works that point to Bayesian theory as the most accurate formal framework for approaching the achievement of biomedical evidence [22], and some also warn about the problems of working with other widely used validation methods, like the p-value [23]. In those papers, the authors show how Bayesian theory can help in the validation of the evidence achieved in biomedical research, but they do not take into account one of the fundamentals of EBM: daily practice integration. Our hypothesis is that incorporating daily practice into the probabilistic model will allow us to achieve a better understanding of the EBM dependencies and, thus, obtain better results for the improvement of clinical guidelines.
In this paper, we propose a Bayesian approximation to the whole process of EBM, integrating biomedical research with the general practitioners' daily practice.
3. Evidence-Based Medicine in the Interactive Pattern Recognition Framework
The number of existing clinical pathways and guidelines available on the Internet for use by physicians and medical teams is increasing exponentially and continuously improving. However, the great amount of information available makes it practically impossible for physicians to stay properly updated. The pattern recognition (PR) paradigm can be a solution for supporting physicians in their daily practice. PR provides a formal framework that allows for the development of mechanisms for the supervision and inference of the most accurate protocols. Additionally, the PR framework allows us to design new adaptation techniques based on personal profiles.
Interacting with machines has proven to help many human activities. However, machines can also take advantage of human feedback to improve their performance. In this context, the new interactive pattern recognition (IPR) framework has recently been proposed [25]. This proposal enables interaction between a human and a PR system, allowing the system to learn from this interaction, as well as adapting the system itself to human behavior. IPR has been applied in different PR fields, including the interactive transcription of handwritten and spoken documents, computer-assisted language translation and interactive text generation and parsing, among others [25]. In this section, we aim to apply the principles of IPR to the management of evidence-based medicine (EBM).
The EBM-based guidelines are adapted depending on the specific characteristics of current patients. In an IPR scenario, these adaptations can be the basis for the automatic inference of new guidelines, helping in their continuous improvement through the pattern recognition approach. In other words, the application of IPR to EBM will allow us to iteratively adapt the clinical guidelines to the specific features of patients, as well as to automatically improve the new guidelines using the information of each individual adaptation. In addition, these inferred guidelines are expressed in a formal way. On the one hand, the formalization of clinical guidelines makes it possible to define models that represent, adequately and without ambiguities, the clinical care process. On the other hand, it also enables building automatic methods to estimate the formal guidelines, as well as mathematical models for optimizing the iterative cycle for the continuous improvement of guidelines.
3.1. Interactive Pattern Recognition Framework
In order to allow for an effective application of the pattern recognition paradigm, it is important to analyze the recognition problem from a probabilistic perspective. In the classical PR paradigm, we can formulate the problem as follows: let x be an input stimulus, observation or signal and y a hypothesis or output, which the system has to derive from x. Let M be a model, or a set of models, used by the system to derive its hypotheses. In general, M is obtained through an automatic batch training process from a given training corpus of the task being considered.
The idea of the classic PR paradigm is to find the output hypothesis, ŷ, that maximizes the posterior probability, Pr(y|x), of the hypothesis, y, given the input data, x:

ŷ = arg max_y Pr(y|x)   (1)

Using a model, M, this is approximated as:

ŷ ≈ arg max_y PM(x|y) · PM(y)   (2)
The terms in Equation (2) are the likelihood model, PM(x|y), which represents the relationship between the input stimulus and its output hypothesis, and the prior, PM(y), which represents the well-formedness of the output hypothesis.
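As a toy illustration of this decision rule, the following sketch picks the hypothesis maximizing the likelihood-times-prior product of Equation (2); all probability tables, symptom names and hypothesis labels are invented for the example:

```python
# Hypothetical likelihood model PM(x|y) and prior PM(y), which in a real
# system would be estimated from a training corpus of clinical cases.
likelihood = {
    ("fever", "flu"): 0.6, ("fever", "cold"): 0.3,
    ("cough", "flu"): 0.4, ("cough", "cold"): 0.7,
}
prior = {"flu": 0.5, "cold": 0.5}

def map_hypothesis(x, hypotheses):
    """Return the hypothesis y maximizing PM(x|y) * PM(y) (Equation (2))."""
    return max(hypotheses, key=lambda y: likelihood[(x, y)] * prior[y])

print(map_hypothesis("cough", ["flu", "cold"]))  # prints "cold"
```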
3.2. IPR Approach for EBM
However, the application of the classical pattern recognition framework to EBM is not realistic as stated. This is because the inferred models are rarely perfect, and the inference method cannot be fail-safe. Therefore, the presence of a health professional who ensures the validity of the hypotheses inferred by PR systems is needed. There are two possible approaches for incorporating the health expert into the inference process:
Post-process approach: The PR system offers a solution, and the expert analyzes and adapts it.
Interactive approach: The expert is involved in the IPR building process of the solution.
In this paper, we present a new IPR approach for supporting EBM in the formalization of clinical guidelines and in their optimization and adaptation to daily care. This means that health professionals are continuously involved in the process of identification, adaptation and optimization of the clinical guideline hypothesis. In Figure 1, a graphical description of the presented model is shown. According to the EBM philosophy, we have separated the problem into two different stages: the daily care protocol cycle and the interactive protocol improvement cycle.
3.2.1. Daily Care Protocol Cycle
The daily care protocol cycle represents the usual path followed by the patient involved in a care process following a clinical guideline. In this cycle, the patient is in touch with his physician. Depending on the most adequate clinical guideline, h, and the multiple signs and symptoms of the patient, x, a different status, s, is suggested, associated with the appropriate treatment or diagnostic method. A patient can respond differently to the treatment depending on his pathologies or personal characteristics, which can affect the treatment results (for example, adherence). These results will become new entries, r, to the next cycle iteration. This model can be seen as a classical dialogue system, where the treatment is the response to the signs and symptoms of the patient. In each iteration of the daily process, the physician analyzes the data, x, the status, s, and the current clinical guideline, h, to correct the status of the patient within the clinical guideline. If considered necessary, the physician is able to modify the patient status. That implies the application of a different treatment or diagnostic method that has not been directly suggested by the clinical guideline. For example, if, according to the data gathered, the current hypothesis (clinical guideline) is that the patient severity is high, but the physician considers that this is not accurate, he can change the status to low severity, applying the treatments proposed for this case by the clinical guideline.
Formally, the system obtains the best status, ŝ, associated with the best treatment, using all the information gathered (x, h, S′):

ŝ = arg max_s Ph(s|x, S′)   (3)

In Equation (3), h is the clinical guideline associated with the patient, and S′ is the history of all the previous states visited by the patient in the clinical guideline defined by h, supervised and modified, if needed, by physicians in each iteration.
Using Bayes' rule and applying the restriction that Ph(x|s, S′) does not depend on S′, we reach Equation (4):

ŝ = arg max_s Ph(x|s) · Ph(s|S′)   (4)

This assumption, similar to those made in other well-known probabilistic models, like Hidden Markov Models (HMM), reduces the problem, making it easier to solve. Observing Equation (4), it is important to note that Ph(s|S′) is the a priori probability of s being compatible with S′; thus, we take into account only the statuses, s, compatible with the current hypothesis, h, and the history, S′. Each status has associated treatments and diagnostic methods that cause a response, r, in the patient. This response will be used in the next care iteration as new gathered data, x.
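The maximization of Equation (4) can be sketched as follows; the statuses, observed signs and probability values are illustrative assumptions, with the transition term depending only on the last visited status, as in an HMM:

```python
# Hypothetical emission model Ph(x|s): probability of observing sign x in status s.
emission = {
    ("high_bp", "severe"): 0.9, ("high_bp", "mild"): 0.1,
    ("normal_bp", "severe"): 0.1, ("normal_bp", "mild"): 0.9,
}

def transition(s, history):
    """Ph(s|S'): here only the last visited status matters (Markov-like)."""
    table = {
        ("mild", "mild"): 0.8, ("mild", "severe"): 0.2,
        ("severe", "mild"): 0.3, ("severe", "severe"): 0.7,
    }
    return table[(history[-1], s)]

def best_status(x, history, statuses=("mild", "severe")):
    """Pick the status s maximizing Ph(x|s) * Ph(s|S') (Equation (4))."""
    return max(statuses, key=lambda s: emission[(x, s)] * transition(s, history))

print(best_status("high_bp", ["mild"]))  # prints "severe"
```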
In this line, the physician can correct the status in each iteration by selecting different treatments and diagnostic methods. However, according to the model, the physician is not able to change the structure of the hypothesis, meaning that the physician cannot change the clinical guideline. If, according to the physician's experience, the patient needs a different treatment not included in the hypothesis, then the patient should leave the clinical guideline, starting a classical process of care.
3.2.2. Interactive Protocol Improvement Cycle
As we have seen in the previous section, the physician is not able to modify the structure of the current clinical guideline, h. The improvement of the clinical guidelines takes place in the interactive protocol improvement cycle. In this stage, a group of independent health experts is involved in the interactive learning process to offer new and optimized clinical guidelines to physicians for daily practice.
The treatments and diagnostic methods used by physicians in daily care, as well as the responses of the patients can be used to infer new clinical guidelines, better adapted and optimized for daily practice using interactive pattern recognition methodologies. The aim of this section is to build the probabilistic formal framework of IPR to support experts in this second cycle.
The second cycle exposed in Figure 1 represents the continuous improvement of the clinical guidelines. In this stage, the patient’s signs and symptoms, x, and the diagnostics methods and treatments, s, will be used to infer a new improved clinical guideline, h. In addition, this continuous improvement also depends on the medical expert committee decisions, f, which apply human knowledge to the clinical guideline, as well as the previous clinical guideline used, H, due to the close relationship between the treatment followed and the entry data. For this model, we assume that new advances and scientific evidence are included in the f function and are filtered and applied according to the medical expert committee decisions.
The new guideline is therefore obtained as:

ĥ = arg max_h P(h|x, s, f, H′)   (5)

In Equation (5), h is the clinical guideline, H′ is the history of applied clinical guidelines, x is the data collected from the patient, s is the status of the patient corrected by the physician and f is the feedback of the expert group, which is able to modify the structure of the guideline by inserting, deleting and modifying the available statuses. Intuitively, the new clinical guideline depends on the information gathered from daily care (x, s), the expert committee decisions, f, and the history of previous hypotheses, H′.
Applying Bayes' rule in a similar way to the previous section, we achieve Equation (6):

ĥ = arg max_h P(x, s|h, f, H′) · P(h|f, H′)   (6)

where (x, s) are the entry samples. P(x, s|h, f, H′) can be simplified by making a naive Bayes assumption: the input observation, x, and the current state in the process, s, are statistically independent variables given h, f and H′, obtaining:

ĥ = arg max_h P(x|h, f, H′) · P(s|h, f, H′) · P(h|f, H′)
Simplifying the dependencies on the medical expert committee feedback, f, and the historical hypotheses, H′, we can write the prediction of ĥ in more detail:

ĥ = arg max_h P(x|h) · P(s|h) · P(h|f, H′)   (7)
According to this equation, in order to maximize clinical guideline improvement, it is necessary to take into account not only the concordance of the new clinical guideline with the signs and symptoms of the patient, P(x|h), but also the concordance with the treatments followed, P(s|h). This is because the selection of the correct treatments and diagnostic methods is related to the response of the patient, gathered in the form of signs and symptoms. In addition, we need to take into account P(h|f, H′), the probability that the hypothesis, h, is compatible with the expert decisions, f, and the history of the guidelines, H′.
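As an illustrative sketch (the candidate guidelines and their scores are invented), the maximization over h can be carried out in log space, summing the three terms P(x|h), P(s|h) and P(h|f, H′):

```python
import math

# Hypothetical log-probability tables for two candidate guidelines; in a real
# system, these would be estimated from daily care records and expert feedback.
candidates = {
    "guideline_v1": {"log_p_x": math.log(0.2), "log_p_s": math.log(0.3),
                     "log_p_prior": math.log(0.6)},
    "guideline_v2": {"log_p_x": math.log(0.4), "log_p_s": math.log(0.5),
                     "log_p_prior": math.log(0.4)},
}

def best_guideline(cands):
    """Return the guideline h maximizing P(x|h) * P(s|h) * P(h|f, H').

    Working in log space avoids numeric underflow when many observations
    are combined into each term.
    """
    return max(cands, key=lambda h: cands[h]["log_p_x"]
               + cands[h]["log_p_s"] + cands[h]["log_p_prior"])

print(best_guideline(candidates))  # prints "guideline_v2"
```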
4. Discussion and Conclusion
In this paper, a double cyclic interactive paradigm for applying pattern recognition to EBM is formalized. In the first cycle, the daily care of the patient is formalized by the clinical guideline and supervised by the experience of the physician in the interaction with the patient. In the second cycle, the clinical guidelines, used by physicians in daily practice, are constructed and optimized based on clinical evidence from the results achieved in studies made by biomedical researchers.
According to probability theory, systems intended to build good clinical guidelines should maximize the probability of the data gathered from the patient, x, the probability of the physician interactions (the supervised statuses, s) and the a priori probability of the model. That means that the treatments, diagnostic methods and other patient/physician interactions are as important as the data gathered (biomedical, demographic, etc.) from the patient. All these interactions directly affect the success of a clinical guideline and should be taken into account, together with the rest of the data in clinical cases, to improve the clinical guidelines.
Intuitively, daily practice physician interactions can provide information related to the experienced intuition of the physician or to the behavior of the patient facing the treatment (i.e., adherence), which can be critical in the selection of the disease protocol and can be very difficult to measure directly from the patient. The application of a treatment can itself be a diagnostic method, and even void treatments can provide information (i.e., the placebo effect). Therefore, and according to our theoretical results, the treatments, diagnostic methods and other patient/physician interactions that are applied should be added to statistical datasets, to learn models that are more accurate and better adapted to reality.
Furthermore, the deductive decisions provided by the medical expert committee in order to improve the clinical guideline based on previous hypotheses are decisive. Therefore, the creation of systems and models that make these experts more aware of the physicians' daily practice will produce more effective and accurate clinical guidelines.
In order to provide a framework to evaluate the proposed system, we advocate for one based on usability and quality of service. As IPR is a supervised model, the error expected in each iteration is zero. This is because the experts correct the errors in each iteration to ensure a safe deployment of the clinical guidelines in real cases. In that case, indicators, such as the number of iterations needed to achieve a complete (or acceptable) clinical guideline, the number of corrections made by physicians and medical experts in each iteration, the quality of service or the satisfaction of the users (not only physicians, but also patients), can be used to evaluate the system. However, this evaluation should be made in real cases with real patients in order to evaluate the impact of this model in daily practice.
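The indicators named above can be tracked with simple bookkeeping; a hypothetical sketch follows (the correction counts are invented, not data from a real deployment):

```python
# Number of expert/physician corrections recorded in each supervision
# iteration of a guideline (illustrative figures only).
corrections_per_iter = [14, 9, 5, 2, 0]

def iterations_to_acceptance(corrections, threshold=0):
    """First iteration (1-based) whose correction count is <= threshold,
    i.e., when the guideline is considered acceptable; None if never."""
    for i, c in enumerate(corrections, start=1):
        if c <= threshold:
            return i
    return None

print(iterations_to_acceptance(corrections_per_iter))  # prints 5
```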
To take advantage of this paradigm, we need to use a formal language to represent the hypothesis. This formal representation can be interpreted by computers, and then, care can be deployed using the telehealth paradigm. Those protocols can be supervised by physicians using the daily care cycle interactive pattern formalized in this paper. In the second interactive cycle, we suggest the creation of interactive data mining processes that incorporate the patient and the professional. This will take into account not only the results of classical statistical approaches, but also the interactions among professionals and patients, like dynamic treatments, and patient decisions, like adherence, which will be integrated into the model, making it more accurate and adapted to reality. However, to allow for a fully interactive system, we need representation models that are easy for human experts to understand, because the easier a language is to understand, the easier the hypothesis is to supervise and optimize.
In this way, to apply the formalism achieved, we suggest the use of finite state-based workflows as the hypothesis language, as well as process mining algorithms to infer the new hypotheses. Finite state-based workflows are designed to be easily understood and have been used to represent guidelines [26,27]. Finite state systems can be automated by computers and can be propagated through telehealth applications, allowing for the supervision of physicians in daily care. Process mining algorithms can be used to infer workflows that can be supervised, optimized and corrected by health experts to achieve better formal clinical guidelines in the next iteration of the process.
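As a minimal sketch of what such inference could look like (a toy transition-frequency count, not the authors' process mining algorithm; the traces and threshold are invented), a finite-state workflow can be estimated from observed care traces by counting status transitions and keeping the frequent ones:

```python
from collections import Counter

# Hypothetical care traces: sequences of statuses visited by patients.
traces = [
    ["intake", "diagnosis", "treatment", "discharge"],
    ["intake", "diagnosis", "treatment", "treatment", "discharge"],
    ["intake", "diagnosis", "discharge"],
]

def mine_transitions(traces, min_count=2):
    """Return the set of status transitions observed at least min_count times."""
    counts = Counter()
    for t in traces:
        counts.update(zip(t, t[1:]))  # count consecutive status pairs
    return {edge for edge, c in counts.items() if c >= min_count}

workflow = mine_transitions(traces)
print(sorted(workflow))
```

The resulting edge set would then be rendered as a finite-state workflow for the expert committee to supervise and correct.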
Acknowledgments
The authors want to acknowledge the European Commission for its support via the MOSAIC (ICT-FP7-600914) and HEARTWAYS (ICT-SME-315659) EU projects.
Conflicts of Interest
The authors declare no conflict of interest.
- Kummervold, P.E.; Chronaki, C.E.; Lausen, B.; Prokosch, H.U.; Rasmussen, J.; Santana, S.; Staniszewski, A.; Wangberg, S.C. eHealth trends in Europe 2005–2007: A population-based survey. J. Med. Int. Res. 2008, 10, e42. [Google Scholar] [CrossRef]
“Our immigrant ancestor…was part of a large group of Presbyterians who followed an emigration led by the Reverend William Martin in 1772. Several Presbyterian pastors led their congregations in emigrations from Ulster to America in the decade following Rev. Dr. Thomas Clark's emigration from Ballybay, Northern Ireland to New York Colony in 1764. The most notable of these was the Martin emigration of Covenanter Presbyterian in 1772 from the area of Kellswater in central County Antrim, now part of Northern Ireland."
"In 1750 Presbyterians from Octoraro, Virginia, and North Carolina, came to South Carolina and settled at Rocky Creek. By 1755 Irish immigrants, many of them Covenanters, began arriving. Various groups (Associate, Covenanter, Burgher, Anti-Burgher, Seceders) formed the "Catholic" (meaning a union of various groups of Presbyterians) church on Rocky Mount Road, 15 miles southeast of Chester. In 1770 Covenanters began holding society meetings and wrote to Ireland for a minister. Reverend William Martin answered the call in 1772."
"The Rev. William Martin was the only Covenanter minister in counties Down and Antrim at that time. In 1760 he resided at Kellswater, in the townland of Carnaghts in the Parish of Connor. He had oversight responsibility for societies at Cullybackey, Laymore, Cloughmills, and Dervock. He preached also in Londonderry and Donegal. The Presbytery was founded in 1743 and Kellswater became the center in 1760."
"There were five ships in the emigration led by Reverend Martin, all of which sailed in 1772. The first two sailed from Larne, the next two from Belfast, and the last one from Newry. The emigrants settled throughout western South Carolina, many in the Abbeville area. Reverend Martin himself settled in the general area of Abbeville, South Carolina (Rocky Creek in Chester County). After the British burned his church in 1780, he took refuge in Mecklenburg County, North Carolina.”
“The James and Mary sailed first on August 25 from Larne. There was smallpox on board (five children died) when they arrived in Charleston harbor on October 16. They were required to remain on board in quarantine, lying off Sullivan's Island for over seven weeks, until the first part of December.” Ulster Emigration to Colonial America: 1718-1775. Dickson, P:253.; English America: American Plantations & Colonies. Thomas Langford, contains ship lists of voyages to English America from 1500 to 1825; The Vessels, Voyages, Settlement and People of English America, 1500-1825.
“The next ship to sail was the Lord Dunluce that left Larne on October 4 and arrived in Charleston on December 20. This is the only ship that listed "Rev. Wm. Martin (Kellswater)" as an agent. The original sailing date was to have been August 15. The sailing was delayed until August 20, and then rescheduled for September 22. On August 28, the ship announced that passengers must give earnest money by September 5 since a greater number had offered to go than could be taken. On September 15, the ship advertised that, since some families had drawn back, two hundred more passengers could be accommodated. Reverend Martin was on this ship when it finally sailed on October 4. One man and several children died of small pox on the trip.” Dickson, P:254.
“The Pennsylvania Farmer, whose destination had originally been advertised as Philadelphia, sailed from Belfast on October 16 and arrived in Charleston on December 19.” Dickson, P:248.
“The Hopewell sailed from Belfast on October 19 and arrived in Charleston on December 23.” Dickson, P:248.
“The Freemason sailed from Newry on October 27 and arrived in Charleston on December 22” Dickson, P:252.
“According to Council Journal 37, Province of South Carolina, under date of 6 Jan. 1773, the brigantine Free Mason, out of Ireland (port not specified), discharged at Charles Town, South Carolina, the following among its Irish Protestant immigrant passengers who were authorized the amount of land, in South Carolina, indicated opposite their names:
(55 listed passengers alphabetized here by surnames - Land Warrant Petitions with number of acres)
Anderson, Hugh . . .100
Barnes, George . . .100
Beard, Jean . . .100 (Listed separately)
Beard, Margaret . . .100 * Listed together in this order
Beard, William . . .100 *
Bigham, Margaret . . .150
Breden, James . . .300
Brown, John . . .300
Coapling, Charles . . .150 * Listed together in this order
Coapling, Alexand(er) . . .100 *
Coapling, William, Jun'r ... 100 *
Coapling, Jane . . .100 *
Coapling, Charles . . .100 (Listed separately)
Coapling, William . . .350 (Listed separately)
Cox, James . . .300
Daniels, Margaret . . .100
Eger, Emila . . .100
Fleman, John . . .100
Foster, Isabella . . .100 * Listed together in this order
Foster, James . . .100 *
Foster, William . . .300 *
Foster, Sarah . . .100 *
Gorley, Hugh . . .100
Hall, John . . .250
Livingston, Isaac . . .300
McClurkam, Richard . . .150 (Listed "Rich'd")
McGreary, Edward . . .100
McKay, Samuel . . .450
McKee, William . . .250
McKnight, John . . .350 (Listed separately)
McKnight, Mary . . .100 * Listed together in this order
McKnight, Jane . . .100 *
McKnight, Margaret . . .100 *
McLeland, Thomas . . .100
McMachor, Arthur . . .100
Mullen, John . . .100
Nisbett, Jonathan . . .100 (Listed separately)
Nisbett, Robert . . .400 (Listed separately)
Paterson, Samuel . . .350 (able to pay for land)
Patterson, Mary . . .100 (unable to pay for land)
Presley, Mary . . .100
Pressley, John . . .300
Reynolds, William . . .450 (Listed "Wm.")
Richey, John . . .100
Riddle, John . . .300
Shane, William . . .100
Stevenson, Catherine . . .100 (Listed "Cath'n")
Stuart, Charles . . .100 (Listed "Chas.")
Taylor, Andrew . . .200
Thomson, Henry . . .200 * Listed together in this order
Thomson, William . . .100 *
Thomson, Robert . . .100 *
Thomson, John . . .100 *
Thursdale, John . . .250
Wilson, James . . .100”
"In the Province of South Carolina in 1773, land was granted under the Crown, as follows:
Single man or woman (16 yrs. of age or older) - 100 acres
Married man or widow - 100 acres for self and 50 acres for each child under 16 years
Married woman - none."
"Prior to this time, the "Bounty Act" had expired and no bounty could be paid to the individuals. There was, therefore, no list of the passengers for the purpose of determining "family rights". Family members and other individual passengers who were not eligible (e.g., under 15) to petition for free land (still available under the eighth clause of the General Duty Act of June 14, 1751) are not listed."
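As a side note, the grant rules quoted above amount to a simple formula, sketched below. This is illustrative only: the function name, the status labels, and the example inputs are assumptions for the sketch, not taken from the original Council Journal records.

```python
# Sketch of the quoted 1773 Crown land-grant rules for the Province of
# South Carolina. Status labels ("married man", "widow", etc.) are
# illustrative, not terms from the original records.

def crown_land_grant(status: str, children_under_16: int = 0) -> int:
    """Acreage a petitioner could claim under the quoted 1773 rules."""
    if status == "married woman":
        return 0  # married women received no grant in their own right
    acres = 100  # base grant for self
    if status in ("married man", "widow"):
        acres += 50 * children_under_16  # plus 50 acres per child under 16
    return acres

# Under this reading, a married man with four children under 16 could
# claim 300 acres, one possible explanation of the 300-acre warrants
# appearing in the list above.
print(crown_land_grant("married man", 4))  # 300
print(crown_land_grant("single woman"))    # 100
```

This also illustrates why individual 100-acre warrants dominate the list: single adults and family members petitioning separately each qualified for the base grant alone.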
Source Citation and Source Information:
Scotch-Irish Migration to South Carolina, 1772: Reverend William Martin And His Five Shiploads of Settlers. Jean Stephenson. Shenandoah Publishing House. 1970.
The Five Ships and the People who came with the Rev. Martin. The names of the emigrants have been reconstructed from letters written home to Ulster and published in the paper and from extractions of the South Carolina Quarter Session Minutes, by Janie Revill and Jean Stephenson; there is a Surname Summary of those who came with the Reverend William Martin.
Ships to South Carolina, 1768 & 1772.
Journal 37 of the South Carolina Council, Meeting of January 6, 1773, PP:15-25.
Protestant Immigrants to South Carolina, 1763-1773, PP:126-127.
Patterson Immigration: The Descendants of Samuel Senton Patterson - From County Down, Ireland to South Carolina & Beyond.http://freepages.genealogy.rootsweb.ancestry.com/~pattersonh...
Passengers to the Carolinas. USGenWeb. South Carolina. Victoria Proctor. http://www.sciway3.net/proctor/state/ships/SC_ships2.html | <urn:uuid:2cd0edcd-ed2e-47bb-854c-4f65277fe7ca> | CC-MAIN-2016-26 | http://boards.ancestry.com/surnames.martin/13132.1/mb.ashx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00190-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.925016 | 2,009 | 3.015625 | 3 |
Nepal needs a better surveillance system to accurately estimate animal-borne parasitic infections that claim more victims than malaria and are comparable to HIV/AIDS in this country, says an international study.
A team of scientists from Belgium, Nepal, New Zealand, Switzerland and the Netherlands estimates the public health burden of animal-borne parasitic disease or 'parasitic zoonoses' in Nepal at 24,000 healthy years lost annually.
The study says that parasitic zoonoses are grouped under neglected tropical diseases (NTDs) that are prevalent in, and often endemic to, low-income countries.
In Nepal, top parasitic zoonoses include 'neurocysticercosis,' caused by the pork tapeworm that affects the nervous system, and 'cystic echinococcosis,' caused by dog tapeworms and spread through dog feces.
A third is 'congenital toxoplasmosis' in which mothers infected with the toxoplasma parasite give birth to infected children.
The study calculated the burden of these diseases using the global health metric system of disability-adjusted life years (DALYs), which indicates the number of healthy years lost by a patient or a population due to infection, death or both.
The team estimated that Nepal loses 14,268 healthy years annually due to neurocysticercosis, 9,255 years due to congenital toxoplasmosis, and 251 due to cystic echinococcosis.
Nepal's integrated NTD control program (2010–2014), implemented by the health ministry's epidemiology and disease control division, is hampered by a lack of population-level data for these diseases, says Keshab Yogi, program officer for NTDs at the WHO office in Nepal.
The study notes that "the official passive surveillance system of the government of Nepal, the Health Management Information system, has been reported to suffer from inconsistencies... active surveillance systems are in place, but (they) only target certain vaccine-preventable diseases, and not parasitic zoonoses."
The study analyzed numerous data sources to examine the relevance and importance of such infections in Nepal. "We believe that such efforts can help interrupt the vicious circle of under-recognition, underfunding and neglect," Brecht Devleesschauwer, lead author of the study, said.
The DALY metric was used in the Global Burden of Disease Report 2010 (a comprehensive regional and global assessment of mortality and disability from major diseases, injuries, and risk factors) to estimate the burden of some parasitic zoonoses in Nepal. "But these merely stratify global estimates by country," Devleesschauwer said.
"Our paper is a response to such estimates, as our estimates are rooted in local data, and not the result of a multi-country modelling and smoothening exercise," Devleesschauwer explains.
Source: Science Development Network
The Abolition of Slavery in Pennsylvania

2005 was the 225th anniversary of the abolition of slavery in the Commonwealth of Pennsylvania.

In 1770 the legislature of the Commonwealth of Pennsylvania — the General Assembly — had passed legislation prohibiting the importation of slaves, but this was disallowed by the Privy Council in London. In 1701, when the General Assembly passed legislation imposing a duty of £20 on every slave imported — thereby making it uneconomic to import slaves — the legislation was again disallowed by London.

In 1780 the General Assembly enacted an Act to abolish slavery in respect of people born in the future and provided for the gradual abolition of slavery in respect of existing slaves.

A copy of the early constitution of Vermont can be found at the Avalon Project of the Yale Law School.
Brasilia is the capital of Brazil, with a population of about 3.6 million in its metropolitan area. The city was planned and developed in 1956, with residential buildings located around expansive urban areas, and specific areas for almost everything, including hotel sectors. When seen from above, the shape of the main planned part of the city resembles an airplane or butterfly. The image was acquired October 19, 2005, covers an area of 31 x 25.8 km, and is located at 15.7 degrees south latitude, 47.8 degrees west longitude.
With its 14 spectral bands from the visible to the thermal infrared wavelength region and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products.
The broad spectral coverage and high spectral resolution of ASTER provides scientists in numerous disciplines with critical information for surface mapping and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance.
The ASTER U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate, Washington, D.C.
More information about ASTER is available at http://asterweb.jpl.nasa.gov/. | <urn:uuid:2c59f077-eb5f-4a3c-b0d6-36b4ea99ee2d> | CC-MAIN-2016-26 | http://photojournal.jpl.nasa.gov/catalog/PIA13045 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00073-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923955 | 368 | 3.09375 | 3 |
While scavenging through her family archives, Ruth Bush of Lehighton chanced upon a worn spiral-bound, orange-covered book entitled the Story of Carbon County.
As she curiously peered through its introductory pages, she learned of how it came to be, and was all the more fascinated.
The 154-page book began, "In this 'Space Age' as we look to the moon and beyond, we sometimes tend to overlook the importance of that which is closest to us.
"Many residents of Carbon County are completely unaware of the county's colorful history, present growth, and plans for a prosperous future.
"To help pupils in the Carbon County Schools more fully appreciate their heritage and become more aware of the plans for their future, Mrs. Natalie B. Murray, director of the Special Education Curriculum Development Center, Title III Federal Project, suggested that 16 groups of mentally and academically-talented students help to compile information about Carbon County."
These student teams began their projects in November 1969. The resulting book was completed a year-and-a-half later, in March 1971, by Russell R. Hahn and Franklin A. Mummy, "teachers: Classes for the Mentally and Academically Talented."
The fully-illustrated, black-and-white tome is composed of eight chapters: The Indians, Treaties with the Indians, Moravian Indian School, Frontier Indians, Industrial Growth, People and Places, and the Mollie Maguires, culminating with a final section on Plans for the Future.
The writing, which is to the point and often revealing, spends its first four chapters on the Native Americans, who in 1970 were referred to as "Indians."
It begins with the journey of the Mengwe, later called the Iroquois, and the Lenni-Lenape, as they crossed the Namaesi Sipu (Mississippi River) into the land of the Alligewi Indians, namesakes of the Allegheny Mountains.
It talks about Indian life and throws in the legend of Glen Onoko.
Next, it covers Penn's land purchases from the Lenape, and the 1737 Walking Purchase that took away their remaining land. A series of maps indicate how each treaty expanded the land of the Pennsylvania colonists into Towamensing, the wilderness.
Then, the story introduces the Moravians, who built the village of Gnadenhutten, part of present-day Lehighton. The Moravians began to convert the Indians. Indians, who had been chased off their land by the Walking Purchase, took revenge on the Moravians and their Indian converts, leading to a state of warfare and the mustering of troops under Benjamin Franklin to build Fort Allen, present-day Weissport.
The book notes some stories that have become lost, such as the "Wolf Pack" story. Shortly after 1800, the story notes, William Arner and his family moved to Mahoning Valley. When he found his flock of sheep nearly destroyed by wolves, he set a trap and lay in wait in a tree.
When the wolves came, he fired at the leader, and then when he went to reload, he dropped his rifle. He spent the night with a dozen growling wolves gnawing at the trunk until they left at daybreak.
The book mentions People and Places – the lesser-known Peter Nothstein is singled out.
"Peter Nothstein was one of the few soldiers from this area who fought in the Revolutionary War," it said. Born in 1760 in Mahoning Valley, he served under Major General John Sullivan in the Battle of Long Island. He survived the British takeover by hiding underwater and breathing through a reed.
The book concludes with "Plans for the Future." Interestingly, many of the insights at the time remain true today. For instance, this is a list suggested by the County Planning Commission. One can clearly see the influence of then County Planning Commission Executive Director, Agnes McCartney.
1. Expand the county's resort and vacation industry.
2. Encourage industrial growth primarily by attracting new businesses to the region.
3. Try to improve overall living conditions.
4. Improve the county's public facilities, such as the county home.
5. Preserve the county's most interesting open spaces for present and future generations.
The book concludes, "Each of you can take an active interest in the activities and programs that are currently underway or under construction. It's your future! Help to protect it!"
Reading the book today, two generations later, shows that it is as true today as it was then. | <urn:uuid:a7d5abdb-b42c-49e2-8f6e-ce544114c637> | CC-MAIN-2016-26 | http://www.tnonline.com/2010/feb/06/dusting-story-carbon-county | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00139-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.964366 | 957 | 2.96875 | 3 |
Enoch Poor was one of the best brigade commanders in Washington's Army. Born in Andover, MA, he had served under Amherst during the French and Indian War. Following the Lexington alarm he was appointed Colonel of the 2nd New Hampshire. He took part in Montgomery's invasion of Canada and was made Brigadier General in February, 1777. The nucleus of Poor's Brigade was three New Hampshire regiments, and at various times it also included Hazen's 2nd Canadian (Congress's Own), the 2nd and 4th NY, and even some Connecticut militia.
Poor and his men fought with the Northern Army throughout the Saratoga Campaign from Ticonderoga to Burgoyne's surrender, serving with distinction at Freeman's Farm and Bemis Heights. He wintered at Valley Forge and participated in the final maneuvers at Monmouth. Poor's Brigade was dispatched to escort the British and German prisoners of Burgoyne's surrendered "Convention Troops" through part of CT on their way to internment in Virginia in November, 1778. The following year, Poor played a prominent role in Sullivan's Expedition against the Iroquois.
In early August 1780, the Marquis de Lafayette formed a light division with uniforms and equipment he had brought back from France that Spring. Poor accepted command of one of the two Brigades in Lafayette's division (Brigadier General Hand had the other). The light infantry were considered elite troops and were used aggressively. Both Poor and Hand were proven commanders of hard fighting brigades that were used as shock troops.
The new Light Division was organized as follows:
- 1st Brigade (Brig-Gen Enoch Poor)
- Van Cortlandt's Battalion (Col Philip Van Cortlandt's 2nd NY regiment, consisting of five New York and three New Hampshire companies),
- Shepard's Battalion (Col William Shepard's 4th MA regiment with eight MA companies),
- Gimat's Battalion (Lt-Col Jean-Joseph Sourbader de Gimat, with eight MA companies).
- 2nd Brigade (Brig-Gen Edward Hand).
Washington wrote Lafayette on August 3rd, 1780 that " Your light infantry is formed about two thousand fine men; but the greatest of them naked." That same day he sent word to General Poor, then at Danbury with his old Brigade, that "The sooner you take your command in the Light Infantry the better."
Except for certain officers and men in Van Cortlandt's battalion, Poor's new Brigade was comprised of men with whom he had not previously served. Lt.-Col. de Gimat was one of Lafayette's aides and new to his mixed battalion of Massachusetts companies. Poor was better known to his commander, having served briefly under Lafayette during his first independent command prior to the British evacuation of Philadelphia.
Pulling together, properly equipping, and training Lafayette's Division continued throughout August, 1780. Washington with the main army consolidated in northern New Jersey, where he faced a number of strategic challenges. The general wanted to coordinate with French reinforcements based in Rhode Island in an attack on New York. Lafayette craved such a bold task for his light troops. The British in New York, however, had been lately reinforced by victorious troops from the southern theater, and Washington suspected they might be strong enough to launch an attack on the French in Newport. The southern American Army was in dire straits and some of Washington's officers, frustrated by inaction, desired a transfer to the South.
On September 6th, Washington called his Generals to a council of war to discuss which strategies to pursue. Brigadier General Hand was there, but General Poor was not. We will look into the reasons why this was so in a subsequent post in this series. | <urn:uuid:3ba9d661-b233-4959-be12-110d30548b2d> | CC-MAIN-2016-26 | http://greensleeves.typepad.com/berkshires/2011/09/the-death-of-general-poor-part-ii.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00066-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.985367 | 769 | 3.015625 | 3 |
Displaying 1 - 6 of 6 resources in Environmental Disasters and Publications:
1. British Columbia's Forests - A Geological Snapshot
British Columbia has developed a series of detailed province-wide maps with information on tree species, forest age, natural and human disturbance, protected areas, ownership and ...
2. Counter Hegemony Project
nottingham, United Kingdom
CounterHeg is a counterHegemonic newsletter compiled with news, art, essays, sounds, etc. that challenge the existing social order. Its agenda is change. Workers' rights, Women's ...
3. Ethical Consumer Research Association
Manchester, United Kingdom
Ethical Consumer Research Association is a non-profit workers co-operative that produces a magazine, maintains a database of ethically-related corporate information, and conducts research for campaign ...
4. Save The Pine Bush & Karner Blue Butterfly
Albany, NY, USA
A grassroots organization to preserve a unique pine barrens ecosystem, which is the habitat of the endangered Karner Blue butterfly. Located in the Capital District ...
5. Too Wild to Drill by The Wilderness Society
The Wilderness Society released a report highlighting wild lands that are currently being threatened by the Bush administration's energy plan. In this report, The Wilderness ...
6. Underwater Times
castro valley, CA, USA
All wet, All news. The daily journal of life in and around water. ... | <urn:uuid:ee18c3be-c8c7-43ef-965a-8b01bd61e861> | CC-MAIN-2016-26 | http://www.envirolink.org/topics.html?topic=Environmental%20Disasters&topicsku=2002119145522&topictype=topic&do=catsearch&catid=6 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00199-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.835124 | 292 | 2.6875 | 3 |
Subject Headings and web searching – making the library count
Providing access to information and knowledge is a primary function of library practice and in a series of three blogs I will focus on the importance of Subject Headings.
Part 1 Subject Headings and web searching – making the library count
Part 2 Library of Congress and SCIS Subject Headings
Part 3 The Schools Online Thesaurus (ScOT) – not your ordinary Subject Headings
Getting the mechanics of library provision right is the first step to future-proofing your school library and the information access of your users.
'The National Library Guidelines for New Zealand Schools' of 2002 (p. 32) state that the first critical success factor for library access is: "The library contributes to effective information management within the school and plays an integral role in the school's ICT infrastructure".
Increasingly this infrastructure is moving beyond the school into the global environment, making the selection of and access to appropriate quality information more important than ever.
As well as managing access, the school library’s systems should also support the development of student’s information seeking skills. With the popularity of the Web, I can see why librarians may despair trying to instill good information literacy skills into their students when ‘Googling’ offers such a quick fix option.
One of the many benefits of teaching students effective search skills using the Library Catalogue is that these skills are transferable to the Web and other online search databases. By beginning with the Library Catalogue, students can more easily define their search in a quality controlled database. Searching using Subject Heading terms, along with keywords and phrases, produces better Web results, and quality results in online databases like EPIC .
If you are searching for "teen reads", Subject Heading terms such as "Teen fiction", or better still "Young adult fiction", produce better library-related results both in the Library Catalogue and in a Web search.
The best way to experience this is to do a comparison search on the Web and compare the results. Your students can also benefit from this knowledge and it is a way of understanding the value of using the library catalogue as a quality control for further online searching.
I won't spend time discussing the use of Subject Headings versus web tags or tag terms, but as a starting point it is good to understand the difference between the Subject Headings used in the Library Catalogue and the tag terms used to organise information on the Web.
Most of you with online accounts will be aware of tagging or may have been tagged. It might be the embarrassing photo on Facebook linked to your name tag or the latest hashtag discussion on Twitter. Or you may have been busy flexing your library skills organising your blog entries with appropriate key terms. Tag words are everywhere and can be used and created by anyone and like the keywords and Subject Headings used in Library Catalogue records, tag terms help to organise content and make it accessible to others.
Unlike Subject Headings, tag words on the Web have no governing rules. It’s mostly a first in, first served approach and to be useful they rely on a match (often random) to be made between the content and terms used to search. With the amount of content on the Web, even a very poor search will produce results, but not necessarily quality results.
Library of Congress Subject Headings (LCSH) use controlled vocabulary, which is internationally standardised to ensure that resources worldwide can be ‘tagged’ with the same subject terms and therefore located with a search using the same terms. You import records with these Subject Headings as part of your cataloguing. Being international, authoritative material online will also be tagged with the same Subject Headings which means using the information found in the catalogue record can help your students find authoritative material online.
Don’t think this is just for older students. The next time a student is after a book on ‘Bugs’ and you want to encourage their information seeking and vocabulary, show them the Subject Heading ‘Insects’ and related terms and produce a list of subject-based results for all the resources on that subject.
In this example, a keyword search for ‘Bugs’ produced this record with the Subject Heading ‘Insects’, used for all resources on the same topic, and better still it also shows related terms like Spiders and Arachnida.
Armed with the correct Subject Heading search terms, information seekers will find all catalogued or indexed resources related to their subject or topic, be it in the Library Catalogue, databases like EPIC or the Web.
So when you have students interested in a certain topic, take the time to familiarise them with the Subject Heading terms related to that topic. Sure, they might still go ‘Googling’, but by doing this you are helping to create confident, connected users of information no matter where they search online.
National Library of New Zealand 2002, The School Library and Learning in the Information Landscape: Guidelines for New Zealand Schools. | <urn:uuid:a061a074-a3d1-4977-bbb0-4ecc1e0aca57> | CC-MAIN-2016-26 | http://schools.natlib.govt.nz/blogs/libraries-and-learning/12-11/organising-knowledge-importance-library-standards-global-society | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00022-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.904658 | 1,058 | 3.03125 | 3 |
Land ~ Sea Discovery Group Staff
Author's prelude: During the winter months of rain and snow this writer / treasure hunter spends hours digging through old books and ephemera looking for possible treasure hunting sites. Quite often an unknown part of history is brought to light and a new understanding of our country's growth is seen. Sometimes a two-fold purpose is reached, as is the case in the article to follow, titled "Gold Rush Circus." I not only found an interesting and relatively unknown segment of California's history but found a slew of new treasure sites to metal detect. Being that the sites are in gold country, this author will be able to enjoy the best of all worlds, treasure hunting and prospecting, while on vacation in the coming months.
The discovery of gold in the mother lode country in 1848 brought fortune seekers from every part of the world. Eastern newspapers exploited the news and spread it across land and sea to rich and poor alike, creating a mass migration of souls, eventually called the '49ers, to this now golden land.
A broadside (poster) promoting the circus.

One fortune seeker Californians know little about was Joseph A. Rowe. This enterprising man brought something to Californians they had never seen the likes of before: the circus. Rowe was the founder and star of Rowe and Company's Pioneer Circus. This circus, which was California's first, made its debut on October 29, 1849 in the city of San Francisco.
Joseph A. Rowe was born in North Carolina in 1819 to a well-to-do family but was left orphaned by the age of eight. His guardians cared little for his welfare and education, leaving him to get into more mischief than the normal antics of a boy his age. His favorite pastimes, to the annoyance of passersby, were standing on his hands against the side of the livery stable and riding the horses to water while he stood on their backs. They said he'd never amount to anything.
In 1829 a circus came to nearby Kingston, and young Rowe, fed up with the bad treatment by his guardians, sought to be apprenticed to the troupe. He was hired as a circus rider, and during the next four years he learned the trade. Before long Rowe was known as one of the most daring riders on the circuit. He traveled to New York, Cuba, and Venezuela. During this time he met and joined up with other performers of the finest qualities, eventually starting his own troupe. He traveled extensively throughout South America to the delight of all who saw his circus.
San Francisco as it
looked in 1849
Rowe was in Lima when the news of the gold rush reached him from the eastern newspapers. Although he was forty years old at the time, he still had the desire to strike it rich. So, in spite of the long and arduous journey ahead of him, he boarded a British mail steamer and headed north to Panama, meeting there a tide of souls on their way to California from the east coast and Europe. He was laid up in Panama from May to August amidst the misfortunes and disease of the populations of the world funneling through this tiny country on their way to California. Finally he was able to board the bark Tasso, and on the 12th of October 1849, after a long sea voyage, he sailed into the narrows now known as the Golden Gate.
On Monday, October 29, 1849, Rowe's Olympic Circus, as it was first called, played its acts to a full house of over 1,500 people. The amphitheater was on the east side of Kearney St. between California and Sacramento Streets. Rowe charged the shocking price of $3.00 per single seat and $5.00 for a private box. Typical circus admission prices of the time were fifty cents for adults and half price for children and servants. Despite the high fares, the performances were a resounding success. The Alta California newspaper reported that the circus would relieve the tedium of many a long winter night, and so it must have, for many a night the troupe played to a full house.
Typical equestrian feat performed
for the miners at Rowe & Co.'s Pioneer Circus in 1857.
During the coming years Rowe and his circus traveled to Honolulu and Australia, all to the delight of his audiences.
By 1857 things in the gold fields of California had changed quite drastically. In the early days of the rush, miners learned the basics of the trade and, using a pan, rocker, pick and shovel, could readily earn a living. By now, though, things were getting tougher and it was harder to scratch out an existence. Miners graduated to sluices hundreds of feet long, dynamite, drills, and, on its way, hydraulic mining. The boomtowns of these days were starting to fade, but although the times were tough, spirits were good. The bars, which served as the social centers in many communities, were busy, and other than gambling and an occasional dance, the town entertainment was quite limited.
In 1857 Rowe decided it was time to bring his great Pioneer Circus
to the mines. He had no idea of the hardships he would have to encounter
in the mountains. On April 1st they headed out exhibiting in Sacramento
for 4 days with much success. The next stop was Folsom on April
As they headed farther into the interior, not only did things get rougher, but they also became more of an attraction. Picture a mining town going through its everyday paces when the circus's advance man, usually a clown or jester, rides into town, jumps off his horse in a most unusual way, and begins pounding on his drum and blowing a bugle. The townspeople start to gather and the clown proceeds to tell them of the wonders of the Pioneer Circus. Often, in trade for free tickets, the clown would ask storekeepers to post bills in their windows announcing their arrival.
Soon the caravan would arrive, sometimes stretched out for a mile in length. Eight beautiful horses drew the main circus wagon, built and ornamented in Sacramento. The front board, which curved gracefully over the wheelhouses, was painted with a realistic representation of Sutter's Fort as it was in 1848. It even depicted Brannan's Store on one corner. The rear board, which made a similar curve, depicted the coat of arms of the state. The circus would parade through the town with all its pageantry, giving just enough hints of the delights to come. The elaborate chariots, prancing horses decked out in finery, and a juggler tossing four or five oranges into the air were all quite out of place in this wild western landscape.
Joseph A. Rowe
By this time the miners had set down their shovels and picks, leaving the canyons and streams, not wanting to miss the entertainment. It had been a long winter; some spots still had 4 feet of snow on the ground. The Pioneer Circus vowed that the townspeople and miners would not be disappointed.
The main acts of the day were the equestrian events. During this time in our country's growth nearly every man alive rode a horse, and many at one time or another had harnessed a team to a wagon and driven them to town. A skilled rider and a well-trained horse were judged with a critical eye. In fact, the sports heroes of the day were bareback riders. They were idolized much like the medallists of today's Olympic Games.
The riders and horses went through hours of rigorous training, some of which is not what you would expect. While the horses went through their paces, the grooms would carelessly kick cans about the ring, fire guns, and even tie five-gallon cans to the horses' tails! This was all done in training to teach a horse not to sway from its paces for anyone but its trainer. Timing was everything in the ring. An acrobatic rider doing a back somersault would not like it much if he came down from his leap only to find that the horse, spooked by a child with a firecracker, was not in its appointed spot.
At show time the audience was treated to all the spectacle and finery the troupe could provide. The human eye loves to dwell on pleasing things, and perhaps the most pleasing sight of all was Miss Mary Ann Whittaker, the first female equestrian artist in America. She was ranked among the best in danse (ballet) and pantomime. She would ride out into the sawdust-covered ring standing on her milk-white horse in pink tights and ruffles, with stars and spangles that glittered like the golden flakes in a miner's pan. Then, to the amazement of the crowd, as she neared a ribbon held in her path 12 feet high by two colorful clowns, she would leap up off the horse and over the ribbon and then land gracefully onto the horse's back, all while it was speeding around the circus ring. The applause was thunderous and it continued through the evening. Other riders rode in pyramids on two horses with three riders stacked neatly on top of one another, while still others did forward and backward flips through rings of fire. An India-rubber man displayed his ability to tie himself in knots and to cram himself into small places. The giant, named Guilliot, handled 32- and 48-pound cannon balls as easily as a boy would handle peanuts.
All eyes were drawn into the magic circle. Long winter hardships were forgotten; the circus had done its job and lived up to its promise. The next morning the troupe would pack up its gear and load the wagons to head on down the road to its next destination. This routine continued in the same way through all the towns they encountered.
Many of the stopping points for the shows are now a 'who's who' in only the best of ghost town books. On April 6th they played in Eldorado and pulled in $397.00. The next night was Diamond Springs, where they pulled in $375.00. Then the Pioneer Circus spent the next two nights in Placerville, doing $715.00 and $331.00 respectively. On April 10th they did their show in Coloma, the home of James Marshall, discoverer of gold in the north. On the 11th they performed at Kelsey, just north of Placerville. Kelsey was later the home of Marshall, where he worked as a blacksmith and miner. His blacksmith shop in Kelsey is Historical Landmark 319. Next stop was Greenfield Valley, which appears as Green Valley on the USGS Placerville 1931 Quadrangle map. After spending the 13th in Georgetown doing a whopping $697.00, the wagons rolled out to Baileys on the 14th, Rattlesnake on the 15th, Gold Hill on the 16th, Auburn on the 17th, and Todd's Valley on the 18th. Todd's Valley was named after Dr. F. Walton Todd, who was a cousin of Abraham Lincoln's wife Mary. The Todd Valley Mine along with the Peckham Hill Mine produced over 5 million dollars.
One of the better
nights of the tour was $852.00 at Michigan Bluffs. This small but
rich little town was destroyed by fire later the same year. Reports
have it that $100,000 in gold was shipped from there every month
until the mid 1860's. In 1864 a 226-ounce nugget was found near
Michigan Bluffs and was sold later for $4,000.
Keepsakes from a gold rush miner's family. This item will be featured in an upcoming auction.
After Michigan Bluff things started going downhill for the Pioneer Circus. The expenses were heavy, being so high up in the mountains. Hay and barley for the horses were costly due to the high cost of freight. Expenses began to run nearly $400.00 a day, and the draw from the performances started to drop under $300.
The rain didn't help any. Getting from place to place was hard enough. The roads were so wretchedly constructed that the slightest bit of rainfall created a river of mud to contend with. The heavy wagons would sink up to their hubs in the mud and the caravan would have to stop. Sometimes it took six teams of horses to pull a wagon out, only to have another get stuck soon afterward. Usually they were able to cover only two or three miles in an hour. Many of the stops were 10 to 15 miles apart, and a rider would go ahead and mark the forks in the road with a rail so the caravan would go the right way.
Perhaps the worst occurrence would be, after getting little or no sleep, fighting their way through a drenching rainstorm, and pulling out and repairing wagons, to hear the word "lost." This of course meant retracing steps and going through all the problems again. Many performers wished they were dead after hearing that word.
Still the show went on, and when it came time to perform for the miners, the gloom and doom of the trip and the unpaid salaries were forgotten and they performed their best. Rowe's Pioneer Circus played the mountain camps and towns until August, when due to expenses they were forced to return to San Francisco. Before leaving they had played Yankee Jim, Iowa Hill, Illinois Town, Dutch Flats, Red Dog, Grass Valley, Rough and Ready, Nevada, Orleans Flat, Oroville, Horsetown, Marysville, Monk Hill, Railroad Flat, West Point, and Chinese Camp.
Though the Pioneer Circus played only a brief part in the mining history of the State of California, it undoubtedly gave the folks of the mining camps a brighter outlook on life for a time and broke up the rigorous routines of the day, leaving them with memories that would last a lifetime.
Erwin G. Gudde, California Gold Camps, University of California Press, Berkeley.
Dressler, California's Pioneer Circus, H. S. Crocker Co., Inc.
Culhane, The American Circus, Henry Holt & Company, 1990.
American Ethnic Studies Minor
AES 210: Introduction to Ethnic Studies, CD4
This course examines the roles that slavery and race have played in shaping the course of American history. Starting from an overall assessment of slavery’s origins in western culture, the course considers the practice of slavery and its social, political, and economic influences in North America. Special emphasis is placed upon analyzing how institutional slavery and the concept of race shaped the lives of masters, slaves, and their respective descendants down to the present day. Applies to African American Focus.
SPAN 344. Modern Hispanic American Literature and Culture
A study of Hispanic American literature from the wars of independence until the present (XIX and XX centuries). Politics and important historical events are discussed through the analysis of literary texts and the most representative works of the corresponding period (other sources such as documentary videos, slides, and films are also considered). Students are exposed to a wide variety of literary genres, including narrative, drama, poetry, and essay. Conducted in Spanish. Prerequisite: Spanish 341 with a grade of C+ or better, or equivalent.
by Simon Armitage
The Farrand Chapelette is a type of harmonium or small organ. Simon Armitage and his father before him were choir boys at the church of Saint Bartholomew in Marsden, a village in West Yorkshire. On occasions when the congregation at a service was quite small, the organist would play the harmonium instead of the full-size organ.
The harmonium eventually fell out of use, and in the opening lines of his poem “Harmonium” Armitage states that it was “gathering dust / in the shadowy porch.” It would have been thrown in a skip had Armitage not wanted it. In the final line of the first stanza he comments that he could have it “for a song”, an idiom that means very cheaply. There is an obvious play on words here, as the harmonium is of course used to play song tunes.
The second stanza of “Harmonium” is twice as long as the first and describes the musical instrument in detail. The first half of this stanza focuses on the effect sunlight has had in the church. The windows show images of saints and of Jesus Christ rising from the dead; Armitage says that the sun can “beatify” the saints, in other words raise them above the level of ordinary people. He contrasts the fact that the sunlight shining through the stained glass windows has a positive effect whereas it has weathered or “aged” the wooden case of the instrument. Armitage uses the metaphor “fingernails” in describing the way the sun has discoloured the harmonium's keys; the area that the organist would have pressed with his fingers is now yellow. One of the harmonium's notes or keys has “lost its tongue;” the personification to convey the fact that the key is silent brings life to the image.
The last three lines of the second stanza focus on how worn the treadles of the harmonium are. These are like pedals that the organist has to continually push down with his feet as he plays the music. There are actually holes in both of them now. Armitage even describes how the organist used to wear “grey, woollen socks / and leather-soled shoes,” conjuring up a rather dull picture. He uses a half rhyme, with “treadles” at the end of line ten and “pedalled” at the end of line twelve; this is the only instance of rhyme in the stanza.
The third stanza is a shorter one, consisting of five lines. Armitage uses alliteration twice in the opening line, “But its hummed harmonics still struck a chord.” This is a vivid description emphasising the fact that although the harmonium is very old and worn, it means something to the poet. The idiom “to strike a chord” means that something triggers a memory, but of course this is another play on words, since chords can be played on a harmonium. Armitage tells us that the instrument was used for a hundred years and stood “by the choristers' stalls.” He mentions that “father and son” had both sung there; this could refer to himself and his father, although he does not specifically say so. In the closing line of the third stanza, Armitage reverses a simile to describe the singing of the choir boys. He says that “gilded finches” “streamed out” of their throats, using metaphors, and says that the finches were “like high notes,” which is in fact what they were. This imagery is rather complicated but nevertheless conveys the image beautifully.
The fourth and final stanza is the poem's longest one. It concerns Armitage's father, although the poet does not actually say so; the only actual use of the word “father” is in the third stanza. Armitage describes the way his father came to help him “cart” the harmonium away. The description is not a flattering one, and it echoes the description of the aged musical instrument. The poet's father came in a “blue cloud of tobacco smog, / with smoker's fingers and dottled thumbs.” We can't help but be reminded of the harmonium's yellowing keys and weathered wooden case. The two men carry the instrument “flat, laid on its back,” personifying it. This leads to Armitage's father making a remark that the poet says “he, being him, can't help but say.” The father tells his son that the next box he will carry down the nave of the church will be the father's coffin. The word “coffin” is not actually used, but the father says the box “will bear the freight of his own dead weight.” In other words, it will contain his dead body; the phrase “dead weight” is used literally here, but it can also mean a particularly heavy weight or even an oppressive burden.
The last three lines concentrate on Armitage's emotional response to his father's remark. He begins “And I, being me,” echoing the phrase “And he, being him” that came three lines earlier. Armitage says that his reply was “some shallow or sorry phrase or word” that he mouthed. The lack of precision conveys the idea that he couldn't think of the right or suitable answer to such a poignant remark. The poem closes with the line “too starved of breath to make itself heard.” Armitage was so out of breath from carrying the harmonium that he could not speak loudly enough, and perhaps he didn't want his answer to be heard as he felt that it was inadequate. The last two lines rhyme, and these are the only two consecutive lines in the poem that rhyme with each other.
“Harmonium” is a touching poem that initially appears to be about Armitage's attachment to this musical instrument that, although old and almost worn out, was a part of his childhood. The final stanza, however, introduces his father, and Armitage is clearly affected emotionally by his father's comment on the fact that the poet will soon be carrying his coffin into the church. Armitage's use of imagery, plays on words and sparing use of rhyme create a convincing piece of poetry. He shows that objects that are old and no longer used still have value and the memories they trigger are meaningful. More than that, he links the theme of the harmonium with his feelings towards his aging father, whose death draws ever nearer; confronting this idea, the poet is so emotional that he cannot express himself as he would wish.
Originally published on helium.com
Giving The Gift This Holiday Season Of Dental Health
The holiday season is a time of great joy, bringing families and loved ones together from all points around the globe. It is also a time of over-indulging in holiday sweets and treats. The consumption of these cookies and candies has the potential to create some real damage to your dental health. So what about stuffing your loved ones' stockings with gifts that can help them through this time? These can include electric toothbrushes, travel toothbrush kits, flossers, and sugar-free gum.
Eating foods that are high in sugar, without proper oral care, can result in tooth decay and cavities. What better way to counteract the extra sugars and starches we often eat during the holidays than to give your friends and family great gifts to care for their teeth?
If not removed by brushing or other means, sugars in the mouth can contribute to tooth decay. Naturally occurring bacteria in the mouth, form a colorless, sticky film called plaque. Cavity-causing organisms within plaque feed on sugar and turn it into acid, which attacks tooth enamel and leads to tooth decay.
Dental Health Stocking Stuffers
-Electric Toothbrush. This could be a great gift for loved ones. An electric toothbrush works far better than a manual brush at removing plaque. Most of us enjoy brushing more with a rotary brush, and brush longer! Electric brushes are great for anyone, but especially for youngsters and the elderly: limited dexterity, poor eyesight, and boredom make manual brushing difficult to do properly. Electric toothbrushes do most of the work for you, making it easier to maintain dental hygiene.
-Sugar-Free Gum. Chewing gum is a great way to help keep your teeth clean, especially since it increases saliva in the mouth, which helps wash away food debris, plaque, and the acids produced by bacteria. Try filling a jar with different flavors of gum and adding a pretty bow!
-Waterpik. Water flossing is faster and easier than traditional flossing, and almost as effective. A great way to keep your teeth and gums healthy into the new year.
-Travel kits. Many of us travel for the holidays or work. A nice colorful zipper bag with new oral hygiene items, and perhaps a little note inside reminding them you’ll miss them while they are away.
-Flossers. There are many different types of flossers available, most of us don’t have more than one. Your gift would be great to keep in the office or car.
-Whitening Strips, Gel, or Pen. Most of us dream of having a whiter smile but tend not to spend the extra money to whiten. They will think of you every time they flash their bright smile.
-Gift Certificate for Dental Services. This has been available for quite some time but is generally reserved for teeth whitening or other cosmetic dentistry services. Why not use it for preventive treatment as well? Have a special someone who has not been to the dentist in a while? This could be the perfect gift to get them back on track with their dental health.
Happy Holidays to Everyone!
While dental gifts may not be as popular as video games or iPods, they do show you care! It is important to get on track toward good dental health, and overall health. Your loved ones know how special you are and always want the best for you. What shows that better than thoughtful gifts that keep them healthy? Enjoy the holidays, and spread joy with your smile!
Our three guiding questions
The IMYC has been developed around, what we believe to be, three crucial guiding questions:
What kind of world will our students live and work in?
Teaching and learning is exciting (and difficult) because it looks both forward and back. We look back because, in part, learning is about taking on the heritage of our culture and learning about what has made us who we are. We look forward because we know the world is going to be different than it was and we accept the challenge of making the best judgments we can about what that world will look like.
What kinds of children are likely to succeed in the world?
We are tasked with making the best predictions possible about the state of the world in the future. We have to do this because it guides our thinking about what kinds of people children will need to be. Their personal dispositions will be the key to whether children can make the best of their learning in the years to come.
What kinds of learning will our students need and how should they learn it?
A view about the future world and the personal qualities that will matter helps us decide what kinds of learning young people will need. Knowing what kinds of learning they need guides us to what learning should look like in the classroom. | <urn:uuid:53ff0d0d-6aec-427f-86d6-182561008e1b> | CC-MAIN-2016-26 | http://www.greatlearning.com/imyc/the-imyc/guiding-questions | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00078-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.9661 | 260 | 3.703125 | 4 |
Ebola outbreak in West Africa over
By Jay Jacobs, Jan 15, 2016
The World Health Organization (WHO) today declared Liberia free of Ebola, marking the end of the outbreak in West Africa.
The declaration comes two years after the first case was reported in Guinea, and follows cases of re-emergence in Liberia and Sierra Leone.
All three countries remain at risk of additional outbreaks due to the virus's ability to remain infectious in survivors even after they have recovered from its symptoms.
In the statement, Dr. Margaret Chan, WHO Director-General, said that detecting and breaking every chain of transmission had been a monumental achievement, adding that so much had been needed, and so much accomplished, by national authorities, heroic health workers, civil society, local and international organizations, and generous partners.
UNICEF highlighted the importance of providing children living in these three countries, many of whom have been taken in by immediate or extended family, with cash grants, school support, clothing and food.
"Secondly, the risk is going to reduce over time as the immune systems of survivors clear the virus from the body of the survivors... And, thirdly, that the countries of West Africa have put in mechanisms to manage that risk", he said.
"It is a collective effort of the global community and the people of Liberia for being once more Ebola-free", Tolbert Nyenswah, Liberia's deputy minister of public health, said in a telephone interview.
Staff members of Doctors Without Borders conduct a decontamination process on Dr. Tom Frieden, CDC Director, who is dressed in his personal protective equipment and is exiting an Ebola treatment unit in Monrovia, Liberia, in 2014.
The country joins Sierra Leone and Guinea, the epicenters of the latest outbreak, as Ebola-free. The WHO says it will continue to work with governments on preventing transmission and responding to outbreaks. Liberia is an example of that challenge, having gotten cases to zero twice before, only to have the disease spark again; both instances were likely caused by lingering virus in survivors.
The Ebola outbreak highlighted just how little we actually know about the disease and how hard it can be to fully eradicate.
In 2014, the AP obtained an internal World Health Organization document that said "nearly everyone involved in the outbreak response failed to see some fairly plain writing on the wall..."
The U.N. secretary general, Ban Ki-moon, said Wednesday that the Ebola outbreak in West Africa had been "a fundamental test" of the world's ability to come together to stanch the pandemic. All three countries are now in a 90-day period of heightened surveillance.
"We've shown in Liberia that the health care system has the human resource capacity, infection prevention and control capacity, health care workers are vigilant, and response workers are vigilant".
Writing a New Chapter of Jewish History in Hungary with JDC Ambassadors
We are leaving Budapest, after three enlightening days on a JDC mission led by Rebecca Neuwirth. The Jewish community here, long one of the largest in Europe, is struggling to find its identity in the face of a rising tide of anti-Semitism following years of Communist rule. The result is difficult to characterize and, sometimes, hard to watch. This community needs JDC's vigilance and support.
A brief history: Hungary has 10 million people, 2 million living in Budapest, of whom 100,000 are Jewish (one guide quipped there are 250,000 if you use the Nazi race law definition). Jews once flourished in Budapest; in the late 1800's, an estimated 23 percent of the city's population was Jewish. The Dohany St. synagogue is the largest in Europe, holding 3,000 people. Yet as in other parts of Eastern Europe, anti-Semitism began to flourish long before the outbreak of WW II.
Hungary's Admiral Horthy initially allied the country with Germany by declaring war on the Soviet Union (and, oddly enough, upon the U.S.) in 1941. Ironically, this alliance initially spared most Hungarian Jews from immediate extermination. Many men were sent to brutal labor camps and to the front as human mine sweepers, as portrayed in Julie Orringer's excellent novel The Invisible Bridge, but the civilian population was largely spared during the next few years. In 1944 Horthy saw the shift in direction of the war and sought secret peace negotiations with the Allies. When Germany discovered this, it invaded, and the German occupation was followed by an extermination of unparalleled speed. In a campaign personally directed by Adolf Eichmann, nearly 450,000 Hungarian Jews were sent to death camps between May 15 and July 9, 1944. The total number of Hungarian Jews who died in the Shoah is estimated in excess of 550,000.
After the war, living behind the Iron Curtain where religion was suppressed, Hungarian Jews simply forgot their identity. Many of the people to whom we spoke, almost all from mixed marriages, recalled learning they were Jewish by chance: via a phone call from a distant relative in Israel seeking lost relatives back home, or by being disciplined by a parent for calling a playmate a dirty Jew only to be told that he himself was Jewish. Independence in 1989 for the first time opened the door to renewal of the Jewish community, but now, less than 30 years later, overt anti-Semitism has emerged again and the Hungarian government has at best equivocated in response. A right-wing political party, Jobbik, garnered 17 percent of the popular vote, and one of its members in Parliament called for compiling a list of all “dangerous Jews who are posing a threat to Hungarian national security.” At the same time, the Hungarian economy is dismal. The official unemployment rate of 12 percent was said to be half of the real rate, and in many small towns we were told there was no work at all.
Despite all of this, we saw signs of renewal. We visited the city's first thrift shop, recently opened by JDC, and saw the donations pouring in. This may sound uneventful in the States where the concept of donating and reselling goods is well established. But this is a huge step toward philanthropy in a post-Communist society where there has been no culture of giving at all, and no tax incentive for charity.
On an inspiring home visit, we met a woman living with five of her six children and her two grandchildren in a three-room apartment. Previously living in an apartment half the size and struggling to keep her family afloat, she faced physical abuse from her husband and became desperate upon finding out he was abusing their 17 year old daughter. Only recently aware that her own father was Jewish, she was referred to JDC by a neighbor, a Holocaust survivor. Discovering she was not alone and beginning to find a connection to the larger Jewish community, she told us that the family's greatest joy was a family session at Szarvas, the camp JDC operates two hours outside the city. Two of her children are returning this summer. Proud and hopeful despite the palpable challenges ahead, she now volunteers at JDC two afternoons a week, and was plainly touched when we thanked her for giving back to JDC.
And at Szarvas itself -- a truly magical camp in the countryside -- we saw young people from all over the world connecting with one another and with the Jewish community. The camp brings together children from the Ukraine and the Upper East Side, from Serbia and Israel. At a deafening lunch, we saw and joined 400 campers singing Jewish songs and dancing with unmitigated joy. We spoke to young members of the staff about what they saw of their future in Hungary. Several said their parents urged them to move away, but all of them said they were very much at home and did not wish to leave. And we had dinner with the director of Budapest's Jewish theater, who learned he was Jewish at 14, made Aliyah, served as an officer in the IDF yet returned home after a decade in Israel. There was strong sense from many of the pull of Hungary and a desire to write a new chapter in Jewish history there, despite the challenges.
But at Cafe Europa, a JDC social program for Holocaust survivors, the old history resurfaced. A large and seemingly happy group of survivors ranging in age from 68 to 91 became troubled when asked about the new wave of anti-Semitism in Hungary. Several said they were afraid and, despite our assurances of support, asked what the United States or JDC could do if things got worse. We assured them that we would never allow a repetition of history and that both the U.S. and Israel offered a place for them if worst came to worst, but none seemed convinced by our assurances or satisfied by the prospect of leaving Budapest.
Despite wonderful camaraderie with our traveling companions, we left with mixed feelings. The powerful energy and happiness of Szarvas, its inspired and inspiring campers and staff, were balanced by the uneasiness of the Holocaust survivors and anti-Semitic comments we heard ourselves. JDC has a large role to play in this community's mixture of renewal, hope and fear. We can help those most in need and offer our support to those who want a renewed Jewish identity, in a country where challenges abound.
JDC Board Member and Ambassadors Co-Chair Zachary Fasman wrote this post after traveling to Hungary last month.
[What's the point of designing games? Veteran educator and designer Ernest Adams examines the concepts of fun, enjoyment, and personal fulfillment to reveal the key, uplifting tenets of game creation.]
The Japanese language uses suffixes to modify the word that precedes them. Two of these suffixes are -do and -jutsu. The -do ending means "the way of..." whatever it is modifying, while the -jutsu ending means "the skills (or methods, or techniques) of..."
Consider the words jujutsu and judo. They refer to two different approaches to a particular martial art, a form of unarmed hand-to-hand combat that concentrates on grappling, pinning, and throwing. Jujutsu is the older and more brutal form, intended for lethal combat. Judo is a sport that derived from jujutsu.
When appended to the name of a martial art such as judo, the -do ending refers to a philosophy behind the art -- a set of values that are intended to guide the combatant in the proper use of the jutsu, or techniques, of battle. The Japanese word do is cognate with the Chinese word tao, which also means "way" or "path", and also connotes a mental or moral discipline rather than a purely practical collection of rules.
It seems to me as if this distinction could apply equally well to game design. Game design has many jutsu, and these are well-known. Challenge, growth, choices, balance, pacing, novelty, surprise, risk/reward, social interaction, storytelling, creative play -- all these are techniques we use for creating entertainment. But what values guide our use of these jutsu? What is the tao of game design?
The first answer that the novice will give, undoubtedly, is "give the player fun." I've already explained why fun is too limited a concept on which to base a medium as powerful as ours, so I won't bother to do so again. Broaden the idea of fun a little and you get enjoyment.
But I find even enjoyment is too restrictive. Broaden it further, and we come to entertainment. For the most part, we want to give the player entertainment -- whether it's pure fun or some more complex kind of experience.
Video game design is not analogous to martial arts, however. In martial arts the goal is always victory, and victory is precisely defined -- the death or submission of the enemy, or his defeat according to a system of rules adjudicated by referees. Different jutsu will lead to victory over different kinds of enemies, but victory is defined the same way no matter who the enemy is.
That's not true for entertainment, because different people like to be entertained in different ways. They enjoy doing different things, they like to face different challenges, and they like to experience different emotions.
The tao of game design cannot be "give the player entertainment," because there are no rules about how to entertain everyone. Let's look at it another way.
Most art forms (painting, dance, music, film, theater, literature and so on) are purely expressive. The artist expresses; the audience observes. The audience may also contemplate, criticize, interpret, applaud, or reject the art, but the one thing they cannot do is change it.
Video games aren't like that. They're interactive. The player and the designer collaborate to create the experience that the player will have. The designer has most of the power, of course: she constructs the world, establishes the actions available to the player, and defines the goals towards which he will strive.
And yet in spite of the designer's pre-eminent role, the game is nothing without the player. A video game that nobody plays is an empty thing, a mere collection of machine code. To be meaningful, the game must be played.
In fact, the game doesn't really exist until it is played. By convention we refer to the software as "the game," but in truth, the software is not the game. The game is the act of playing. It comes into existence when the player starts up the software and crosses into the magic circle. | <urn:uuid:34e96e92-ca7e-4fa5-bdf3-b72e0a33327a> | CC-MAIN-2016-26 | http://www.gamasutra.com/view/feature/3765/the_designers_notebook_the_tao_.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00100-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955522 | 859 | 2.875 | 3 |
The Compromise of 1850 included five bills passed to retain the balance of power between slave and free states following American acquisition of new territory in the Mexican-American War. The compromise abolished the slave trade—but not slavery—in the District of Columbia, formalized Texas's borders and paid its debts to Mexico, and allowed the territories that later became Utah, Arizona, New Mexico, and Nevada to decide the slavery question for themselves. More controversially, California became a free state in exchange for passage of the Fugitive Slave Act, which strengthened slave owners' ability to seize escaped slaves—and free blacks—living in the North. The act led to emigration to Canada by thousands of African Americans and spurred resistance by abolitionists and the Underground Railroad.
Learn to Say President Willkie
Wendell Willkie was born on this day in 1892 in Elwood, Indiana.
Wendell Willkie blazed in and out of American politics with the short-lived intensity of a spark of static electricity. An unlikely presidential candidate, he was the model of the dilettante crusader, the role Ross Perot seemed to fill in 1990s American politics. His father and mother were both lawyers, and he was raised to become a lawyer himself and a good Democrat. After serving in the artillery at the Meuse-Argonne front in World War I, he joined Firestone Tire and Rubber Co. in Akron, Ohio as an in-house lawyer. In 1929, he moved to New York City to work for and later head Commonwealth & Southern, a large electric utility company.
Although he had campaigned for Franklin Roosevelt in 1920 when Roosevelt was on the Cox for President ticket and had contributed $150 to Roosevelt's 1932 campaign, Willkie became a strident opponent of Roosevelt's New Deal excesses -- particularly Roosevelt's establishment of the Tennessee Valley Authority (TVA) by which cheap electricity was introduced to rural Tennessee through federal projects which competed with Willkie's own utility company. Although Willkie generally supported the New Deal conceptually, his ire over the TVA led him to denounce Roosevelt on a national speaking tour.
His natural charisma appealed to the anti-Roosevelt minority, and "Willkie Clubs" began to spring up around the country, leading a group of Eastern Republicans to convince Willkie, who had never held public office, to switch parties and seek the Republican presidential nomination in 1940. With conservative Robert Taft and a too-young Thomas Dewey as his only credible opponents, the Republicans unanimously rallied around Willkie at the convention on the 6th ballot.
Ideologically similar to Roosevelt, Willkie supported intervention in Europe and much of Roosevelt's economic policies. Nevertheless, he barnstormed 30,000 miles around the country giving more than 500 speeches criticizing Roosevelt's aspirations for a third term (employing the words of George Washington as moral precedent for presidents not serving more than two terms). He was enough of a thorn in Roosevelt's side that Roosevelt considered leaking a story that Willkie carried on an adulterous affair. Roosevelt decided against the strategy, and the voters decided that Willkie did not offer enough of a reason to change horses in midstream: Roosevelt defeated Willkie, 55% to 45%.
After the election, Roosevelt dispatched Willkie to Europe to visit allied governments on behalf of the U.S. He published a best-selling book, One World, in support of international cooperation, and pursued the Republican nomination again in 1944, but withdrew from the race after a poor showing in the Wisconsin primary. He died shortly thereafter, on October 8, 1944, in New York. | <urn:uuid:033b6a37-7149-420d-a80c-82d12bd8aa3a> | CC-MAIN-2016-26 | http://rsparlourtricks.blogspot.com/2007/02/learn-to-say-president-willkie.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00046-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.972821 | 578 | 3.4375 | 3 |
Remember the Ixtoc I well blowout of 1979, which released about 3.3 million barrels of oil into the Gulf of Mexico over more than ten months? Not many North Americans do -- because they were less environmentally conscious, because it occurred in Mexican rather than U.S. waters, because Iran's Islamic revolution and the Soviet invasion of Afghanistan filled the airwaves and the headlines, or because many of today's adults were too young to notice, or even unborn.
And that's one of the big problems behind the BP oil spill. In 1977 the University College London civil engineers Paul Sibly and Alistair Walker published a paper suggesting that major bridge collapses occurred at approximately 30-year intervals: new designs succeeded old as a result of a failure's lessons, then new generations of designers became increasingly confident in the safety record of their innovations until they finally pushed them over a tipping point, beginning a new cycle. The civil engineering professor and historian of technology Henry Petroski has developed this idea, which last came to the fore in the Minneapolis bridge collapse of 2007, as discussed here and here. My graduate teacher William H. McNeill coined a mordant phrase for such recurrence of disasters partially as a result of confidence in reforms, the Law of the Conservation of Catastrophe.
Do cycles of disaster apply to oil rigs as well as to bridges? Sibly and Walker thought so. In the February 12, 1976 issue of New Scientist they had the North Sea in mind when they wrote "When Will an Oil Platform Fail?" but their conclusion was prophetic for the Gulf as well:
Our studies have shown that it is a mistake to rely on the success of previous structures as an assurance of safety and that whenever vigilance is relaxed the price must be paid. With the present scale of structures the price will undoubtedly be much higher than the cost of any testing or research that could be done now. | <urn:uuid:45f5f9af-9e71-45ff-8893-a143aa75b6ac> | CC-MAIN-2016-26 | http://www.theatlantic.com/technology/archive/2010/06/technologys-disaster-clock/58367/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00027-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.976475 | 386 | 2.796875 | 3 |
Warsaw (Apr. 8)
The towns of Brest, famous for the Russian-German treaty of Brest-Litovsk, and Terespol, as well as surrounding districts, have been completely submerged by the overflowing of the rivers Bug and Muchowietz. It is practically impossible to send aid into the stricken regions, and the population is trying to find shelter on the roofs of the higher houses. Many lives have been lost by drowning. It is as yet impossible to estimate the damage, which is believed to be tremendous. | <urn:uuid:8c3b835a-419c-4b33-9a8a-2074116ae931> | CC-MAIN-2016-26 | http://www.jta.org/1924/04/08/archive/two-jewish-towns-in-poland-suffer-from-flood | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00062-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.971799 | 111 | 2.546875 | 3 |
The criminal justice system is second only to the educational system in the number of fads and “experiments” that temporarily course through the professional ranks. Some experiments—drug courts and victim statements are recent examples—show staying power and eventually become a regular facet of criminal courts.
Restorative justice, a relatively recent development in the criminal justice system, is making a bid to follow drug courts as an experiment that eventually becomes a regular part of the mainstream experience of justice. Although programs that comprise portions of the restorative justice ideal have existed for many years, the concept of a holistic umbrella under which restorative programs can thrive is relatively new.
In short, restorative justice is a process by which the harm done to victims of crime can be reduced or repaired. This is generally a cooperative effort between victim and defendant in which defendants are made aware of the personal impact of their criminal actions. In addition, the victim of the crime has an opportunity to work with the defendant in an attempt to dissuade the defendant from committing similar crimes in the future.
In practice, restorative justice typically involves two major phases: opportunities for victims of crime to communicate with defendants, and opportunities for the community to impact the sentence and encourage the defendant to make different choices in the future. According to Restorative Justice Online, a restorative justice evangelism and resource guide, these phases often include victim/defendant mediation, victim impact circles and conferences, victim assistance by or with the defendant, monetary and nonmonetary restitution, and comprehensive community service opportunities. Many of these programs exist individually throughout the country but lack a central theme to drive and manage them cohesively within a given jurisdiction. Restorative justice attempts to remedy this.
Why should we care? Without community involvement in criminal proceedings, the perception by the community may be that their input isn’t needed or warranted. Where restorative justice has been implemented, victims and defendants alike often praise the move away from retributive justice in which victims and the community have little, if any, impact on judicial proceedings. On the other hand, there are some types of cases—such as sexual assault or domestic violence—in which the victim’s cooperation with the defendant may not be appropriate under any circumstances. In these cases, restorative justice may involve more of the community than the individual stakeholders.
In the United States, many restorative justice programs have been patterned after victim and defendant experiences in tribal courts in the 1990s. Tribal court criminal proceedings often permit any and all stakeholders to make a statement to the court and to the defendant. This includes victims and their families, relatives of the defendant, affected community members, and sometimes individuals in tribal government. Fact finders may take any and all of these statements into account when considering a verdict or sentence.
Like many recent experiments in the criminal justice system, the success of restorative justice is mixed. In 2005, attorney Karen Gottlieb’s survey of four restorative justice programs reflected muddied results:
Success was documented as a “slowing down” of alcohol and drug use in adult participants; however, graduates were as likely to reoffend as nongraduates, and participants as a whole had a relatively high 3-year recidivism rate that ranged from 50–64 percent in the adult courts and over 90 percent in the juvenile courts. For the adult program, graduates took longer to reoffend than nongraduates, and participants had fewer postprogram charges compared to their preprogram criminal histories. Juvenile graduates as a whole, on the other hand, showed no differences in recidivism patterns between graduates and those who did not complete the court program.
Explanations for the difference in effectiveness between juvenile and adult defendants were not addressed by Gottlieb in her survey, but it may be that the more invested a defendant is in the community, the more effective restorative justice can be. As juveniles often have less of a connection to community than adults, restorative justice may need to be adjusted to include more family members or different restitution models for juvenile offenders.
As lawyers, the concept of restorative justice may make us a bit cautious because we lose some control of what statements and evidence are presented to the court, regardless of which side of a criminal proceeding we are on; to present our best case often means hours of dedicated witness preparation and selecting the most compelling arguments for the jury. To have our advocacy subject to one or two incredibly impactful victim statements may be more than many lawyers are comfortable with. Conversely, attorneys may find that they prefer restorative justice in sentencing and postsentencing services because the penalties may be reduced with more community involvement and (potentially) less recidivism.
Whether restorative justice is here to stay or disappears as a failed fad remains to be seen. But when it comes to reducing recidivism and harm to victims, restorative justice may be one experiment that is worth taking a look at.
For more information, see the following:
- Restorative Justice Online: www.restorativejustice.org.
- Karen Gottlieb, Process and Outcome Evaluations in Four Tribal Wellness Courts. Washington, DC: US Department of Justice (2005), available at www.ncjrs.gov/pdffiles1/nij/grants/231167.pdf.
- Judge Tracy McCooey, Maverick in Problem-Solving Courts and Restorative Justice (video series), www.cuttingedgelaw.com/video/judge-tracy-mccooey-maverick-problem-solving-courts-and-restorative-justice (retrieved July 13, 2011).
- Leena Kurki, Restorative and Community Justice in the United States, 27 Crime and Just. 235–303 (2000).
- Michael Dooley, Classification and Restorative Justice: Is there a Relationship?, Topics in Community Corrections, Annual Issue, 1999, http://static.nicic.gov/Library/period165.pdf.
- André Léger, Restoration or Retribution: An Empirical Examination of the Recidivistic Patterns of a Group of Young Offenders from New York City (1999) (unpublished M.A. thesis, Department of Sociology, Queen’s University, Kingston, Ontario, Canada), http://qspace.library.queensu.ca/bitstream/1974/5364/1/Leger_Andre_200912_MA.pdf.
- Marc Forget, Restorative Justice in Prisons: An Evolution from Victim Offender Mediation in 1998, to a Restorative Prison Wing in 2001, to a Holistic, Multi-Sector Project in 2004, presentation at ancillary meeting #40, Eleventh United Nations Congress on Crime Prevention and Criminal Justice, Apr. 24, Bangkok, Thailand (2005), www.pfi.org/cjr/about-cjr/un-initiatives/11thcongress/rjprisonsmeeting/forgetpres/view. | <urn:uuid:d5caf551-6776-450f-b306-7cc15e18b5a0> | CC-MAIN-2016-26 | http://www.americanbar.org/publications/gpsolo_ereport/2011/october_2011/making_case_restorative_justice.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00020-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.936411 | 1,451 | 2.9375 | 3 |
You might be saying Alka-what? What is Alkaline eating? I think Wikipedia has the best, most basic description…
The Alkaline diet (also known as the alkaline ash diet, alkaline acid diet and the acid alkaline diet) is a diet based on the theory that certain foods, when consumed, leave an alkaline residue, or ash. Minerals containing elements like calcium, iron, magnesium, zinc, copper, are said to be the principal components of the ash. A food is thus classified as alkaline, acid or neutral according to the pH of the solution created with its ash in water.
We've been Alkaline for over 3 years now. I was introduced to the concept by Tony Robbins, and after looking into it further it really did just make sense. After living a "traditional" eating regime prior to that, I have to say that within the first 2 weeks you will notice not only an increase in energy, but the health aspects are amazing (I haven't taken a single tablet in 3 years!). The interesting part is that things like colds and flu rarely visit, and when they do it's usually for no longer than about an hour or 2 (I'm not kidding).
One of the most fundamental ideas behind it is that your body has to either "eliminate or assimilate" food that goes in, aka if it can't break something down it needs to store it so it can be dealt with later. With this in mind, in the first 3 months alone I was losing a kilo a day as all the assimilated waste just poured out. We are taught from a young age that the primary function of food is to provide energy, yet with the aforementioned in mind your body can actually use more energy trying to handle food than it will be able to generate from it. A good example of this is when you have a really big traditional dinner with meat, vegetables, then a dessert, and afterwards you start to feel really tired; the tired part is your body trying to process everything plus keep all your other systems going.
So what can you eat? Well, Meat and Dairy are out, as is anything cooked (unless slightly warmed) or anything with preservatives/additives: basically, any food that doesn't have a place in Nature (yes, that still means meat and dairy). Some natural foods will still cause stomach acid to turn super acidic (e.g. tomato), but in small quantities this is easily handled. Generally I can tell if something is too acidic not long after eating it, because you can feel everything slowing down.
So if I haven't lost you yet, here are 5 of our favourite recipes…
1. Riceless Sushi
- 1 and 3/4 cups peeled fresh parsnips
- 3 tbs. macadamia nuts
- 3 tbs. pine nuts
- 2 tbs. lemon juice
- 1-2 pinches Celtic sea salt (to taste)
- 1 tbs. raw soy sauce
Process in a food processor until ricey.
Generally speaking, just about any vegetable can be used. For example,
- Red Pepper
and of course some nori sheets and a sushi mat, if you haven't already got one (most supermarkets sell these)
Cut into thin matchstick style strips and marinate in the following for at least a few hours before the rolling part,
- 3 tbs. sesame oil
- 1 tbs. black sesame seeds
- 1 tbs. raw soy sauce or 2 pinches of salt
- 2 tbs. lemon juice
Again, you can adjust this to taste, or substitute if you can't get additive-free alternatives.
MAKING THE SUSHI
If you haven't done this before then it will take a few rolls to get the hang of it, and it can be quite tricky. Even after making this quite often, I still won't roll it the best at times.
-Take a sheet of nori and place it onto a clean smooth surface or a sushi mat (you can get these quite cheap at most supermarkets or Asian markets)
-Spread 2-3 tbs. of the rice mixture onto 1/4 of the nori sheet.
-Make a little indent and put 1-2 tbs. of the marinated veggies on top.
-Get a pastry brush with a bit of water with a pinch of sea salt, and brush the top part of the sushi.
-Roll it up! You can roll the sushi with a sushi mat or use your fingers. I like to use my fingers. Use your thumbs and fingers to roll it up; when rolling, tighten the roll each time.
- Let the roll sit for 5 minutes before cutting.
-Using a serrated knife, cut the nori roll into 5-6 equal parts. Use a see-saw motion to make a perfectly smooth cut.
2. Red Capsicum and Almond Dip
- 280g red Capsicum
- 60g almonds
- 2-3 garlic cloves
- 1 tbsp. cold pressed extra virgin olive oil
- 1 pinch of sea salt
- 1 pinch of cayenne pepper
Put all ingredients in a blender and mix until the dip becomes smooth and creamy (the magic of Almonds). Season with salt and pepper.
3. Beetroot and Orange Salad
- 2 oranges
- 1/2 tsp honey
- cracked pepper
- 1/4 lemon (or less depending on taste)
- 1 tsp cold pressed oil
Cut the tops and bottoms off the 2 oranges, just enough to be able to squeeze some juice from the cut-off pieces but not too much off the oranges. Put the remaining oranges aside for below and squeeze the juice from the cut-off pieces. Mix all dressing ingredients together to taste; make sure the honey doesn't overpower the other tastes in the dressing.
- One medium cooked beetroot. I usually just boil it (leave the skin on) in filtered water till soft, then scrape/peel the skin off once cooled.
- 2 oranges (from above)
- 2 or 3 lettuce leaves
Thinly slice the beetroot; peel the oranges, cut in half and de-seed, then thinly slice. Put lettuce on a plate, layer orange pieces on top, followed by beetroot. Pour over the dressing and serve.
4. Guacamole
- 2 medium avocados
- 1 small onion, chopped
- 1 small tomato, diced
- 1/2 lemon, juiced
Dice all ingredients and add to a blender. Blend till chunky and mixed. It's important not to blend this too long or it will turn into a puree rather than a chunky mix.
Best eaten when freshly made, but it keeps for 2 days in the refrigerator. Goes great on or with lettuce leaves.
TIP: When storing, leave the avocado pits in the Guacamole to help maintain freshness.
- ½ cup green or red cabbage
- 2 carrots
- 1 tomato
- 1 small red onion
- 3 tbsp. chopped parsley
- 3-4 tbsp. cold pressed extra virgin olive oil
- 1 fresh lemon, juice
- Dash of sea salt and cayenne pepper to taste
Shred cabbage and carrots, and finely chop the tomato, the onion and the parsley. Put in a big bowl.
For the dressing, add the olive oil and the fresh lemon juice and pour over the salad. Add salt and pepper to taste. I also add some tahini and/or honey for a different taste.
Being 80% water, I normally aim to drink around 5 liters of (filtered) water a day. For anyone on Twitter, you can keep track of how much water you are drinking (thanks to a little module I wrote for my CopyTwit project) by sending a tweet to @autocopytwit with #water and an amount (e.g. "@autocopytwit #water 600ml")
Understanding the weather conditions that have been linked to ice-crystal icing can help pilots avoid situations that may put airplane engines at risk for power loss and damage.
By Matthew L. Grzych, Meteorologist, Atmospheric Physics and Flight Test Engineering
In a majority of ice-crystal icing engine events, convective weather occurs in a very warm, moist, tropical-like environment.
High-altitude ice crystals in convective weather can cause engine damage and power loss in multiple models of commercial airplanes and engines. (More information about engine power loss in ice crystal conditions can be found in AERO fourth-quarter 2007.)
Pilots typically use the term "icing conditions" to refer to weather conditions, usually below 22,000 feet, where supercooled liquid droplets form ice on cold airframe surfaces. In contrast, ice-crystal icing conditions connected to engine power loss are thought to be due to completely frozen ice crystals. When flying near convective weather through ice crystal conditions, pilots have reported a lack of airframe icing or ice detection (no supercooled liquid present), but they do notice the appearance of rain on the windscreen, sometimes at temperatures too cold for liquid water to exist. It has been confirmed that the appearance of rain is caused by small ice particles melting on impact with the heated windscreen. Pilots also have noted that the sound made by flight through ice crystals is different from the sound they hear when flying through rain. Although it's not present on all airplanes, a total air temperature (TAT) anomaly also has occurred simultaneously during some engine events.
The TAT anomaly is due to ice crystals building up in the area in which the sensing element resides, where they are partly melted by the heater, causing a 0 degrees C reading. This phenomenon seems to depend on where the TAT sensor is installed on the fuselage. In some cases, TAT has stabilized at 0 degrees C during a descent and may be noticeable to pilots. In other cases, the error is more subtle and not a reliable-enough indicator to provide early warning to pilots of high concentrations of ice crystals.
This article provides detailed information about the convective weather associated with engine-power-loss events and recommendations on how to increase pilots’ awareness of this weather and help them avoid conditions that can result in power loss.
Overview of engine events associated with convective ice crystals
Engine-power-loss and -damage events are being reported within anvil cloud regions of convective storms at high altitudes. The engines in all events have recovered to normal thrust response quickly.
It has been accepted that ice crystals are the primary source of the engine icing because of the lack of airframe icing reports, lack of radar reflectivity, and the fact that many of these events are occurring at extremely cold temperatures where only frozen particles can exist.
There appear to be certain environments and particular regions within each storm system that most often lead to engine events. The most common observations during these events include:
- The airplane is traversing a convective anvil cloud.
- Pilots are avoiding heavy radar return regions at flight level by 20 miles or more.
- Only light to moderate turbulence is reported leading up to and during the engine events.
- No hail is reported.
- There is no lightning.
- Either a lack of airplane weather radar returns or light radar returns present at flight level.
- Moderate to heavy precipitation (amber or red radar returns) is located below the airplane and the freezing level.
Weather associated with ice crystal engine events
Because it is believed that the clouds where engine events occur are composed of high concentrations of small ice crystals, scientists and meteorologists refer to these as regions of high ice water content (HIWC). Engine events associated with HIWC have occurred in two distinct types of cloud: classic convection and nonclassic HIWC-producing convection (referred to as nonclassic convection from here forward). Roughly 20 percent of engine events occur in classic convection, while the remaining 80 percent occur in nonclassic convection.
Classic convection: Classic convection has vigorous updrafts, is typically found over land, and will have moderate to heavy radar signatures present up to high altitudes, making the core areas and danger zones detectible so the flight crew can avoid them (see fig. 1). Because this region of convective weather can be detected by the airplane’s radar system, pilots can avoid the cell by diverting to the upwind side. In these more typical convective clouds, engine events have been recorded in the anvil cloud downwind from the cell’s core. In the anvil, even though there can be HIWC, ice particles return only enough radar energy to occasionally record green signatures on the pilot’s radar. At other times, there may be no radar returns at all.
This image depicts a vertical cross-section view as an airplane is headed for a classic convective cell. Colors represent standard airplane radar returns where green is light, amber is moderate, and red is heavy. In this scenario, the radar beam pointed straight ahead detects heavy precipitation and the airplane diverts and avoids the weather. There is, however, an area of high ice water content (HIWC) possible in the anvil cloud downwind from the convective core that pilots need to be aware of and avoid.
Pilots should avoid the region of anvil cloud downwind from heavy cores near these typical convective cells, especially if light radar returns are present at high altitudes. However, in the majority of ice crystal engine events, pilots unknowingly pass directly over heavy convective precipitation through the anvil cloud into regions of high ice content within nonclassic convective cells, as discussed in the next section.
Nonclassic convection: The type of weather that is most associated with ice crystal icing and subsequent engine events is not what is generally considered typical convection, which has vigorous cores that can be detected at flight level. Instead, the convective weather that is of greatest concern is associated with nonclassic convective clouds that have weak updrafts, regions of decaying convection, and regions of HIWC aloft, but lacks reflectivity at flight level, making it more difficult for pilots to identify (see fig. 2).
This image depicts a cross-section view as an airplane is headed for a nonclassic convective system. During a typical ice crystal engine event, the airplane will be flying in convective cloud with light radar returns at flight level. However, if the pilot uses the radar tilt function to scan below the airplane, moderate to heavy radar returns will be seen. These are regions to avoid because they are associated with regions of HIWC.
Many times areas of HIWC may be associated with residual areas of merging and decaying cell updrafts within a larger convective system. HIWC regions are typically characterized by relatively weak updrafts that are not strong enough to loft large ice particles, such as hail, to high altitudes, but are able to loft high concentrations of small ice particles up to the tropopause (tropopause height varies depending on the latitude and the season). Large ice particles, such as hail or graupel, are effective radar reflectors and show up on weather radar readily. However, radar returns are not reported during ice crystal engine events, leading meteorologists to conclude that only small ice particles can be present during these events.
Ice crystal engine event: a case study
Airlines can gain valuable insights into convective weather associated with engine power loss and damage by examining an actual engine icing event (see fig. 3). In the enhanced infrared satellite image of a large convective system where an engine icing event occurred, the colored areas represent regions of deep convection and the bright white region is where cloud tops have penetrated through the tropopause into the lower stratosphere. The airplane flew along the path from right to left, entering a large anvil cloud associated with a tropical convective system. A TAT anomaly was observed shortly after the airplane entered the anvil cloud, followed by a series of engine events as the airplane penetrated the deepest part of the storm at temperatures well below freezing. The engines recovered quickly, and the airplane continued safely to its destination.
This satellite image shows a typical scenario for ice crystal engine events in which an airplane enters a large convective system while on ascent or descent at temperatures well below freezing.
In this region of the convective system, large amounts of moisture are lifted, converted to ice crystals, and then lofted to high altitudes. This event represents a fairly typical scenario for ice crystal engine events in which an airplane enters a large tropical-like convective system while on ascent or descent at temperatures well below freezing. The engine event then occurs while passing through a region of deep glaciated convective cloud with moderate to heavy rain below the airplane.
Radar data provides another view of this ice crystal engine event (see fig. 4). The red arrow represents the airplane’s flight trajectory; a series of engine events occurred between the white dots. Low-level radar returns along the path were mostly moderate with some embedded heavy return regions. However, at flight level — where the series of events occurred — radar returns were only scattered light return (green) areas. Using the radar’s tilt function to scan below the airplane would have revealed moderate to heavy returns below.
Radar data for this event shows a top-down view (main image) and a vertical slice looking northeast through the storm (inset). The red arrow depicts the airplane flight trajectory.
Characteristics of systems with areas of high ice content
Although the exact physics and dynamics that contribute to ice crystal engine events are not completely understood, there are many similarities among events.
For example, a majority of the events have occurred in tropical and subtropical regions of the world (usually between 30 degrees south and 30 degrees north latitude). In these cases, the airplane penetrated into the deepest part of a nonclassic convective system, flying directly over heavy rain in the glaciated cloud above.
Nonclassic convection events have also occurred at higher latitudes during summer months; for example, they have been reported in the eastern United States and Japan.
A smaller percentage of engine events, on the order of 20 percent or less, has occurred in classic convection. These events typically occur in mid-latitude, continental storms as an airplane diverts from a heavy weather core at altitude and flies into a region of HIWC adjacent to or downwind of the core.
A conceptual model helps illustrate where areas of high ice content might be found (see fig. 5). In these systems, there can be several areas of active convection where heavy returns may be present to high altitudes, as well as broad regions of decaying convection and moderate to heavy stratiform precipitation regions at lower levels.
An infrared satellite image of a tropical mesoscale convective system where an engine event occurred (top) and an idealized east-west vertical cross-section through the storm’s center viewing it from south looking north (bottom). Green, yellow, and red areas represent light, moderate, and heavy radar return regions, respectively. Ice content is labeled HIWC.
Engine event threat areas include regions above the freezing level either adjacent to or downwind of heavy convective cores or above moderate to heavy rain associated with decaying convection or stratiform regions within the convective system. Both regions are labeled “HIWC Possible” in figure 5.
From an observer’s perspective at high altitudes, the anvil region may grow so large that it can take on the appearance of a thick cirrus cloud shield and lose its visual convective qualities. Essentially, many individual convective cells and their associated anvil clouds all merge into one large, broad system and each individual anvil cloud loses its identity.
Engine events most commonly occur at altitudes of 20,000 to 35,000 feet at temperatures ranging from ‑10 degrees C to -40 degrees C. However, some outlier events have occurred at altitudes as low as 9,000 feet with a temperature of ‑8 degrees C and at altitudes as high as 41,000 feet with temperatures down to ‑63 degrees C.
In a majority of the ice crystal engine events, convective weather occurs in a very warm, moist, tropical-like environment. The atmosphere is generally slightly to moderately unstable, resulting in weak to modest updraft strength. During engine events, pilots report only light to moderate turbulence. These convective systems are generally large, heavy rain producing storms that have life cycles ranging from several hours to 24 hours or more.
Typically, events do not occur in severe convection with strong updrafts because these cells are detectable at altitude, and pilots are able to avoid them. However, in some cases high concentrations of ice crystals can be present within the anvils of these storms either adjacent to or downwind from heavy cores.
Based on an analysis of the ice crystal engine event database, Boeing has developed the following recommendations to help flight crews avoid regions of HIWC:
- During flight in instrument meteorological conditions (IMC), avoid flying directly above significant amber- or red-depicted map weather radar regions.
- Use the weather radar gain and tilt functions to assess weather radar reflectivity below the airplane.
For example, if an airplane is flying in IMC above the freezing level and there are amber or red radar returns in the vicinity or cloud tops up to the tropopause, or the airplane is known to be in a convective cloud, regions of HIWC may be in the area. In this scenario, the pilot should point the radar down to look below the freezing level. If amber and red areas indicating heavy rain are detected below the freezing level, HIWC areas are possible above these low-level moderate to heavy rain regions. Under these conditions, the pilot should consider evasive action.
To date, the engines affected in all recorded ice crystal events have recovered to normal thrust response quickly. However, due to the possibility of continued power loss and the risk of engine damage, airlines can use this information to help them avoid flying in convective weather associated with engine-power-loss events.
For more information, contact Matthew Grzych.
Recognize areas where ice crystals may exist.
- Above the freezing level in convective weather.
- Near the deepest part of a convective cloud.
Recognize common conditions.
- Moderate to heavy rain is present below the airplane, producing amber and red radar returns, but little or no returns at flight level.
- Weak to modest updraft velocities.
- Light to moderate turbulence.
- During flight in instrument meteorological conditions, avoid flying directly above significant amber or red radar returns.
- Use the weather radar gain and tilt functions to assess weather radar reflectivity.
Convective weather, or atmospheric convection, is the result of an unstable atmosphere where ascending air parcels condense moisture to high altitudes sometimes resulting in one or more of the following:
- Vertically deep cloud with a large cirrus (anvil) region.
- Areas of strong wind shear and turbulence.
- Areas of high condensed-water content.
- Heavy precipitation and hail.
- Regions of highly concentrated ice particles. | <urn:uuid:0143466c-1bde-45e9-b921-f7dda3bd04f6> | CC-MAIN-2016-26 | http://www.boeing.com/commercial/aeromagazine/articles/qtr_01_10/5/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00195-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932034 | 3,154 | 3.234375 | 3 |
Spiritualists and Eastern religions have used meditation for centuries as a form of mind-body medicine, and the western world is finally catching on. There are many different forms of meditation, all of which rest the mind, encouraging a practice of focus and control.
Clear, long-term, scientific research now backs up the many claims that regular meditation literally alters your brain activity, as well as contributes to the psychological and physiological well-being of those that practice.
Health Benefits of Meditation
Meditation encourages a level of consciousness that promotes a state of healing. When the mind is rested, the brainwave pattern moves into the alpha-wave brain state associated with a relaxed autonomic nervous system that promotes healing.
Without any control or effort of our minds, the autonomic nervous system is responsible for the regulation of our glands and organs. It consists of two parts: the sympathetic and parasympathetic nervous systems. The parasympathetic nervous system calms the body down, increasing digestive juices and reducing the heart rate. The sympathetic nervous system is designed to rev the body up and get it ready for action. This is commonly referred to as the 'fight or flight' response, whose symptoms are commonly felt in times of stress.
When the sympathetic nervous system is turned on for too long, chronic stress or burnout may result from it.
The alpha-wave state of the brain stimulates the parasympathetic nervous system and reduces the activity of the sympathetic nervous system. Symptoms of stress are reduced, and the individual will increasingly enjoy a greater sense of well-being. And with regular meditation practice, the autonomic nervous system as a whole can be trained permanently into a far more beneficial physical and mental state.
Proven Benefits of Meditation
Scientific research has also proven other health benefits of meditation, such as:
Reduced Anxiety, Stress and Depression
New scientific research shows that meditation activates the left pre-frontal cortex, whilst reducing the activity of the right pre-frontal cortex of the brain. It has also proven that meditating stimulates the serotonin production of the brain. Both serotonin and the left frontal cortex of the brain are associated with positive feelings, such as calm and happiness, and so as a result meditating actually relieves feelings of depression, insomnia, anxiety and even headaches.
Reduced Blood Pressure
Meditation has been scientifically proven to reduce blood pressure, which encourages general heart health and circulation. It has also been shown to reduce cholesterol levels (often influenced by the stress hormone Cortisol), and to relieve the heart itself with its stress-reducing powers.
New scientific research indicates those who meditate are significantly better equipped to deal with infections, viruses and even cancer in comparison to the general population. It has also been proven to speed up the rate of post-operative healing.
Scientists have documented a 50% reduction in perceived pain when sufferers tackled the pain itself with meditation. This is especially relevant to common pain conditions such as arthritis or back problems.
Focus, Concentration and IQ
Long-term meditating has been proven to physically thicken the prefrontal cortex of the brain, leading to an overall improvement in mind's ability to pay attention, its awareness and overall intelligence.
Aging can actually be reversed with regular meditation. As mentioned above, scientific research has shown that meditating thickens the prefrontal cortex of the brain, which is contrary to what normally occurs in the brain during aging. It is a fact that those who meditate regularly generally look younger than their chronological, physical age.
Taming Habits and Addictions
In stilling the mind through meditation, addictive behaviours can be observed, analysed and questioned through a process of self-inquiry. In identifying issues surrounding the addictions and gaining deeper personal insights about the source of their cravings, an addict is able to finally let the addiction go.
Meditation has also been shown to increase exercise tolerance and self-confidence, as well as decrease symptoms of PMS and other types of emotional distress.
If you would like to learn more about the meditation techniques available to you, please speak to your local yoga or meditation practitioner. | <urn:uuid:abd4295f-3b19-42ee-b205-b0c8d4da8209> | CC-MAIN-2016-26 | http://www.naturaltherapypages.com.au/article/meditation_boosts_your_health | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00047-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940363 | 845 | 2.90625 | 3 |
Pronunciation: (ə-lit′ə-rā′shən)
1. the commencement of two or more stressed syllables of a word group either with the same consonant sound or sound group (consonantal alliteration), as in from stem to stern, or with a vowel sound that may differ from syllable to syllable (vocalic alliteration), as in each to all. Cf. consonance (def. 4a).
2. the commencement of two or more words of a word group with the same letter, as in apt alliteration's artful aid.
Random House Unabridged Dictionary, Copyright © 1997, by Random House, Inc., on Infoplease. | <urn:uuid:974b117b-5d05-4347-a9b1-57cc050006c2> | CC-MAIN-2016-26 | http://dictionary.infoplease.com/alliteration | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00163-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.889932 | 153 | 2.984375 | 3 |
Sulamith Löw Goldhaber’s pioneering work with particle accelerators put her at the forefront of a seismic shift in the research of particle physics. Goldhaber met her husband, Gerson Goldhaber, at Hebrew University in Jerusalem, and the pair earned their PhDs in physics at the University of Wisconsin, Madison, in 1951. Known as one of the best teams for studying nuclear emulsion technology, they pressed to use the Bevatron at Berkeley, the world’s most powerful particle accelerator, as often as possible, making vital early discoveries about the interactions of K- mesons and protons. Goldhaber’s presentation on heavy mesons and hyperons at the 1956 Rochester Conference marked an important shift in studying strange particles: before, most of the major discoveries came from physicists studying cosmic rays, but now particle accelerators offered more possibilities for observation and experimentation. In the early 1960s, she switched from nuclear emulsions to the newly discovered bubble chambers (filled with superheated liquids to track particles) and quickly became an expert in the field, making vital discoveries about resonant states of mesons. After her sudden death in 1965, Tel Aviv University began an annual memorial lecture in her name.
How to cite this page
Jewish Women's Archive. "Sulamith Goldhaber." (Viewed on July 1, 2016) <http://jwa.org/people/goldhaber-sulamith>. | <urn:uuid:15f2c642-7d8c-4660-90a8-910f7bc43cc9> | CC-MAIN-2016-26 | http://jwa.org/people/goldhaber-sulamith | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00174-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.928697 | 303 | 3.21875 | 3 |
Deep below the surface of an isolated mountain range in Mexico sit two rooms of splendor: translucent crystals the length and girth of mature pine trees lie pitched atop one another, as though moonbeams suddenly took on weight and substance.
In April 2000, brothers Eloy and Javier Delgado found what experts believe are the world’s largest crystals while blasting a new tunnel 1,000 feet down in the silver and lead Naica Mine of southern Chihuahua. Forty-year-old Eloy climbed through a small opening into a 30- by 60-foot cavern choked with immense crystals. "It was beautiful, like light reflecting off a broken mirror," he says. A month later, another team of Naica miners found an even larger cavern adjacent to the first one.
Officials of the Peñoles company, which owns the mine, kept the discoveries secret out of concern about vandalism. Not many people, however, would venture inside casually: the temperature hovers at 150 degrees, with 100 percent humidity.
"Stepping into the large cavern is like entering a blast furnace," says explorer Richard Fisher of Tucson, Arizona, whose photographs appear on these pages. "In seconds, your clothes become saturated with sweat." He recalls that his emotions raced from awe to panic.
Fisher says a person can stay inside the cave for only six to ten minutes before becoming disoriented. After taking only a few photographs, "I really had to concentrate intensely on getting back out the door, which was only 30 to 40 feet away." After a brief rest, he returned for another couple of minutes. "They practically had to carry me out after that," Fisher says.
Geologists conjecture that a chamber of magma, or superheated molten rock, lying two to three miles underneath the mountain, forced mineral-rich fluids upward through a fault into openings in the limestone bedrock near the surface. Over time, this hydrothermal liquid deposited metals such as gold, silver, lead and zinc in the limestone bedrock. These metals have been mined here since prospectors discovered the deposits in 1794 in a small range of hills south of Chihuahua City.
But in a few caves the conditions were ideal for formation of a different kind of treasure. Groundwater in these caves, rich with sulfur from the adjacent metal deposits, began dissolving the limestone walls, releasing large quantities of calcium. This calcium, in turn, combined with the sulfur to form crystals on a scale never before seen by humans. "You can hold most of the crystals on earth in the palm of your hand," says Jeffrey Post, a curator of minerals at the Smithsonian Institution. "To see crystals that are so huge and perfect is truly mind-expanding."
In addition to 4-foot-in-diameter columns 50 feet in length, the cavern contains row upon row of shark-tooth-shaped formations up to 3 feet high, which are set at odd angles throughout. For its pale translucence, this crystal form of the mineral gypsum is known as selenite, named after Selene, the Greek goddess of the moon. "Under perfect conditions," says Roberto Villasuso, exploration superintendent at the Naica Mine, "these crystals probably would have taken between 30 to 100 years to grow."
Until April 2000, mining officials had restricted exploration on one side of the fault out of concern that any new tunneling might lead to flooding of the rest of the mine. Only after pumping out the mine did the level of water drop sufficiently for exploration. "Everyone who knows the area," says Fisher, "is on pins and needles, because caverns with even more fantastic crystal formations could be found any day."
Previously, the world’s largest examples of selenite crystals came from a nearby cavern discovered in 1910 within the same Naica cave complex. Several examples from the Cave of Swords are exhibited at the Janet Annenberg Hooker Hall of Geology, Gems, and Minerals at the Smithsonian’s National Museum of Natural History. | <urn:uuid:ad559f77-917d-4434-85ce-63242438db64> | CC-MAIN-2016-26 | http://www.smithsonianmag.com/science-nature/crystal-moonbeams-61506257/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00017-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960575 | 821 | 3.109375 | 3 |
Spastic colon is another term for irritable bowel syndrome (IBS), a common disorder characterized by abdominal cramping, abdominal pain, bloating, constipation and diarrhea.
The term "spastic colon" describes the increase in spontaneous contractions (motility) of muscles in the small and large intestines associated with IBS. These contractions are sometimes called spasms. However, because IBS may also be associated with decreased motility, the term spastic colon isn't always accurate.
The cause and severity of IBS varies from person to person. Treatment is aimed at relieving symptoms, and may include changing your diet, increasing physical activity, reducing stress, and taking anticholinergic or anti-diarrheal medications.
Feb. 12, 2014
- Wald A, et al. Pathophysiology of irritable bowel syndrome. http://www.uptodate.com/home/index.html. Accessed Sept. 7, 2013.
- Picco MF (expert opinion). Mayo Clinic, Jacksonville, Fla. Sept. 19, 2013.
- Wald A, et al. Treatment of irritable bowel syndrome. http://www.uptodate.com/home/index.html. Accessed Sept. 7, 2013. | <urn:uuid:ff4e9174-b42a-4586-9612-ce9000fa6c80> | CC-MAIN-2016-26 | http://www.mayoclinic.org/diseases-conditions/irritable-bowel-syndrome/expert-answers/spastic-colon/faq-20058473?p=1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00031-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.91788 | 259 | 2.71875 | 3 |
Q: Why does everyone say not to use gets()?
A: Unlike fgets(), gets() cannot be told the size of the buffer it's to read into, so it cannot be prevented from overflowing that buffer if an input line is longer than expected--and Murphy's Law says that, sooner or later, a larger-than-expected input line will occur. (It's possible to convince yourself that, for some reason or another, input lines longer than some maximum are impossible, but it's also possible to be mistaken, and in any case it's just as easy to use fgets().)
The Standard fgets function is a vast improvement over gets(), although it's not perfect, either. (If long lines are a real possibility, their proper handling must be carefully considered.)
One other difference between fgets() and gets() is that fgets() retains the '\n', but it is straightforward to strip it out. See question 7.1 for a code fragment illustrating the replacement of gets() with fgets().
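Question 7.1 has the canonical code fragment; as a rough sketch of the same idea, a bounded line reader built on fgets() might look like the helper below (the name read_line is my own for illustration, not from the FAQ):

```c
#include <stdio.h>
#include <string.h>

/* A bounded replacement for gets(): reads at most size-1 characters,
   so it cannot overflow buf, and strips the trailing '\n' that
   fgets() retains.  Returns buf, or NULL on end-of-file or error. */
char *read_line(char *buf, size_t size, FILE *fp)
{
    if (fgets(buf, (int)size, fp) == NULL)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';   /* a no-op if no '\n' was read */
    return buf;
}
```

Note that if an input line is longer than size-1 characters, the excess stays in the stream for the next read; as the text says, proper handling of over-long lines still requires careful thought.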
Rationale Sec. 126.96.36.199
H&S Sec. 15.7 p. 356 | <urn:uuid:3f31e1db-e3fe-45d7-abcc-2e800d2c0117> | CC-MAIN-2016-26 | http://www.c-faq.com/stdio/getsvsfgets.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00176-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.913236 | 243 | 3.203125 | 3 |
Some useful R code examples on graphics are:
Learn R Toolkit:
It contains PowerPoint slideshows, videos, R scripts and data files to help Excel users move up to R. R code examples are provided for panel charts, conditional formatting, dot plots, box plots with or without points, and fully annotated histograms.
It is composed of 5 modules:
Module 1: Installing & Setting Up R
Module 2: Compare R and Excel Worlds
Module 3: R Script Basics
Module 4: R Charts
Module 5: Box, Dot, Histogram, Strip Charts.
Using R for Advanced Charts:
It presents examples on lattice plots, step chart, boxplot, trend chart, panel chart, etc.
Peter’s R Programming Pages:
It presents R code examples on various R plots, including 3D bar charts, matrix contours, microarray heatmaps, network graphs, scatter & pairs plots, Ramachandran (phi/psi) plots, and linear models.
Slides on R Graphics by Paul Murrell:
It provides many details on advanced graphic controls, such as margins, graphical parameters, grid graphics and lattice. | <urn:uuid:025d6145-4824-4dc9-a3dc-e7d9c6030a9a> | CC-MAIN-2016-26 | https://rdatamining.wordpress.com/2011/08/16/r-code-examples-on-graphics/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00188-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.717699 | 249 | 2.75 | 3 |
Our Klamath Basin Water Crisis
Upholding rural Americans' rights to grow food,
own property, and caretake our wildlife and natural resources.
Earth Day Network Program credited with
Cincinnati School Board Decision to Build Green
June 5, 2006 (Washington, DC) Earth Day Network’s National Civic Education Project is being credited as a major contributor to the recent decision to build green schools in Cincinnati, confirming that this groundbreaking program could create the most civically active and environmentally aware generation in the history of the United States.
The National Civic Education Project (CEP) was created by Earth Day Network to empower young people to exercise their rights and uphold their responsibilities as citizens as they address environmental health issues affecting their communities. CEP students are instilled with the skills, pride and passion required to engage their political leaders at all levels, which speaks to the very heart of the democratic process.
“In its first year, the Civic Education Project created a new generation of civically active and educated students, who helped convince the Cincinnati School Board to create healthier schools throughout the city. Earth Day Network is creating environmental democracy, and a new green movement. That is our vision,” said Kathleen Rogers, EDN president. “This is only the first step in Cincinnati and we’re already expanding to other cities because every city in the United States holds unique environmental challenges that can act as a civic education experience for students.”
During the May 8th meeting of the Cincinnati Board of Education, the Director of the Facilities Branch for Cincinnati Public Schools (CPS) presented high performance design guidelines, which clarify the Board’s decision to use the best practices of green building design in partnership with the Cincinnati chapter of U.S. Green Building Council. These guidelines will now be included in the CPS Building Standards, with the goal of building and operating sustainable high performance school facilities that are environmentally and fiscally responsible, healthy places to learn and work that promote student achievement.
A local green building advocacy group, The Alliance for Leadership and Interconnection (ALLY), gives credit to Earth Day Network’s Civic Education Project for this towering achievement. “I believe that this year’s Earth Day Network Green School Civic Education Project provided the nudge for Cincinnati Public Schools to announce that they are going to go green with the rest of the schools being built and renovated,” said Ginny Frazier, executive director of ALLY.
Earth Day Network chose four schools in Cincinnati and one in Washington, D.C. for the first year of the Civic Education Project, 2005-2006, because each city faces significant environmental problems and educational obstacles and is home to a large number of minority and low-income students. In Cincinnati, more than 76% of students are of a minority race and approximately 2/3 of all students receive free or reduced lunches.
In Cincinnati, five teachers participated: John Dean and Shelby Louden at Aiken College and Career High School, Penelope Greenler at Winton Montessori, Kamlesh Jindal at Bond Hill Academy, and Erin Morris with the Cincinnati Park Board. These teachers and their students focused on green schools because Cincinnati is engaged in a $1 billion school refurbishment program.
• Penelope Greenler and her students at Winton Montessori worked to create a design of their school as a green building. She and her students attended planning meetings of the Alliance for Leadership and Interconnection to learn how to create support for building green schools and met with architects who helped them design a model of what they hope will be the new Winton Montessori.
• At Bond Hill Academy, Kamlesh Jindal and her students created a civic outreach program around the politics of recycling. They designed and implemented a model recycling program at their school and then reached out to community leaders and city officials to expand their idea while relating it to the green building theme by studying how green construction utilizes recycled materials.
• Erin Morris, from the Cincinnati Park Board, works with students after school. She and her students studied how the exterior of a green school can be designed to act as a learning environment. They worked with city officials to create potential designs of the exterior landscape of a green school so that it has demonstrable environmental benefits.
• John Dean and Shelby Louden, at Aiken College and Career High School, worked with their students to build community support for green schools. They attended community meetings and conducted a campaign to create awareness of the economic, educational, and environmental benefits of green schools. The goal is to gain support not just from local politicians, but from the community at large.
During the Cincinnati School Board’s May 22nd meeting, the teachers and students who participated in the Civic Education Project each presented the results of their green schools projects and enthusiastically urged board members to consider the full range of benefits green schools provide, including an increase in class attendance and academic performance.
“Participating in the Civic Education Project has helped me see the impact of green and high performance school environments on our students.” said Penelope Greenler. “Our children deserve to have schools that are healthy and that enhance their education. They should not be learning in spite of their school environment with poor lighting, inadequate ventilation, too much noise, and not enough space.”
About Earth Day Network:
Earth Day Network was founded by the organizers of the first Earth Day in 1970 and promotes environmental citizenship and year round progressive action worldwide. Earth Day Network’s global network reaches more than 12,000 organizations in 174 countries and hundreds of thousands of educators around the world. Earth Day is celebrated by more than half a billion people each year making it the largest secular holiday in the world. April 22, 2006 marked the 36th anniversary of Earth Day.
Page Updated: Thursday May 07, 2009 09:15 AM Pacific
Copyright © klamathbasincrisis.org, 2005, All Rights Reserved | <urn:uuid:356841d2-b9a0-4be6-b22a-50cd0c373287> | CC-MAIN-2016-26 | http://www.klamathbasincrisis.org/environmentalistswildlands/greenschoolsedn060606.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00023-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959643 | 1,206 | 2.90625 | 3 |
While caring for a cactus is relatively easy, potting one can be a little more complicated. The makeup of the soil is key, since cacti have specific density and nutritional requirements. The perfect potting soil should wet easily, drain well and contain some organic material for proper nutrition and adequate air. The soil should not be so rich that it holds water for long periods, though, because this will inevitably rot the roots. It is easy to make your own cactus soil at home with ingredients you can get at your local garden center.
Obtain good-quality top soil, peat moss and crushed pumice or vermiculite from your local garden supply store or nursery.
Make a mixture of 20 percent top soil, 10 percent peat moss and 70 percent pumice or vermiculite in the mixing container. Combine thoroughly to form the foundation for the cactus soil. Moisten the soil so that it is damp.
Sterilize the soil. Place the mixture in a plastic bag, insert a cooking thermometer into the soil through the bag, place the bag on an oven tray, and heat the soil in the oven until it reaches an internal temperature of 165 degrees Fahrenheit.
Add 1/2 cup of bone meal for every 12 quarts of soil mix to ensure a proper pH balance and a good-quality, general-use, timed-release fertilizer such as Osmocote. Mix these two ingredients into the soil thoroughly. | <urn:uuid:2ffa1801-2ed9-4fe9-b614-ae8e97006e43> | CC-MAIN-2016-26 | http://www.gardenguides.com/79506-make-cactus-potting-soil.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00117-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.886081 | 301 | 2.625 | 3 |
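The proportions and the bone-meal ratio above reduce to simple arithmetic; here is a quick batch calculator (the function name and the 24-quart batch are illustrative):

```python
def cactus_mix(total_quarts):
    """Split a batch into the 20/10/70 mix described above, plus bone
    meal at 1/2 cup per 12 quarts of mix (a sketch, not a recipe card)."""
    parts = {
        "top soil": 0.20 * total_quarts,
        "peat moss": 0.10 * total_quarts,
        "pumice or vermiculite": 0.70 * total_quarts,
    }
    bone_meal_cups = 0.5 * total_quarts / 12
    return parts, bone_meal_cups

parts, bone_meal = cactus_mix(24)
for name, quarts in parts.items():
    print(f"{name}: {quarts:.1f} quarts")   # 4.8 / 2.4 / 16.8 for a 24-quart batch
print(f"bone meal: {bone_meal:.2f} cups")   # 1.00
```

The ratios scale linearly, so halving or doubling the batch just means changing the one argument.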
College-Bound, an SAT Review Project (Brooklyn Public Library)
Best Practices in New York State
(Example illustrating Qualitative Results)
This project clearly yielded both quantitative and qualitative results. The feedback points to student achievement in both survey and anecdotal form. While the timeframe for the grant may have precluded collecting the data for the grant report, similar programs would be encouraged to plan from the beginning to ask students to voluntarily contact the library to share their scores. When students take the SAT more than once, it would also be helpful to see what changes occur. When a program has such an impact that students vastly improve scores and get into a college of choice or obtain scholarships in part because of scores, it might also be of value to keep track of such individuals as future Friends of the Library and/or donors.
Exceeded by 7% the proposed goal of registering 80% (at least 150) of the 192 students targeted as program participants (11th- and 12th-grade students from some of Brooklyn’s highest-need neighborhoods).
Responses from adult and teen participants in the College-Bound informational workshops:
One student, who had taken the first winter series of SAT review classes held at Transit Tech H.S., recently reported to his instructor that the class had helped him raise his score by 100 points. He also explained, “I started hearing back from colleges. I got accepted to the Honors Program at Queens College, and I got into York. I’m waiting to hear from Brooklyn College. I heard from some of the other students and they raised their scores too.”
Another was able to attend only five class sessions in the 10-session series. Not a typical “high achieving” student, she joked about how often she cut classes at school. However, she called before each SAT review class she missed to explain that she would be absent. She then came to the library at the end of each session that she missed to request the homework assignment. When asked about her college plans, she told her instructor that she planned to go to beauty school. As the conversation progressed, however, she admitted that she really wished to go to law school. Her mother reported that at home, she talked frequently about the SAT class and how much it meant to her. She scored over 650 on the SAT’s verbal portion and was very surprised by how well she did.
A senior with college aspirations, who would be the first college applicant in his family, had not yet begun the application process. He explained that he did not have a good relationship with his guidance counselor at school. Moreover, because he believed that he would fail the test, he had not planned to take the SAT. At the College-Bound workshop, he learned that there is no such thing as “failing” the SAT and that taking the test is a prerequisite for applying to most colleges. He found out that the CUNY system will accept an SAT score of 420 or above as an exemption from remedial math and English classes. He also learned that first-generation college students are valued and sought after by many colleges. He subsequently attended 10 sessions of SAT prep at the library. His practice test scores were 450 (verbal) and 500 (math).

Another student asked for a college recommendation at the end of 10 SAT prep sessions. She was successfully admitted to City College (CUNY) and returned to the library to request a second recommendation for a competitive scholarship given to journalism and communication majors. Awarded a substantial scholarship, she has most recently applied for a competitive mentoring program sponsored by the National Coalition of 100 Black Women.
Orangutans may 'host ancient jumping genes'
Washington: Modern-day orangutans are host to ancient jumping genes dubbed Alu, which are over 16 million years old, according to a new study.
These tiny pieces of mobile DNA are able to copy themselves using a method similar to retroviruses. They can be thought of as molecular fossils, as a shared Alu element sequence and location within the genome indicates a common ancestor.
But, because this is an inexact process, a segment of “host” DNA is duplicated at the Alu insertion sites and these footprints, known as target site duplications, can be used to identify Alu insertions.
“However, it has long been recognized that only a small fraction of these elements retain the ability to mobilize new copies as ‘drivers,’ while most are inactive,” said Mark Batzer, Boyd Professor and Dr. Mary Lou Applewhite Distinguished Professor of Biological Sciences.
“In humans, telling the difference has proven quite difficult, mainly because the human genome is filled with plenty of relatively young Alu insertions, all with slight differences while at the same time lacking easily identifiable features characteristic for Alu propagation.
“This makes it hard to find their ‘parent’ or ‘source Alu’ from potentially hundreds of candidates that look similar.”
In contrast to humans and other studied primates, recent activity of Alu elements in the orangutan has been very slow, with only a handful of recent events by comparison.
“In the current study, we were able to discover the likely source Alu, or founder, of some of the very recent Alu insertions unique to the orangutan. This is significant for many reasons,” said research associate Jerilyn Walker.
“First, this study represents only the second study that identified a driver Alu element. In addition, this driver is more than 16 million years old!”
Analysis of DNA sequences has found over a million Alu elements within each primate genome, many of which are species specific: 5,000 are unique to humans, while 2,300 others are exclusive to chimpanzees.
In contrast, the orangutan lineage (Sumatran and Bornean orangutans) has only 250 specific Alu elements. Even though the Alu discovered in this study is old enough to be shared by the human, chimpanzee, gorilla and orangutan genomes, its primary “jumping” has been in orangutans.
“Furthermore, this ancient ‘backseat driver’ created several daughter elements over the course of several millions years and a relatively young daughter element (found only in Sumatran orangutans and absent from Bornean orangutans) also appears to mobilize and has created offspring Alu copies of itself,” said assistant professor Miriam Konkel.
This is promising new evidence that Alu propagation may be ‘waking up’ in orangutans.
The study has been published in open access journal Mobile DNA.
The government burden on the average worker has grown by about $1 trillion over the past year, and if time is money, Americans are working a lot longer to pay their taxes.
We're all familiar with the saying, "working for the man."
If "the man" is Uncle Sam, Wednesday is the day the average American would earn his or her financial freedom. It marks "Cost of Government Day," or the day calculated each year when the average American is done paying off his or her share of the cost of government.
What does that mean?
Imagine if you had to turn over your entire paycheck to the government all year until you paid off your share of federal, state and local taxes.
If that were the case, Aug. 12 would be the last day you'd have to work for Uncle Sam, so to speak.
We all know it doesn't necessarily work that way, but Cost of Government Day is a way to gauge just how much the government burden is on American workers.
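That back-of-the-envelope arithmetic can be sketched directly: treat the government's cost as a fraction of national income and count that fraction of the year's days from January 1. (The function and the 61.5% share below are illustrative, not Americans for Tax Reform's actual methodology.)

```python
from datetime import date, timedelta

def cost_of_government_day(share, year=2009):
    """Date by which a worker paying `share` of income to government
    would be done, if every paycheck went to taxes first."""
    days_in_year = (date(year + 1, 1, 1) - date(year, 1, 1)).days
    days_worked = round(share * days_in_year)
    # Day 1 of work maps to Jan. 1 itself, hence the "- 1".
    return date(year, 1, 1) + timedelta(days=days_worked - 1)

# A share of roughly 61.5% of income lands in mid-August 2009:
print(cost_of_government_day(0.615))  # -> 2009-08-12
```

Run backwards, an Aug. 12 date implies that roughly 61.5% of the year's income goes to the total cost of government by this reckoning.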
This year, Cost of Government Day fell nearly a month later - 26 days to be exact - than it did last year.
It shows the size and scope of government has grown significantly.
The group Americans for Tax Reform - which calculates the annual date - blames the extension on the $787 billion stimulus plan and the troubled asset relief program for banks.
Critics worry other proposals like cap and trade legislation will only make matters worse.
"This is scary stuff," said Phil Kerpen of Americans for Prosperity. "We're now working eight and half months to pay for the cost of government. We've got only three and half months now to work for ourselves."
But cost of government is not limited to taxes. It also applies to government-imposed regulations.
"The total cost of government is how much total money the government spends and the regulation it imposes," explained Grover Norquist of Americans for Tax Reform. "When they say you have to use certain light bulbs, they don't take your money and buy you a light bulb; they just tell you to buy what they want. Well, it may be a good idea or a bad idea, but it isn't free. It has a cost."
Norquist's group also measures the cost of government by state.
Thirty-five states get there before the federal government: Alaska, Louisiana and Mississippi top that list. Fifteen states, including the District of Columbia, fall behind the national day. Connecticut gets there last, on Sept. 7 -- which ironically falls on Labor Day this year.
07 March 2012
Colorado's Top Income Tax Rate Is Low
The Tax Foundation summarizes the state of U.S. state income taxes in the map above.
In general, to make sense of the results in a particular state, one has to analyze its local economic base and the alternate revenue sources it taps, case by case, looking at each state as a coherent whole with its own particular reasons for doing what it does.
Low State Income Tax States
Seven states have no state income tax (Florida, Texas, South Dakota, Wyoming, Nevada, Washington and Alaska). New Hampshire (5%) and Tennessee (6%) have a flat income tax rates on interest and dividends, but not on other income.
Colorado's flat income tax rate is 4.63%, and it has no local income taxes. Three states have lower flat income tax rates (Michigan 4.35%, Indiana 3.4%, and Pennsylvania 3.07%), but also have local income taxes which probably push top marginal tax rates in much of each of those states above Colorado's flat marginal income tax rate.
Two states with graduated income tax rates have lower top marginal rates than Colorado's flat rate (Arizona 4.54%, North Dakota 3.99%).
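The difference between a flat rate and a graduated schedule's top marginal rate can be made concrete with a toy calculation. The two-bracket schedule below is hypothetical (loosely patterned on North Dakota's 3.99% top rate), not any state's actual law:

```python
def flat_tax(income, rate):
    """Tax under a Colorado-style flat rate."""
    return income * rate

def marginal_tax(income, brackets):
    """Tax under a graduated schedule. `brackets` is a list of
    (threshold, rate) pairs with ascending thresholds; each rate
    applies only to income between its threshold and the next."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income <= lo:
            break
        tax += (min(income, hi) - lo) * rate
    return tax

# Flat 4.63% vs. a hypothetical schedule topping out at 3.99%:
print(round(flat_tax(50_000, 0.0463), 2))                             # 2315.0
print(round(marginal_tax(50_000, [(0, 0.02), (20_000, 0.0399)]), 2))  # 1597.0
```

Because lower brackets tax the first dollars lightly, the graduated schedule with the lower top rate collects less at every income level in this example; comparing states by top marginal rate alone, as the map does, can overstate the burden on middle incomes in graduated-rate states.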
Of course, there is no such thing as a free lunch and all states have to provide certain levels of services, even though there is some room for variation from state to state in overall per capita government spending and some variation from state to state in the tax base available for taxation that drives what is necessary to raise revenues.
Dominant Industry Taxes
Low income tax rates are made possible in some states by revenues from the dominant industries in the state. In Nevada, gambling funds state government. In Alaska, oil revenues are so great relative to the state's population that the state actually passes out money to people who are willing to live there. I suspect, but do not know, that resource based revenues make low income tax rates possible in North Dakota, South Dakota, Texas and Wyoming, although a low cost of government in some or all of those states may help as well.
The outlier states simply have a different package of taxation that focuses on taxes at the business entity level rather than the individual income level.
New Hampshire's claim to lack sales and income taxes, while technically true, is a bit deceptive. Some of the money it needs is secured with a state property tax (something very few other states have and that clearly isn't a sales or income tax), a 0.75% real estate transfer tax (which is a sort of sales tax), an 8.5% tax on "business profits" (not just corporate profits but profits from all businesses) and a 0.75% "business enterprise tax" computed on a tax base of all compensation (e.g. wages), interest and dividends received by a business. Throw in the 5% state tax on dividends and interest - which covers a lot of the income not captured by the business profits tax, the business enterprise tax, and the real estate transfer tax - and, considering differences in income mix at the top and bottom of states with progressive marginal rate systems, New Hampshire's tax sources look less different from other states' than advertised - its exceptionality has a healthy dose of marketing involved.
Washington State does something similar. It's business and occupations tax is imposed on gross revenues, rather than net revenues, with rates that vary by industry, reflecting different operating margins in different industries. Washington State also has fairly high combined state and local sales tax rates (7.0%-9.5%). Like New Hampshire, Washington State also has a state property tax and a real estate transfer tax.
Tennessee's business tax is quite similar in structure to Washington State's business and occupation tax (with a gross receipts tax in other industries). Tennessee taxes gross revenues, with rates that vary by industry to reflect different typical margins in different industries. Tennessee also has a franchise tax at a 0.25% rate on the assessed value of all business property (tangible and real), and a 6.5% excise tax on entity-level taxable income for all forms of entities, similar to New Hampshire's business profits tax. Tennessee also has a 5.5% sales tax on food, which most states exempt from sales taxation.
A quick glance at Arizona's tax laws suggests that it follows a similar model to Washington State and Tennessee, but with a greater emphasis on natural resource severance taxation to supplement its fairly low marginal income tax rates.
Florida has a business profits tax on all business entities except sole proprietorships, of 5.5%. It also has fairly high sales taxes.
Retiree Driven Low State Income Taxes
I don't know what alternate revenue sources Florida uses to offset its lack of an income tax, or what Arizona uses to offset its relatively low state income tax. But one clear motive for Florida, Nevada and Arizona to keep income taxes low is to compete for relocating seniors who want to withdraw retirement funds accumulated in high-tax states in states where those funds don't face state and local taxation.
Colorado's political environment may be the result of similar motives - resorts catering to the affluent are a key element of our economy, and we want affluent people to relocate here, because taxing their incomes at a 4.63% rate beats taxing no income at all if they don't relocate to Colorado. Colorado also has pretty low levels of state spending, relative to its affluence, on public services like education, as well as a fairly balanced mix of taxes with middling-to-high combined state and local sales taxes (Colorado is the exception in having more local than state sales taxation, which many localities have opted for in order to secure lower rates of property taxation).
Low And High Income Tax Rates Driven By Intrastate Federalism
Michigan, Indiana and Pennsylvania are all Rust Belt states that have hemorrhaged so much population and economic vitality that they feel compelled to reduce taxes to try to attract jobs, even though this policy does not appear to have actually attracted them. Michigan, Indiana and Pennsylvania also have local income taxes, which suggests two things. First, income tax rates at the state level are low, in part, because some of that tax base has been ceded to localities in an instance of intrastate federalism. It also suggests that the economic incentives driving income taxation are quite different in different parts of each state, with parts fitting a low income tax profile (like northern Michigan, western Pennsylvania and rural Indiana) not having very high local income taxes and those with a high income tax profile (like Philadelphia, Oakland County and Indianapolis) having higher local income taxes.
Local income taxes in Ohio, Kentucky, Alabama, Arkansas, and Missouri, none of which have particularly low top marginal state income tax rates, also mask even higher combined state and local income taxes.
In contrast, state income tax rates in California, Hawaii, Idaho (7.9%), Montana (6.9%), Minnesota (7.85%), Wisconsin (7.75%), Vermont (8.95%) and Maine (8.98%) look high in part because all state and local income taxes are consolidated in state government to the exclusion of local government.
Common Factors In High Income Tax Jurisdictions
In states with high top marginal income tax rates, in contrast, there does seem to be a common thread. Consider, by top tax rate, Hawaii (11%), California (10.3%), Oregon (9.9%), New York (8.82%), New Jersey (8.97%), and Washington D.C. (8.95%). What all of these states have in common is a significant number of high income workers for whom working outside the jurisdiction is simply not a viable option. If you are in an industry centered in New York City, or San Francisco, Silicon Valley, Los Angeles or Washington D.C., you must, for all practical purposes, work in that state and subject yourself to that state's income tax rates. Top rates in Connecticut (6.7%), Delaware (6.75%), Rhode Island (5.99%), Virginia (5.75%), Maryland (5.5%), and Massachusetts (5.3% flat rate) aren't low either and have some of the same characteristics.
Interestingly, some of the states which have high state income taxes also have local income taxation: Delaware, Iowa, Maryland, New Jersey, New York and Oregon all have local income taxes, which suggests that the necessity of working not just in a particular state, but in a particular locality is quite great.
Iowa, whose top marginal income tax rate is 8.98% and which also has local income taxes, is something of a puzzle. Perhaps its economy is so farm-driven that its high-income farmer citizens, too, are bound to the land and can't escape state income taxes while maintaining their livelihoods, although it is not obvious how Iowa differs from other farm states in that respect.
Tuesday May 7, 2013 | 7 comments
Previously, we looked at how tea began as what is known today as green tea and then inadvertently spun off into yellow tea and dark tea (AKA post-fermented tea). The puzzle is still incomplete, though, as there are another three basic categories of teas that have yet to be accounted for.
White tea – origins unknown
White tea is the least processed type of tea, its processing consisting of just two simple steps – wither the leaves in the sun and then dry them by either light baking or sunning. Experts, however, are divided as to white tea’s origins.
Some say white tea’s origins trace back to 1064, based on Emperor Song Huizong’s treatise on tea, in which a “white tea” covered with downy fur, not unlike the white teas of today, is described. However, even if that tea was a precursor to the white tea of today, the name may have merely described its appearance - much as “Anji White Tea” is named for its appearance yet is still classified as a green tea.
Another school of thought places the birth year of white tea at 1554, when the first records of withering leaves in the sun in lieu of chaoqing (roasting to halt enzyme activity) were found. At that time, chaoqing was still a relatively new technique, and inexperienced producers often burned the tender buds. Consequently, they turned to sunning the leaves instead.
What is commonly known as white tea today, though, traces its beginnings to 1796 in Fuding, where the first silver needles were made by withering and baking buds from the Dabaicha cultivar, a tree that yields sturdier buds covered in white downy fur. Tea made from the Dabaicha cultivar had a fuller, sweeter taste that won the hearts of tea lovers.
From white tea to black
Black tea was birthed by combining the fundamentals of producing green, white, and dark teas around 1650 in Xingcun, Wuyishan, Fujian. It added the withering of tea leaves to the basics of green tea production and modified wodui to wohong, a step that expedites the oxidation of tea leaves via heat and humidity. This created a markedly different product from green tea and the production method was eventually exported to the entire world.
The rise of Black Dragon
Oolong (or wulong) tea is literally translated as “black dragon.” As to how it got its name, there are several theories. Like yellow tea, its discovery was almost certainly accidental. Whichever theory you choose to subscribe to, the process of yaoqing, or rattling the leaves to bruise them and cause oxidation, was unlikely to have started as a deliberate act.
In any case, this beautiful “mistake” resulted in the most diverse and (in my somewhat biased opinion) rewarding category of tea. From the almost green tea-like taste of green-style Tieguanyin to the aromatic black tea-like nature of Oriental Beauty, there is no other category of tea with such a wide spectrum.
Like white tea, the origins of oolong tea are somewhat disputed. However, most are certain that by the 18th Century, the production of oolong tea was prevalent in Fujian, both in the north (Wuyishan) and the south (Anxi). Eventually, oolong spread to Guangdong and Taiwan and it is these four areas that remain the main oolong production areas in the world today. | <urn:uuid:c5e08c4e-da64-478d-9eae-55d6029d314d> | CC-MAIN-2016-26 | http://www.tching.com/2013/05/the-evolution-of-tea-part-3/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00029-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.970391 | 766 | 3.03125 | 3 |