Dataset columns:

Column          Type            Values
text            stringlengths   247 – 264k
id              stringlengths   47 – 47
dump            stringclasses   1 value
url             stringlengths   20 – 294
date            stringlengths   20 – 20
file_path       stringclasses   370 values
language        stringclasses   1 value
language_score  float64         0.65 – 1
token_count     int64           62 – 58.7k
Related Topics: OpenGL Transformation

A computer monitor is a 2D surface. A 3D scene rendered by OpenGL must be projected onto the computer screen as a 2D image. The GL_PROJECTION matrix is used for this projection transformation. First, it transforms all vertex data from eye coordinates to clip coordinates. These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. Therefore, we have to keep in mind that both clipping (frustum culling) and the NDC transformation are integrated into the GL_PROJECTION matrix. The following sections describe how to build the projection matrix from six parameters: the left, right, bottom, top, near and far boundary values.

Note that frustum culling (clipping) is performed in clip coordinates, just before dividing by $w_c$. The clip coordinates $x_c$, $y_c$ and $z_c$ are tested by comparing them with $w_c$. If any clip coordinate is less than $-w_c$ or greater than $w_c$, the vertex is discarded. OpenGL then reconstructs the edges of the polygon where clipping occurs.

In perspective projection, a 3D point in a truncated pyramid frustum (eye coordinates) is mapped to a cube (NDC): the x-coordinate from [l, r] to [-1, 1], the y-coordinate from [b, t] to [-1, 1], and the z-coordinate from [-n, -f] to [-1, 1]. Note that the eye coordinates are defined in a right-handed coordinate system, but NDC uses a left-handed coordinate system. That is, the camera at the origin looks along the -Z axis in eye space, but it looks along the +Z axis in NDC. Since glFrustum() accepts only positive values of the near and far distances, we need to negate them during the construction of the GL_PROJECTION matrix.

In OpenGL, a 3D point in eye space is projected onto the near plane (projection plane). The following shows how a point $(x_e, y_e, z_e)$ in eye space is projected to $(x_p, y_p, z_p)$ on the near plane. From the top view of the frustum, the x-coordinate of eye space, $x_e$, is mapped to $x_p$, which is calculated by using the ratio of similar triangles:

\[ x_p = \frac{n \cdot x_e}{-z_e} \]

From the side view of the frustum, $y_p$ is calculated in a similar way:

\[ y_p = \frac{n \cdot y_e}{-z_e} \]

Note that both $x_p$ and $y_p$ depend on $z_e$; they are inversely proportional to $-z_e$. In other words, they are both divided by $-z_e$. This is the very first clue to constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by multiplying by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates; they finally become normalized device coordinates (NDC) when divided by the w-component of the clip coordinates. (See more details on OpenGL Transformation.) Therefore, we can set the w-component of the clip coordinates to $-z_e$, and the 4th row of the GL_PROJECTION matrix becomes (0, 0, -1, 0).

Next, we map $x_p$ and $y_p$ to $x_n$ and $y_n$ of NDC with a linear relationship: [l, r] ⇒ [-1, 1] and [b, t] ⇒ [-1, 1]:

\[ x_n = \frac{2 x_p}{r-l} - \frac{r+l}{r-l}, \qquad y_n = \frac{2 y_p}{t-b} - \frac{t+b}{t-b} \]

Then, we substitute $x_p$ and $y_p$ into the above equations:

\[ x_n = \left( \frac{2n}{r-l}\,x_e + \frac{r+l}{r-l}\,z_e \right) / (-z_e), \qquad y_n = \left( \frac{2n}{t-b}\,y_e + \frac{t+b}{t-b}\,z_e \right) / (-z_e) \]

Note that we make both terms of each equation divisible by $-z_e$ for the perspective division ($x_c/w_c$, $y_c/w_c$). Since we set $w_c$ to $-z_e$ earlier, the terms inside the parentheses become $x_c$ and $y_c$ of the clip coordinates. From these equations, we can find the 1st and 2nd rows of the GL_PROJECTION matrix.

Now, we only have the 3rd row of the GL_PROJECTION matrix left to solve. Finding $z_n$ is a little different from the others, because $z_e$ in eye space is always projected to $-n$ on the near plane, yet we need a unique z value for the clipping and depth test.
Plus, we should be able to unproject (inverse-transform) it. Since we know z does not depend on the x or y value, we borrow the w-component to find the relationship between $z_n$ and $z_e$. Therefore, we can specify the 3rd row of the GL_PROJECTION matrix as (0, 0, A, B), which gives

\[ z_n = \frac{z_c}{w_c} = \frac{A z_e + B w_e}{-z_e} \]

In eye space, $w_e$ equals 1. Therefore, the equation becomes

\[ z_n = \frac{A z_e + B}{-z_e} \]

To find the coefficients A and B, we use the $(z_e, z_n)$ relations $(-n, -1)$ and $(-f, 1)$, and put them into the above equation:

\[ \frac{-A n + B}{n} = -1 \;\Rightarrow\; -A n + B = -n \quad (1) \]
\[ \frac{-A f + B}{f} = 1 \;\Rightarrow\; -A f + B = f \quad (2) \]

To solve the equations for A and B, rewrite eq. (1) for B:

\[ B = A n - n \quad (1') \]

Substitute eq. (1') into B in eq. (2), then solve for A:

\[ -A f + A n - n = f \;\Rightarrow\; A = -\frac{f+n}{f-n} \]

Put A into eq. (1) to find B:

\[ B = A n - n = \left( -\frac{f+n}{f-n} - 1 \right) n = -\frac{2fn}{f-n} \]

We found A and B. Therefore, the relation between $z_e$ and $z_n$ becomes

\[ z_n = \frac{ -\dfrac{f+n}{f-n}\,z_e - \dfrac{2fn}{f-n} }{ -z_e } \quad (3) \]

Finally, we found all entries of the GL_PROJECTION matrix. The complete projection matrix is

\[
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]

This projection matrix is for a general frustum. If the viewing volume is symmetric, that is $r = -l$ and $t = -b$, then it can be simplified to

\[
\begin{pmatrix}
\frac{n}{r} & 0 & 0 & 0 \\
0 & \frac{n}{t} & 0 & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]

Before we move on, please take a look at the relation between $z_e$ and $z_n$, eq. (3), once again. Notice that it is a rational function, so the relationship between $z_e$ and $z_n$ is non-linear. This means there is very high precision at the near plane, but very little precision at the far plane. If the range [-n, -f] gets larger, it causes a depth precision problem (z-fighting): a small change of $z_e$ around the far plane barely affects the $z_n$ value. The distance between n and f should be as short as possible to minimize the depth buffer precision problem.

Constructing the GL_PROJECTION matrix for orthographic projection is much simpler than for perspective mode. All $x_e$, $y_e$ and $z_e$ components in eye space are linearly mapped to NDC. We just need to scale a rectangular volume to a cube, then move it to the origin. Let's find the elements of GL_PROJECTION using the linear relationships:

\[ x_n = \frac{2}{r-l}\,x_e - \frac{r+l}{r-l}, \qquad y_n = \frac{2}{t-b}\,y_e - \frac{t+b}{t-b}, \qquad z_n = \frac{-2}{f-n}\,z_e - \frac{f+n}{f-n} \]

Since the w-component is not necessary for orthographic projection, the 4th row of the GL_PROJECTION matrix remains (0, 0, 0, 1). Therefore, the complete GL_PROJECTION matrix for orthographic projection is

\[
\begin{pmatrix}
\frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\
0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\
0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]

It can be further simplified if the viewing volume is symmetrical, with $r = -l$ and $t = -b$:

\[
\begin{pmatrix}
\frac{1}{r} & 0 & 0 & 0 \\
0 & \frac{1}{t} & 0 & 0 \\
0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]
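To make the matrices above concrete, here is a small NumPy sketch (not part of the original article; the function names frustum, ortho and project are ours). It builds both matrices exactly as derived, applies the clip test against $w_c$, and checks that eye-space points on the near and far planes land at $z_n = -1$ and $z_n = +1$ after the perspective division.

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """General perspective projection matrix, as derived above (row-major;
    OpenGL itself stores matrices column-major, i.e. the transpose)."""
    return np.array([
        [2*n/(r-l), 0.0,       (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b), (t+b)/(t-b),  0.0],
        [0.0,       0.0,      -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,      -1.0,          0.0],
    ])

def ortho(l, r, b, t, n, f):
    """Orthographic projection matrix; the 4th row stays (0, 0, 0, 1)."""
    return np.array([
        [2/(r-l), 0.0,      0.0,     -(r+l)/(r-l)],
        [0.0,     2/(t-b),  0.0,     -(t+b)/(t-b)],
        [0.0,     0.0,     -2/(f-n), -(f+n)/(f-n)],
        [0.0,     0.0,      0.0,      1.0],
    ])

def project(proj, p_eye):
    """Eye coordinates -> clip coordinates -> NDC, with the clip test."""
    x_c, y_c, z_c, w_c = proj @ np.append(p_eye, 1.0)        # clip coordinates
    inside = all(-w_c <= c <= w_c for c in (x_c, y_c, z_c))  # frustum culling
    return np.array([x_c, y_c, z_c]) / w_c, inside           # divide by w_c -> NDC

m = frustum(-1.0, 1.0, -1.0, 1.0, 2.0, 10.0)
print(project(m, np.array([0.0, 0.0, -2.0])))    # near plane: z_n = -1, inside
print(project(m, np.array([0.0, 0.0, -10.0])))   # far plane:  z_n = +1, inside
print(project(m, np.array([0.0, 0.0, -20.0])))   # beyond far: culled
```

Evaluating eq. (3) over a range of $z_e$ values with this matrix is also a quick way to see the non-linear depth mapping described above: most of the [-1, 1] range of $z_n$ is consumed close to the near plane.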
<urn:uuid:7b691dec-bccf-4a0e-9b63-3d688938c523>
CC-MAIN-2013-20
http://www.songho.ca/opengl/gl_projectionmatrix.html
2013-05-19T09:54:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.897774
1,387
Definition from Wiktionary, the free dictionary

Old English

From Proto-Germanic *hwalaz (compare Old Saxon hwal, Old High German wal, Old Norse hvalr), probably from Proto-Indo-European *(s)kʷálos (“sheatfish”). Another theory suggests it is perhaps akin to Finnish kala, from Proto-Uralic *kala.

- a whale
- English: whale
<urn:uuid:ef46f5a4-548d-4051-a7e9-0e9b2975c507>
CC-MAIN-2013-20
http://en.wiktionary.org/wiki/hw%C3%A6l
2013-05-19T10:53:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.810144
101
The first pulsar was discovered in 1967, when British astrophysicist Jocelyn Bell Burnell noticed a strange radio signal flashing about once every 1.33 seconds from a distant region of outer space. Burnell and her colleagues were mystified at first, briefly considering the possibility that this was a transmission from an alien civilization – but they soon realized that they had actually found a swiftly rotating neutron star, which they named CP-1919. The pulsar became part of music history 12 years later, when post-punk pioneers Joy Division used a graph of CP-1919’s signal for the cover of their debut album, Unknown Pleasures. Read our original review of the landmark album at RollingStone.com.
<urn:uuid:0aad7908-d58f-4010-9ed5-b9998ab52dbd>
CC-MAIN-2013-20
http://adantwetzel.tumblr.com/
2013-05-20T02:31:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957444
154
Action against cartels is a specific type of antitrust enforcement. A cartel is a group of similar, independent companies which join together to fix prices, to limit production or to share markets or customers between them. Instead of competing with each other, cartel members rely on each other's agreed course of action, which reduces their incentives to provide new or better products and services at competitive prices. As a consequence, their clients (consumers or other businesses) end up paying more for less quality. This is why cartels are illegal under EU competition law and why the European Commission imposes heavy fines on companies involved in a cartel. Since cartels are illegal, they are generally highly secretive and evidence of their existence is not easy to find. The 'leniency policy' encourages companies to hand over inside evidence of cartels to the European Commission. The first company in any cartel to do so will not have to pay a fine. This results in the cartel being destabilised. In recent years, most cartels have been detected by the European Commission after one cartel member confessed and asked for leniency, though the European Commission also successfully continues to carry out its own investigations to detect cartels. Since 2008, companies found by the Commission to have participated in a cartel can settle their case by acknowledging their involvement in the cartel and getting a smaller fine in return.
<urn:uuid:996a459e-b5c3-43d9-8e94-7a6767a93763>
CC-MAIN-2013-20
http://ec.europa.eu/competition/cartels/overview/index_en.html
2013-05-24T01:54:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964837
265
What Rh factor testing screens for
The Rh factor is a protein carried by red blood cells in some people, and not in others. If you have the protein, you are Rh positive. If not, you are Rh negative. (And you are special: Only about 15 percent of the population is Rh negative.) In blood typing, everyone is either type A, B, or O; the plus or minus sign after the letter refers to the Rh factor. Both Rh negative and Rh positive are entirely normal, healthy blood characteristics. Problems can arise, however, if an Rh-negative mom-to-be carries an Rh-positive baby. The mother's body may mistake the baby's blood cells as intruders and start making antibodies to attack them. Left unchecked, this condition (known as fetal Rh disease) can threaten the health of the baby. This almost never happens in a first pregnancy (since the baby's blood is unlikely to enter the mom's bloodstream until delivery). However, if untreated in the first pregnancy, it can threaten subsequent pregnancies. So as a preventive measure, all Rh-negative women are given injections of a substance called RhoGAM (Rh-d immune globulin, which prevents the antibodies from forming) at various times during each pregnancy, starting with the first. These injections save the lives of an estimated 10,000 babies per year in the United States alone.

Who Rh factor testing is for
All pregnant women will have their Rh factor determined. Rh-negative women will have follow-up testing and treatment.

How Rh factor testing is done
Blood samples are taken from a vein in your arm. If you are Rh-negative, a RhoGAM injection goes into your muscle tissue in your arm or your backside. You might be given a choice; or your practitioner might favor one spot or the other. The injection is somewhat painful and the soreness can last for a couple of days. Ask your practitioner about taking a pain reliever to alleviate the discomfort.

When Rh factor testing is done
Rh testing is usually done during a woman's first blood test during pregnancy. RhoGAM injections for Rh-negative women are given at 28 or 29 weeks and again within 72 hours of delivery. The RhoGAM injection is also administered after any genetic testing that could result in mixing of maternal and fetal blood, such as CVS (chorionic villus sampling) or amniocentesis. Spotting, miscarriage, and abortion are the other situations where fetal blood can get into a pregnant woman's bloodstream, so RhoGAM is given to those who are Rh-negative after these events as well.

Risks: There is little or no risk associated with blood tests. Note: If you are Rh-negative, the risk does go up with every subsequent pregnancy (as your body builds more and more antibodies). Fortunately, thanks to the widespread use of this screening test and safe, effective treatment, fetal Rh disease is now very rare.
<urn:uuid:33304304-9c88-4587-bacd-02d330f9de2a>
CC-MAIN-2013-20
http://www.whattoexpect.com/pregnancy/pregnancy-health/prenatal-testing/rh-factor.aspx
2013-05-22T07:31:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943991
604
On 1 May 1999, I was birding a point along the north shore of the lake when I encountered a small flock of Yellow-rumped and Orange-crowned Warblers. At approximately 8:06 a.m. I was working my way through the flock when I heard a different chip note. The bird was feeding in a nearby tree, and I immediately recognized it as a male Black-throated Gray Warbler. I was able to study the bird at close range until 8:16 a.m. The bird was roughly the size of an Orange-crowned Warbler and was noticeably smaller and shorter-tailed than a Yellow-rumped Warbler. The head pattern was striking: solid black except for a yellow loral spot, a white eyebrow, and a broad white whisker mark. The throat and upper breast were also black. The remainder of the underparts was white except for some darker streaking along the flanks. The mantle was gray and was slightly paler than the head. The wings were also gray with two narrow white wingbars. The tail was dark gray above and showed a lot of white when viewed from below. The warblerlike bill was short, thin, and dark-colored. The legs were also dark-colored. On the basis of the solid black throat, I concluded it was an adult male.
<urn:uuid:45ebc519-de3c-4936-b7c2-428583be87f6>
CC-MAIN-2013-20
http://digitalcommons.unl.edu/nebbirdrev/58/
2013-06-19T18:54:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989008
282
Chewa, also known as Nyanja, is a language of the Bantu language family. The gender prefix chi- is used for languages, so the language is also known as Chichewa and Chinyanja (spelled Cinyanja in Zambia), and locally Nyasa in Mozambique. Chewa is the national language of Malawi. It is also one of the seven official African languages of Zambia, where it is spoken mostly in the Eastern Province. It is also spoken in Mozambique, especially in the provinces of Tete and Niassa, as well as in Zimbabwe where, according to some estimates, it ranks as the third-most widely used local language, after Shona and Northern Ndebele. It was one of the 55 languages featured on the Voyager spacecraft.

An urban variety of Nyanja, sometimes called Town Nyanja, is the lingua franca of the Zambian capital Lusaka and is widely spoken as a second language throughout Zambia. This is a distinctive Nyanja dialect with some features of Nsenga, although the language also incorporates large numbers of English-derived words, as well as showing influence from other Zambian languages such as Bemba. Town Nyanja has no official status, and the presence of large numbers of loanwords and colloquial expressions has given rise to the misconception that it is an unstructured mixture of languages or a form of slang. The fact that the standard Nyanja used in schools differs dramatically from the variety actually spoken in Lusaka has been identified as a barrier to the acquisition of literacy among Zambian children. iSchool.zm, which develops online educational content in Zambian languages, has begun making 'Lusaka Nyanja' available as a separate language of instruction after finding that schoolchildren in Lusaka do not understand standard Nyanja.

Chinyanja has its origin in the Eastern Province of Zambia, from the 15th century to the 18th century. The language remained dominant despite the breakup of the empire and the Nguni invasions, and was adopted by Christian missionaries at the beginning of the colonial period. In Zambia, Chewa is spoken by other peoples such as the Ngoni and the Kunda, so a more neutral name, Chinyanja, "(language) of the lake" (referring to Lake Malawi), is used instead of Chewa.

The first grammar, A grammar of the Chinyanja language as spoken at Lake Nyasa with Chinyanja–English and English–Chinyanja vocabulary, was written by Alexander Riddel in 1880, and partial translations of the Bible were made at the end of the 19th century. Further early grammars and vocabularies include A vocabulary of English–Chinyanja and Chinyanja–English: as spoken at Likoma, Lake Nyasa and A grammar of Chinyanja, a language spoken in British Central Africa, on and near the shores of Lake Nyasa, by George Henry (1891). The whole Bible was translated by William Percival Johnson and published as Buku Lopatulika ndilo Mau a Mulungu in 1912.

A strong historical link of the Nyanja, Bemba and Yao peoples to the Shona Empire, whose earlier origins can be traced to Mashonaland, remains linguistically evident today. The ancient Shonas, who temporarily dwelt in Malambo, a place in the DRC, eventually shifted into northern Zambia, and then south and east into the highlands of Malawi.

Example phrases:

English                        | Chewa                 | Town Nyanja (Lusaka)
How are you?                   |                       | Nili bwino / Nili mushe
What's your name?              | Dzina lanu ndani?     | Zina yanu ndimwe bandani?
My name is...                  | Dzina langa ndine...  | Zina yanga ndine...
How many children do you have? | Muli ndi ana angati?  | Muli na bana bangati?
I have two children            | Ndili ndi ana awiri   | Nili na bana babili
How much is it?                |                       |
See you tomorrow               |                       |

Notes
1. Nationalencyklopedin, "Världens 100 största språk 2007" (The World's 100 Largest Languages in 2007).
2. Jouni Filip Maho (2009). New Updated Guthrie List Online.
3. cf. Kiswahili for the Swahili language.
4. Williams, E. (1998). Investigating bilingual literacy: Evidence from Malawi and Zambia (Education Research Paper No. 24). Department for International Development.
5. Woodward, M. E. (1895).
6. Henry, George (1891).
7. James Tengatenga (2010). The UMCA in Malawi, p. 126: "Two important pieces of work have been accomplished during these later years. First, the completion by Archdeacon Johnson of the Bible in Chinyanja, and secondly, the completed Chinyanja prayer book in 1908."

References
- Paas, Steven (2012). Dictionary / Mtanthauziramawu. English – Chichewa / Chinyanja // Chichewa / Chinyanja – English. 3rd edition. VTR Publications. ISBN 978-3-941750-87-6.
- Mchombo, Sam (2004). The Syntax of Chichewa. Cambridge Syntax Guides.
- Hetherwick, Alexander (1907). A Practical Manual of the Nyanja Language. Society for Promoting Christian Knowledge.
- Gray, Andrew; Lubasi, Brighton; Bwalya, Phallen (2013). Town Nyanja: a learner's guide to Zambia's emerging national language.
- Henry, George (1904). A grammar of Chinyanja, a language spoken in British Central Africa, on and near the shores of Lake Nyasa.
- Laws, Robert (1894). An English–Nyanja dictionary of the Nyanja language spoken in British Central Africa. J. Thin.
- Rebman, John; Church Missionary Society (1877). Dictionary of the Kiniassa language. Gregg.
- Riddel, Alexander (1880). A Grammar of the Chinyanja Language as Spoken at Lake Nyassa: With Chinyanja–English and English–Chinyanja Vocabularies. J. Maclaren & Son.
- Woodward, M. E. (1895). A vocabulary of English–Chinyanja and Chinyanja–English as spoken at Likoma, Lake Nyasa. Society for Promoting Christian Knowledge.
- Missionários da Companhia de Jesus (1963). Dicionário Cinyanja–Português. Junta de Investigaçôes do Ultramar.
<urn:uuid:e0116e46-4673-4b97-ba79-30ec1f68ac1c>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Nyanja
2013-06-19T14:27:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.870753
1,506
Boeing 247 takes flight from Boeing Field, opening new vistas in commercial air travel, on February 8, 1933. HistoryLink.org Essay 2018

At noon on February 8, 1933, the Boeing 247 takes flight from Boeing Field, ushering in a new era of air travel. The twin-engine, ten-passenger monoplane blazes new trails in aviation, but is quickly overtaken by the competing Douglas Aircraft's DC-2, and never takes off commercially the way Boeing hopes it will.

At the time, the 247 was the fastest transport plane around, with a top speed well over 200 miles per hour. Variable-pitch propellers gave the plane superb takeoff abilities and an economical cruising speed. Its cantilevered wings housed landing lights and de-icers, and the landing gear was retractable, lessening the drag coefficient in flight. Appearing at the 1933 Chicago World's Fair, the plane proved a hit with visitors. In 1934, it won the Collier Trophy, which is given for great achievements in the field of flight. Regardless, the plane was practically obsolete as it entered service, and Boeing ended up selling only 75 of the model to airline companies.

Boeing pinned its hopes on United Airlines, which had ordered 60 of the planes. TWA and American Airlines showed early interest in the 247 but were put off by the fact that United had "first call" on the orders. Not one to be spurned, TWA's Jack Frye approached a small aircraft manufacturer in California to determine whether he could build a better plane than the 247. That manufacturer was Donald Douglas. His prototype, the DC-1, led into production of the DC-2. The DC-2 proved to be a far better aircraft, and within a year, United was unloading its 247s in order to buy new DC-2s. Subsequent development of the DC-3 provided Boeing with even more competition for many years to come. In Boeing's haste to open its own vistas for commercial air transportation, it inadvertently opened the door for one of its fiercest competitors. This competition went on for decades until, in 1997, Boeing and McDonnell Douglas merged.

Boeing ended up selling only 15 more 247s beyond its initial order to United. Two of these planes were sold to Lufthansa Airlines of Germany in 1934. Later, during World War II, it was discovered that the planes had been appropriated by the German military. The British captured a Heinkel 111 bomber, and when Boeing engineers examined it in Seattle, they found design elements that were lifted directly from the 247.

Sources: F. Robert van der Linden, The Boeing 247: The First Modern Airliner (Seattle: University of Washington Press, 1991); Harold Mansfield, Vision: The Story of Boeing (New York: David MacKay Co., 1966), 46-48; Robert J. Serling, Legend & Legacy (New York: St. Martin's Press, 1992), 50-61.
<urn:uuid:8c66d5dc-270f-48d9-9625-cba6a196777c>
CC-MAIN-2013-20
http://www.historylink.org/index.cfm?DisplayPage=output.cfm&file_id=2018
2013-05-22T07:13:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.9191
855
Discussion of all aspects of biological molecules, biochemical processes and laboratory procedures in the field.

What do chaperonins have to do with health, like the mad cow disease proteins called prions? I seriously have no idea. I know chaperonins allow polypeptides to fold without outside disturbances, but that's as far as I can get.

I'm not certain about this, but I think it could be that mutated/malfunctioning chaperonins may in certain cases misfold proteins and turn them into prions. Prions, then, seem to have some kind of chaperone-like properties that allow them to misfold other proteins of their kind, causing the plaques typical of diseases like mad cow. Alternatively, all the prionic proteins might be created by dysfunctional chaperones.

Do they really fold without outside disturbances? I would consider the chaperonins to be an outside disturbance. I thought chaperonins help the protein to fold in a way that is conducive to its ultimate function. That yes, it does keep out other influences, say water molecules and other polar substances that could influence the folding of the peptide, as it helps it fold into a fully functional protein. So if the chaperonins are mutated, then the folding of the protein would be hindered if it was in a site that was conducive to the folding pattern. Then disease would be the outcome of that mutation.

A mutated chaperonin would indeed result in increased protein misfolding. However, I don't think it would generate the PrPSc version of the prion protein. As far as I know, the only thing that can cause the prion protein (which is present in every individual's brain) to fold into its toxic state is another prion protein. That's why people don't just get prion diseases; you need to get them from coming into contact with the infectious prion protein. "As a biologist, I firmly believe that when you're dead, you're dead. Except for what you leave behind in history. That's the only afterlife" - J. Craig Venter

Basic rule for neuroprotein health - don't eat human brains, don't pick your nose when preparing human brains for others' consumption, don't wallow in human brains - in fact, leave human brains in their skulls (unless you are a neurosurgeon and must expose yourself to human brain tissue, a necessary job hazard; and they wear gloves and masks; watch that spatter from the drill!).

That's definitely not only about human brains. In the UK there was a case where a butcher used one knife for killing whole cows, so people got some of the prions from the meat without ever eating the brains. Cis or trans? That's what matters.
<urn:uuid:0ed3f2de-fe85-4229-b061-0dbf4b09e5ec>
CC-MAIN-2013-20
http://www.biology-online.org/biology-forum/about16550.html?hilit=Chaperonins
2013-05-25T06:06:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96376
612
Mapping Characters Across Book Series
Grades: 3 – 5 | Lesson Plan Type: Standard Lesson | Estimated Time: Three 50-minute sessions plus reading time

Related resources:

Grades 3 – 12 | Student Interactive | Organizing & Summarizing: The Graphic Map assists teachers and students in reading and writing activities by charting the high and low points related to a particular item or group of items, such as events during a day or chapters in a book.

Grades 1 – 6 | Calendar Activity | April 21: Students write their own "Junie B." stories, based on the Junie B. Jones series, after brainstorming issues they've experienced during the school year.

Grades 3 – 8 | Calendar Activity | October 4: Students select several books from one of Stratemeyer's series to read, discuss shared elements in the books, and use the 3-Circle Venn Diagram to compare story elements.

Grades K – 12 | Calendar Activity | March 11: In celebration of Keats' birthday, students write stories that include some characters from Keats' books and practice using collage techniques with the Collage Machine.

Grades 3 – 6 | Calendar Activity | February 12: Students work with a partner or individually to create cartoons of their favorite scenes from Tales of a Fourth Grade Nothing.

Grades 3 – 8 | Printout | Graphic Organizer: This concept map can be used in a variety of ways to show relationships between words and phrases. Students can add arrows as needed and group certain ideas together.
<urn:uuid:4c1127bb-728d-4ae4-8ef9-ca86c3011537>
CC-MAIN-2013-20
http://www.readwritethink.org/classroom-resources/lesson-plans/mapping-characters-across-book-409.html?tab=5
2013-05-22T14:25:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898103
449
Aerobic Exercise, Running

Aerobic exercise improves oxygen consumption by the body. Benefits include strengthening the heart's pumping efficiency, improving circulation, lowering blood pressure, and stimulating the production of more red blood cells for oxygen transport. Aerobic exercise conditions the heart and lungs by increasing the oxygen available to the body and by enabling the heart to use oxygen more efficiently. Exercise alone cannot prevent or cure heart disease; it is only one factor in a total program of risk reduction. Other risk factors include high blood pressure, cigarette smoking, and a high cholesterol level.
<urn:uuid:135b875b-faa9-483a-b350-968023cd7dca>
CC-MAIN-2013-20
http://www.thevisualmd.com/visualizations/result/aerobic_exercise_running
2013-06-20T02:21:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.886404
112
Dentists could soon hang up their drills. A new peptide, embedded in a soft gel or a thin, flexible film and placed next to a cavity, encourages cells inside teeth to regenerate in about a month, according to a new study in the journal ACS Nano. This technology is the first of its kind. The new gel or thin film could eliminate the need to fill painful cavities or drill deep into the root canal of an infected tooth. The new research could make a trip to the dentist's office more pleasant, said Benkirane-Jessel. Instead of a drill, a quick dab of gel or a thin film against an infected tooth could heal teeth from within. Cavities are bacteria- and pus-filled holes on or in teeth, which can lead to discomfort, pain and even tooth loss. When people eat acidic foods, consume sugary snacks or simply don't maintain proper oral hygiene, bacteria begin to eat away at the protective enamel and other minerals inside teeth.
<urn:uuid:bc7a7cb4-868f-4bf3-9a5a-e8018a594f45>
CC-MAIN-2013-20
http://muzich.blogspot.com/2010/07/no-more-fillings-gel-regenerates-teeth.html
2013-05-21T00:55:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699632815/warc/CC-MAIN-20130516102032-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926624
202
It's almost unimaginable: a tsunami more than 1,000 feet (300 meters) high bearing down on the island of Hawaii. But scientists have new evidence of these monster waves, called megatsunamis, doing just that. The findings were presented Wednesday (Dec. 5) at the annual meeting of the American Geophysical Union. Unlike tsunamis from earthquakes, the Hawaiian tsunamis strike when the island chain's massive volcanoes collapse in humongous landslides. This happens about every 100,000 years, and is linked to climate change, said Gary McMurtry, a professor at the University of Hawaii in Honolulu. Sitting about 30 feet (10 m) away from today's Ka Le (South Point) seashore are boulders the size of cars. Some 250,000 years ago, a tsunami tossed the enormous rocks 820 feet (250 m) up the island's slopes, said Fernando Marques, a professor at the University of Lisbon in Portugal. (The boulders are closer to the shore now because the main island of Hawaii is one of the world's largest volcanoes, and its massive weight sends it sinking into the Earth at a rate of about 1 millimeter a year.) McMurtry's team found two younger and slightly smaller tsunami deposits at South Point on the main island of Hawaii, one 50,000 years old and one 13,000 years old. He suggests the source of these tsunamis was the two Ka Le submarine landslides from the flanks of the nearby Mauna Loa volcano. The waves carried corals and 3-foot (1 m) boulders 500 feet (150 m). Deadly, landslide-triggered tsunamis happen at volcanic islands around the world, and are a potential hazard for the Eastern United States. "We find them everywhere, but we don't know of any historical cases, so we have to go back in time," said Anthony Hildenbrand, a volcanologist at the University of Paris-Sud in France, who helped identify the ancient tsunami deposit. The falling rock acts like a paddle, giving the water a sudden push. While landslide tsunamis may have a devastating local effect, they lose their power in the open ocean and don't destroy distant coastlines like earthquake tsunamis. The giant landslides seem to happen during periods of rising sea levels, when the climate is also warmer and wetter, Hildenbrand told OurAmazingPlanet. Researchers speculate that the change from lower sea level to higher may destabilize a volcanic island's flanks, and heavier rains could soak its steep slopes, helping trigger landslides. There are at least 15 giant landslides that have slid off the Hawaiian Islands in the past 4 million years, with the most recent happening only 100,000 years ago, according to the U.S. Geological Survey. One block of rock that slid off Oahu is the size of Manhattan.
<urn:uuid:b47fefb6-8205-43a1-8658-c88e8d44c407>
CC-MAIN-2013-20
http://news.discovery.com/earth/oceans/landslide-driven-megatsunamis-threaten-hawaii-121207.htm
2013-06-18T04:41:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.905868
669
General Information About Myelodysplastic Syndromes

Key Points for This Section
- A myelodysplastic syndrome is a disease in which the bone marrow does not make enough healthy blood cells.
- Age and past treatment with chemotherapy or radiation therapy affect the risk of a myelodysplastic syndrome.
- Possible signs of a myelodysplastic syndrome include feeling tired and shortness of breath.
- Tests that examine the blood and bone marrow are used to detect (find) and diagnose myelodysplastic syndromes.
- The different types of myelodysplastic syndromes are diagnosed based on certain changes in the blood cells and bone marrow.
- Certain factors affect prognosis and treatment options.

The myelodysplastic syndromes are a group of diseases of the blood and bone marrow in which the bone marrow does not make enough healthy blood cells. In a healthy person, the bone marrow makes blood stem cells (immature cells) that become mature blood cells over time. A blood stem cell may become a myeloid stem cell or a lymphoid stem cell.

A myeloid stem cell becomes one of three types of mature blood cells:
- Red blood cells that carry oxygen and other substances to all tissues of the body.
- Platelets that form blood clots to stop bleeding.
- White blood cells that fight infection and disease.

A lymphoid stem cell becomes a white blood cell.

In a patient with a myelodysplastic syndrome, the blood stem cells (immature cells) do not become healthy red blood cells, white blood cells, or platelets. The immature blood cells, called blasts, do not work the way they should and either die in the bone marrow or soon after they go into the blood. This leaves less room for healthy white blood cells, red blood cells, and platelets to form in the bone marrow. When there are fewer healthy blood cells, infection, anemia, or easy bleeding may occur.

Anything that increases your risk of getting a disease is called a risk factor. Having a risk factor does not mean that you will get a disease; not having risk factors doesn't mean that you will not get a disease. Talk with your doctor if you think you may be at risk. Risk factors for myelodysplastic syndromes include the following:
- Past treatment with chemotherapy or radiation therapy for cancer.
- Being exposed to certain chemicals, including tobacco smoke, pesticides, fertilizers, and solvents such as benzene.
- Being exposed to heavy metals, such as mercury or lead.

The cause of myelodysplastic syndromes in most patients is not known.

Myelodysplastic syndromes often do not cause early symptoms and are sometimes found during a routine blood test. Other conditions may cause the same symptoms. Check with your doctor if you have any of the following problems:
- Shortness of breath.
- Weakness or feeling tired.
- Having skin that is paler than usual.
- Easy bruising or bleeding.
- Petechiae (flat, pinpoint spots under the skin caused by bleeding).

The following tests and procedures may be used:
- Physical exam and history: An exam of the body to check general signs of health, including checking for signs of disease, such as lumps or anything else that seems unusual. A history of the patient's health habits and past illnesses and treatments will also be taken.
- Complete blood count (CBC) with differential: A procedure in which a sample of blood is drawn and checked for the following:
  - The number of red blood cells and platelets.
  - The number and type of white blood cells.
  - The amount of hemoglobin (the protein that carries oxygen) in the red blood cells.
  - The portion of the blood sample made up of red blood cells.
- Peripheral blood smear: A procedure in which a sample of blood is checked for changes in the number, type, shape, and size of blood cells and for too much iron in the red blood cells.
- Cytogenetic analysis: A test in which cells in a sample of blood or bone marrow are viewed under a microscope to look for certain changes in the chromosomes.
- Bone marrow aspiration and biopsy: The removal of bone marrow, blood, and a small piece of bone by inserting a hollow needle into the hipbone or breastbone. A pathologist views the bone marrow, blood, and bone under a microscope to look for abnormal cells.

The different types of myelodysplastic syndromes are diagnosed based on certain changes in the blood cells and bone marrow:
- Refractory anemia: There are too few red blood cells in the blood and the patient has anemia. The number of white blood cells and platelets is normal.
- Refractory anemia with ring sideroblasts: There are too few red blood cells in the blood and the patient has anemia. The red blood cells have too much iron inside the cell. The number of white blood cells and platelets is normal.
- Refractory anemia with excess blasts: There are too few red blood cells in the blood and the patient has anemia. Five percent to 19% of the cells in the bone marrow are blasts. There also may be changes to the white blood cells and platelets. Refractory anemia with excess blasts may progress to acute myeloid leukemia (AML). See the PDQ Adult Acute Myeloid Leukemia Treatment summary for more information.
- Refractory cytopenia with multilineage dysplasia: There are too few of at least two types of blood cells (red blood cells, platelets, or white blood cells). Less than 5% of the cells in the bone marrow are blasts and less than 1% of the cells in the blood are blasts. If red blood cells are affected, they may have extra iron. Refractory cytopenia may progress to acute myeloid leukemia (AML).
- Refractory cytopenia with unilineage dysplasia: There are too few of one type of blood cell (red blood cells, platelets, or white blood cells). There are changes in 10% or more of two other types of blood cells. Less than 5% of the cells in the bone marrow are blasts and less than 1% of the cells in the blood are blasts.
- Unclassifiable myelodysplastic syndrome: The numbers of blasts in the bone marrow and blood are normal, and the disease is not one of the other myelodysplastic syndromes.
- Myelodysplastic syndrome associated with an isolated del(5q) chromosome abnormality: There are too few red blood cells in the blood and the patient has anemia. Less than 5% of the cells in the bone marrow and blood are blasts. There is a specific change in the chromosome.
- Chronic myelomonocytic leukemia (CMML): See the PDQ summary on Myelodysplastic/Myeloproliferative Neoplasms Treatment for more information.

See the PDQ summary on Chronic Myeloproliferative Disorders Treatment for information about other blood cell diseases.

The prognosis (chance of recovery) and treatment options depend on the following:
- The number of blast cells in the bone marrow.
- Whether one or more types of blood cells are affected.
- Whether the patient has symptoms of anemia, bleeding, or infection.
- Whether the patient has a low or high risk of leukemia.
- Certain changes in the chromosomes.
- Whether the myelodysplastic syndrome occurred after chemotherapy or radiation therapy for cancer.
- The age and general health of the patient.
<urn:uuid:afb153b8-6290-4940-a44f-0c387cc80b67>
CC-MAIN-2013-20
http://www.cancer.gov/cancertopics/pdq/treatment/myelodysplastic/Patient
2013-05-18T17:28:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.897937
1,600
Planning and Building a Greenhouse
Written by University of Maryland

Careful planning is important before a home greenhouse project is started. Building a greenhouse does not need to be expensive or time-consuming. The final choice of the type of greenhouse will depend on the growing space desired, home architecture, available sites, and costs. The greenhouse must, however, provide the proper environment for growing plants.

The greenhouse should be located where it gets maximum sunlight. The first choice of location is the south or southeast side of a building or shade trees. Sunlight all day is best, but morning sunlight on the east side is sufficient for plants. Morning sunlight is most desirable because it allows the plant's food production process to begin early; thus growth is maximized. An east side location captures the most November to February sunlight. The next best sites are southwest and west of major structures, where plants receive sunlight later in the day. North of major structures is the least desirable location and is good only for plants that require little light.

Deciduous trees, such as maple and oak, can effectively shade the greenhouse from the intense late afternoon summer sun; however, they should not shade the greenhouse in the morning. Deciduous trees also allow maximum exposure to the winter sun because they shed their leaves in the fall. Evergreen trees that have foliage year round should not be located where they will shade the greenhouse because they will block the less intense winter sun. You should aim to maximize winter sun exposure, particularly if the greenhouse is used all year. Remember that the sun is lower in the southern sky in winter, causing long shadows to be cast by buildings and evergreen trees (Figure 1).

Good drainage is another requirement for the site. When necessary, build the greenhouse above the surrounding ground so rainwater and irrigation water will drain away. Other site considerations include the light requirements of the plants to be grown; locations of sources of heat, water, and electricity; and shelter from winter wind. Access to the greenhouse should be convenient for both people and utilities. A workplace for potting plants and a storage area for supplies should be nearby.

A home greenhouse can be attached to a house or garage, or it can be a freestanding structure. The chosen site and personal preference can dictate the choices to be considered. An attached greenhouse can be a half greenhouse, a full-size structure, or an extended window structure. There are advantages and disadvantages to each type.

Lean-to. A lean-to greenhouse is a half greenhouse, split along the peak of the roof, or ridge line (Figure 2A). Lean-tos are useful where space is limited to a width of approximately seven to twelve feet, and they are the least expensive structures. The ridge of the lean-to is attached to a building using one side and an existing doorway, if available. Lean-tos are close to available electricity, water, and heat. The disadvantages include some limitations on space, sunlight, ventilation, and temperature control. The height of the supporting wall limits the potential size of the lean-to. The wider the lean-to, the higher the supporting wall must be. Temperature control is more difficult because the wall that the greenhouse is built on may collect the sun's heat while the translucent cover of the greenhouse may lose heat rapidly.
The lean-to should face the best direction for adequate sun exposure. Finally, consider the location of windows and doors on the supporting structure and remember that snow, ice, or heavy rain might slide off the roof or the house onto the structure.

Even-span. An even-span is a full-size structure that has one gable end attached to another building (Figure 2B). It is usually the largest and most costly option, but it provides more usable space and can be lengthened. The even-span has a better shape than a lean-to for air circulation to maintain uniform temperatures during the winter heating season. An even-span can accommodate two to three benches for growing crops.

Window-mounted. A window-mounted greenhouse can be attached on the south or east side of a house. This glass enclosure gives space for conveniently growing a few plants at relatively low cost (Figure 2D). The special window extends outward from the house a foot or so and can contain two or three shelves.

Freestanding greenhouses are separate structures; they can be set apart from other buildings to get more sun and can be made as large or small as desired (Figure 2C). A separate heating system is needed, and electricity and water must be installed. The lowest cost per square foot of growing space is generally available in a freestanding or even-span greenhouse that is 17 to 18 feet wide. It can house a central bench, two side benches, and two walkways. The ratio of cost to the usable growing space is good.

When deciding on the type of structure, be sure to plan for adequate bench space, storage space, and room for future expansion. Large greenhouses are easier to manage because temperatures in small greenhouses fluctuate more rapidly. Small greenhouses have a large exposed area through which heat is lost or gained, and the air volume inside is relatively small; therefore, the air temperature changes quickly in a small greenhouse. Suggested minimum sizes are 6 feet wide by 12 feet long for an even-span or freestanding greenhouse.

A good selection of commercial greenhouse frames and framing materials is available. The frames are made of wood, galvanized steel, or aluminum. Build-it-yourself greenhouse plans are usually for structures with wood or metal pipe frames. Plastic pipe materials generally are inadequate to meet snow and wind load requirements. Frames can be covered with glass, rigid fiberglass, rigid double-wall plastics, or plastic film. All have advantages and disadvantages. Each of these materials should be considered--it pays to shop around for ideas. Greenhouse frames range from simple to complex, depending on the imagination of the designer and engineering requirements. The following are several common frames (Figure 3).

Quonset. The Quonset is a simple and efficient construction with an electrical conduit or galvanized steel pipe frame. The frame is circular and usually covered with plastic sheeting. Quonset sidewall height is low, which restricts storage space and headroom.

Gothic. The gothic frame construction is similar to that of the Quonset but it has a gothic shape (Figure 3). Wooden arches may be used and joined at the ridge. The gothic shape allows more headroom at the sidewall than does the Quonset.

Rigid-frame. The rigid-frame structure has vertical sidewalls and rafters for a clear-span construction. There are no columns or trusses to support the roof. Glued or nailed plywood gussets connect the sidewall supports to the rafters to make one rigid frame.
The conventional gable roof and sidewalls allow maximum interior space and air circulation. A good foundation is required to support the lateral load on the sidewalls.

Post and rafter and A-frame. The post and rafter is a simple construction of an embedded post and rafters, but it requires more wood or metal than some other designs. Strong sidewall posts and deep post embedment are required to withstand outward rafter forces and wind pressures. Like the rigid frame, the post and rafter design allows more space along the sidewalls and efficient air circulation. The A-frame is similar to the post and rafter construction except that a collar beam ties the upper parts of the rafters together.

Greenhouse coverings include long-life glass, fiberglass, rigid double-wall plastics, and film plastics with 1- to 3-year lifespans. The type of frame and cover must be matched correctly.

Glass. Glass is the traditional covering. It has a pleasing appearance, is inexpensive to maintain, and has a high degree of permanency. An aluminum frame with a glass covering provides a maintenance-free, weather-tight structure that minimizes heat costs and retains humidity. Glass is available in many forms that would be suitable with almost any style or architecture. Tempered glass is frequently used because it is two or three times stronger than regular glass. Small prefabricated glass greenhouses are available for do-it-yourself installation, but most should be built by the manufacturer because they can be difficult to construct. The disadvantages of glass are that it is easily broken, is initially expensive to build, and requires much better frame construction than fiberglass or plastic. A good foundation is required, and the frames must be strong and must fit well together to support heavy, rigid glass.

Fiberglass. Fiberglass is lightweight, strong, and practically hailproof. A good grade of fiberglass should be used because poor grades discolor and reduce light penetration. Use only clear, transparent, or translucent grades for greenhouse construction. Tedlar-coated fiberglass lasts 15 to 20 years. The resin covering the glass fibers will eventually wear off, allowing dirt to be retained by exposed fibers. A new coat of resin is needed after 10 to 15 years. Light penetration is initially as good as glass but can drop off considerably over time with poor grades of fiberglass.

Double-wall plastic. Rigid double-layer plastic sheets of acrylic or polycarbonate are available to give long-life, heat-saving covers. These covers have two layers of rigid plastic separated by webs. The double-layer material retains more heat, so energy savings of 30 percent are common. The acrylic is a long-life, nonyellowing material; the polycarbonate normally yellows faster, but usually is protected by a UV-inhibitor coating on the exposed surface. Both materials carry warranties for 10 years on their light transmission qualities. Both can be used on curved surfaces; the polycarbonate material can be curved the most. As a general rule, each layer reduces light by about 10 percent. About 80 percent of the light filters through double-layer plastic, compared with 90 percent for glass.

Film plastic. Film-plastic coverings are available in several grades of quality and several different materials. Generally, these are replaced more frequently than other covers. Structural costs are very low because the frame can be lighter and plastic film is inexpensive. Light transmission of these film-plastic coverings is comparable to glass.
The films are made of polyethylene (PE), polyvinyl chloride (PVC), copolymers, and other materials. A utility grade of PE that will last about a year is available at local hardware stores. Commercial greenhouse grade PE has ultraviolet inhibitors in it to protect against ultraviolet rays; it lasts 12 to 18 months. Copolymers last 2 to 3 years. New additives have allowed the manufacture of film plastics that block and reflect radiated heat back into the greenhouse, as glass does, which helps reduce heating costs. PVC or vinyl film costs two to five times as much as PE but lasts as long as five years. However, it is available only in sheets four to six feet wide. It attracts dust from the air, so it must be washed occasionally.

Permanent foundations should be provided for glass, fiberglass, or the double-layer rigid-plastic sheet materials. The manufacturer should provide plans for the foundation construction. Most home greenhouses require a poured concrete foundation similar to those in residential houses. Quonset greenhouses with pipe frames and a plastic cover use posts driven into the ground. Permanent flooring is not recommended because it may stay wet and slippery from soil mix media. A concrete, gravel, or stone walkway 24 to 36 inches wide can be built for easy access to the plants. The rest of the floor should be covered by several inches of gravel for drainage of excess water. Water also can be sprayed on the gravel to produce humidity in the greenhouse.

Greenhouses provide a shelter in which a suitable environment is maintained for plants. Solar energy from the sun provides sunlight and some heat, but you must provide a system to regulate the environment in your greenhouse. This is done by using heaters, fans, thermostats, and other equipment. The heating requirements of a greenhouse depend on the desired temperature for the plants grown, the location and construction of the greenhouse, and the total outside exposed area of the structure. As much as 25 percent of the daily heat requirement may come from the sun, but a lightly insulated greenhouse structure will need a great deal of heat on a cold winter night. The heating system must be adequate to maintain the desired day or night temperature. Usually the home heating system is not adequate to heat an adjacent greenhouse. A 220-volt circuit electric heater, however, is clean, efficient, and works well. Small gas or oil heaters designed to be installed through a masonry wall also work well.

Solar-heated greenhouses were popular briefly during the energy crisis, but they did not prove to be economical to use. Separate solar collection and storage systems are large and require much space. However, greenhouse owners can experiment with heat-collecting methods to reduce fossil-fuel consumption. One method is to paint containers black to attract heat, and fill them with water to retain it. However, because the greenhouse air temperature must be kept at plant-growing temperatures, the greenhouse itself is not a good solar-heat collector.

Heating systems can be fueled by electricity, gas, oil, or wood. The heat can be distributed by forced hot air, radiant heat, hot water, or steam. The choice of a heating system and fuel depends on what is locally available, the production requirements of the plants, cost, and individual choice. For safety purposes, and to prevent harmful gases from contacting plants, all gas, oil, and woodburning systems must be properly vented to the outside.
Use fresh-air vents to supply oxygen for burners for complete combustion. Safety controls, such as safety pilots and a gas shutoff switch, should be used as required. Portable kerosene heaters used in homes are risky because some plants are sensitive to gases formed when the fuel is burned.

Calculating heating system capacity. Heating systems are rated in British thermal units (Btu) per hour (h). The Btu capacity of the heating system, Q, can be estimated easily using three factors: the total exposed surface area of the structure, A (in square feet); the heat-loss factor of the covering, U (in Btu/h per square foot per degree F); and the difference between the desired inside temperature and the coldest expected outside temperature, (ti - to):

Q = U × A × (ti - to)

Even for a relatively small greenhouse, the required furnace output is equivalent to that in a small residence such as a townhouse (a worked sketch follows below). The actual furnace rated capacity takes into account the efficiency of the furnace and is called the furnace input fuel rating. This discussion is a bit technical, but these factors must be considered when choosing a greenhouse. Note the effect of each value on the outcome. When different materials are used in the construction of the walls or roof, heat loss must be calculated for each. For electrical heating, convert Btu/h to kilowatts by dividing Btu/h by 3,413.

If a wood, gas, or oil burner is located in the greenhouse, a fresh-air inlet is recommended to maintain an oxygen supply to the burner. Place a piece of plastic pipe through the outside cover to ensure that oxygen gets to the burner combustion air intake. The inlet pipe should be the diameter of the flue pipe. This ensures adequate air for combustion in an airtight greenhouse. Unvented heaters (no chimney) using propane gas or kerosene are not recommended.

Installing circulating fans in your greenhouse is a good investment. During the winter when the greenhouse is heated, you need to maintain air circulation so that temperatures remain uniform throughout the greenhouse. Without air-mixing fans, the warm air rises to the top and cool air settles around the plants on the floor. Small fans with a cubic-foot-per-minute (ft3/min) air-moving capacity equal to one quarter of the air volume of the greenhouse are sufficient. For small greenhouses (less than 60 feet long), place the fans in diagonally opposite corners but out from the ends and sides. The goal is to develop a circular (oval) pattern of air movement. Operate the fans continuously during the winter. Turn these fans off during the summer when the greenhouse will need to be ventilated.
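The following short Python sketch (not part of the original fact sheet; the surface area, U-value, and temperatures are illustrative assumptions, not published figures) applies the Q = U × A × (ti - to) estimate and the Btu/h-to-kilowatt conversion given above.

```python
def heater_capacity_btu(area_ft2, u_factor, t_inside_f, t_outside_f):
    """Estimate heating capacity Q (Btu/h) as Q = U * A * (ti - to)."""
    return u_factor * area_ft2 * (t_inside_f - t_outside_f)

# Assumed inputs: ~900 ft2 of exposed single-glazing surface for a small
# even-span house, U ~ 1.2 Btu/h per ft2 per degree F for single glass,
# 60 F inside against a 0 F design night. Illustrative numbers only.
q = heater_capacity_btu(area_ft2=900, u_factor=1.2, t_inside_f=60, t_outside_f=0)
print(f"Required output: {q:,.0f} Btu/h ({q / 3413:.1f} kW electric)")
# Required output: 64,800 Btu/h (19.0 kW electric)
```

Lowering U, for example with a double-layer cover, reduces Q in direct proportion, which is the source of the energy savings quoted earlier.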
The capacity of the exhaust fan should be selected at one-eighth inch of static water pressure. The static pressure rating accounts for air resistance through the louvers, fans, and greenhouse and is usually shown in the fan selection chart.

Ventilation requirements vary with the weather and season, so you must decide how much the greenhouse will be used. In summer, 1 to 1.5 air volume changes per minute are needed; small greenhouses need the larger amount. In winter, 20 to 30 percent of one air volume exchange per minute is sufficient for mixing in cool air without chilling the plants. One single-speed fan cannot meet this criterion; two single-speed fans are better. A combination of a single-speed fan and a two-speed fan allows three ventilation rates that best satisfy year-round needs. A single-stage and a two-stage thermostat are needed to control the operation. A two-speed motor on low speed delivers about 70 percent of its full capacity. If the two fans have the same capacity rating, the low-speed fan alone supplies about 35 percent of the combined total. This rate of ventilation is reasonable for the winter. In spring, that fan operates on high speed. In summer, both fans operate on high speed.

Refer to the earlier example of a small greenhouse. A 16-foot-wide by 24-foot-long house would need an estimated 4,608 ft³ per minute (cubic feet per minute; CFM) of total capacity; that is, 16 x 24 x 12 ft³ per minute. For use all year, select two fans rated at about 2,300 ft³ per minute each, one of them a two-speed model whose high speed delivers 2,300 ft³ per minute. With the second fan added, the third ventilation rate is the sum of both fans on high speed, or 4,600 ft³ per minute.

Some glass greenhouses are sold with a manual ridge vent, even when a mechanical system is specified. The manual system can serve as a backup, but it does not take the place of a motorized louver. Do not take shortcuts in developing an automatic control system.

Air movement by ventilation alone may not be adequate in the middle of the summer; the air temperature may need to be lowered with evaporative cooling. Also, the light intensity may be too great for the plants, so evaporative cooling, shade cloth, or shading paint may be necessary during the summer. Shade materials include roll-up screens of wood or aluminum, vinyl netting, and paint.

Small package evaporative coolers have a fan and evaporative pad in one box to evaporate water, which cools the air and increases its humidity. Heat is removed from the air to change water from liquid to vapor. Moist, cooler air enters the greenhouse while heated air passes out through roof vents or exhaust louvers. The evaporative cooler works best when the humidity of the outside air is low, and the system can be used without water evaporation simply to ventilate the greenhouse. Size the evaporative cooler capacity at 1.0 to 1.5 times the volume of the greenhouse. An alternative system, used in commercial greenhouses, places the pads over the air inlets at one end of the greenhouse and uses exhaust fans at the other end to pull the air through the house.

Automatic control is essential to maintain a reasonable environment in the greenhouse. On a winter day with varying amounts of sunlight and clouds, the temperature can fluctuate greatly; close supervision would be required if a manual ventilation system were in use. Therefore, unless close monitoring is possible, both hobbyists and commercial operators should have automated systems with thermostats or other sensors.
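The three-rate staging described above can be written out as a simple control rule. This is only a sketch: the fan rating comes from the 16 x 24 ft example, but the temperature setpoints are invented placeholders, since the fact sheet specifies the staging, not the thresholds.

# Illustrative staging for a single-speed fan plus a two-speed fan of
# equal rating. Setpoints are placeholders, not from the fact sheet.

FAN_RATED_CFM = 2300  # each fan's high-speed capacity, from the example

def ventilation_cfm(air_temp_f, stage1_f=68.0, stage2_f=75.0, stage3_f=82.0):
    """Return delivered airflow for the current greenhouse temperature."""
    if air_temp_f < stage1_f:
        return 0                      # heating season: ventilation fans off
    if air_temp_f < stage2_f:
        return 0.70 * FAN_RATED_CFM   # two-speed fan on low: ~35% of total
    if air_temp_f < stage3_f:
        return FAN_RATED_CFM          # two-speed fan on high (spring)
    return 2 * FAN_RATED_CFM          # both fans on high (summer)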
Thermostats can be used to control individual units, or a central controller with one temperature sensor can be used. In either case, the sensor or sensors should be shaded from the sun, located about plant height away from the sidewalls, and have constant airflow over them. An aspirated box is suggested; the box houses each sensor and has a small fan that moves greenhouse air through the box and over the sensor (Figure 5). The box should be painted white so it reflects solar heat and allows accurate readings of the air temperature.

A water supply is essential. Hand watering is acceptable for most greenhouse crops if someone is available when the task needs to be done; however, many hobbyists work away from home during the day. A variety of automatic watering systems is available to help do the task over short periods of time. Bear in mind that a small greenhouse is likely to hold a variety of plant materials, containers, and soil mixes that need different amounts of water. Time clocks or mechanical evaporation sensors can be used to control automatic watering systems. Mist sprays can be used to create humidity or to moisten seedlings, and watering kits can be obtained to water plants in flats, benches, or pots.

CO2 and Light

Carbon dioxide (CO2) and light are essential for plant growth. As the sun rises in the morning to provide light, the plants begin to produce food energy (photosynthesis). The level of CO2 drops in the greenhouse as it is used by the plants, and ventilation replenishes it. Because CO2 and light complement each other, electric lighting combined with CO2 injection is used to increase yields of vegetable and flowering crops. Bottled CO2, dry ice, and combustion of sulfur-free fuels can be used as CO2 sources; commercial greenhouses use such methods.

Alternative Growing Structures

A greenhouse is not always needed for growing plants. Plants can be germinated in one's home in a warm place under fluorescent lamps. The lamps must be close together and not far above the plants. A cold frame or hotbed can be used outdoors to continue the growth of young seedlings until the weather allows planting in a garden. A hotbed is similar to a cold frame, but it has a source of heat to maintain proper temperatures.

Adapted from Fact Sheet 645 - University of Maryland Cooperative Extension Service, David S. Ross, Extension Agricultural Engineer, Department of Agricultural Engineering
From discarded fishing gear to plastic bags to cigarette butts, a growing tide of marine litter is harming oceans and beaches worldwide, says a new report. The report, the first-ever attempt to take stock of the marine litter situation in the 12 major regional seas around the world, was launched on World Oceans Day by the UN Environment Programme (UNEP).

Achim Steiner, UN Under-Secretary-General and UNEP Executive Director, said: "Marine litter is symptomatic of a wider malaise: namely the wasteful use and persistent poor management of natural resources. The plastic bags, bottles and other debris piling up in the oceans and seas could be dramatically reduced by improved waste reduction, waste management and recycling initiatives."

"Some of the litter, like thin film single use plastic bags which choke marine life, should be banned or phased out rapidly everywhere--there is simply zero justification for manufacturing them anymore, anywhere. Other waste can be cut by boosting public awareness, and proposing an array of economic incentives and smart market mechanisms that tip the balance in favor of recycling, reducing or re-use rather than dumping into the sea," he said.

The report's findings indicate that despite several international, regional and national efforts to reverse marine pollution, alarming quantities of rubbish thrown out to sea continue to endanger people's safety and health, entrap wildlife, damage nautical equipment and deface coastal areas around the world.

Plastics and cigarettes top the "Top Ten" of marine debris

Plastic--especially plastic bags and PET bottles--is the most pervasive type of marine litter around the world, accounting for over 80% of all rubbish collected in several of the regional seas assessed. Plastic debris is accumulating in terrestrial and marine environments worldwide, slowly breaking down into tinier and tinier pieces that can be consumed by the smallest marine life at the base of the food web. Plastics also collect toxic compounds that can then get into the bodies of organisms that eat the plastic.

Smoking-related activities also receive top rankings when it comes to sources of marine litter. Cigarette filters, tobacco packets and cigar tips make up 40% of all marine litter in the Mediterranean, while in Ecuador smoking-related rubbish accounted for over half of the total coastal litter 'catch' in 2005.

"The ocean is our life support system--it provides much of the oxygen we breathe, the food we eat and the climate we need to survive--yet trash continues to threaten its health," said Vikki Spruill, president and CEO of Ocean Conservancy. "The impact of marine debris is clear and dramatic: dead and injured wildlife, littered beaches that discourage tourism and choked ocean ecosystems. Marine debris is one of the most widespread pollution threats facing our ocean, and it is completely preventable."

Land-based activities are the largest source of marine litter. In Australia, surveys near cities indicate that up to 80% of marine litter originates from land-based sources, with sea-based sources in the lead in more remote areas.

The cost of rubbish

Unsightly and unsafe, marine litter can cause serious economic losses through damage to boats and fishing gear and contamination of tourism and aquaculture facilities. For example, the cost of cleaning the beaches in Bohuslän on the west coast of Sweden in just one year was at least 10 million SEK, or $1,550,200.
In the UK, 92% of Shetland fishermen reported recurring problems with debris in their nets, and it has been estimated that each boat could lose between $10,500 and $53,300 per year due to the presence of marine litter. The cost to the local industry could be as high as $4,300,000. The municipality of Ventanillas in Peru has calculated that it would have to invest around US$400,000 a year to clean its coastline, while its annual budget for cleaning all public areas is only half that amount.

At the same time, flexible and economical incentives and deterrents need to be put in place to address the growing problem of marine litter. At the moment, port authorities sometimes unwittingly discourage ships from bringing their galley waste back to shore--as seen in the East Asian Seas region, where ships are charged on a fee-for-service (user-pays) basis. Some vessel operators therefore opt to dispose of their garbage at sea--at no cost. Adopting a 'no special fee' approach to port waste reception facilities, as pioneered in the Baltic Sea region, can substantially decrease the number of operational and illegal discharges and help prevent pollution of the marine environment from ships.

The level of fines for ocean dumping also needs to be reviewed to make them a sufficient deterrent. For example, in the US the cruise ship Regal Princess was fined US$500,000 in 1993 for dumping 20 bags of garbage into the sea. Fines of this level would act as a genuine deterrent to the dumping of marine litter.

Finally, income-generating opportunities linked to collecting and recycling marine litter can make a big difference in some of the world's poorer regions. For instance, in East Africa small-scale projects that create jobs and reduce the levels of marine rubbish need to be further promoted.

SB.com editor Bart King recently wrote about the Great Pacific Garbage Patch.
Basic information structures

Web sites are built around basic structural themes. These fundamental architectures govern the navigational interface of the Web site and mold the user's mental models of how the information is organized. Three essential structures can be used to build a Web site: sequences, hierarchies, and webs.

The simplest way to organize information is to place it in a sequence. Sequential ordering may be chronological, a logical series of topics progressing from the general to the specific, or alphabetical, as in indexes, encyclopedias, and glossaries. Straight sequences are the most appropriate organization for training sites, for example, in which the reader is expected to go through a fixed set of material and the only links are those that support the linear navigation path. More complex Web sites may still be organized as a logical sequence, but each page in the main sequence may have links to one or more pages of digressions, parenthetical information, or information on other Web sites.

Information hierarchies are the best way to organize most complex bodies of information. Because Web sites are usually organized around a single home page, hierarchical schemes are particularly suited to Web site organization. Hierarchical diagrams are very familiar in corporate and institutional life, so most users find this structure easy to understand. A hierarchical organization also imposes a useful discipline on your own analytical approach to your content, because hierarchies are practical only with well-organized material.

Weblike organizational structures pose few restrictions on the pattern of information use. In this structure the goal is often to mimic associative thought and the free flow of ideas, allowing users to follow their interests in a unique, heuristic, idiosyncratic pattern. This organizational pattern develops with dense links both to information elsewhere in the site and to information at other sites. Although the goal of this organization is to exploit the Web's power of linkage and association to the fullest, weblike structures can just as easily propagate confusion. Ironically, associative organizational schemes are often the most impractical structure for Web sites because they are so hard for the user to understand and predict. Webs work best for small sites dominated by lists of links and for sites aimed at highly educated or experienced users looking for further education or enrichment and not for a basic understanding of a topic.
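As a toy illustration, the three structures can be sketched as page-to-link maps; the page names below are invented, and only the shape of each graph matters.

# Invented page names; only the link patterns matter.

sequence = {                       # linear path only
    "intro": ["lesson-1"],
    "lesson-1": ["lesson-2"],
    "lesson-2": ["summary"],
}

hierarchy = {                      # everything hangs off the home page
    "home": ["products", "support", "about"],
    "products": ["widgets", "gadgets"],
    "support": ["faq", "contact"],
}

web = {                            # dense associative links, on- and off-site
    "topic-a": ["topic-b", "topic-c", "http://other-site.example"],
    "topic-b": ["topic-a", "topic-c"],
    "topic-c": ["topic-a", "topic-b", "http://another.example"],
}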
After India’s first lunar mission, Chandrayaan-1, found evidence of water on the moon’s surface, scientists have now discovered more than 40 small craters with water ice on the moon. Chandrayaan-1 carried a NASA radar on board, which has detected deposits of water ice at both poles of the moon. Many scientists believe the discovery is very significant. They say the water ice could serve as a natural resource for future lunar landings: it could be melted into drinking water, and its components could be used to provide breathable air and rocket fuel. Could this breakthrough reveal much more about the moon and the solar system? Space scientist and former Indian Space Research Organisation (ISRO) chief G. Madhavan Nair called this the “finding of the millennium”.
Systems allow changes in mechanical properties: organisms

Systems in nature allow organisms to change shape or their mechanical properties, without changing the properties of the given materials, thanks to articulated struts. "Articulated strut (fig. 21.7). These share the common lattice of compression-resisting elements, but their joints (articulations) permit motion. We use them infrequently, but we do deliberately build joints into many bridges, for example, so the resulting mechanisms can distort safely under changing wind loads, varied 'live' or functional loads, or thermal size changes. Nature often uses the arrangement--major portions of vertebrate skeletons can be best viewed as mechanisms of articulated struts. The hard elements (ossicles) and their connections in echinoderms such as starfish provide another example. Systems built around articulated struts combine nicely with muscles; sometimes, as in insect skeletons, the muscles are on the inside, but the principle is the same. Among the best features of these systems is their ability to alter shape or overall mechanical properties rapidly without having to change the properties of specific materials…But even tensile tissues other than muscle may sometimes change properties fairly quickly in response to some chemical signal. These alterations have been studied most extensively in the so-called catch connective tissue of echinoderms (Motokawa 1984; Wilkie 2002). A starfish undergoes an impressive mechanical transformation as it shifts from being limp enough to crawl with its tube feet on an irregular substratum to being stiff enough so the same tube feet have adequate anchorage when pulling open the shell of a clam." (Vogel 2003:438)

- Steven Vogel. 2003. Comparative Biomechanics: Life's Physical World. Princeton: Princeton University Press. 580 p.
One Year Post Disaster: Managing traumatic stress

As the initial shock subsides, reactions vary from one person to another. The following, however, are normal responses to a traumatic event and may resurface or become more heightened upon anniversaries:

- Feelings become intense and sometimes are unpredictable. You may become more irritable than usual, and your mood may change back and forth dramatically. You might be especially anxious or nervous, or even become depressed.
- Thoughts and behavior patterns are affected by the trauma. You might have repeated and vivid memories of the event. These flashbacks may occur for no apparent reason and may lead to physical reactions such as rapid heartbeat or sweating. You may find it difficult to concentrate or make decisions, or become more easily confused. Sleep and eating patterns also may be disrupted.
- Recurring emotional reactions are common. Anniversaries of the event, such as one year, can trigger upsetting memories of the traumatic experience. These "triggers" may be accompanied by fears that the stressful event will be repeated.
- Interpersonal relationships often become strained. Greater conflict, such as more frequent arguments with family members and coworkers, is common. On the other hand, you might become withdrawn and isolated and avoid your usual activities.
- Physical symptoms may accompany the extreme stress. For example, headaches, nausea and chest pain may result and may require medical attention. Pre-existing medical conditions may worsen due to the stress.

Will time heal?

It is important for you to realize that there is not one "standard" pattern of reaction to the extreme stress of traumatic experiences. Some people respond immediately, while others have delayed reactions -- sometimes months or even years later. Some have adverse effects for a long period of time, while others recover rather quickly. And reactions can change over time. Some who have suffered from trauma are energized initially by the event through efforts to clean up, help others, etc., only to later become discouraged or depressed.

A number of factors tend to affect the length of time required for recovery, including:

- The degree of intensity and loss. Events that last longer and pose a greater threat, and where loss of life or substantial loss of property is involved, often take longer to resolve.
- A person's general ability to cope with emotionally challenging situations. Individuals who have handled other difficult, stressful circumstances well may find it easier to cope with the trauma.
- Other stressful events occurring in conjunction with the traumatic experience. Individuals faced with other emotionally challenging situations, such as serious health problems or family-related difficulties, may have more intense reactions to the disaster and need more time to recover.
Minnesota State Senator Anton J. Rockne took pride in the nickname "Watchdog of the State Treasury." Yet as America's Great Depression deepened in 1932, he fought against programs for the poor, and his opponents branded him "Commander-in-Chief of the Hunger Brigade." Friends and foes alike called him "A.J." or "Rock," and the latter term most accurately described his character.

Anton Julius Rockne was born in Fillmore County in 1868. After earning a law degree from the University of Minnesota, the budding attorney moved to Zumbrota in 1894. Rockne purchased the Zumbrota News in 1897. He became less involved with the paper after three years, bringing in E.F. Davis as partner and editor-manager. An interest in politics led him to run for office. In 1903, voters elected Rockne to the state House of Representatives, where he became House Speaker six years later. The Zumbrota lawyer became a state senator in 1915.

As a conservative Republican, A.J. Rockne believed government power should be limited. He ably resisted its expansion. In 1915 Rockne became chairman of the state legislature's important Senate Finance Committee. He made it his business to see that Minnesota tax money was spent carefully. By 1930 he ranked among the state's most powerful political figures. Rockne thus became a central figure in Minnesota politics during the uncertain years of America's Great Depression.

The Great Depression was in its third year by 1932 and showed few signs of ending. Minnesotans elected Floyd B. Olson of the Farmer-Labor party as governor in 1930 and again two years later. They hoped his socialist-progressive policies would boost the slump-ridden economy. A.J. Rockne, however, blocked the governor's plans. He used his power to tie up legislation in committee and prevent it from reaching a vote in the Senate. Olson believed publicly funded programs would improve the economy, help those in need and halt farm foreclosures. In June 1932 Olson told an audience that Minnesota had reached such an economic crisis that only the government could cope with it. Those were fighting words to the "Watchdog of the Treasury." Rockne worked to crush Olson's programs. The popular governor and the powerful senator conducted a verbal slugfest before a watchful state.

The Farmer-Labor Leader, the weekly newspaper of Olson's party, attacked Senator Rockne on March 15, 1933. The Depression was worsening, the Leader claimed. The poor were even more desperate. Rockne, the report argued, made matters more difficult by working against the governor's plans. The Leader proclaimed that the senator's finance committee dealt "with the frozen blood in the veins of tiny babies."

Rockne's friends rallied to his defense, but the senator faced public pressure. He allowed some of the laws that had been delayed by his committee to be brought to the legislature, but he still fought the governor wherever he could. The senator tried to stop a request for five million dollars for more relief plans. That move led Governor Olson to make a radio attack against his opponent in late December 1933. Olson called Rockne a defender of "property rights against human rights" and leader of a "dying" social order. A.J. Rockne fought back with a radio reply that detailed his opinions, but a second broadcast speech by the persuasive governor took his opponent's case apart. The state senate passed a relief bill. Rockne's stubborn resistance to programs designed to help victims of the Great Depression damaged his public and political image statewide.
But Anton J. Rockne was far from finished. He remained a powerful force in the Senate, chairing the finance committee until his retirement in 1946. Upon his retirement at age seventy-six, A.J. Rockne had served during twenty-two sessions of the Minnesota legislature, House and Senate combined, tied for the most in state history. His thirty-six years of Senate service is also a record. He died on May 2, 1950.

Bailey, Howard. "Zumbrota Newspapers." Goodhue County Historical News, 14 (February 1980): 3.
Centennial Book Committee. Zumbrota: The First 100 Years. Zumbrota, MN: The Committee, 1956.
Creel, H.G. "Commander-in-Chief of the Hunger Brigade" (editorial). Farmer Labor Leader (St. Paul), March 15, 1933.
Creel, H.G. "Rockne's Problem" (editorial). Farmer Labor Leader (St. Paul), March 30, 1933.
Curtiss-Wedge, Franklyn, ed. History of Goodhue County, Minnesota. Chicago: H.C. Cooper, 1909.
Johnson, Frederick L. Goodhue County, Minnesota: A Narrative History. Red Wing: Goodhue County Historical Society, 2000.
Mayer, George H. The Political Career of Floyd B. Olson. St. Paul: Minnesota Historical Society, 1987.
Olson, Floyd B. Manuscript of Governor Floyd B. Olson's WCCO radio address, Dec. 21, 1933. Floyd B. Olson Papers, Manuscript Collection, Minnesota Historical Society.
"Rockne Brands Olson's Needy Plea, 'Politics.'" St. Paul Pioneer Press, April 14, 1933.
"Rockne Calls Olson Relief Plan 'Half baked' in Attack Over Air." St. Paul Pioneer Press, December 27, 1933.
"Rockne Renews Fight on Relief Bill After Attack by Governor." St. Paul Pioneer Press, December 28, 1933.
Rockne for Senator Volunteer Committee. Lest We Forget: A.J. Rockne, Candidate for State Senator for Goodhue County, 1938.
Toensing, W.F., ed. Minnesota Congressmen, Legislators, and Other Elected State Officials: An Alphabetical Check List, 1849-1971. St. Paul: Minnesota Historical Society, 1971.
Question: What is astigmatism and how common is it?

Answer: Astigmatism is an eye error or a refractive error that is extremely common. It is referring to the fact that, as opposed to a basketball which is perfectly round in all directions, an eye might be shaped more like a football that has a different curvature in one direction than the other. And it simply means that instead of one sharp focus on the retina or the film of the eye there's two focuses. It could be one in front, one behind or both in front of the eye and as a result things can be blurry often picked up first close but also at a distance. We all have some small amounts of astigmatism and you don't have to have glasses for all of them but they can be significant enough to require correction.
On Wednesday, the Brookings Institution hosted a panel on the poor educational attainment of immigrant children entitled “Immigrant Children Falling Behind: Implications and Policy Prescriptions.” Although there was some discussion of the challenges facing immigrant children, the conversation sadly devolved into a debate over the merits and disadvantages of the DREAM Act.

The event started off on a good enough note, to be sure. “If immigrant kids have a problem, then the nation has a problem,” said Ron Haskins, the co-director of the Brookings Center on Children and Families. The importance of immigrant children to the country’s well-being is made plain through demographics, argued several of the panelists. The American population is aging, and immigration is the only thing that can keep the labor force growing, they said. “At the beginning of the 20th Century, there were 10 children for every senior,” said Brookings demographer Audrey Singer. “By 2030, there will be only 1.2 children for every senior.”

“If it weren’t for immigrant women, children would be even a smaller share of the U.S. population today,” added Marta Tienda of Princeton University. “[Immigration] is the dynamic of growth, it is the reason that the United States, U.K., Canada and Australia are not declining in population like Spain and Italy and Japan.”

Singer also pointed out that immigrants are making up a larger and larger segment of the child population. In fact, she says, even if immigration were to stop completely today, the United States would have a minority-majority child population by 2050.

The problem is that immigrant children are performing poorly in educational attainment. “Immigrants are overrepresented [in the less than high school education] category,” said Haskins, and education is correlated with income – something that has no doubt contributed to the fact that wages for first- and second-generation immigrants have been declining since the 1940s. The key is to invest in these immigrant children – something that could lead to a real economic benefit for the country, argued Marta Tienda.

So what kinds of investments are being proposed to increase the educational well-being of immigrant children and, arguably, the economic well-being of the country? Haskins and Tienda had recently outlined three suggestions in a policy brief: expand pre-school programs to boost the readiness of immigrant children for public school; improve programs for those learning English as a second language so that they can master it by the second grade; and pass the DREAM Act to boost educational opportunities for undocumented immigrant youth.

But although the event was meant to address a massive social problem, it quickly turned into a debate about whether the DREAM Act should be enacted. “If you take an investment perspective, rather than, ‘Did you come here legally or not, or were you dragged across when you were three’… anyone who beats the odds and outperforms individuals who have had all the benefits of citizenship throughout their life – there’s something there that we might not be able to measure, but we certainly want to bottle it and capitalize on it,” Tienda said.

Jena McNeill, a homeland security policy analyst at the Heritage Foundation, saw it differently. She pointed out that the U.S. had passed a bill granting amnesty to certain illegal immigrants in 1986 in the hopes that this, combined with better border enforcement, would reduce illegal immigration.
The illegal immigrant population in the United States has risen four-fold since then, she pointed out. In fact, with the lessons she drew from 1986 in mind, McNeill opposes the DREAM Act as merely incentivizing more illegal immigration. “Whether it’s a piecemeal amnesty like the DREAM Act or something like a larger earned legalization under a comprehensive bill, it all to me has the impact of encouraging more illegal immigration to the United States,” she said.

Discussion of the DREAM Act is of course interesting and important, but the topic of conversation was disappointing considering that the panel was billed as one that would try to tackle one of society’s most urgent problems. The parochial focus on a piece of legislation as opposed to the larger problem of immigrant children and their educational attainment is telling – if those who are best positioned to have debates about the immigration system and education are not having the broader discussions, how can the rest of us talk about it intelligently?

Add Tim on twitter: www.twitter.com/timkmak
A Man To Match The Mountains

To David Thompson—who died blind, penniless, and bypassed by history—we owe our first knowledge of the American continent’s rugged Northwest

October 1960 | Volume 11, Issue 6

Usually, his only companions were Indians or halfbreeds who helped him find passages for his canoe and supplied him with fish and deer and caribou meat. In the face of an unending procession of hardships and close calls with death, he learned to live and travel like the natives, moving with speed and exactness across vast stretches of land, pausing only to seek protection from gales and blizzards or to gum the leaking seams of his cedar canoe with pine pitch. His sole comforts were the fair-weather lapping of lake water, the warming flames of evening fires, and the clean forest smell of pine-needle beds.

Despite his increasing interest in exploring and surveying, the Hudson’s Bay Company wanted him to confine his activities to trading, and in 1797, when his term with that company ended, he joined the more aggressive North West Company, whose partners were more appreciative of his special skills. Unhampered by the problems of trade, he set off at once on an unprecedented mapping tour for his new employers, traveling south across the plains to the Mandan Indian villages on the Missouri River in present-day North Dakota, charting the Red River country and the wild-rice lake district of northern Minnesota, coming within a few miles of correctly identifying the source of the Mississippi River (it was not found until 1832), and going on to survey for the first time the entire shore line of Lake Superior. During this trip, he met Alexander Mackenzie (see “First by Land,” AMERICAN HERITAGE, October, 1957), at Sault Ste. Marie and was told by that great North West Company explorer that he had accomplished more in ten months than the company expected could be done in two years. In those months, which included the worst wintry traveling seasons of the year, Thompson “had covered a total of 4,000 miles of survey.”

During the next two years, he mapped Canada’s cold and remote Churchill and Athabaska regions, again probing unexplored forests and barrens, knowing the howl of wolves and the nightly call of loons, and charting rapids and gale-whipped lakes across hundreds of miles of bleak, quiet land. In the summer of 1800, he returned to the birch and aspen groves on the eastern slopes of the Rocky Mountains, second in command of a party seeking to cross the mountains and open trade with Indians in the upper basin of the Columbia River, where whites had not yet been. The plan failed when the leader of the expedition came down with an attack of rheumatism, but Thompson reached the high precipices of the Canadian Rockies, west of what is now Banff. There he met some Kutenai Indians from the west side of the mountains, and gathered information about what lay beyond. When the groups parted, Thompson recorded that he sent two of his men, “La Gasse and Le Blanc,” to live with the Indians. They were the first two men of white blood from eastern Canada known to have entered the Columbia basin.

For the time being, the North West Company postponed further attempts to expand across the Rockies, and during the following years Thompson continued his exploring and trading activities in the more northerly regions of Lesser Slave Lake, the Peace River, and the “muskrat country” between the Nelson and Churchill rivers.
In 1806, the Canadians were alarmed by the Lewis and Clark expedition, which threatened to flank British traders on the west, and once more the North West Company ordered Thompson to try to cross the Continental Divide. This time he was successful. Setting out from the Saskatchewan River on May 10, 1807, he led a trade group up the mountains into “stupendous & solitary Wilds covered with eternal Snow, & Mountain connected to Mountain by immense Glaciers, the collection of Ages & on which the Beams of the Sun makes hardly any impression....” On June 25, they finally topped the pass now called Howse and five days later, after following down the “foaming white” Blaeberry River, reached the upper Columbia River.

Since it flowed north at that point, Thompson did not recognize it as the Columbia. He named it the Kootenai after the Indians of the area, and on it built the “Kootanae House,” a crude storage post for his trade goods and furs. [Note the many absurd differences in the modern spelling of this word. Canadian and American officials who were unaware of Thompson’s original version, Kootanae, stamped approval on all sorts of later local preferences.]

While there he sought to make contact and open trade with natives farther south. One tribe to whom he sent messengers were the Flatheads of Montana, but on August 13 the messengers returned with the doleful news that the Flatheads had been defeated by a band of Blackfeet and had gone, instead, “to a military Post of the Americans.”

In explanation, Thompson noted in his journal that the Kutenais “informed me that about 3 weeks ago the Americans to the number of 42 arrived to settle a military Post, at the confluence of the two most southern & considerable Branches of the Columbia & that they were preparing to make a small advance Post lower down on the River. 2 of those who were with Capt. Lewis were also with them of whom the poor Kootanaes related several dreadful stories.”
New Mexico isn’t known for its production of strawberries. However, a new study is underway to determine the feasibility of growing strawberries as a specialty crop in the northern parts of the state. Fruit tree crops are often damaged by late frosts, which decimate production, harvests, and profitability for New Mexico farmers. Strawberries produce fruit in clusters, and their blossoms are often not uniformly destroyed by frosts. As such, the study is evaluating whether growing strawberries makes sense and can overcome the unique challenges of the region. So far, of the 16 varieties in the study, Kent, Mesabi, Cavendish, Honeoye, Brunswick, and Cabot have shown the most resistance to cold injury.

Strawberry growers have long sought to increase production to meet the demand for fresh strawberries. And demand is high. The epicenter of world strawberry production is the state of California. Over 40,000 acres of strawberries are cultivated each year, and approximately half of that total is located in Watsonville and Salinas. Strawberry cultivation has obstacles to overcome, however. Soil pathogens have long been a thorn in the flesh of farmers trying to maximize production. Over the years, numerous attempts have been made to solve the problem of crop loss due to infection from fungal organisms. Methyl bromide was used as a fumigant to sterilize soil. After being condemned internationally many years ago, it has slowly been phased out here as well. Methyl iodide, the replacement fumigant that followed, could have been used on strawberry farms, but was pulled last year after widespread concern over its alleged toxicity was raised by environmental groups. This void of effective chemical fumigants opened the door for a new, organic production method to prevent disease. Enter anaerobic soil disinfestation.

Anaerobic soil disinfestation is a new treatment that seems to work as well as past fumigation techniques, without the dangers. As part of the treatment, carbon sources like rice bran, molasses, and grape skins are mixed into the soil. A tarp is placed over the field, and drip irrigation is used to saturate the planting beds, thus triggering the growth of anaerobic bacteria. While not completely understood as of yet, the anaerobic bacteria probably produce organic acids that inhibit the fungal organisms. And, if that wasn’t hope-inspiring enough, the process is less expensive than traditional fumigation methods.

Strawberry festivals are a wonderful source of fun and excitement for kids of all ages, and adults too! Strawberry shortcake, pageants, and a host of entertaining events make for a great weekend experience for families. As temperatures rise and summer fun begins, why not celebrate the end of school and the beginning of vacation with one of these fabulous June strawberry festivals?! The festivals that are happening in June are listed below. If you can’t make one this year, plan ahead! See the entire directory for the annual events.

Strawberry festivals are a wonderful source of fun and excitement for kids of all ages and adults too! Strawberry shortcake, pageants, and a host of entertaining events make for a great weekend experience for families. As strawberry harvest season begins in earnest all around the country, the strawberry festivals coincide.
If you are looking for something to do this weekend, check out these strawberry events. If you are even relatively close to where one is occurring, consider making the trip! The festivals that are happening this weekend are listed below. However, even MORE festivals are going to be happening over Memorial Day weekend, so if you can’t make one this weekend, see the entire directory for those happening throughout the rest of the year.

The article linked in this post makes a few political comments, and it is not the place for this website to delve deeply into the treacherous currents of political discourse. However, the linked article points out a few of the difficulties associated with growing strawberries commercially. First and foremost of the difficulties is that strawberry plants are too delicate to be planted by a mechanized system. They have to be planted by hand. So, when the millions upon millions of strawberry plants are planted each year for the annualized plasticulture growing systems, they are inserted into the soil by human digits. That can make for some tired phalanges.

From New Zealand comes news of a new elevated strawberry growing system. The strawberries are grown about a meter off the ground in soil and pots. Although they are fertilized and watered in a precise way, they are not hydroponic since the plants and their roots are anchored [...]

A new substrate has been developed by Riococo for growing strawberries. The substrate is developed from coconut coir grown primarily in Kurunegala, Sri Lanka. The growing medium is composed and developed specifically with the greenhouse cultivation of strawberry plants in mind. Already in use by some of the biggest greenhouse growers in the United [...]

Latinos are taking advantage of the enormous California strawberry industry to carve out space for themselves and their families. Through the sacrificial decisions of first-generation farm owners, second-generation Latino strawberry growers are finding success as farm entrepreneurs. The number of Latino strawberry farm operators in California is growing rapidly. As the ideal climate and [...]

It has been a long time in coming, but the University of California has updated the guidelines for nutrient sufficiency in strawberry plants. The last such publication was released in 1980, over 30 years ago. The study that led to the latest guidelines was funded by the California Strawberry Commission and was enabled by [...]

Thanks to the service of a few kind-hearted people, strawberries have been brought to Kenya to aid an orphanage. Irish volunteers from Wexford brought strawberry plants to Kenya to fill a need and serve the community’s demand for the tasty berries they produce. The organization Humanitarian Volunteers worked at St. Paul’s Children Care Centre [...]

It is never too early for a true Green Thumb to start thinking about the Spring and the garden that will come forth when the temperatures reverse their cooling trend and start warming again. Why not do something exotic in your garden this next growing season? Of course, our humble opinion is that strawberry [...]
While still unsettled, the debate about the production methods used by the Strawberry Industry continues. For some background information, use the search box at the top right of the page to search for “iodide.” In short, to increase yields and [...] Strawberries are a multi-billion dollar crop, and the market for them is global due to their wonderful flavor. Unless an unfortunate person has a strawberry allergy or other intolerance, the chances are good that strawberries list among that individual’s favorite fruits. Because of the popularity of the small red berries, technological and chemical advancements [...] Strawberry plants are usually considered for new spring gardens in the middle of winter. Cabin fever has set in, and the barren brown or snowy white landscape evokes fond thoughts of green buds springing up from the ground. That is when most would-be gardeners start perusing the seed catalogs or surfing the internet wistfully [...] New strawberry varieties are constantly being developed. Oftentimes, the improvements that are made through the breeding and selection process are somewhat significant, but small. Or, only certain aspects of the desired traits are manifest while others are not passed on during the process. The latest release from Cornell’s berry breeder Courtney Weber is very [...]
Fire it up; Carefully! Save the Trees & Parks from Hot Coals & Potential Fires!

The Chicago Park District, Chicago Fire Department and Forest Preserve District of Cook County would like to remind park and forest preserve visitors not to dump hot coals at the base of the trees or in areas where fires could ignite, especially during this unusually hot and dry summer season. The very dry conditions we are now experiencing make it more important than ever to discard hot coals in the appropriate manner.

"The Chicago Fire Department has already responded to many reports of prairie fires this season caused by discarded cigarettes and barbecue coals," said Fire Commissioner Jose A. Santiago. "These fires can spread very rapidly and are difficult to extinguish."

"Red metal cans are provided to protect park patrons as well as the trees from the damage that can be caused from coals that are incorrectly disposed of in the park," said Mike Kelly, Chicago Park District General Superintendent and CEO. "Hot coals burn the base of trees, killing the roots and eventually killing the tree. We have lost countless numbers of trees because of this."

Park and preserve patrons are reminded during this hot summer season to follow a few simple safety precautions when grilling in a park:

- Only grill in designated grilling areas that have red, metal "hot coal cans" (in the forest preserves, these cans are white). Please stay away from playgrounds or trees. Only grill in open grassy areas.
- When finished grilling, please extinguish hot coals with water and dispose of hot coals in the provided red, metal "hot coal cans". DO NOT dump the hot coals at the base of tree trunks or near any playgrounds.
- Please dispose of trash and recyclables in appropriate blue recycling containers or green trash containers located in every park in the District.
Normal body temperature may change during any given day. It is usually highest in the evening. Other factors that may affect body temperature are:

- In the second part of a woman's menstrual cycle, her temperature may go up by 1 degree or more.
- Physical activity, strong emotion, eating, heavy clothing, medications, high room temperature, and high humidity can all increase your body temperature.

Fever is an important part of the body's defense against infection. Most bacteria and viruses that cause infections in people thrive best at 98.6 °F. Many infants and children develop high fevers with minor viral illnesses. Although a fever signals that a battle might be going on in the body, the fever is fighting for the person, not against them. Brain damage from a fever generally will not occur unless the fever is over 107.6 °F (42 °C). Untreated fevers caused by infection will seldom go over 105 °F unless the child is overdressed or trapped in a hot place. Febrile seizures do occur in some children. However, most febrile seizures are over quickly, do not mean your child has epilepsy, and do not cause any permanent harm.

Unexplained fevers that continue for days or weeks are called fevers of undetermined origin (FUO). Almost any infection can cause a fever. Fever can also be caused by medications, such as some antibiotics, antihistamines, and seizure medicines.

A simple cold or other viral infection can sometimes cause a high fever (102 - 104 °F, or 38.9 - 40 °C). This does not usually mean you or your child has a serious problem. Some serious infections may cause no fever or even a very low body temperature, especially in infants.

If the fever is mild and you have no other problems, you do not need treatment. Drink fluids and rest. The illness is probably not serious if your child:

- Is still interested in playing
- Is eating and drinking well
- Is alert and smiling at you
- Has a normal skin color
- Looks well when their temperature comes down

Take steps to lower a fever if you or your child is uncomfortable, vomiting, dried out (dehydrated), or not sleeping well. Remember, the goal is to lower, not eliminate, the fever.

When trying to lower a fever:

- Do NOT bundle up someone who has the chills.
- Remove excess clothing or blankets. The room should be comfortable, not too hot or cool. Try one layer of lightweight clothing, and one lightweight blanket for sleep. If the room is hot or stuffy, a fan may help.
- A lukewarm bath or sponge bath may help cool someone with a fever. This is especially effective after medication is given -- otherwise the temperature might bounce right back up.
- Do NOT use cold baths, ice, or alcohol rubs. These cool the skin, but often make the situation worse by causing shivering, which raises the core body temperature.

Here are some guidelines for taking medicine to lower a fever:

- Acetaminophen (Tylenol) and ibuprofen (Advil, Motrin) help reduce fever in children and adults. Sometimes doctors advise you to use both types of medicine.
- Take acetaminophen every 4 - 6 hours. It works by turning down the brain's thermostat.
- Take ibuprofen every 6 - 8 hours. DO NOT use ibuprofen in children younger than 6 months old.
- Aspirin is very effective for treating fever in adults. DO NOT give aspirin to a child unless your child's doctor tells you to.
- Know how much you or your child weighs, and then always check the instructions on the package.
- In children under age 3 months, call your doctor first before giving medicines.
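The medicine guidelines above amount to a small decision rule, sketched below in Python. The function name is invented, the 18-year adult cutoff is an assumption not stated in the text, and none of this replaces the dosing instructions on the package or a doctor's advice.

# Sketch of the fever-medicine rules above. The 18-year adult cutoff is
# an assumption; the text only distinguishes children from adults.

def fever_medicine_options(age_months):
    """Return (medicine, interval) pairs permitted by the guidance above."""
    if age_months < 3:
        return []  # under 3 months: call the doctor before giving medicine
    options = [("acetaminophen", "every 4-6 hours")]
    if age_months >= 6:
        options.append(("ibuprofen", "every 6-8 hours"))  # never under 6 months
    if age_months >= 18 * 12:
        options.append(("aspirin", "adults only; never for a child unless"
                                   " the child's doctor says so"))
    return options

print(fever_medicine_options(2))        # []
print(fever_medicine_options(10))       # acetaminophen + ibuprofen
print(fever_medicine_options(30 * 12))  # all three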
Eating and drinking with a fever:

- Everyone, especially children, should drink plenty of fluids. Water, popsicles, soup, and gelatin are all good choices.
- Do not give too much fruit or apple juice, and avoid sports drinks in younger children.
- Although eating foods with a fever is fine, do not force foods.

When to call your health care provider

Call a doctor right away if your child:

- Is younger than 3 months old and has a rectal temperature of 100.4 °F (38 °C) or higher
- Is 3 - 12 months old and has a fever of 102.2 °F (39 °C) or higher
- Is under age 2 and has a fever that lasts longer than 24 - 48 hours
- Is older and has a fever for longer than 48 - 72 hours
- Has a fever over 105 °F (40.5 °C), unless it comes down readily with treatment and the person is comfortable
- Has other symptoms that suggest an illness may need to be treated, such as a sore throat, earache, or cough
- Has been having fevers come and go for up to a week or more, even if they are not very high
- Has a serious medical illness, such as a heart problem, sickle cell anemia, diabetes, or cystic fibrosis
- Recently had an immunization
- Has a new rash or bruises appear
- Has pain with urination
- Has trouble with the immune system (chronic steroid therapy, after a bone marrow or organ transplant, spleen was removed, is HIV-positive, or is being treated for cancer)
- Has recently traveled to a third world country

Call 911 if you or your child has a fever and:

- Is crying and cannot be calmed down (children)
- Cannot be awakened easily or at all
- Has difficulty breathing, even after their nose is cleared
- Has blue lips, tongue, or nails
- Has a very bad headache
- Has a stiff neck
- Refuses to move an arm or leg (children)
- Has a seizure

Call your doctor right away if you are an adult and you:

- Have a fever over 105 °F (40.5 °C), unless it comes down readily with treatment and you are comfortable
- Have a fever that stays at or keeps rising above 103 °F
- Have a fever for longer than 48 - 72 hours
- Have had fevers come and go for up to a week or more, even if they are not very high
- Have a serious medical illness, such as a heart problem, sickle cell anemia, diabetes, cystic fibrosis, COPD, or other chronic lung problems
- Have a new rash or bruises appear
- Have pain with urination
- Have trouble with your immune system (chronic steroid therapy, after a bone marrow or organ transplant, had spleen removed, HIV-positive, were being treated for cancer)
- Have recently traveled to a third world country

What to expect at your health care provider's office

Your doctor will perform a physical examination, which may include a detailed examination of the skin, eyes, ears, nose, throat, neck, chest, and abdomen to look for the cause of the fever. Treatment depends on the duration and cause of the fever, as well as your other symptoms.

Legget J. Approach to fever or suspected infection in the normal host. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007: chap 302.

Reviewed by Neil K. Kaneshiro, MD, MHA, Clinical Assistant Professor of Pediatrics, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
The task of creating equal education opportunities in a society full of uneven opportunities can seem nearly impossible. But, as Richard E. Nisbett reminds us in "Think Big, Bigger … and Smaller" (p. 10), even small changes can yield long-term benefits for students who start out at a disadvantage. Consider one of the low-tech interventions Nisbett describes that has been shown to improve the school achievement of struggling students. When teachers clearly tell students about the powerful role that effort—as opposed to purely natural intelligence—plays in getting high grades, and show students that they themselves can build stronger neural connections through applying themselves to learning, even chronically failing students start to work harder and do better.

- Tune in to the messages that your school communicates to kids. Examine the language used to recognize high academic achievers, descriptions of famous people in textbooks and class readings, or even the comments you write on student work. What do they suggest leads to accomplishment: native ability or hard work? What might give students the message that only people with special abilities can achieve? How might you infuse messages about the importance of strong effort in a way that would motivate students who are far behind and discouraged?
- Spend a class or two talking with students about how the brain develops and how, through applying effort, they can actually strengthen their own intelligence. (You may find the December 2009/January 2010 EL article "How to Teach Students about the Brain" and the accompanying downloadable handout for students helpful.) Are your students aware that intelligence is malleable, or do they perceive it as a fixed commodity they either possess or don't? What about the low achievers in your class?

The Question of Segregation

According to the articles "Integrated Schools: Finding a New Path" (Gary Orfield, Erica Frankenberg, and Genevieve Siegel-Hawley, p. 22) and "Overcoming Triple Segregation" (by Patricia Gándara, p. 60), segregation by ethnic background of public schools in the United States is on the upswing, a reality that limits minority students' prospects for a high-quality education and all students' prospects for learning to work and interact with students from varied cultures. Gándara claims on page 63 that segregation contributes significantly to the persistent Latino-white achievement gap:

One recent study of mathematics achievement in the United States concluded that although the increase in average education and income of Latino families should have significantly closed achievement gaps, the damage caused by increased segregation had cancelled out those gains (Berends & Peñaloza, 2010). Likewise, researchers at the University of California at Santa Barbara concluded that the variable that explained the greatest amount of variance in academic achievement between ELLs and native English speakers was the degree of segregation that the ELLs experienced (Rumberger & Tran, 2010).
- Is there de facto segregation within your school? Are students from white or Asian backgrounds overrepresented in the more rigorous academic programs (or in alternative programs with positive reputations), while students of other races are overrepresented in less prestigious classes? Do many Latino students spend much of their school day solely in ELL classes, or with other Latinos? If such segregation exists, is it openly acknowledged as a problem—and if not, how might it come to be?
- Do you consider majority-white schools to be segregated? Why or why not? How might spending the bulk of their school years only with students of their same ethnicity be a problem for white students as well as for students of color?

Are Unrelenting Expectations Enough?

An axiom held by many advocates of closing achievement gaps is that maintaining unrelenting high expectations for students from at-risk backgrounds—and for the teachers who serve them—can shrink the gap. Another is that lowering expectations does violence to hope. Karin Chenoweth ("Leaving Nothing to Chance," p. 16) lists "Expect that all students will meet or exceed standards" as one of the fundamental practices of school leaders who increase achievement in high-poverty schools, and notes that these schools must "operate on a higher plane than many middle-class schools." But—also in this issue, on p. 29—veteran advocate for education equity Jonathan Kozol states:

People who devote their lives to tinkering with clever ways to close the race gap by "demanding more" of children and their principals and teachers within segregated settings are, knowingly or not, upholding the same failed and tainted promises given to us more than a century ago by Plessy v. Ferguson.

What do you think? Will holding high expectations be enough to close achievement gaps if schools remain segregated and highly unequal in resources? Could a focus on "demanding more" ever become a liability in the press for equity?
Stealing Mona Lisa

Excerpted from The Crimes of Paris: A True Story of Murder, Theft, and Detection, by Dorothy and Thomas Hoobler, to be published this month by Little, Brown and Company; © 2009 by the authors.

It was a Monday and the Louvre was closed. As was standard practice at the museum on that day of the week, only maintenance workers, cleaning staff, curators, and a few other employees roamed the cavernous halls of the building that was once the home of France’s kings but for centuries had been devoted to housing the nation’s art treasures. Acquired through conquest, wealth, good taste, and plunder, those holdings were splendid and vast—so much so that the Louvre could lay claim to being the greatest repository of art in the world. With some 50 acres of gallery space, the collection was too immense for visitors to view in a day or even, some thought, in a lifetime.

In the Salon Carré—the “square room”—alone could be seen two paintings by Leonardo da Vinci, three by Titian, two by Raphael, two by Correggio, one by Giorgione, three by Veronese, one by Tintoretto, and—representing non-Italians—one each by Rubens, Rembrandt, and Velázquez. But even in that collection of masterpieces, one painting stood out from the rest.

As the Louvre’s maintenance director, a man named Picquet, passed through the Salon Carré during his rounds on the morning of August 21, 1911, he pointed out Leonardo’s Mona Lisa, telling a co-worker that it was the most valuable object in the museum. “They say it is worth a million and a half,” Picquet remarked, glancing at his watch as he left the room. The time was 7:20 a.m.

Shortly after Picquet departed the Salon Carré, a door to a storage closet opened and at least one man—for it would never be proved whether the thief worked alone—emerged. He had been in there since the previous day—Sunday, the museum’s busiest. Just before closing time, the thief had slipped inside the little closet so that he could emerge in the morning without the need to identify himself to a guard at the entrance. There were many such small rooms and hidden alcoves within the ancient building; museum officials later confessed that no one knew how many. This particular room was normally used for storing easels, canvases, and art supplies for students who were engaged in copying the works of the old masters. The only firm anti-forgery requirement the museum imposed was that the reproductions could not be the same size as the original.

Emerging from the closet in a white artist’s smock, the intruder might have been mistaken for one of these copyists—or, perhaps, for a member of the museum’s maintenance staff, who also wore such smocks, in a practice intended to demonstrate that they were superior to other workers. If anyone noticed the thief, he would likely be taken for another of the regular museum employees.

As he entered the Salon Carré, the thief headed straight for the Mona Lisa. Lifting down the painting and carrying it into an enclosed stairwell nearby was no easy job. The painting itself weighs approximately 18 pounds, since Leonardo painted it not on canvas but on three slabs of wood, a fairly common practice during the Renaissance. A few months earlier, the museum’s directors had taken steps to physically protect the Mona Lisa by reinforcing it with a massive wooden brace and placing it inside a glass-fronted box, adding 150 pounds to its weight. The decorative Renaissance frame brought the total to nearly 200 pounds.
However, only four sturdy hooks held it there, no more securely than if it had been hung in the house of a bourgeois Parisian. Museum officials would later explain that the paintings were fastened to the wall in this way to make it easy for guards to remove them in case of fire.

Once safely out of sight behind the closed door of the stairwell, the thief quickly stripped the painting of all its protective “garments”—the brace, the glass case, and the frame. Since the Mona Lisa’s close-grained wood, an inch and a half thick, made it impossible to roll up, he slipped the work underneath his smock. Measuring approximately 30 by 21 inches, it was small enough to avoid detection.

Though evidently familiar with the layout of the museum, the thief made one crucial mistake in his planning. At the bottom of the enclosed stairway that led down to the first floor of the Louvre was a locked door. The thief had obtained a key, but now it failed to work. Desperately, as he heard footsteps coming from above, he used a screwdriver to remove the doorknob.

Down the stairs came one of the Louvre’s plumbers, named Sauvet. Later, Sauvet—the only person to witness the thief inside the museum—testified that he had seen only one man, dressed as a museum employee. The man complained that the doorknob was missing. Apparently thinking that there was nothing strange about the situation, Sauvet produced a pair of pliers to open the door. The plumber suggested that they leave it open in case anyone else should use the staircase. The thief agreed, and the two parted ways.

The door opened onto a courtyard, the Cour du Sphinx. From there the thief passed through another gallery, then entered the Cour Visconti, and—perhaps trying not to appear in a hurry—headed toward the main entrance of the museum. Few guards were on duty that day, and only one was assigned to that entrance. As luck would have it, the guard had left his post to fetch a bucket of water to clean the vestibule. He never saw the thief, or thieves, leave the building.

One passerby noticed a man on the sidewalk carrying a package wrapped in white cloth. The witness recalled noticing the man throw a shiny metal object into the ditch along the edge of the street. The passerby glanced at it—it was a doorknob.

Inside the museum, all was serene and would remain so for quite some time. At 8:35 a.m., Picquet passed through the Salon Carré again and noted that the painting was gone. He thought little of it at the time, since the museum’s photographers freely removed objects without notice and took them to a studio elsewhere in the building. Indeed, Picquet even remarked to his workers, “I guess the authorities have removed it because they thought we would steal it!” If anyone else noticed during the rest of the day that there were four bare hooks where the Mona Lisa usually hung, they kept it to themselves.

Incredibly, not until Tuesday, when the Louvre again opened its doors to the public, did anyone express concern over the fact that the world’s most famous painting was missing from its usual place. When an artist set up his easel in the Salon Carré and noticed that the centerpiece of his intended work was absent, he complained to a guard, who merely shrugged. Like Picquet the day before, the guard assumed the Mona Lisa had been removed to the photographers’ studio. But the artist persisted. How soon would it be returned? The guard finally went to see a photographer, who denied having anything to do with the painting. Perhaps it had been taken by a curator for cleaning? No.
Finally, the guard thought it wise to inform a superior. A search began and soon became frantic. The director of the museum was on vacation, so the unthinkable news filtered up to the acting head, Georges Bénédite: Elle est partie! She’s gone.

“Paris Has Been Startled”

Lisa Gherardini, who married Francesco del Giocondo of Florence at age 16, would have been in her mid-20s when she sat for her portrait with Leonardo da Vinci in 1503. Leonardo worked on the Mona Lisa—or La Joconde, as she is known in France—for four years, but like so many of his works, the painting was never completed. However, it had already achieved fame by the mid–16th century, owing to the innovations that had gone into its production—particularly in material, brush technique, and varnish—and its subject’s famously coy smile, which is said to be the result of musicians and clowns the artist kept on hand to prevent her from growing bored.

When Leonardo traveled to France around 1517, at the invitation of King Francis I, the Mona Lisa left Italy, it seemed, forever. The artist died only two years later, and by the middle of that century the painting—purchased for a considerable sum—had entered the collection of the French monarchy. Louis XIV gave the Mona Lisa a place of honor in his personal gallery at Versailles. But his successor, Louis XV, sent the painting to hang ignominiously in the office of the keeper of the royal buildings. However, in 1797, La Joconde was chosen as one of the works displayed in the nation’s new art museum, the Louvre, which is where she remained—save a brief stay in Napoleon’s bedroom—until someone carried her off in August 1911.

Paris during the Belle Époque—the “beautiful time” between the late 19th century and the outbreak of World War I—had become an international center for painting, dance, music, theater, and publishing. The construction of Gustave Eiffel’s tower for the 1889 world’s fair had made it the “city of light”—both literally and metaphorically. The city could boast many of the world’s foremost medical and scientific institutions of the day, and Europe’s most modern manufacturing facilities. The face of the future, many believed, could be seen in Parisian leadership in such brand-new fields as motion pictures, automobile manufacturing, and aviation. This made the disappearance of France’s most treasured artwork all the more unbearable.

In the days and weeks immediately following the theft, anyone carrying a package received attention—including, at one point, a young Spanish artist named Pablo Picasso, who, four years earlier, had purchased several small Iberian stone heads that were filched from the Louvre by the secretary of avant-garde writer Guillaume Apollinaire. (Apollinaire spent a few days in jail, but Picasso had the last laugh—he used the Iberian heads as models for his Demoiselles d’Avignon.)

Police at checkpoints on roads leading out of the capital examined the contents of every wagon, automobile, and truck. Fearing that the thief would try to flee the country, customs inspectors opened and examined the baggage of everyone leaving on ships or trains. Ships that departed during the day that had elapsed between the theft and its discovery were searched when they reached their overseas destinations. After the German liner Kaiser Wilhelm II docked at a pier across the Hudson River from New York City in late August, detectives combed every stateroom and piece of luggage for the masterpiece.
In the following days, from Manchester to São Paulo, the crime became front-page news. The Times of London declared, “Paris has been startled.” The Washington Post claimed, “The art world was thrown into consternation.” But perhaps The New York Times most accurately conveyed the enormity of the heist when it asserted that the crime “has caused such a sensation that Parisians for the time being have forgotten the rumors of war.”

Nowhere, however, did the media cry out louder than in France itself. “What audacious criminal, what mystifier, what maniac collector, what insane lover, has committed this abduction?” asked Paris’s leading picture magazine, L’Illustration, which offered a reward of 40,000 francs to anyone who would deliver the painting to its office. Soon the Paris-Journal, its rival, offered 50,000 francs, and a bidding war was on. The theft continued to inspire newspaper stories for weeks; any report on the case, no matter how trivial, found its way into print.

One of the most popular conspiracy theories suggested that a rich American had masterminded the theft. The favorite candidate was banking scion J. Pierpont Morgan, known for his avid, not to say avaricious, collecting habits, which frequently took him through Europe on buying sprees. When Morgan arrived the following spring in the spa town of Aix-les-Bains for his annual visit, the Mona Lisa had still not been found. Paris newspapers reported that two mysterious men had come to offer to sell him the Mona Lisa. Morgan indignantly denied the account, and when a French reporter came to interview him, the American wore in his buttonhole the rosette that marked him as a commander of the Legion of Honor—France’s highest decoration. He had recently been awarded it, causing some French newspapers to speculate that he had earned the decoration by offering “a million dollars and no questions asked” for the return of the Mona Lisa to the Louvre.

Early in September, after a brief closing, the Louvre was once again opened to the public, and an even greater number of visitors than usual came to gape at the four hooks on the wall that marked the place where La Joconde once hung. One tourist, an aspiring writer named Franz Kafka, visiting the Louvre on a trip to Paris in late 1911, noted in his diary “the excitement and the knots of people, as if the Mona Lisa had just been stolen.” Some even began to place bouquets of flowers beneath the spot where the painting once resided.

What everyone wanted to know—and speculated on endlessly—was where the thief could have gone with what was probably the most recognizable artwork in the world. But the only clues were a fingerprint and the doorknob, which had been recovered by the police from the gutter outside the museum. The plumber who had opened the stairway door was asked to look at hundreds of photographs of museum employees, past and present. Every sighting or rumor about the painting’s whereabouts had to be checked out—and they came in from places as distant as Italy, Germany, Britain, Poland, Russia, the United States, Argentina, Brazil, Peru, and Japan.

But by December, as the trail grew cold, the police had to shift their attention to another spectacular case. A gang of anarchist bank robbers had begun to terrorize Paris, audaciously fleeing their crimes in the first recorded use of a getaway car.
“Our Party Coming from Milan Will Be Here with Object Tomorrow”

A year after the Mona Lisa vanished, the officials of the Louvre were forced to confront the unthinkable: that she would never return. The blank space on the wall of the Salon Carré had been filled with a colored reproduction of the painting. Even that had begun to fade and curl, and many people now averted their eyes as they passed it, as if to avoid the reminder of a tragic death. So, on one December day in 1912, patrons discovered another painting hanging there: also a portrait, but of a man, Baldassare Castiglione, by Raphael.

Occasionally, stories appeared about sightings of the Mona Lisa, including one alleging that London art dealer Henry J. Duveen had been offered the painting. Duveen, however, avoided involvement by pretending that the proposal had been a joke.

But another international dealer, Alfredo Geri, in Florence, was astonished by a letter he received in November 1913, more than two years after the painting had vanished. The sender, who signed himself “Leonard,” claimed to have the Mona Lisa in his possession. Leonard said he was an Italian who had been “suddenly seized with the desire to return to [his] country at least one of the many treasures which, especially in the Napoleonic era, had been stolen from Italy.” (The fact that the Mona Lisa had come to France more than two centuries before Napoleon was born didn’t seem to dim the thief’s patriotism.) He also mentioned that, although he was not setting a specific price, he himself was not a wealthy man and would not refuse compensation if his native country were to reward him.

Geri glanced at the return address. It was a post-office box in Paris. Despite his suspicions, Geri took the letter to Giovanni Poggi, director of Florence’s Uffizi Gallery. Poggi had photographs from the Louvre that detailed certain marks that were on the back of the original panel; no forger could be aware of these.

At Poggi’s suggestion, Geri invited the seller to Florence, but Leonard proved to be an elusive figure. More than once, he set a date for his arrival and then sent a letter canceling the meeting. Geri came to assume that it was all a hoax, until on December 9 he received a telegram from Leonard saying that he was in Milan and would be in Florence on the following day. The news was inconvenient, since Poggi had gone on a trip to Bologna. Geri sent Poggi an urgent telegram: our party coming from milan will be here with object tomorrow. need you here. please respond. geri. Poggi wired back that he could not arrive by the following day, but would be in Florence the day after that, a Thursday. Geri prepared to stall.

When a thin young man wearing a suit and tie, with a handsome mustache, arrived at the dealer’s gallery the next day, Geri showed him into his office and pulled down the blinds. Eagerly, he asked him where he was holding the painting. Leonard replied that it was in the hotel where he was staying. When questioned about the authenticity of the painting, Leonard replied, according to Geri’s account, “We are dealing with the real Mona Lisa. I have good reason to be sure.” Leonard coolly declared that he was certain because he had taken the painting from the Louvre himself.

Had he worked alone? Geri asked. Leonard seemed to be hiding something. According to Geri, he “was not too clear on that point. He seemed to say yes, but didn’t quite do so, [but his answer was] more ‘yes’ than ‘no.’” Nevertheless, the discussion got down to the reward.
According to Geri, the thief boldly asked for 500,000 lire. That was the equivalent of $100,000 and quite a fortune, though some newspapers had estimated the painting’s value at roughly five million dollars. Geri, holding his breath, thought that he had better agree, so he said, “That’s fine. That’s not too high.” They made a plan to meet the following day.

The next afternoon, after arriving 15 minutes late, Leonard was introduced to Poggi. To Geri’s relief, the two men “shook hands enthusiastically, Leonard saying how glad he was to be able to shake the hand of the man to whom was entrusted the artistic patrimony of Florence.” As the three of them left the gallery, “Poggi and I were nervous,” Geri recalled. “Leonard, by contrast, seemed indifferent.”

Leonard took them to the Hotel Tripoli-Italia, on the Via de’ Panzani, only a few blocks from the Duomo. Leonard’s small room was on the third floor. Inside, he took from under the bed a small trunk made of white wood. When he opened the lid, Geri was dismayed. It was filled with “wretched objects: broken shoes, a mangled hat, a pair of pliers, plastering tools, a smock, some paint brushes, and even a mandolin.”

Calmly, Leonard removed these one by one and tossed them onto the floor. Surely, Geri thought, this was not where the Mona Lisa had been hidden for the past 28 months. He peered inside but saw nothing more. Then Leonard lifted what had seemed to be the bottom of the trunk. Underneath was an object wrapped in red silk. Leonard took it to the bed and removed the covering. “To our astonished eyes,” Geri recalled, “the divine Mona Lisa appeared, intact and marvelously preserved.”

They carried the painting to a window, where it took Poggi little time to determine its authenticity. Even the Louvre’s catalogue number and stamp on the back checked out. Geri’s heart was pounding, but he forced himself to remain calm. He and Poggi explained that the painting had to be transported to the Uffizi Gallery for further tests. The painting was re-wrapped in the red silk, and the three men went downstairs. As they were passing through the lobby, however, the concierge stopped them. Suspiciously, he pointed to the package and asked what it was. He obviously thought it was the hotel’s property, but Geri and Poggi, showing their credentials, vouched for Leonard, and the concierge let them pass.

At the Uffizi, Poggi compared sections of the painting with close-up photographs that had been taken at the Louvre. There was a small vertical crack in the upper-left-hand part of the panel, matching the one in the photos. Most telling of all was the pattern of craquelure, cracks in the paint that had appeared as the surface dried and aged. A forger could make craquelure appear on a freshly painted object, but no one could duplicate the exact pattern of the original. There could be no further doubt, Poggi concluded: the Mona Lisa had been recovered.

Poggi and Geri then explained to Leonard that it would be best to leave the painting at the Uffizi. They would have to get further instructions from the government; they themselves could not authorize the payment he deserved. The Uffizi was an awesome setting, and Leonard must have felt overwhelmed by their arguments. How could he doubt two men of such standing and integrity? He did mention that he was finding it a bit expensive to stay in Florence. Yes, they understood. He would be well rewarded, and soon. They shook his hand warmly and congratulated him on his patriotism.
As soon as he left, Geri and Poggi notified the authorities. Not long after Leonard returned to his hotel room, he answered a knock at the door and found two policemen there to arrest him. He was, they said, quite astonished.

When a reporter telephoned a curator of the Louvre to tell him the news, the Frenchman, in the middle of his dinner, said it was impossible and hung up. The following day, December 12, 1913, the museum issued a cautious statement: “The curators of the Louvre … wish to say nothing until they have seen the painting.” But when the Italian government made an official announcement confirming Poggi’s assessment, on December 13, the French ambassador made calls on the prime minister and foreign minister of Italy to offer his government’s gratitude.

After disagreement within the Italian Parliament about whether the painting should be returned, the minister of public education put the argument to rest. “The Mona Lisa will be delivered to the French Ambassador with a solemnity worthy of Leonardo da Vinci and a spirit of happiness worthy of Mona Lisa’s smile,” he announced. “Although the masterpiece is dear to all Italians as one of the best productions of the genius of their race, we will willingly return it to its foster country … as a pledge of friendship and brotherhood between the two great Latin nations.”

After a triumphal tour through Italy, on January 4, 1914, the Mona Lisa resumed its old place on the wall of the Salon Carré. It had been gone for two years and four and a half months. In the next two days, more than 100,000 people filed past, welcoming back one of Paris’s most famous icons.

The young thief known as Leonard had been born Vincenzo Perugia, in 1881, in a village near Lake Como, in Italy. Having moved to France as a young man, the aspiring artist settled for work as a housepainter. Perugia had very briefly worked at the Louvre, from October 1910 to January 1911, and, it was discovered, even claimed to have helped craft the protective box that encased the Mona Lisa.

By the time he stood trial for his crime, in Florence in June 1914, the thief’s hopes of receiving a reward for returning the painting to his native country had been finally dashed. Alfredo Geri, on the other hand, collected the 25,000 francs that had been offered by Les Amis du Louvre, a society of wealthy art-lovers, for information leading to the return of the painting. The grateful French government also bestowed upon him the Legion of Honor, as well as the title “officier de l’instruction publique.” Geri showed what were perhaps his true colors when he promptly turned around and sued the French government for 10 percent of the value of the Mona Lisa. His contention was based on a Gallic tradition that gave the finder of lost property a reward of one-tenth the value of the object. In the end, a court decided that the painting was beyond price and that Geri had only acted as an honest citizen should. He received no further reward.

Perugia, meanwhile, was growing depressed in jail. Guards reported that he occasionally wept. But by the time his trial began, on June 4, he was again calm and self-possessed, insisting that he had acted as a patriot. Since there was no question of guilt, the legal proceedings functioned more like an inquest intended to establish the truth, if such a thing were possible. Three judges presided in a large room in Florence’s stunning Romanesque Palazzo Vecchio, which had been remodeled to provide space for journalists from around the world.
(The French government never attempted to extradite Perugia.) The designer of the room had placed on a cushion, in the middle of a semicircle, a massive silver hemisphere that symbolized justice. A cynical journalist remarked that it would not be prudent to allow the defendant to sit too close to this artistic treasure.

Perugia, now 32 years old, was handcuffed when he entered the courtroom at nine a.m. Nattily dressed in a suit and tie, he smiled graciously at the photographers. Like everyone else, the chief judge was curious to learn how this apparently humble man could have carried out such an audacious crime. Could Perugia describe what happened on August 21, 1911, when he stole the Mona Lisa? Somewhat eagerly, Perugia asked if he could also explain why he had committed the crime, but the chief judge told him that he must do that later. For now, he wanted a description of the act itself.

Perugia offered an abbreviated version that contradicted both his account to Geri and the Paris Prefecture of Police’s reconstruction of the crime. He claimed to have entered the Louvre through the front door early that Monday, wandered through various rooms, taken the Mona Lisa from its place on the wall, and left the same way. A judge pointed out that, during the pre-trial interrogations, Perugia had admitted trying to force the door at the bottom of the little stairwell that led to the Cour du Sphinx. Perugia had no answer for this, and the judge did not press him for one.

It is difficult to understand why Perugia changed his story, or even why he did not tell the full truth about how he had entered and left the museum, given the fact that he freely confessed to the crime itself. Perhaps he was afraid of implicating others, but certainly the motive that he had concocted for himself—that he was a patriot reclaiming one of Italy’s treasures—would have sounded better if he had been the sole actor in this drama.

When Perugia was asked why he had stolen the Mona Lisa, he responded that all the Italian paintings in the Louvre were stolen works, taken from their rightful home—Italy. When asked how he knew this, he said that when he had worked at the Louvre he had found documents that proved it. He remembered in particular a book with prints that showed “a cart, pulled by two oxen; it was loaded with paintings, statues, other works of art. Things that were leaving Italy and going to France.”

Was that when he decided to steal the Mona Lisa? Not exactly, Perugia replied. First he considered the paintings of Raphael, Correggio, Giorgione, and other great masters. “But I decided on the Mona Lisa, which was the smallest painting and the easiest to transport.”

“So there was no chance,” asked the court, “that you decided on it because it was the most valuable painting?”

“No, sir, I never acted with that in mind. I only desired that this masterpiece would be put in its place of honor here in Florence.”

A judge then interrupted to play one of the prosecution’s trump cards: “Is it true,” he asked, “that you tried to sell the Mona Lisa in England?”

Accounts of the trial say that this was one of the few moments when Perugia lost his composure. He glared around the courtroom, clenching his fists as if to do battle with his accusers. “Me? I offered to sell the Mona Lisa to the English? Who says so? It’s false!”

The chief judge pointed out that “it is you yourself who said so, during one of your examinations which I have right here in front of me.” Unable to deny that, Perugia claimed, “Duveen didn’t take me seriously.
I protest against this lie that I would have wanted to sell the painting to London. I wanted to take it back to Italy, and to return it to Italy, and that is what I did.”

“Nevertheless,” said one of the judges, “your unselfishness wasn’t total—you did expect some benefit from restoration.”

“Ah benefit, benefit,” Perugia responded—“certainly something better than what happened to me here.” That drew a laugh from the spectators.

The next day, the chief judge announced a sentence for Perugia of one year and fifteen days. As he was led out of the courtroom, he was heard to say, “It could have been worse.”

It actually got better. The following month, Perugia’s attorneys presented arguments for an appeal. This time, the court was more lenient, reducing the sentence to seven months. Perugia had already been incarcerated nine days longer than that since his arrest, so he was released.

A crowd had gathered to greet him as he left the courthouse. Someone asked him where he would go now, and he said he would return to the hotel where he had left his belongings. When he did, however, he found that the establishment’s name had changed. No longer was it the Tripoli-Italia; now it was the Hotel La Gioconda—and it was too fancy to admit a convicted criminal. Perugia’s lawyers had to vouch for him before the staff would give him a room.

But most spectators had already moved on. Archduke Franz Ferdinand of Austria had recently been assassinated in his touring car in the streets of Sarajevo. Soon the nations of Europe would be at war, and Perugia’s crime—and the ensuing hysteria—would seem rather trivial by comparison.

In January 1914, months before Perugia’s trial began, a veteran American newspaperman named Karl Decker was on assignment in Casablanca. While having a drink with an elegant confidence man who went by the name Eduardo, he overheard an interesting story that would shed new light on the disappearance of the Mona Lisa.

Eduardo had many aliases, but to his associates he was known as the Marqués de Valfierno or the “Marquis of the Vale of Hell.” With a white mustache and wavy white hair, he looked the part. He had, wrote Decker, “a distinction that would have taken him past any royal-palace gate in Europe.” Decker had crossed paths with Valfierno in a number of exotic places, and the two had developed a friendship.

After the police arrested Vincenzo Perugia, Valfierno commented casually to Decker that Perugia was “that simp who helped us get the Mona Lisa.” When Decker pressed him for details, Valfierno offered to confide his version of the events as long as the journalist promised not to publish them until he gave permission, or died. It was the latter event that allowed Decker to reveal what he had been told, nearly 20 years later, in 1932, in The Saturday Evening Post.

After years of success selling fake artworks, Valfierno moved his operation from Buenos Aires to Paris, where, he said, “thousands of Corots, Millets, even Titians and Murillos, were being sold in the city every year, all of them fakes.” He added people to his organization, including a well-connected American whom he refused to name. Valfierno was selective in choosing those he wished to fleece, concentrating on wealthy Americans who could pay highly for “masterpieces” that had supposedly been stolen from the Louvre. But Valfierno and his gang never took anything from the Louvre. “We didn’t have to,” he said.
“We sold our cleverly executed copies, and … sent [the buyers] forged documents [that] told of the mysterious disappearance from the Louvre of some gem of painting or world-envied objet d’art.… The documents always stated that in order to avoid scandal a copy had been temporarily substituted by the museum authorities.”

Eventually, Valfierno peddled the ultimate prize: the Mona Lisa itself, in June 1910. Not the genuine painting, but a forged copy, along with forged official papers that convinced the buyer (an American millionaire) that, in order to cover the theft, Louvre officials had hung a replica in the Salon Carré. The buyer, unfortunately, had been a little too free in bragging about his new acquisition, which prompted the newspaper Le Cri de Paris to publish an article—a year before the actual theft—stating that the Mona Lisa had been stolen.

Still, it had been a disturbing experience, one that Valfierno was determined to avoid a second time: “The next trip, we decided, there must be no chance for recriminations. We would steal—actually steal—the Louvre Mona Lisa and assure the buyer beyond any possibility of misunderstanding that the picture delivered to him was the true, the authentic original.”

Valfierno never intended to sell the real painting. “The original would be as awkward as a hot stove,” he told Decker. The plan would be to create a copy and ship it overseas before stealing the original. “The customs would pass it without a thought, copies being commonplace and the original still being in the Louvre.” After the Mona Lisa had been stolen, the imitation could be taken out and sold to a buyer who was convinced he was getting the missing masterpiece.

“We began our selling campaign,” recalled Valfierno, “and the first deal went through so easily that the thought ‘Why stop with one?’ naturally arose. There was no limit in theory to the fish we might hook.” Valfierno stopped with six American millionaires. “Six were as many as we could both land and keep hot,” he told Decker. The forger then carefully produced the six copies, which were sent to America and kept waiting for the proper time to be delivered. Valfierno said that an antique bed, made of Italian walnut, “seasoned by time to the identical quality of that on which the Mona Lisa was painted,” provided the panels that the forger painted on.

Now came what Valfierno thought was the easy part: “Stealing the Mona Lisa was as simple as boiling an egg in a kitchenette,” he told Decker. “Our success depended upon one thing—the fact that a workman in a white blouse in the Louvre is as free from suspicion as an unlaid egg.” Recruiting someone—Perugia—who actually had worked in the Louvre was helpful because he knew the secret rooms and staircases that employees used. Perugia did not act alone, Valfierno said. He had two accomplices who were needed to lift the painting, with its heavy protective container and frame, from the wall and carry it to a place where the frame could be removed. Valfierno did not name them either.

The one hitch in the plan was that Perugia had failed to test the duplicate key Valfierno ordered to be made for the door at the bottom of the staircase. At the moment he needed it, the key failed to turn the lock. While he was removing the doorknob, the trio heard footsteps from above, and Perugia’s two accomplices hid themselves. The plumber appeared but, seeing only one man in a white smock, had no reason to be suspicious.
He opened the door and went on his way, soon followed by Perugia and the other two thieves. At the vestibule, the guard stationed there had temporarily abandoned his post. An automobile waited for the thieves and took them to Valfierno’s headquarters, where the gang celebrated “the most magnificent single theft in the history of the world.”

Now the six copies that had been sent to the United States could be delivered to the purchasers. Because each of the six collectors thought he was receiving stolen merchandise, he could not publicize his acquisition—or even complain should he suspect it wasn’t the genuine article.

Perugia was paid well for his part in the scheme. However, he squandered the money on the Riviera, and then, knowing where Valfierno had hidden the real Mona Lisa, stole it a second time. “The poor fool had some nutty notion of selling it,” Valfierno told Decker. “He had never realized that selling it, in the first place, was the real achievement, requiring an organization and a finesse that was a million miles beyond his capabilities.”

What about the copies? Decker wanted to know. Someday, speculated Valfierno, all of them would reappear. “Without those, there are already thirty Mona Lisas in the world,” he said. “Every now and then a new one pops up. I merely added to the gross total.”

Characteristically, perhaps, reports of the date of Perugia’s death vary. It is known, however, that he died in France—an odd end point for a man who had once so vehemently asserted his Italian patriotism. Whatever secrets he knew about the theft were carried to the grave.

The Decker account is the sole source for the existence of Valfierno and this version of the theft of the Mona Lisa. There is no external confirmation for it, yet it has frequently been assumed to be true by authors writing about the case. If indeed it is true, Valfierno had carried out the perfect crime.
Robots are all the rage in science. A two-armed robot created by manufacturers in Japan is currently being used in pharmacy and university laboratories. Developed by Yaskawa and Japan's National Institute of Advanced Industrial Science and Technology (AIST), Mahoro the robot can conduct laboratory work and prepare cultures more quickly, precisely, and efficiently than humans.

Because robots have no immune system to speak of, they are particularly well suited to working with hazardous materials, such as biohazards and radioactive substances, and to conducting clinical tests. Tohru Natsume, speaking on behalf of AIST, says that the institute often conducts lab work on the influenza virus, and the robot is ideally equipped to handle those cultures.

The developers say they have attempted to use robots before, focusing specifically on special-purpose ones. But when trial procedures changed, or work moved on to a different experiment, those robots became useless. Natsume adds that developing robots is time-consuming. A robot such as Mahoro, which can do what people can do, using the tools that people use, provides the best of current technological capabilities.

In the past, an equivalent robot would have taken a long time to be trained. Mahoro, on the other hand, can be trained easily with the aid of a computer. The computer displays a virtual model of the robot at a workstation, and Mahoro carries out whatever the operator instructs on screen. For example, if the computer instructs Mahoro to pick up a tube, the robot will do so.

While the robot looks similar to the robots currently assembling items in factories, it has seven joints instead of six. The seventh joint gives the robot the use of an elbow. The organization responsible for the robot wants to improve its safety so that it can work alongside humans.
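The training model described in the article, where an operator drives a simulated robot at a virtual workstation, the steps are recorded, and the sequence is then replayed on the hardware, can be sketched in a few lines of code. The sketch below is purely illustrative: the class and method names (Command, VirtualWorkstation, instruct, replay) are hypothetical and do not reflect Yaskawa's or AIST's actual software.

    # Illustrative only: a toy "teach by virtual demonstration" loop.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class Command:
        action: str                            # e.g. "pick" or "place" (hypothetical vocabulary)
        target: str                            # e.g. "tube_3"
        position: Tuple[float, float, float]   # workstation coordinates

    @dataclass
    class VirtualWorkstation:
        recorded: List[Command] = field(default_factory=list)

        def instruct(self, action: str, target: str,
                     position: Tuple[float, float, float]) -> None:
            # Simulate the step on screen and record it for later replay.
            cmd = Command(action, target, position)
            self.recorded.append(cmd)
            print(f"[sim] {action} {target} at {position}")

        def replay(self, send: Callable[[Command], None]) -> None:
            # Send the recorded protocol, step by step, to the physical robot.
            for cmd in self.recorded:
                send(cmd)

    # Train once in simulation, then replay on the real arm.
    ws = VirtualWorkstation()
    ws.instruct("pick", "tube_3", (0.42, 0.10, 0.05))
    ws.instruct("place", "centrifuge_slot_1", (0.60, 0.25, 0.05))
    ws.replay(send=lambda c: print(f"[robot] {c.action} {c.target}"))

The appeal of this design, and plausibly of Mahoro's, is that changing a protocol means re-recording a command sequence rather than re-engineering a special-purpose machine.

Published by Medicaldaily.com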
The smokybrown cockroach, Periplaneta fuliginosa, is a fairly large roach, and one of my personal favorites. It is closely related to the American cockroach, P. americana, but is easily distinguishable from it. It has a uniformly dark mahogany-brown colour. Its thorax is dark and shiny, unlike the light-rimmed pattern of the American cockroach.

This is one of the most popular cockroaches, and one of the most loathed. It is very common in Japan and the sub-tropical southern US as an introduced peridomestic species. In some localities it may account for almost 80% of cockroaches captured near homes. This roach can be found in Florida, Louisiana, Mississippi, Texas, and other moist gulf coastal states, and along the southern Mississippi River. It prefers warmer climates and is not cold tolerant; however, it may be able to survive indoors in colder climates. It does well in moist conditions and seems to be most commonly concentrated in moist, concealed areas. It often lives around the perimeter of buildings and is a common species outdoors. It can feed on a wide array of organic (including decaying) matter; like most cockroaches, it is a scavenger. It tends to lose more moisture than its relatives and requires water every 2-3 days. It may come indoors to look for food, and even to live; in warm weather, however, it tends to move outdoors, entering buildings mainly in search of food.

More on the Periplaneta species:
- There are over 50 species in the genus Periplaneta, from P. aboriginea of Australia to P. vosseleri of Tanzania.
- None of the Periplaneta species are endemic to the Americas; despite the name, P. americana was introduced to the United States from Africa as early as 1625. They are now common in tropical climates because human activity has extended the insect's range of habitation, and global shipping has transported the insects all around the world.
- Periplaneta is nocturnal, negatively phototactic, and prefers dark, warm, moist habitats. It is acutely sensitive to vibrations and is one of the world's fastest running insects.
- Periplaneta americana is one of several cockroaches found near (peridomestic) or in (domiciliary) human habitations. Such insects are referred to as synanthropic (= with man).
- Cockroach population density is controlled naturally by several species of parasitic wasps, including Evania and Aprostocetus, that attack cockroach oothecae (egg cases).
- Neither Periplaneta nor any other type of cockroach is an actual biological vector for human disease, although they can serve as mechanical vectors simply by harboring infectious organisms such as Ascaris eggs, bacteria, or protozoan cysts on their body surfaces. The American cockroach is the host for the cystacanth stage of the rat intestinal acanthocephalan, Moniliformis moniliformis.
- Periplaneta is available at modest cost, alive or preserved, from biological supply companies. They are useful laboratory specimens.
- Original Caption Released with Image:

Figure 1. This approximate true-color image, taken by the Mars Exploration Rover Spirit, shows the rock outcrop dubbed "Clovis." The rock was discovered to be softer than other rocks studied so far at Gusev Crater after the rover easily ground a hole into it with its rock abrasion tool. This image was taken through the 750-, 530- and 480-nanometer filters of the rover's panoramic camera on sol 217 (August 13, 2004).

Elemental Trio Found in 'Clovis'

Figure 1 above shows that the interior of the rock dubbed "Clovis" contains higher concentrations of sulfur, bromine and chlorine than basaltic, or volcanic, rocks studied so far at Gusev Crater. The data were taken by the Mars Exploration Rover Spirit's alpha particle X-ray spectrometer after the rover dug into Clovis with its rock abrasion tool. The findings might indicate that this rock was chemically altered, and that fluids once flowed through the rock, depositing these elements.

- Image Credit / Graph Credit: NASA/JPL/Cornell/Max Planck Institute
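An "approximate true color" product like this is, at its core, three narrow-band filter images mapped onto the red, green, and blue channels of a display image (here, 750 nm to red, 530 nm to green, 480 nm to blue). The sketch below illustrates only that mapping; the function name and the simple per-channel percentile stretch are assumptions for illustration, and the real Pancam calibration pipeline is far more involved.

    import numpy as np

    def approximate_true_color(f750, f530, f480):
        # Map the 750, 530 and 480 nm filter images to R, G and B.
        # Inputs: 2-D float arrays of equal shape; output: (H, W, 3) uint8.
        channels = []
        for band in (f750, f530, f480):          # longest wavelength first -> red
            lo, hi = np.percentile(band, (1, 99))  # crude contrast stretch
            scaled = np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)
            channels.append((scaled * 255).astype(np.uint8))
        return np.dstack(channels)

    # Usage (hypothetical arrays): rgb = approximate_true_color(img750, img530, img480)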
Water: A Source of Life and Culture

- Subject(s): Language Arts & Literature, Social Studies & Geography, Arts & Music
- Region / Country: Africa
- Grade Level(s): 9–12
- Related Publication: Water in Africa
- Duration: 1–2 class periods

Students will use primary and secondary sources to research water as a feature of culture. Using text and photos from Peace Corps Volunteers serving in various African countries, students will uncover the role water plays in shaping daily life. Students will analyze the material and create symbols that summarize their findings. Symbols will be collected and arranged to make a contemporary work of art.

- Water in Africa website (or photocopies of the essays and photos)
- Map of Africa
- 5 graphic organizers:
  - "Where Do Artists Get Ideas?"
  - "Water in Daily Life: Our Culture"
  - "Water in Daily Life: Another Culture"
  - "Symbols of Water: Our Culture"
  - "Symbols of Water: Another Culture"
- "Guiding Questions"
- Evaluation of Art Product
- Chalkboard or overhead projector
- 12" x 24" black construction paper
- Tape or tacks to mount symbols
- X-acto knives (optional)
- Cutting boards (optional)

- Read and analyze primary and secondary sources and interpret how they relate to the essential questions.
- Record primary and secondary sources about water and create symbols to represent the information.
- Organize the symbols in a format that communicates an idea or concept.

- Photomontage: the process (and result) of making a composite photograph by cutting and joining a number of other photographs.

1. Introduce the activity by distributing the first graphic organizer, "Where Do Artists Get Ideas?" Give students 10–15 minutes to complete the organizer on their own, then incorporate students' ideas into an organizer for the entire class (using the chalkboard or an overhead projector). Emphasize the fact that artists are inspired through different avenues, and artists often must research a topic before creating a work of art. Art not only heightens our aesthetic sensibilities; it can also raise our awareness about specific issues.

2. Inform students that they will create a work of art based on how the natural resource water affects cultures.

3. Ask students to consider how water shapes their daily life, community, and culture. Distribute the graphic organizer "Water in Daily Life: Our Culture."

4. Use the "Guiding Questions" to help students respond to the topics.

5. Divide the class into groups of four students and distribute the chart "Symbols of Water: Our Culture." Have each student in the group choose two topics from the chart "Water in Daily Life: Our Culture," such as recreation or transportation. Instruct the groups to work together to ensure that all topics are covered and there is no duplication of topics.

6. Have each student expand upon the two topics chosen from the chart "Water in Daily Life: Our Culture," and record a more detailed description in the first column, titled "What is important," of the chart "Symbols of Water: Our Culture."

7. Introduce the essential question "How can symbols be used to communicate an idea or concept?" Ask students to explain the meaning of a symbol. (A symbol can be defined as a simplified expression of a complex idea or meaning.) After defining the term, ask students to give examples of symbols that they see on a daily basis. Prompt a discussion with questions such as: Why are symbols used? Can symbols be misunderstood? What could cause the misunderstanding of a symbol?

8. Have students choose one symbol from each of the organizers, representing water in our culture and water in the culture of an African community.

9. Distribute two sheets of black construction paper to each student, asking the students to draw the outline of each symbol on separate sheets. Symbols should be cut from the same size paper and should fill the entire piece of paper in order to maintain consistency in size. The neutral black paper is used to achieve high-contrast silhouettes and reduces misinterpretation of the symbols. (The use of color would require an additional lesson on color symbolism.)

10. After outlining the symbols, students cut out the shapes. To add more details to the symbols, students may cut out shapes within the silhouette to help define the symbol.

Peace Corps Volunteers may be invited to class to further discuss water as a feature of culture.

Have students mount their collection of symbols on the wall and begin to consider the meaning created by the collection. Alterations and additions can be made if needed. Collections should be mounted in a manner that will facilitate comparing two different cultures and regions.

Framework and Standards
- How can a photograph distort reality?
- How is art used to influence the thoughts and beliefs of people?
- Language Arts Standards
- NL-ENG.K-12.2: Understanding the human experience
- NL-ENG.K-12.3: Evaluation strategies
- NL-ENG.K-12.4: Communication skills
- NL-ENG.K-12.8: Developing research skills
- NL-ENG.K-12.9: Multicultural understanding
- Technology Standards
- NT.K-12.2: Technology communication tools
- NT.K-12.5: Technology research tools
- Geography Standards
- NSS-G.K-12.2: Places and regions
- NSS-G.K-12.5: Environment and society
- Visual Arts Standards
- NA-VA.9-12.1: Understanding and applying media, techniques and processes
- NA-VA.9-12.5: Reflecting upon and assessing the characteristics and merits of their work and the work of others
- NA-VA.9-12.6: Making connections between visual arts and other disciplines
If Thomas Paine were alive today, he could write a blog about the need to protect Internet independence that would reach across the world. The Internet is the most important new communications platform in American history. Through an open Internet, ordinary individuals can directly reach an audience of hundreds of millions of people around the world with their words, music, art, photography and literature--with just about any kind of creation imaginable.

The freedom for ordinary people to connect with one another has led to some remarkable innovation. Two Stanford Ph.D. students founded Google while working out of a dorm room and, in less than 10 years, grew it into the world's leading search engine. eBay's founder wrote some auction software for his personal Web site, and now millions of buyers and sellers use eBay to trade with one another every day. Before Yahoo became one of the most popular Web portals, it started as a hobby on a student computer workstation. These examples attest to how the Internet empowers ordinary people to change the world. And with a free Internet, the ability of the next innovation to change the world is ever present.

But recently, the freedom of ordinary people to connect with one another has come under attack. A few large corporations don't seem to value the Internet's empowerment of individuals and are asserting a desire to control the technology. The latest chapter in that attack on freedom is the fight against Net neutrality.

For most Americans, our options for broadband Internet come down to two choices--a phone company or a cable company. Instead of continuing our freedom to use those connections with whatever content, devices and services we want, some corporations want to control what we access over the Internet. This would include giving better connections to their favored content and charging money for that privilege.

What would the world look like if the Internet had been controlled in this way a few years ago? Imagine if the students who created Google or Yahoo had been charged a fee by a phone company for the privilege of letting their potential users have fast access. These small projects would not have turned into big ideas that revolutionized the World Wide Web. The proposed control of content goes directly against the level playing field created by Internet technology. The concept of freedom written about by Thomas Paine is being challenged by this threat to Net neutrality.

The fight to preserve Net neutrality is in full swing in Congress. On April 26, the House Commerce Committee passed up its chance to keep the Internet open by taking Net neutrality provisions out of its telecommunications bill. I serve on the House Judiciary Committee, which also has a vital role to play in keeping the Internet open through its antitrust jurisdiction. Right now, we are caught in a jurisdictional fight with the House leadership over whether my committee is allowed to weigh in on this issue of vital importance to the Internet's future.

My colleague Rick Boucher of Virginia and I have been working together on antitrust legislation to preserve Net neutrality. This legislation would impose antitrust penalties on broadband access providers that attempt to demand fees from Web content providers in exchange for priority treatment of their search, shopping and information retrieval services. The Internet has revolutionized the way Americans communicate with one another and do business.
It's just common sense to keep that revolution where it belongs--in the hands of ordinary individuals instead of a handful of big corporations. Americans' Internet freedom depends on it.

U.S. Rep. Zoe Lofgren represents Silicon Valley and the 16th district of California. She serves on the House Homeland Security Subcommittee on Economic Security, Infrastructure Protection and Cybersecurity, as well as on the House Judiciary Subcommittee on Courts, the Internet and Intellectual Property.
Return to Artiodactyla Body Length: 180-195 cm / 6-6.5 ft. Shoulder Height: 110-120 cm / 3.6-4 ft. Tail Length: 10-15 cm / 4-6 in. Weight: 70-110 kg / 154-242 lb. The slightly shaggy coat is primarily reddish brown or chestnut in colour, with the undersides, especially the lower surface of the neck, being lighter. The lower legs are black in colour. Unlike many deer species, young marsh deer are born without spots. There is a faint white eye ring, and the muzzle and lips are conspicuously black. The ears are large and lined with fluffy white hair. The upper surface of the tail is the same colour as the back, while the bushy underside is dark brown or black. Like other ungulates adapted to a boggy habitat, the dewclaws of the marsh deer are well developed and the widely-splayed hooves are very long, growing 7-8 cm / 2.8-3.2 inches in length. Males bear a pair of large, dark yellow antlers about 60 cm / 24 inches in length, with four or five tines each. The heavy antlers, each weighing 1.65-2.5 kg / 3.6-5.5 lb, are shed irregularly, and a grown set may be retained for up to 21 months. Ontogeny and Reproduction Gestation Period: 260 days. Young per Birth: 1 Weaning: About 5 months. Sexual Maturity: At 1 year. Immediately after parturition the female comes back into heat, and hence may be pregnant throughout her breeding years. Fawns may associate with their mother for over a year after birth. Ecology and Behavior Remaining hidden during the day, marsh deer emege at dusk to graze in flooded clearings, retiring again in the early morning. As its name and habitat preference infer, the marsh deer frequently enters water. However, it is primarily a wader, preferring areas where the water is less than 60 cm / 2 feet deep. Excessive flooding causes these deer to retire to higher ground, where they often come into contact with domestic cattle, which carry several diseases which are fatal to this species. The hindquarters are well developed - an excellent adaptation for jumping (the fastest way to move in waist-deep water). Males do not spar for breeding privileges, which renders the antlers as primarily ornamental objects. Population densities range from one deer per 3.8-42.0 square kilometers. Family group: Solitary, or in groups of less than 6 animals, generally and adult male, a few females, and their young. Diet: Grasses, reeds, aquatic plants. Main Predators: Jaguar, anaconda, domestic dogs. Floodplains and and moist forests in central South America. Range Map (Redrawn from Whitehead, 1993) The marsh deer is classified as vulnerable by the IUCN (1996). Sometimes called the swamp deer, care must be taken to differentiate between this species and the barasingha (Cervus duvaucelii). Fortunately for this species, the meat of the marsh deer is said to be unpalatable. Blastos (Greek) a bud or shoot; keras (Greek) the horn of an animal. Dikhe (Greek) in two ways; tome (Greek) cutting, sharp: a reference to the doubly forked antlers. Eisenberg, J. F., and K. H. Redford. Mammals of the Neotropics. Chicago: The University of Chicago Press, 1999. Geist, V. 1990. Pampas and swamp deer (Genera Ozotoceros and Blastocerus). In Grzimek's Encyclopedia of Mammals. Edited by S. P. Parker. New York: McGraw-Hill. Volume 5, pp. 218-219. Whitehead, K. G. 1993. The Whitehead Encyclopedia of Deer. Stillwater, MN: Voyageur Press, Inc. Wilson, D. E., and D. M. Reeder [editors]. 1993. Mammal Species of the World (Second Edition). Washington: Smithsonian Institution Press. 
Available online at http://nmnhwww.si.edu/msw/

© Brent Huffman, www.ultimateungulate.com
Julie Andrews / Martyn Green
TELL IT AGAIN

"Tell it again, tell it again,
Tell it just the same.
The very same people,
The very same story,
And call it the very same name."

'The nursery rhyme is the novel and light reading of the infant scholar. It occupies, with respect to the ABC, the position of the romance which relieves the mind from the cares of a riper age.' Halliwell-Phillips, The Nursery Rhymes of England (1886)

Children always want to hear their favorite songs and stories repeated, and "just the same way". If a word or an inflection is changed, the spell is broken. This record is for them. But "Tell It Again" is also for their parents, and their sisters, and their cousins, and their aunts; for all grown-ups who, introducing these magic rhymes to children, share their cultural heritage with a new generation with the added pleasure of reliving the happy hours of their own childhood. The boys and girls who entered the enchanted world of nursery rhyme with older members of the family will, later in life, identify this early experience with those adults who lovingly brought to them the cherished treasures of childhood.

Julie Laurence, producer of the record, is, despite her youth, an authority on children's records. She has lectured at schools and talked about them on television. She has helped prepare the catalogue of circulating children's records for the New York Public Library. For "Tell It Again" she did extensive research so that the rhymes would be as authentic as they are entertaining. For example, "Multiplication is vexation" was found in a manuscript dated 1570; "Thirty days hath September" in an old play, "The Return from Parnassus", printed in 1606. When there was a choice of versions, she chose the older; in the case of "Rock-a-by Baby" the original verse was not only prettier, but in it "father's a nobleman and mother's a queen" and baby does not fall from a tree-top. The only lyrics which Julie Laurence wrote herself are "Tell It Again" and the Prayer at the end.

Most of the things which interest children are in "Tell It Again". There are play songs and learning songs, riddles and nonsense rhymes, animals and the familiar objects of a child's life. "We all had fun making the record," says Julie Laurence. "We hope the whole family will have fun listening."

The music for "Tell It Again" was written for this recording. Through simple melodies and varied rhythms it seeks to recapture the wit and humor, charm and freshness which have made these rhymes the joy of children for generations. To please young ears, instruments were chosen for which children themselves show a marked preference.

The composer and percussion player is the musician known as MOONDOG. His blind, bearded figure, in blanket-robe and sandals, is a familiar sight in midtown New York. With the sidewalk as concert platform, he performs his own music on strange instruments, fascinating professional musicians as well as casual passersby. Born Louis Hardin in Kansas, Moondog studied music at a school for the blind. But his special feeling for and use of percussion instruments was learned from the Indians with whom he grew up in the West.

The flutist JULIUS BAKER was born in Cleveland and was graduated from the Curtis Institute of Music. He played four seasons with the Cleveland Orchestra, two with the Pittsburgh Symphony, nine with the CBS Symphony until it was disbanded, and then with the Chicago Symphony. He is now on the faculty of the Juilliard School of Music and is a founder-member of the Bach Aria Group.
Julie Andrews says she enjoyed making this record of her favorite nursery rhymes. It reminded her of home in England and her three younger brothers, to whom she would sing songs and read stories at bedtime. The recording also was for her a happy opportunity to know and work with one of the figures of the English stage whom she greatly admired, Martyn Green.

The young girl who became world famous overnight in "My Fair Lady" was born to the theatre. Her father and mother were a music hall act. Julie began studying singing at seven and at twelve gave hints of her destiny, singing an aria from "Mignon" at London's Hippodrome "in a true, sweet soprano". Her stage experience was confined to holiday roles in pantomimes until, in "Cinderella", she caught the eye of Vida Hope, director of "The Boy Friend", who promptly whisked her to New York. Since March 1956, when she opened on Broadway in "My Fair Lady" (the now legendary musical based on Shaw's "Pygmalion") audiences have swooned with delight over her portrayal of the Cockney flower girl who learned to become a great lady.

In "Tell It Again" Julie Andrews has slipped away from Spain, where "the rain stays mainly in the plain", to the Never-Never-Land where hurricanes never happen and where gardens grow silver bells and cats go to London to visit the Queen. Shedding Eliza Doolittle and her phonetic troubles, she left all tongue-twisters, such as "Betty Botter bought some butter", to Martyn Green and sings, with simple charm and water-pure diction, about Mary and her Lamb, Little Bo-Peep and her Sheep, and Miss Muffet and the Spider.

Martyn Green likes singing and acting for children. They make wonderful audiences, he says. During his many years of Gilbert and Sullivan in England and America, matinees were virtually children's performances, with houses packed with intent, responsive young people. In recording "Tell It Again", Martyn Green recalled that his first appearance on the stage was when he was eight, in a Christmas pantomime with his sister. He sang "Polly Put the Kettle On". To this day he not only remembers hundreds of nursery rhymes but also composes nonsense verses and songs for his friends' children.

The son of William Green, famous English tenor of the turn of the century, Martyn studied singing under Gustave Garcia, whose father had been the teacher of Jenny Lind. He joined the D'Oyly Carte Opera Company in 1923, later succeeding Sir Henry Lytton in the comic roles. He made many tours of the United States with the company before leaving it in 1951. He fought in the First World War, enlisting at fifteen; in the Second he served in the British equivalent of America's USO until he received a direct commission in the Royal Air Force. In recent years he has starred on Broadway and has been featured on TV and in the film "The Gilbert and Sullivan Story". His autobiography is entitled "Here's a How-de-do".

Julius Baker, Flute. Music and Percussion by Moondog. Produced by Young Record World for Angel Records. Continuity and direction by Julie Laurence.

Al Gifford from Washington DC, USA, told us the following story: "I talked to Julius Baker shortly after he recorded it and he told me that Julie Andrews had a great deal of trouble with the rhythmic complexity and the recording session was lengthy as a result. I guess she is a typical soprano, no sense of rhythm."
Counterfeit, falsified and substandard medicines pose a serious threat to human health, particularly in poorer countries with weak regulatory mechanisms. But the relationship between combating counterfeit medicines, addressing safety, quality and efficacy issues, and enforcing privately owned intellectual property rights has become controversial. There are concerns that a wider definition of "counterfeit" threatens the trade in generic medicines of assured quality on which many developing countries depend, and about the legitimacy of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT), the detention of generic drugs in transit in the European Union, and the negotiation of the Anti-Counterfeiting Trade Agreement (ACTA).

"Counterfeit" has a specific meaning in intellectual property, related to willful trademark violations. But in relation to medicines it is now sometimes used in a much broader sense, covering misrepresentation of identity or source, or even medicines that are simply "substandard". Some countries use the term "falsified" to describe medicines that misrepresent their identity or source but do not necessarily violate intellectual property rights. "Substandard" medicines are those that do not meet the quality standards specified for them; the term may also be defined specifically to cover products from authorized manufacturers that fail to meet the quality standards set for them.

Failure to reach agreement on the definitions of counterfeit, falsified and substandard medicines hampers the constructive policy debate and collaboration at the international level that are necessary to take effective action against the producers and distributors of these medicines.
A long, winding constellation twists its way to the west of bright Orion: Eridanus, the river. Astronomers have been keeping a close eye on its brightest star, and recently found that there's more to it than meets the eye. The star is actually two stars.

Achernar marks the southern end of Eridanus. In fact, "Achernar" is an Arabic name that means "end of the river." Even though it's 140 light-years away, Achernar shines brightly in Earth's night sky because it's hundreds of times brighter than the Sun. Unfortunately, though, it's so far south that few people in the United States can see it.

In 2006, astronomers using a large telescope in Chile discovered that Achernar has a companion star. The companion resembles Sirius, the brightest star in the night sky. Like Sirius, it's several times brighter than the Sun. But it's much fainter than Achernar's brighter star, so it's overpowered by the glare. That's why no one saw it until recently.

A few years before this discovery, astronomers studying Achernar had made another. They found that the star isn't round. Instead, it spins so fast that it's flattened itself out: it's more than half again as wide through the equator as through the poles. That makes it the flattest star yet discovered.

Look for Eridanus to the right of Rigel, the bright blue star at the lower right corner of Orion. Eridanus streams to the south and west of Rigel, with Achernar out of view just below the horizon.

Script by Ken Croswell, Copyright 2008

For more skywatching tips, astronomy news, and much more, read StarDate magazine.
Yet Another Source of Human Genetic Variability

The latest type of genetic variation to be acknowledged comes with a startling revelation: roughly half of the human genome is made up of "transposons," or jumping genes.

Transposons, or "jumping genes," make up roughly half of the human genome. Geneticists previously estimated that they replicate and insert themselves into new locations in roughly one in every 20 live births. New results suggest that every newborn is likely to have a new transposon somewhere in his or her genome.

...Transposons resemble e-mail spam: short repeated sequences that have no obvious function other than making more of themselves. The full name for the type of transposon that is most abundant in the human genome is retrotransposon. The "retro" term comes from how they replicate: first, the DNA is transcribed into RNA, and the RNA is reverse-transcribed into DNA again. This process normally happens only very early in development, when the cells that will become eggs and sperm have not yet turned down a separate path of differentiation.

...While working in Devine's lab as a graduate student, first author Rebecca Iskow, Ph.D., devised a technique for "amplifying" the stretches of individual genomes that border transposons, reading thousands of the junctions with advanced sequencing techniques, and then comparing them to the reference human genome. "The basic problem was that a new insertion can be anywhere within three billion base pairs – how do you find it compared to all the other ones?" Devine says.

Ninety-seven percent of the genomes the team surveyed had at least one rare insertion of the L1 variety of transposon that was present in only a single human in the study, and some genomes had several. Since the study surveyed 76 genomes, "rare" insertions could still be shared by large groups consisting of thousands of people. Rare insertions corresponded to the most recent transposons, which are less likely to have their jumping abilities impaired by other types of mutations.

Devine's team also showed that transposons frequently jump to new locations during the process of tumor formation. Surveying 20 lung tumors and comparing their genomes against the normal tissues they came from, the team found that six tumors had new transposon insertions that were not present in the normal adjacent tissues. "This indicates that transposons are jumping in tumors and are generating a new kind of genomic instability," Devine says. Transposons can inactivate tumor suppressor genes and can facilitate rearrangements that involve large stretches of chromosomes. Geneticists have already identified many transposons that interrupt genes and cause human diseases, including neurofibromatosis, hemophilia and breast cancer.

...The research was initiated at Emory University School of Medicine, where Devine was in the Department of Biochemistry. Iskow (now a postdoctoral fellow at Brigham & Women's Hospital in Boston) was a graduate student at Emory. The findings were published in the June 25, 2010 issue of Cell. Two other papers on human transposons appear in the same issue of Cell. — Scientific Computing

More at EurekAlert.

Wikipedia lists these forms of human genetic variation:
- single nucleotide polymorphisms
- copy number variation
- epigenetics
- genetic variability
- clines
- haplogroups
- variable number tandem repeats

There is a reason why clans and tribes spring up so easily, and can maintain their identities for such long periods of time.
Behaviour arises to a large extent from the genes. It is easier to understand, and therefore trust, someone who tends to act and react in similar ways to oneself. Such tribal societies tend to marry and keep their wealth within the tribe. Multicultural countries such as the US are attempting to accomplish on a national scale what has generally only been successful in large polyglot trading centers and imperial capitals in the past. The low-trust interfaces found within ethnic, cultural, and religious heterogeneity can lead to higher rates of crime and vandalism.

Leftist postmodern multiculturalists tend to take exactly the wrong approach in this situation, by accentuating the differences between cultures and religions and trying to mould the law around these differences. In fact, the opposite should be done: each culture and/or religion must be forced to adhere to the same set of laws if a multicultural society is to be successful. That is one reason that Kagan and Sotomayor were such abysmally bad choices, reflecting badly upon Obama's judgment. Both Kagan and Sotomayor are likely to pursue the leftist postmodern multiculturalist approach, which will result in deeper societal schisms, reduced trust, and increased violence.
If there's an outbreak of the flu in your city, you can count on the Centers for Disease Control for help, but if it's the "I Love You" bug, forget it. That's because viruses spread differently on the internet than they do in the real world, according to a paper in the 2 April PRL. While a biological disease can only spread from person to person, a digital virus can reach many computers simultaneously from a single server. This difference in transmission makes computer viruses all but impossible to eliminate, according to the authors, but the models they describe may lead to better strategies for protecting the electronic world.

Normally, the prevalence of a disease depends upon its spreading rate relative to the epidemic threshold of a population. If the disease can spread at a rate above that threshold, it will survive; if it cannot, it will die out. The flu spreads easily enough to keep a significant percentage of Americans constantly infected, but salmonella, transmitted solely through contaminated meat, exists only in isolated outbreaks. Computer viruses don't act this way; they can persist at nearly undetectable levels for very long periods of time without dying out entirely. This unusual behavior makes internet outbreaks difficult to predict and control.

Romualdo Pastor-Satorras of the Catalonian Polytechnic University in Spain and Alessandro Vespignani of the Abdus Salam International Center for Theoretical Physics in Italy suggest a new model that explains how computer viruses survive. In traditional epidemic models, each human has a small, fixed number of connections to others, according to Pastor-Satorras. But on the internet, desktop PCs have only one connection, while large government servers have many. So Pastor-Satorras and Vespignani varied the number of connections held by each computer to better mimic the virtual world, where PCs, local network hubs, and large routers have radically different levels of connectivity.

Their findings, which match trends in data collected by a computer virus tracking organization, were surprising: a virus can spread so easily inside the highly connected internet that there is no threshold below which it will die out. The model also makes an unsettling prediction: a long-forgotten virus hidden in a poorly connected PC can suddenly reemerge if it reaches a major server.

"These kinds of simulation models can tell us interesting things," says Mark Newman, an expert in complex systems at the Santa Fe Institute. But, Newman adds, they are only a rough approximation of how the internet really works. Still, Pastor-Satorras and Vespignani believe their model provides new insight into how computer viruses spread, and they are now working on immunization techniques that they hope will keep the digital world safe from virtual scourges.
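The heterogeneous-connectivity mechanism the paper describes is easy to reproduce in simulation. The following sketch is illustrative only (the graph model, infection and recovery rates, seed size, and run length are assumptions, not values from the paper): it runs a susceptible-infected-susceptible (SIS) process on a scale-free network, the class of network Pastor-Satorras and Vespignani used to mimic the internet's mix of PCs, hubs, and routers.

```python
import random
import networkx as nx

def sis_step(graph, infected, beta, delta):
    """One synchronous SIS update: each infected node infects susceptible
    neighbors with probability beta and recovers with probability delta."""
    new_infected = set()
    for node in infected:
        for neighbor in graph.neighbors(node):
            if neighbor not in infected and random.random() < beta:
                new_infected.add(neighbor)
        if random.random() >= delta:
            new_infected.add(node)  # this node stays infected
    return new_infected

# Illustrative parameters: a Barabasi-Albert graph gives the heavy-tailed
# connectivity distribution; beta is deliberately small.
g = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)
infected = set(random.sample(list(g.nodes), 50))
beta, delta = 0.02, 0.1
for _ in range(200):
    infected = sis_step(g, infected, beta, delta)
print(f"prevalence after 200 steps: {len(infected) / g.number_of_nodes():.4f}")
```

With the same per-contact infection rate, the heavy-tailed graph typically sustains a small but persistent infected fraction, while a graph in which every node has the same few connections usually lets the outbreak die out; that contrast is the vanishing epidemic threshold described above.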
History at Merit Academy Middle School

The History program at Merit offers a comprehensive study of humanity's past, helping students understand who they are as members of a global society. Historical knowledge is the background against which students can understand the struggles of the modern world and the problems of the future. Students who understand the struggles of history make well-rounded thinkers and citizens because they have a better appreciation of current international issues and a deeper respect for the diversity of different cultures.

Playing their parts with Shakespeare

The History curriculum is arranged in chronological order, helping students understand the historical forces that shaped the modern world. In 6th grade, students review the medieval period and begin the Renaissance, which carries into 7th grade. In 7th grade, students explore the Reformation, the Scientific Revolution, and European exploration of the world. In 8th grade, students study the Modern Age.

The coursework is rigorous. All courses require essays and major research papers. Students deliver speeches and answer questions about their research, in addition to creating maps and timelines. At Merit's Movie Nights, students watch films about the historical periods they are studying in class.

Students travel extensively around the world, gaining the opportunity to experience different cultures and to examine history from a different perspective. In Middle School, students travel to England, France, Italy, Germany, Mexico, Costa Rica, Japan, and Africa. In addition to traveling abroad, Meritans also explore the rich history of the United States.
Tracking online activity is a difficult business. People move more and more of their lives to the world wide web, and there is thus a wealth of information out there that people have exposed, whether intentionally or unintentionally. With this come all new methods of tracking down wrongdoing; every day, people use online media to communicate about or coordinate illegal activities. But the internet is a big place, and tracking down these cases (performing the necessary big data mining) is not so simple as typing a few keywords into Google or another search engine.

What is the Deep Web?

The Deep Web is a complex concept, covering essentially two categories of data. The first is basically any information that is not easy to obtain through standard searching: Twitter or Facebook posts, links buried many layers down in a dynamic page, or results that sit so far down the standard search results that typical users will never find them. The second category is the larger of the two and represents a vast repository of information that is not accessible to standard search engines. It comprises content found in websites, databases, and other sources. Often it is only accessible through a custom query directed at individual websites, which cannot be accomplished by a simple "surface web" search.
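To make that second category concrete, here is a minimal sketch of retrieving deep web content by posting a custom query to a single site's own search interface. Everything site-specific here (the endpoint URL, the form field names, and the JSON response shape) is hypothetical, standing in for whatever a particular source actually exposes:

```python
import requests

# Hypothetical site-specific search endpoint; real deep web sources each
# expose their own form fields, which is why generic crawlers miss them.
SEARCH_URL = "https://example-records.example.org/search"

def query_deep_source(term: str, max_results: int = 25) -> list[dict]:
    """POST a query to a site's internal search form and return results.

    Surface search engines index pages reachable by following static
    links; content behind a form like this is only generated in response
    to an explicit query, so it never appears in their indexes.
    """
    response = requests.post(
        SEARCH_URL,
        data={"q": term, "limit": max_results},  # assumed field names
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["results"]  # assumed response shape

if __name__ == "__main__":
    for record in query_deep_source("property transfers 2012"):
        print(record.get("title"), record.get("url"))
```

The design point is simply that each deep web source needs its own tailored query, which is why harvesting this content at scale is a genuine engineering problem rather than a search-box exercise.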
Almain

Etymology: From Anglo-Norman Allemaine, Almaine et al., Old French Alemaigne, from Late Latin Alamannia ("territory of the Alamanni tribe"), from Alemannī, Allemannī, of Germanic origin probably corresponding to all + men. Compare Alemannic. (Wiktionary)

Usage examples:

“Of the fourteen hundred men whom the metropolis sent forth on this occasion, eight hundred, armed in fine corselets, bore the long Moorish pike; two hundred were halberdiers wearing a different kind of armour, called Almain rivets; and the gunners, or musketeers, were equipped in shirts of mail, with morions or steel caps.”

“Almain, which is called the Holy Roman Empire, of which so many priests are princes! — they are done, and neither ban nor monition is issued against a race of sorcerers, who, from age to age, go on triumphing in their necromancy!’”

“Why, he drinks you, with facility, your Dane dead drunk; he sweats not to overthrow your Almain; he gives your Hollander a vomit, ere the next pottle can be filled.”

“Afterwards he combed his head with an Almain comb, which is the four fingers and the thumb.”

“In matter of musical instruments, he learned to play upon the lute, the virginals, the harp, the Almain flute with nine holes, the viol, and the sackbut.”

“Sir Eustace being a-horseback laid his spear in the rest and ran into the French battle, and then a knight of Almaine, called the lord Louis of Recombes, who bare a shield silver, five roses gules, and sir Eustace bare ermines, two branches of gules, -- when this Almain saw the lord Eustace come from his company, he rode against him and they met so rudely, that both knights fell to the earth.”

“The King of Navarre, however, did not greatly appreciate Tremayne, and a short time afterwards Throckmorton writes: 'The bearer, Mr Tremayne, came out of England with intent to see the wars in Almain, or elsewhere, thereby to be better able to serve the …”

“Why, he drinks you with facility your Dane dead drunk; he sweats not to overthrow your Almain; he gives your Hollander a vomit ere the next pottle can be filled.”

“In the sixteenth century Almain and Major make but a poor figure in contrast with Torquemada and Cajetan, the leading theorists of pontifical primacy.”

“Like Almain rutters with their horsemens staves”
Bullying is Preventable
By Sheriff Al Lamberti

As the 2012/2013 school year begins, it is important for parents to be aware of the Broward County School District's strict Anti-Bullying Policy. The policy prohibits bullying of, or by, any district student or employee. Since knowledge is power, it is vital that parents become familiar with the policy, which is sent home with every Broward County student the first week of school for parental and student review.

Bullying among children is aggressive behavior that is intentional and involves an imbalance of power or strength. A child who is being bullied has a hard time defending himself or herself. Usually, bullying is repeated over time. Bullying can take many forms, such as physical, verbal, emotional and cyber-bullying.

There are signs you can look for to know if your child is being bullied:
- torn clothes
- loss of appetite
- mood changes
- reluctance to go to school
- bruises or injuries that can't be explained.

If you suspect your child is being bullied, it is important to talk with your child, be supportive and gather information about the bullying. All suspected bullying should be reported to your child's school. You can also make an anonymous report by calling the district's emergency hotline at (754) 321.0911 or by visiting www.browardschools.com.

Face-to-face bullying isn't the only way children can be victimized. Many children and young adults are using their computers and cell phones to send or post texts or images intended to hurt or embarrass their classmates. This includes sending mean, vulgar or threatening messages and images, posting sensitive or private information about another person, or pretending to be someone else in order to make that person look bad. Children and teens can cyber-bully each other through e-mails, instant messaging, text messages, web pages, blogs or chat rooms.

If your child is a victim of such bullying:
- encourage your child not to respond to cyber-bullying
- do not erase the messages or pictures (save these as evidence)
- try to identify the individual doing the cyber-bullying
- consider filing a complaint with your service provider to block the sender
- contact your child's school
- contact law enforcement if the cyber-bullying involves acts such as threats of violence, extortion, obscene or harassing phone calls or text messages, stalking, hate crimes or child pornography.

I encourage all parents to talk to their children about what it means to hurt another person physically or verbally. The Broward Sheriff's Office is working with the Broward County School Board on an educational curriculum called ThinkB4UPost, to be launched during Red Ribbon Week in October. This is a direct message from school officials and law enforcement to children and young adults about the consequences of cyber-bullying.

For more information about ways to prevent and identify bullying, please visit www.sheriff.org/anti-bullying. With the help and guidance of law enforcement, parents, caregivers and teachers, I am confident we can put an end to bullying!
Distillation Column Using the Francis Formula for Flow through Weirs

Consider a distillation column operating at atmospheric pressure with 10 stages, a partial reboiler, and a total condenser. An equimolar mixture composed of ethanol and water is to be separated by this distillation column. The feed is a saturated liquid with a flow rate equal to 10 kmol/min, entering at stage 8, counting from the top. At normal operating conditions, the reflux and reboil ratios are set equal to 10 and 15, respectively. If you assume (1) a uniform tray spacing equal to 24 in and (2) an operating vapor-phase velocity equal to 80% of the flooding velocity, the column diameter can be calculated and is equal to 3.394 m. The active area is set equal to a fraction of the total cross-sectional area of the column. The momentum balance for each tray is neglected. The Francis weir formula is assumed and provides the additional equations used in the Demonstration to compute the molar holdup of the trays. The weir height is set equal to 5 cm, and the condenser and reboiler are assigned fixed volumes. A step in either the reflux or reboil ratio is applied at a specified time. For every stage, plots of the composition and temperature profiles as well as the molar holdup (all variables versus time in minutes) are displayed for user-set values of the percent step. The most drastic dynamic effects are observed in the lower part of the column, from the feed stage downwards.

The Francis formula for flow through weirs is given (in its standard form, written here in terms of the quantities defined below) by:

Li = (2/3) Cd √(2g) Lw ρi [ Mi/(ρi Aa) − hw ]^(3/2)

where Mi is the molar holdup at stage i in kmol, hw is the weir height in m, Li is the molar flow rate in kmol/min (after converting from seconds to minutes), g is the gravitational acceleration, Aa is the active area, Lw is the weir length (calculated from knowledge of the active area and from pure geometrical considerations), ρi is the molar liquid density at stage i, and Cd is a discharge coefficient (Cd ≈ 0.62 reproduces the familiar Francis constant of about 1.84 in SI units). The bracketed term is the height of clear liquid above the weir crest.

Expressions for pure-component molar liquid densities and vapor and liquid enthalpies were adapted from Aspen HYSYS. The mixture is assumed to obey modified Raoult's law, and activity coefficients are predicted using the Wilson model [1].

[1] G. M. Wilson, "Vapor-Liquid Equilibrium XI: A New Expression for the Excess Free Energy of Mixing," Journal of the American Chemical Society, 86(2), 1964, pp. 127–130.
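A minimal numerical sketch of the weir relation above follows. The discharge coefficient and the example tray properties are assumptions chosen for illustration, not the Demonstration's actual parameter set:

```python
import math

G = 9.81    # gravitational acceleration, m/s^2
C_D = 0.62  # assumed discharge coefficient; yields the classic 1.84 Francis constant

def weir_flow(holdup_kmol, rho_molar, a_active, weir_len, weir_height):
    """Molar liquid flow off a tray from its holdup via the Francis formula.

    holdup_kmol : molar holdup M_i on the tray [kmol]
    rho_molar   : molar liquid density [kmol/m^3]
    a_active    : active tray area A_a [m^2]
    weir_len    : weir length L_w [m]
    weir_height : weir height h_w [m]
    Returns the molar flow L_i in kmol/min.
    """
    clear_liquid = holdup_kmol / (rho_molar * a_active)  # liquid height on tray [m]
    h_over_weir = max(clear_liquid - weir_height, 0.0)   # no flow below the weir crest
    q_vol = (2.0 / 3.0) * C_D * math.sqrt(2.0 * G) * weir_len * h_over_weir ** 1.5  # m^3/s
    return q_vol * rho_molar * 60.0                      # convert kmol/s to kmol/min

# Illustrative tray: molar density ~17 kmol/m^3 (roughly ethanol-water),
# 6 m^2 active area, 2.4 m weir, 5 cm weir height.
print(weir_flow(holdup_kmol=6.0, rho_molar=17.0, a_active=6.0,
                weir_len=2.4, weir_height=0.05))  # ~3.7 kmol/min
```

Note the design choice this encodes: the tray holdup determines the outgoing liquid flow, which is exactly how the formula closes the stage-to-stage material balances when the momentum balance is neglected.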
Native mammals you're quite likely to see in the reserve include:
- eastern grey kangaroo (Macropus giganteus)
- swamp wallaby (Wallabia bicolor)
- common wombat (Vombatus ursinus)
- sugar glider (Petaurus breviceps)
- agile antechinus (Antechinus agilis)
- dusky antechinus (A. swainsonii)
- bush rat (Rattus fuscipes)
- swamp rat (Rattus lutreolus)
- long-nosed bandicoot
- common brushtail possum (Trichosurus vulpecula)
- greater glider (Petauroides volans)
- common ringtail possum (Pseudocheirus peregrinus).

The swamp wallaby lives in thick undergrowth in forest, woodland and heath in eastern and southern Australia. Areas of dense grass or ferns, sometimes in wet spots on hillsides of open eucalypt forest, provide daytime shelter from which it emerges to feed at night. It seems to live only where there is enough dense vegetation for shelter. Although solitary, swamp wallabies may gather in groups when feeding.

The greater glider lives in a variety of eucalypt-dominated habitats, ranging from low, open forests on the coast to tall forests in the ranges and low woodland west of the Dividing Range. In any particular area it feeds on only one or two species of eucalypt, but over its entire range the number of species it eats is much greater. Strictly nocturnal and essentially solitary, it rests during the day in a tree hollow, usually high in an old tree. When it emerges it moves by a series of glides, often along established routes, to a feeding area. It sometimes glides as far as 100 m and can execute a 90° turn mid-glide.

The yellow-bellied glider has several distinctive calls, the most characteristic of which is a short, high-pitched shriek that subsides into a throaty rattle. This territorial call can be heard from 400 m away. It's an active and very mobile climber, often running along the underside of a branch. During the day it rests in a den in a hollow branch, usually in a living, smooth-barked eucalypt. The home range of an individual is remarkably large and it may spend 90% of its waking hours foraging for food. Its numbers appear to be diminishing and its long-term survival depends on maintaining the integrity of large areas of forest, with adequate food resources and nest trees.

Sixteen species of bats have been recorded in the reserve. Most live in trees, of which the most common are:
- lesser long-eared bat (Nyctophilus geoffroyi)
- southern forest bat (Vespadelus regulus)
- little forest bat (V. vulturnus)
- chocolate wattled bat (Chalinolobus morio).

Eastern horseshoe bats (Rhinolophus megaphyllus) and threatened common bentwing-bats (Miniopterus schreibersii) are found in the sea caves in the reserve.

Threatened species in the reserve that rely on maintenance of the moist and old-growth forests include the long-nosed potoroo, southern brown bandicoot, yellow-bellied glider and tiger quoll. Other gliders, possums and many other species also depend on these forests. Protecting the forests from too many fires is vital. Preliminary surveys have indicated there might be very rare long-footed potoroos (Potorous longipes) in the wetter forests in the south-west corner of Nadgee Nature Reserve.

Several dingo (Canis familiaris) families are present in the reserve. Because of Nadgee's isolation, the dingos have been subject to minimal disturbance, and this provides a valuable opportunity for research into their biology and behaviour. This research is important as the dingo has interbred with domestic dogs in many areas and pure dingo populations have declined drastically.
Please store food and rubbish out of reach of wildlife. Human food does not meet the dietary requirements of native animals. If fed, they can become aggressive, dependent and ultimately sick.

The coastal heaths have 34 bird species, of which 27 are heath residents. Heathland birds you might see in Nadgee include:
- southern emu wren
- welcome swallow
- New Holland honeyeater
- tawny-crowned honeyeater.

The southern emu wren, tawny-crowned honeyeater and the threatened striated fieldwren and ground parrot are restricted to heath. Most of the heath species prefer heathland with a large proportion of shrubs and low trees. For these species, fire must be sufficiently infrequent to allow seed production in the food plants.

Birds of the forests include:
- wonga pigeon
- yellow-tailed black cockatoo
- crimson rosella
- fan-tailed cuckoo
- superb lyrebird
- grey fantail
- scarlet robin
- striated thornbill.

The total estimated eastern bristlebird population is fewer than 2,000 birds, and Nadgee Nature Reserve and Croajingolong National Park in Victoria support the entire remaining southern population of the species, estimated at 120 individuals. The near-coastal areas of heathland, scrubland and woodland/forest between Little Creek estuary and Cape Howe are significant breeding and foraging habitat for the eastern bristlebird. Conservation of habitat is a very important priority in the reserve because of the few locations where this species is still found.

Sea birds such as the short-tailed shearwater, crested tern and gannet use the rock platforms and beaches of the reserve. A large number of waterbirds are found in the estuaries, including:
- black cormorant
- pied cormorant
- white-faced heron
- black swan
- black duck.

Most of the park's beaches support a breeding pair of endangered hooded plovers. If these birds are frequently disturbed and kept away from their eggs and young, they may not be able to breed successfully. They are also affected by storm damage and by predatory animals like foxes and dingos. Hooded plovers make their nests in small sand scrapes above the high tide mark on beaches and sand dunes. Little terns nest in the same way. You can help protect these shorebirds by:
- watching them from a distance
- keeping away from sand dunes during the nesting season (December to the end of February)
- keeping below the high tide mark when walking on the beach.

Raptors recorded in the park include:
- wedge-tailed eagle
- white-bellied sea eagle
- whistling kite
- brown falcon.

Breeding, foraging and roosting needs of the threatened masked owl, powerful owl and sooty owl are centred on the riparian forests, including rainforest, tall wet eucalypt forest and low open forest with a dense heath understorey. Patches of old-growth forest throughout the reserve are particularly important because they provide large breeding and roosting hollows for these species.

Amphibians and reptiles

Reptiles seen in the reserve include:
- red-bellied black snakes
- garden skinks
- highland water skinks
- blue-tongued lizards
- tiger snakes
- common scaly-foots
- lace monitors.

Mainland tiger snakes are found in a broad range of habitats, from rainforest in the north to dry open forest and river floodplains in the south. They mainly eat frogs and are aggressive only when aroused. They're out during the day or at dusk when it's cool, but are nocturnal in warmer weather.
The lace monitor, or goanna, is a large tree-dwelling lizard which eats insects, reptiles and small mammals, but is also a predator of nesting birds. It often forages on the ground, but will take to a tree when disturbed. Like many arboreal lizards, it spirals upwards around a tree trunk when pursued, always keeping to the opposite side of the tree from its pursuer.

Frog species recorded in Nadgee include:
- brown froglet
- Bibron's toadlet
- Lesueur's frog
- green leaf tree frog
- eastern banjo frog.

Lesueur's frog (Litoria lesueuri) lives in eucalypt forest, woodland and associated grassy areas. It's common around rocky flowing creeks, but will breed in still ponds close to these creeks. Its call sounds like a soft purr. The eastern banjo frog (or eastern pobblebonk) is widely distributed and is found in woodland, rainforest, farmland, heathland and grassy areas. It's often noticeable after rain and is commonly associated with dams, ditches and other bodies of still water. Its call is a single banjo-like 'plonk' or 'bonk' repeated at intervals.

Information about reptiles and frogs in Nadgee is scarce, limited to incidental records collected during general fauna surveys. Sampling of frog and reptile habitats is needed to assess the status of threatened or regionally significant species such as the green and golden bell frog, giant burrowing frog and diamond python.

Little is known about invertebrates in Nadgee Nature Reserve. Several of the sea caves contain important invertebrate communities dependent on bat guano. At Merrica Beach Cave there is significant interaction between the guano community and the marine seawrack-dependent seashore community. These communities are easily damaged by trampling. Please keep away from sea caves such as Merrica Beach Cave.
Too much sun, for plants as well as people, can be harmful to long-term health. But to avoid the botanical equivalent of "lobster tans," plants have developed an intricate internal defense mechanism called photoprotection, which acts like sunscreen to ward off the sun's harmful rays.

"We knew that biomolecules called carotenoids participate in this process of photoprotection, but the question has been, 'How does this work?'" says Iris Visoly-Fisher, a postdoctoral research associate in the Biodesign Institute at ASU.

Carotenoids act as "wires" to carry away the extra sunlight energy in the form of unwanted electrons, somehow wicking the extra electrons across long distances away from locations where they could damage plant tissues and photosynthesis. During photoprotection, the consensus school of thought was that carotenoids, the source of the orange pigments in carrots and of vitamin A, become oxidized, or charged, losing an electron in the process.

Fisher and other ASU scientists have found a way to measure the electrical conductance within such an important biomolecule. In doing so, the team has produced a new discovery that shatters the prevailing view. The research team found that oxidation is not required for photoprotection; rather, carotenoids in a neutral, or uncharged, state can readily handle the electron overload from the sun. Their findings have been published in the prestigious journal Proceedings of the National Academy of Sciences (PNAS) under the title "Conductance of a Biomolecular Wire." The findings can be accessed at the Web site (www.pnas.org/cgi/content/abstract/0600593103v1).

"This is a remarkable experimental tour de force, and the result is quite unexpected," says Stuart Lindsay, who directs Fisher's work in the Biodesign Institute's Center for Single Molecule Biophysics. "Carotene was regarded as the poster child for this molecular mechanism, but it turns out that a much simpler mechanism works just fine."

The innovative work was a collaboration between several ASU departments and the Universidad Nacional de Río Cuarto in Argentina. In addition to Fisher, who was lead author on the paper, contributions from chemistry and biochemistry professors Devens Gust, Tom Moore and Ana Moore of ASU's Center for the Study of Early Events in Photosynthesis were instrumental to the project.

"The initial interest was to more fully understand how photosynthesis works," Fisher says. "Because our center focuses on electron transport in a single molecule, Devens Gust and Tom and Ana Moore suggested that we look at single-molecule transport in carotene."

To get at the heart of the problem, Fisher had to attempt an experiment that had never been done before for any biomolecule: to control the charge of the biomolecule while measuring its ability to carry a current. By holding a carotenoid under potential control, Fisher could control whether the biomolecule was in a neutral state or in the charged (oxidized) state, while simultaneously measuring the electron transport through a single molecule.

"The importance of this result is not only for understanding natural systems and photosynthesis, but also for the fact that, technically, for the first time, we could hold a molecule in a state pretty close to the natural conditions found in the plant," Fisher says.

To make the experimental measurements, Fisher needed to work out several technically challenging variations to a method first pioneered by electrical engineering professor Nongjian Tao of ASU's Fulton School of Engineering.
In concept, it's much like trying to measure the current in a wire found in an everyday household appliance; only, in this case, the "wiring" is a minuscule 2.8 nanometers long and less than a single nanometer thick. That's about 10,000 times smaller than the width of a human hair.

One of the greatest challenges of the experiment came down to the human endurance of taking thousands of measurements over an intense, six-month period. "We needed to keep this finicky molecule away from the light," Fisher says. "So sometimes the microscope room became like a cave, where I was sitting for hours and hours in the dark."

But for Fisher and the rest of the team, the main satisfaction was being able to break down a complex process to understand its simplest components and produce a groundbreaking discovery.

Source: Arizona State University
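In single-molecule conductance work of this kind, the molecular conductance is typically read off statistically, as the first peak of a histogram built from thousands of repeated current readings, since any one junction may be bridged by one molecule, several, or none. The sketch below illustrates only that analysis step, on synthetic data; the numbers are invented for demonstration and are not the paper's results:

```python
import numpy as np

G0 = 77.48e-6  # conductance quantum, siemens

def single_molecule_peak(currents_amps, bias_volts, bins=200):
    """Estimate single-molecule conductance from many junction readings.

    Each reading gives G = I/V; junctions bridged by one, two, ...
    molecules cluster at integer multiples of the fundamental value,
    so the first histogram peak estimates the single-molecule value.
    """
    conductances = np.asarray(currents_amps) / bias_volts
    counts, edges = np.histogram(conductances, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)]

# Synthetic demo: 5,000 "readings" clustered near 1x and 2x a
# hypothetical molecular conductance of 1e-4 G0.
rng = np.random.default_rng(0)
g_mol = 1e-4 * G0
samples = np.concatenate([
    rng.normal(g_mol, 0.05 * g_mol, 3500),       # single-molecule junctions
    rng.normal(2 * g_mol, 0.10 * g_mol, 1500),   # two molecules in parallel
])
bias = 0.1  # volts
print(single_molecule_peak(samples * bias, bias))  # ~1e-4 * G0
```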
With the fall of the regime, glimmers of hope are emerging after scientists recently entered the marshes for the first time in 20 years. CSIRO's Dr Rob Fitzpatrick was contracted by the consultancy company Development Alternatives Inc. (DAI), under the USAID Marshlands Restoration Program, to lead a ground-breaking soil science expedition to Iraq in February this year.

The trip was a revelation for Dr Fitzpatrick. He explored a landscape unknown to soil science. "We ended up finding a whole range of new minerals, new processes going on in the system … quite seriously, I'm overwhelmed by it all," Dr Fitzpatrick says. "New criteria, based on properties defined in this study, will need to be submitted to the international bodies. We have discovered new soil types."

Dr Fitzpatrick is ecstatic about what he has discovered and what can be done to help the local people. With his team, he set out to establish the limitations of soil and water resources for agricultural production in both drained and re-flooded areas. This information was to be used to develop a set of practical indicators to help the local people interpret signs of soil and water degradation, and to direct efforts to local projects with the best chance of success. "We have developed a system farmers can use to recognise the various soil types that can be used to grow crops and which ones to avoid," Dr Fitzpatrick says.

His team is now preparing a series of papers for major international journals and conferences to communicate the impact of their findings. "It was a magical experience," Dr Fitzpatrick says. "The potential to do a lot more really good science and work over there …"

Contact: Clare Peddie, CSIRO Land and Water
December 10, 2008

In a forest outside Colorado Springs, a group of researchers is investigating how forests affect weather and air quality. The team even suspects that the pine beetles eating their way through the West's forests are altering local weather patterns.

A forest near Colorado's Continental Divide shows signs of pine beetle infestation. Scientists suspect that the beetles can also alter local weather patterns. (Photo by Carlye Calvin.)

As Alex Guenther (ESSL/ACD/TIIMES) explains it, forests help control the atmosphere. "There's a big difference between the impacts of a living forest and a dead forest," he says. "With a dead forest, we may get different rainfall patterns."

Alex, a biogeochemist, is one of the principal investigators on BEACHON (pronounced "beacon"), the Bio-hydro-atmosphere Interactions of Energy, Aerosols, Carbon, H2O, Organics, and Nitrogen project. During this NCAR-led field project, researchers are exploring how trees and other vegetation influence rainfall, temperature, smog, and other aspects of the atmosphere. The goal is to learn more about cloud formation, climate change, and the cycling of gases and particles between the land and the atmosphere.

BEACHON involves several dozen NCAR researchers in collaboration with university, government, and international colleagues. The project, launched this summer, is scheduled to continue for four years across a region stretching from southern Wyoming to northern New Mexico. Although it's not the first field project to measure emissions from vegetation, BEACHON's extent and duration will allow researchers to study emissions during different seasons and measure yearly changes. It is especially unusual in that the researchers will look at the feedbacks of climate on vegetation, with an eye toward questions such as how drought affects the emissions of particles that control clouds, which in turn produce rain that impacts vegetation. A broad range of scientists will lend expertise, including biologists who study plant physiology, atmospheric chemists, hydrologists, and more. The team's arsenal for making observations includes ground-based instruments and sensors, computer models, and possibly an aircraft or helicopter.

Alex Guenther (ESSL/ACD) examines an instrument at the Manitou field site during BEACHON.

"BEACHON will give us a very comprehensive picture of a forest's impact on the atmosphere," Alex says. "But at this point, we don't know what the project will reveal. We may end up with more questions than answers."

In the field

One of the main field sites for BEACHON is located 28 miles northwest of Colorado Springs in the Manitou Experimental Forest, part of the U.S. Forest Service's Rocky Mountain Research Station. The research team's goal during the summer 2008 phase, which ran July 21–September 19, was to prepare the site for long-term observations, set up infrastructure, test instruments, and begin addressing science questions. The team constructed a 100-foot tower to measure emissions above the forest canopy. On the ground, scientists began sampling aerosols and measuring trace gases (ozone, carbon monoxide, nitrogen oxide, and sulfur dioxide). Over the course of two days, they launched balloon-borne radiosondes every three hours to measure temperature, humidity, and winds.

BEACHON researchers use towers that rise up to 100 feet above the forest canopy to measure the exchange of gases and particles between plants and the atmosphere.
"We're trying to set up a core suite of measurements that will be there for several years," explains site coordinator Jim Smith (ESSL/ACD). The researchers will use these measurements to view long-term trends in exchanges between the atmosphere and the forest, as well as to determine whether air from the Front Range influences the Manitou site.

As an aerosol scientist, Jim hopes that BEACHON will shed light on how emissions of hydrocarbons from forest vegetation (and possibly soil) affect local climate. These hydrocarbons play a major role in atmospheric chemistry. "To me, the most intriguing question is the climatic significance of the birth of these very small particles into the atmosphere," Jim says. "It's the kind of question that benefits from being in one place year in and year out."

The team is studying the ground at the Manitou site as well as the air. Hydrometeorologist Dave Gochis (RAL) is using sensors to measure soil moisture and temperature, as well as deploying rain gauges. His goal is to monitor the impact of precipitation and climate forcing on soil hydrology, the availability of water to plants, and the impact of moisture stress on plant emissions.

The site will be quieter over the winter, though researchers will continue making measurements remotely and will make occasional site visits. Jim and Tom Karl (ACD) plan to look at how snow affects emissions of trace gases. "We think snow could be acting as a trap for some hydrocarbons, so that when the snow melts, it represents an emissions source," Jim says.

Living, breathing vegetation

Plants interact with the atmosphere in a variety of ways. They take in and emit chemicals and gases, and absorb the Sun's heat. Tiny airborne particles from plants rise into clouds and seed them by providing surfaces for water droplets to adhere to and grow into raindrops. Plants emit chemicals known as volatile organic compounds that interact with human-caused pollution to form smog, which affects air quality and local temperatures. Carbon dioxide that is emitted in large quantities from dead forests joins carbon dioxide from human activities to influence the amount of the Sun's heat that reaches Earth.

The magnitude of the tree loss from the pine beetle epidemic is enough to disrupt local weather patterns and air quality. Preliminary computer modeling for BEACHON by Fei Chen (RAL) suggests that beetle kill can lead to temporary temperature increases of about 2–4°F, partly due to a decrease in the ability of trees to cool the atmosphere by transpiring water, similar to how people cool their bodies by sweating. Scientists also believe that beetles stimulate trees to release more particles and chemicals into the atmosphere as they try to fight off the insects. This worsens air quality, at least initially, by increasing ground-level ozone and particulate matter.

The mountain pine beetle (Dendroctonus ponderosae), which is eating its way through Colorado's forests, can indirectly affect local weather.

Land use changes

Wildfires, clearcutting, and new development also change the atmosphere through vegetation removal. The impacts in each case can vary significantly, depending on the remaining vegetation and changes to soil conditions. If cloud and precipitation patterns change for a decade or more, the land cover can in turn be altered. In arid places such as the Rockies, the exchange of gases and particles between Earth's surface and the atmosphere is especially critical, since even slight changes in precipitation can have significant impacts on the region.
“Here in the western United States, it is particularly important to understand these subtle impacts on precipitation,” Alex says. “Rain and snow may become even more scarce in the future as the climate changes, and the growing population wants ever more water.”
Light-emitting diodes use less energy and last longer than even compact fluorescent lights.

This article, written by Angela Spivey*, appeared first in Environmental Health Perspectives, the peer-reviewed, open-access journal of the National Institute of Environmental Health Sciences. The article is a verbatim version of the original and is not available for edits or additions by Encyclopedia of Earth editors or authors. Companion articles on the same topic that are editable may exist within the Encyclopedia of Earth.

The Mixed Blessing of Phosphor-Based White LEDs

Light-emitting diodes (LEDs), which use less energy and last longer than even compact fluorescent lights [1], are predicted to become the leading lighting technology in the United States as incandescent bulbs are phased out [2]. But Abraham Haim, director of the Israeli Center for Interdisciplinary Studies in Chronobiology, will not bring white LEDs and other so-called short-wavelength lights into his home because of his concerns about their health effects.

Why? Blue light such as that emitted by LEDs has been shown to suppress production of the hormone melatonin to a greater degree than other visible wavelengths emitted at the same intensity [3,4]. Melatonin suppression has been demonstrated to disrupt sleep/wake cycles and has been linked to increased risk of breast cancer [5]. "Modern lights . . . that use the wavelength in the range of 460 nm to 500 nm should be considered 'bad light,'" Haim says.

Although the light from LEDs appears white, it consists of one strong, sharp peak of short-wavelength blue light (in the range of 460 nm) and a second, broader emission in the longer-wavelength part of the spectrum. This is achieved by fitting a blue LED with a fluorescent phosphor layer that absorbs part of the blue light and re-emits light of a longer wavelength.

Concerns about white-appearing LEDs center on nighttime exposure to blue light. Daytime skylight is also a blue-enriched, white-appearing light, explains George Brainard, director of the Lighting Research Program at Thomas Jefferson University, but this blue-light exposure is desirable for cuing the human circadian rhythm, which synchronizes with cycles of light/dark, eating, and activity. "To my knowledge," Brainard says, "the white-appearing, blue-enriched LEDs do not pose the same sort of potential health consequences during the daytime."

You can control the light in your home, but not outdoors. That's why Haim and other coauthors of a new study call for regulations limiting the use of certain types of lights, including LEDs, for nighttime lighting outdoors [6]. The paper reviews research showing that nighttime exposure to white LEDs suppresses melatonin to a greater degree than other lighting types such as high- and low-pressure sodium, metal halide, and incandescent bulbs, and it includes measurements the researchers made of the wavelength and other spectral characteristics of several types of lights. But the bulk of the paper consists of recommendations for reducing light pollution. In addition to limiting nighttime use of the blue-spectrum light typical of metal halide lamps and white LEDs, those suggestions include using as little light as possible outdoors, aiming for a zero increase in total outdoor lighting (for example, not adding lighting without decreasing the amount or intensity of lighting somewhere else), and prohibiting lights from being aimed upward (above the horizontal) [7].

The health problems potentially caused by current LEDs may be avoidable.
The health problems potentially caused by current LEDs may be avoidable. “The LED industry would be wise to develop white-appearing LEDs that do not have the high emissions in the blue region of the visible spectrum for outdoor lighting applications,” Brainard says. “This would permit use of newer energy-efficient solid-state lighting while still avoiding the potential health consequences of circadian and neuroendocrine disruption from inappropriate exposure to light at night.”

“Any problems with the spectrum produced by white-light LEDs, to the extent they exist, are not inherent to LEDs themselves but rather the current implementation,” adds Jay Neitz, a professor of ophthalmology at the University of Washington Medical School. “Research is under way to improve both the spectral characteristics and efficiency of white-light LEDs. If there are problems with LEDs at the moment, they will probably be short-lived as better technologies come into use.”

Fabio Falchi, a scientist at Italy’s Light Pollution Science and Technology Institute and first author of the paper, agrees that LEDs likely will be “the future of lighting outdoors and indoors.” But he says there are ways to manage light pollution from LEDs by taking advantage of their ability to turn on and off quickly. He suggests keeping outdoor lights off or at a low level unless they are in use, which could be accomplished by using motion-detector lights that increase to full power only when a pedestrian or car approaches.

The authors also call for manufacturers to label lights with information about how much of a bulb’s light is emitted in the shorter, circadian-disrupting wavelengths, much as the food industry is required to include nutritional content on labels. “We have to pay attention to several aspects of light that we are not used to paying attention to,” Falchi says.

Not all researchers agree that white LEDs pose a danger to human health. Neitz points out that the studies showing melatonin suppression from LEDs did not simulate real-world exposures. For instance, some of the studies had participants put their heads into a dome that exposed their full visual field to a single wavelength of light.4 “It’s a long stretch to go from that to make an argument about light pollution, where you are talking about light levels that would be quite low, way below where they would make a significant contribution to our circadian rhythms,” Neitz says. Real-world exposures include multiple bandwidths of light from many different sources, and in the context of all those exposures, the increased sensitivity to short-wavelength light would not make a significant difference, Neitz says.

Brainard agrees that most studies to date have not replicated real-world exposures. But he says it’s too soon to draw firm conclusions because knowledge about the health effects of light is constantly evolving. For example, the first study to show melatonin suppression from any type of light used an intensity of 2,500 lux, but more recent studies have shown suppression with less than 1 lux.8 “Back in the 1990s no one ever imagined you could suppress melatonin in humans with less than one lux,” Brainard says. “The regulation of human neuroendocrine, circadian, and neurobehavioral responses by light, and the potential health consequences of that regulation, are going to be far more complicated and nuanced than anyone ever dreamed.”

References and Notes
1. Energy efficiency: the quest for white LEDs hits the home stretch. Science 325(5942):809. 2009. http://dx.doi.org/10.1126/science.325_809
2. The Energy Policy Act of 2005. Public Law No. 109-58. Available: http://tinyurl.com/3swp6dg [accessed 4 Oct 2011].
3. West KE, et al. Blue light from light-emitting diodes elicits a dose-dependent suppression of melatonin in humans. J Appl Physiol 110(3):619–626. 2011. http://dx.doi.org/10.1152/japplphysiol.01413.2009
4. Action spectrum for melatonin regulation in humans: evidence for a novel circadian photoreceptor. J Neurosci 21(16):6405–6412. 2001. http://www.ncbi.nlm.nih.gov/pubmed/11487664
5. Circadian regulation of molecular, dietary, and metabolic signaling mechanisms of human breast cancer growth by the nocturnal melatonin signal and the consequences of its disruption by light at night. J Pineal Res 51(3):259–269. 2011. http://dx.doi.org/10.1111/j.1600-079X.2011.00888.x
6. Limiting the impact of light pollution on human health, environment and stellar visibility. J Environ Manag 92(10):2714–2722. 2011. http://dx.doi.org/10.1016/j.jenvman.2011.06.029
7. Switch on the night: policies for smarter lighting. Environ Health Perspect 117(1):A28–A31. 2009. http://dx.doi.org/10.1289/ehp.117-a28
8. Ocular input for human melatonin regulation: relevance to breast cancer. Neuroendocrinol Lett 23(suppl 2):17–22. 2002.

*Angela Spivey writes from North Carolina about medicine, environmental health, and personal finance.

Citation: Spivey A. 2011. The Mixed Blessing of Phosphor-Based White LEDs. Environ Health Perspect 119:a472–a473. http://dx.doi.org/10.1289/ehp.119-a472
Online: 01 November 2011
<urn:uuid:b5d3d624-e47b-47b0-8341-f215660ec4cd>
CC-MAIN-2013-20
http://www.eoearth.org/article/Phosphor-Based_White_LEDs:_Mixed_Blessings?topic=49593
2013-06-18T22:37:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.920198
1,929
July 23, 2010

Pioneering observations with the National Science Foundation's giant Robert C. Byrd Green Bank Telescope (GBT) have given astronomers a new tool for mapping large cosmic structures. The new tool promises to provide valuable clues about the nature of the mysterious "dark energy" believed to constitute nearly three-fourths of the mass and energy of the Universe.

Dark energy is the label scientists have given to what is causing the Universe to expand at an accelerating rate. While the acceleration was discovered in 1998, its cause remains unknown. Physicists have advanced competing theories to explain the acceleration, and believe the best way to test those theories is to precisely measure large-scale cosmic structures.

Sound waves in the matter-energy soup of the extremely early Universe are thought to have left detectable imprints on the large-scale distribution of galaxies in the Universe. The researchers developed a way to measure such imprints by observing the radio emission of hydrogen gas. Their technique, called intensity mapping, when applied to greater areas of the Universe, could reveal how such large-scale structure has changed over the last few billion years, giving insight into which theory of dark energy is the most accurate.

"Our project mapped hydrogen gas to greater cosmic distances than ever before, and shows that the techniques we developed can be used to map huge volumes of the Universe in three dimensions and to test the competing theories of dark energy," said Tzu-Ching Chang, of the Academia Sinica in Taiwan and the University of Toronto.

To get their results, the researchers used the GBT to study a region of sky that previously had been surveyed in detail in visible light by the Keck II telescope in Hawaii. This optical survey used spectroscopy to map the locations of thousands of galaxies in three dimensions. With the GBT, instead of looking for hydrogen gas in these individual, distant galaxies -- a daunting challenge beyond the technical capabilities of current instruments -- the team used their intensity-mapping technique to accumulate the radio waves emitted by the hydrogen gas in large volumes of space including many galaxies.

"Since the early part of the 20th Century, astronomers have traced the expansion of the Universe by observing galaxies. Our new technique allows us to skip the galaxy-detection step and gather radio emissions from a thousand galaxies at a time, as well as all the dimly-glowing material between them," said Jeffrey Peterson, of Carnegie Mellon University.

The astronomers also developed new techniques that removed both man-made radio interference and radio emission caused by more-nearby astronomical sources, leaving only the extremely faint radio waves coming from the very distant hydrogen gas. The result was a map of part of the "cosmic web" that correlated neatly with the structure shown by the earlier optical study. The team first proposed their intensity-mapping technique in 2008, and their GBT observations were the first test of the idea.

"These observations detected more hydrogen gas than all the previously-detected hydrogen in the Universe, and at distances ten times farther than any radio wave-emitting hydrogen seen before," said Ue-Li Pen of the University of Toronto.
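As an illustration of the intensity-mapping idea, here is a toy numerical sketch (my own, not the team's actual pipeline). It generates a fake one-dimensional large-scale-structure field, builds a "galaxy survey" and a foreground-dominated "radio map" that both trace it, removes the smooth foreground, and cross-correlates the two. All amplitudes, noise levels, and the polynomial foreground model are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "sky": an underlying large-scale-structure field.
structure = np.convolve(rng.normal(size=2048), np.ones(64) / 64, mode="same")

# Optical survey: galaxy density traces the structure, with shot noise.
galaxies = structure + 0.1 * rng.normal(size=structure.size)

# Radio map: a faint 21-cm signal tracing the same structure, buried under
# a much brighter smooth foreground plus instrument noise.
positions = np.arange(structure.size)
foreground = 50.0 + 10.0 * positions / positions.size
radio = 0.01 * structure + foreground + 0.001 * rng.normal(size=structure.size)

# Remove the smooth foreground with a low-order polynomial fit.
radio_clean = radio - np.polyval(np.polyfit(positions, radio, deg=2), positions)

# Cross-correlation with the galaxy map recovers the shared signal even
# though the individual "sources" were never detected in the radio data.
r = np.corrcoef(radio_clean, galaxies)[0, 1]
print(f"radio x optical correlation coefficient: {r:.2f}")
```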
"This is a demonstration of an important technique that has great promise for future studies of the evolution of large-scale structure in the Universe," said National Radio Astronomy Observatory Chief Scientist Chris Carilli, who was not part of the research team. In addition to Chang, Peterson, and Pen, the research team included Kevin Bandura of Carnegie Mellon University. The scientists reported their work in the July 22 issue of the scientific journal Nature. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. Other social bookmarking and sharing tools: - Chris L. Carilli. Astrophysics: Broad-brush cosmos. Nature, DOI: 10.1038/466444a Note: If no author is given, the source is cited instead.
<urn:uuid:718f4aa2-3abe-45b7-9b6a-c359baa3f9d9>
CC-MAIN-2013-20
http://www.sciencedaily.com/releases/2010/07/100721132627.htm
2013-05-24T01:59:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938964
801
How Access to Clean Water Prevents Conflict

Abstract: The number of people with improved access to safe drinking water is growing. According to UNICEF, since 1990, an additional 1.8 billion people are using an improved source of drinking water. Yet many people are living with water scarcity, particularly in Africa. The solutions highlighted here are just a few of the possible responses. Safe drinking water and sanitation in schools may serve as a way to keep girls in school, increasing their economic opportunities, and eventually, the health of their own children. Innovative ways to finance water entrepreneurs could open up an avenue for new investments and improve sustainability. Strengthening regional institutions, promoting scientific dialogue, and harnessing social capital can help to facilitate cooperation and reconciliation. Appropriate investments in water use, sanitation, and conservation are essential to reducing vulnerability among the poor, to ensuring sustainable development, and to promoting security in a period of climate change.

To learn more: http://www.thesolutionsjournal.com/node/1037#comment-form
<urn:uuid:639354c0-7aee-4597-bbf8-05204a3d69b9>
CC-MAIN-2013-20
http://chaire-eppp.org/eau_et_conflits
2013-05-23T19:13:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703728865/warc/CC-MAIN-20130516112848-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940908
214
NFTA 86-03, May 1986

A quick guide to useful nitrogen fixing trees from around the world

A class of plants that helped develop soil on glaciated sites in the past has a future in agroforestry and land reclamation projects of today and tomorrow. These plants are known as actinorhizal, as they are nodulated by the nitrogen-fixing actinomycete Frankia. These predominantly temperate trees are especially useful in areas where the mostly tropical woody legumes cannot live or thrive.

Actinorhizal plants have been used historically to increase fertility in agricultural systems. Lack of knowledge about the group's ecology prevents more widespread use, but the trees are currently used in the following four ways:

1. As a primary crop for timber and pulpwood (Alnus, Casuarina spp.)
2. As an interplanted "nurse" plant for other, more valuable species (Elaeagnus spp.)
3. As a component of a multipurpose agroforestry plantation (Casuarina spp.)
4. As a plantation for soil reclamation (Elaeagnus, Shepherdia, Purshia spp.)

Environmental protection and land reclamation are benefits provided by several actinorhizal species. Elaeagnus, Purshia and Shepherdia spp. are widely planted in North America to prevent soil erosion (Fessenden, 1979). Hippophae rhamnoides is used for this same purpose in western and northern Europe. Casuarina species are planted in shelterbelts along deserts and coastlines in western Africa, India, China and other Asian countries to stop encroachment of sand dunes, diminish winds and decrease downwind deposition of salt spray (Andeke-Lengui & Dommergues, 1981; Turnbull, 1981). A shelterbelt built mainly with casuarina in southern China forms a "green wall" ranging from 0.5 to 5 km wide for 3000 km along the South China Sea.

The greatest use of any one actinorhizal genus is probably the production of Casuarina for firewood in the tropics. Large plantations are maintained on 5 to 15 year rotations (Kondas, 1981). Agricultural crops often are interplanted with casuarina during the first few years of the rotation. Harvested trees can be sold as firewood or converted into charcoal.

Planting Alnus, or alders, for lumber, pulp or fuelwood production is the second most common use of actinorhizal trees. Wood harvested from native stands is sold as fuelwood (Smith, 1978) or pulped and combined with softwood pulp for paper production (Hrutfiord, 1978). Mean annual wood yields for 8 to 10-year-old red alder, Alnus rubra, were nine oven-dry tons/ha/yr in British Columbia, and maximum production was 28 oven-dry t/ha/yr (Smith, 1978). Natural regeneration of alnus stands is excellent. Alnus acuminata and A. nepalensis are tropical highland species.

Other actinorhizal species are used as nurse crops for other trees. In the United States, Elaeagnus umbellata has been shown to greatly increase the productivity and quality of Juglans nigra, a hardwood species used extensively in furniture production (Schlesinger and Williams, 1984). Elaeagnus apparently increased soil fertility, moderated temperatures and/or provided beneficial competition, which led to self-pruning of the tree crop. Alder has been shown to improve the growth of Populus, Pinus, and Pseudotsuga in mixed stands (Silvester, 1977). Various casuarina species are planted from the tropics to temperate zones as windbreaks, to control soil erosion, as ornamentals, for particle board, and as a fallow-improvement crop in Papua New Guinea.
Alder foliage, twigs and sawdust have been successfully used as a cattle feed supplement (DeBell and Harrington, 1979).

Actinorhizal plants can contribute as much nitrogen per hectare as the most productive legumes (Torrey, 1978). A Senegal study estimated that casuarina fixed 288 kg N/ha/yr (Gauthier et al., 1984). Alders accumulate between 40 and 200 kg N/ha/yr, with maximum accumulations of up to 320 kg/ha/yr (Silvester, 1977).

Frankia is present in adequate amounts in most ecosystems for natural nodulation to occur. Inoculation might be necessary in disturbed soil, arid environments or sites where actinorhizal plants are not native. Pure cultures for many of the most important actinorhizal species are now available.

38 Winrock Drive, Morrilton, Arkansas 72110-9370, USA
<urn:uuid:867c9283-30b0-465f-a425-2410595bf2ae>
CC-MAIN-2013-20
http://www.winrock.org/fnrm/factnet/factpub/FACTSH/Actinorhizal.html
2013-05-19T18:51:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.89102
1,043
This section discusses some of the caveats of climate scenario development and focuses on the need for consistency in representing different physical aspects of the climate system. It does not discuss the many possible inconsistencies with respect to socio-economic issues in scenario development. Chapter 3 of the TAR WG II (Carter and La Rovere, 2001) and Chapter 2 of the TAR WG III (Morita and Robinson, 2001) provide a detailed treatment of these issues. Three common inconsistencies in applying climate scenarios are discussed, concerning the representation of ambient versus equivalent CO2 concentrations, biosphere-ocean-atmosphere interactions, and time lags between sea level rise and temperature change.

The climate system consists of several components that interact with and influence each other at many different temporal and spatial scales (see Chapter 7). This complexity adds further constraints to the development of climate scenarios, though their relevance is strongly dependent on the objectives and scope of the studies that require scenarios. Most climate scenarios are based on readily available climate variables (e.g., from AOGCMs) and, where these are used in impact assessments, studies are often restricted to an analysis of the effects of changes in climate alone. However, other related environmental aspects may also change, and these are often neglected or inadequately represented, thus potentially reducing the comprehensiveness of the impact assessment. Furthermore, some feedback processes that are seldom considered in AOGCM simulations may modify regional changes in climate (e.g., the effect of climate-induced shifts in vegetation on albedo and surface roughness).

Concurrent changes in atmospheric concentrations of gases such as CO2, sulphur dioxide (SO2) and ozone (O3) can have important effects on biological systems. Studies of the response of biotic systems require climate scenarios that include consistent information on future levels of these species. For example, most published AOGCM simulations have used CO2-equivalent concentrations to represent the combined effect of the various gases. Typically, only an annual 1% increase in CO2-equivalent concentrations, which approximates changes in radiative forcing of the IS92a emission scenario (Leggett et al., 1992), has been used. However, between 10 and 40% of this increase results from non-CO2 greenhouse gases (Alcamo et al., 1995). The assumption that CO2 concentrations equal CO2-equivalent concentrations (e.g., Schimel et al., 1997; Walker et al., 1999) has led to an exaggeration of direct CO2 effects. If impacts are to be assessed more consistently, proper CO2 concentration levels and CO2-equivalent climate forcing must be used. Many recent impact assessments that recognise these important requirements (e.g., Leemans et al., 1998; Prinn et al., 1999; Downing et al., 2000) make use of tools such as scenario generators that explicitly treat atmospheric trace gas concentrations. Moreover, some recent AOGCM simulations now discriminate between the individual forcings of different greenhouse gases (see Chapter 9, Table 9.1).

The biosphere is an important control in defining changes in greenhouse gas concentrations. Its surface characteristics, such as albedo and surface roughness, further influence climate patterns. Biospheric processes, such as CO2 sequestration and release, evapotranspiration and land-cover change, are in turn affected by climate.
For example, warming is expected to result in a poleward expansion of forests (IPCC, 1996b). This would increase biospheric carbon storage, which lowers future CO2 concentrations, and change the surface albedo, which would directly affect climate. A detailed discussion of the role of the biosphere on climate can be found elsewhere (Chapters 3 and 7), but there is a clear need for an improved treatment of biospheric responses in scenarios that are designed for regional impact assessment. Some integrated assessment models, which include simplifications of many key biospheric responses, are beginning to provide consistent information of this kind (e.g., Alcamo et al., 1996, 1998; Harvey et al., 1997; Xiao et al., 1997; Goudriaan et al., 1999).

Another important input to impact assessments is sea level rise. AOGCMs usually calculate the thermal expansion of the oceans directly, but this is only one component of sea level rise (see Chapter 11). Complete calculations of sea level rise, including changes in the mass balance of ice sheets and glaciers, can be made with simpler models (e.g., Raper et al., 1996), and the transient dynamics of sea level rise should be explicitly calculated because the responses are delayed (Warrick et al., 1996). However, the current decoupling of important dynamic processes in most simple models could generate undesirable inaccuracies in the resulting scenarios.

Climate scenario generators can comprehensively address some of these inconsistencies. Full consistency, however, can only be attained through the use of fully coupled global models (earth system models) that systematically account for all major processes and their interactions, but these are still under development.
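A small worked example may help quantify the CO2-equivalent point made above. The sketch below uses the widely cited simplified forcing expression, delta-F = 5.35 ln(C/C0) W m-2 (Myhre et al., 1998), compounds the idealized 1% per year rise in CO2-equivalent concentration, and then backs out the actual CO2 concentration if part of the added forcing came from non-CO2 gases. The 30% non-CO2 share (within the 10-40% range cited above), the 354 ppm starting value, and the 70-year horizon are illustrative assumptions, not scenario values.

```python
import math

C0 = 354.0       # starting concentration (ppm); illustrative value
years = 70
growth = 1.01    # the idealized 1% per year CO2-equivalent increase

# CO2-equivalent concentration after compounding the 1%/yr increase.
c_equiv = C0 * growth ** years

# Total forcing relative to C0, simplified expression (Myhre et al., 1998).
dF_total = 5.35 * math.log(c_equiv / C0)

# Assume 30% of the added forcing comes from non-CO2 greenhouse gases and
# invert the formula to find the CO2 concentration consistent with the rest.
c_co2 = C0 * math.exp(0.7 * dF_total / 5.35)

print(f"CO2-equivalent: {c_equiv:.0f} ppm; consistent actual CO2: {c_co2:.0f} ppm")
# Treating the full CO2-equivalent value as real CO2 would overstate the
# concentration that direct CO2 effects (e.g. plant responses) should see.
```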
<urn:uuid:715fca84-59ac-4ebd-bc29-86554a79b04b>
CC-MAIN-2013-20
http://grida.no/climate/ipcc_tar/wg1/498.htm
2013-05-23T19:20:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703728865/warc/CC-MAIN-20130516112848-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.906702
1,044
With the recent riots in London, Manchester, Birmingham, and other locations, it is an apt time to examine crowd behaviour and the ‘mob mentality.’ So, what causes mob mentality?

There are a number of explanations for mob mentality within social psychology. These include:

- Deindividuation – when people are part of a group, they experience a loss of self-awareness.
- Identity – when people are part of a group, they can lose their sense of individual identity.
- Emotions – being part of a group can lead to heightened emotional states, be that excitement, anger, hostility, etc.
- Acceptability – behaviours that are usually seen as unacceptable suddenly become acceptable when others within a group are seen to be carrying them out.
- Anonymity – people feel anonymous within a large group, which reduces their sense of responsibility and accountability.
- Diffusion of Responsibility – being part of a group creates the perception that violent or unacceptable behaviour is not a personal responsibility but a group responsibility.

The larger the group or crowd, the more likely it is that there will be deindividuation and diffusion of responsibility. It is generally believed that everyone is capable of this mob mentality. However, research does suggest that some personalities or circumstances make it more likely. For example:

- Adolescents who lack a stable family can gain a sense of identity when part of a group.
- People are more likely to take part in looting during times of hardship.
- Particularly emotional events, such as football matches, can heighten the effect.
<urn:uuid:a53dcaef-f68b-4aa5-9bd2-13ea4a268fce>
CC-MAIN-2013-20
http://healthpsychologyconsultancy.wordpress.com/2011/08/09/the-psychology-of-the-mob-mentality/
2013-05-24T22:28:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954435
309
American Heritage® Dictionary of the English Language, Fourth Edition
- v. To bring or draw out (something latent); educe.
- v. To arrive at (a truth, for example) by logic.
- v. To call forth, draw out, or provoke (a reaction, for example). See Synonyms at evoke.

Century Dictionary and Cyclopedia
- To draw out; bring forth or to light; evolve; gain: as, to elicit sparks by collision; to elicit truth by discussion; to elicit approval.
- Immediately directed to an end: opposed to imperate.
- Performed by the will itself without the aid of any other faculty: as, volition, nolition, choice, consent, and the like are elicit acts: opposed to imperate.
- v. To evoke, educe (emotions, feelings, responses, etc.); to generate, obtain, or provoke as a response or answer.
- v. To draw out, bring out, bring forth (something latent); to obtain information from someone or something.
- v. To use logic to arrive at truth; to derive by reason; deduce; construe.

GNU Webster's 1913
- adj. obsolete Elicited; drawn out; made real; open; evident.
- v. To draw out or entice forth; to bring to light; to bring out against the will; to deduce by reason or argument.
- v. deduce (a principle) or construe (a meaning)
- v. derive by reason
- v. call forth (emotions, feelings, and responses)

Etymology
- Latin elicitus from elicere, to draw forth (Wiktionary)
- Latin ēlicere, ēlicit-: ē-, ex-, ex- + lacere, to entice. (American Heritage® Dictionary of the English Language, Fourth Edition)

Example sentences:
- “Of course, this answer is the one I suspect that Dawkins wishes to elicit from the reader of "Meet my cousin, the chimpanzee".”
- “A similar problem unfolds in stanza five as the speaker seeks to elicit from the urn a transcendental message both aesthetic and ontological that will bring the poem to thematic and formal closure and that will confirm the urn's (and the poem's) status as a revelatory Romantic symbol.”
- “The first step, however, was to elicit from the Germans a concrete statement of aims.”
- “Meanwhile Dr Malan made an attempt to elicit from the Germans a more definite indication of their intentions towards South Africa.”
- “And this is pretty much the standard crest-and-trough reaction I elicit from the Chinese.”
- “i wonder how many “kill it” posts this will elicit from the local contingent of bitter, pizza-faced, boys.”
- “But country-of-origin labels elicit an even more perplexing question.”
- “Say Chinnery to any art buff under 40 and the name will elicit no response, and now that Tate Britain has all but abandoned its responsibility to keep successive generations aware of historic British painting, it is probable that Chinnery will be for ever lost to common knowledge, obliterated, with many other once known artists, by the enforced fashion there for contemporary art.”
- “I also loved that you used 'elicit' and 'illicit' within the same piece, and close together.”
<urn:uuid:c0890e82-3a14-4ed1-b9b8-5302a43e8346>
CC-MAIN-2013-20
http://www.wordnik.com/words/elicit
2013-05-26T03:46:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.911082
803
The number of people wanting to learn more about self harm awareness is growing every day. Because humans are naturally inquisitive, one main reason people join self harm training groups is to understand why people cut themselves.

Self injury is an intricate matter and can be a real problem for the people involved. Statistics and reports show that people who need self harm help have a 50% greater chance of committing suicide than those who do not, and this is a growing worry. Proper treatment at the right time is important for fast recovery and coping. Self harm should not be mistaken for mere attention-seeking, because its causes span many backgrounds and experiences. Disclosure by people who self harm can also vary: someone may share it with a friend and not with a family member, or vice versa. You may know someone who self harms, perhaps even close to home, and hence resource materials for teachers have been created to provide training in self harm help.

How Self Harm Training Can Help

The thirst for knowledge is natural given the shortage of proper self harm help and training provision. Self injury is a complicated issue on its own, as there is an emotional pull both on the young person self harming and on those supporting them, for instance teachers, parents and friends. Training is vital because it can encourage people to build their self harm awareness and deepen their understanding of the issues faced by those with self harming behaviour. Training will further explore the facts of why people cut themselves and the impact supporters can have on the health, well-being, and recovery of the person self harming. Through proper preparation, skills can be developed to address self injury and support faster recovery.

Perhaps the most significant part of professional training against self harm is gaining know-how: comprehension of the symptoms of such behaviours; confidence to differentiate a genuine suicidal attempt from self harm; and awareness of instances where immediate action is needed. Training courses have been established to help supporters empathize with individuals who deliberately injure themselves, and to put them in a better position to provide relevant and appropriate self harm help.

In a world full of judgmental attitudes, it is always touching and encouraging to know that there are those – even though not related to us by blood – who really care and will always lend us a hand whenever and wherever we need them to.
<urn:uuid:b9adc04c-efdb-4ad9-9b2c-56a07558b2f2>
CC-MAIN-2013-20
http://stepup-international.co.uk/2012/03/16/importance-of-self-harm-training-for-professionals/
2013-05-22T21:23:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967206
483
1. According to most historians, the game of tennis was invented by French monks in the 11th or 12th century. They hit a ball with their hands over a rope strung across a courtyard or against the walls of the monasteries.
2. Next, players began using a glove with webbing between the fingers or a solid paddle.
3. Eventually players began using a piece of webbing attached to a handle, which became the modern day racquet.
4. The first tennis balls were made of wool or hair wrapped up in leather.
5. Tennis is a game played indoors or outdoors by two players when playing singles or four players when playing doubles.
6. Tennis is played on a level court usually made of grass, clay, concrete, or materials made for indoor use such as wood or synthetics.
7. The length of a standard tennis court is 78 feet. The width of a court differs with the type of match played there. For a singles match, the width of the tennis court is 27 feet, and for a doubles match it is 36 feet.
8. The height of the net in the middle of the tennis court is approximately three feet.
9. A player must score four points to win a game. Six games need to be won to win a set and two sets to win a match.
10. The different shots used in tennis are: forehand, backhand, serve, volley, lob and drop shot.
<urn:uuid:890e7a37-816f-44e2-9bc1-4adc88999173>
CC-MAIN-2013-20
http://www.kidskonnect.com/subjectindex/30-categories/sports/258-tennis.html
2013-05-24T09:03:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.978812
297
One of the most highly honored French figures of his day, composer Camille Saint-Saëns reputedly declared--perhaps sardonically--that he was not a homosexual but a pederast.

The question of the homosexuality of Franz Schubert, among the greatest composers of classical music, is a subject of continuing debate.

The most important female composer in early twentieth-century English music, Dame Ethel Smyth enjoyed a class privilege that allowed her to be an unapologetic lesbian.

Best known to television viewers for his role as Major Charles Emerson Winchester III on the series M*A*S*H, David Ogden Stiers has had a long and successful career.

It is not surprising, since the Bible insists that David be looked at and admired, that he should emerge in Western art as the incarnation of male physical attractiveness, especially as rendered by Michelangelo.

American composer Conrad Susa is best known for his operas and choral music, some of which are informed by his experience as a gay man.

Revered as the father of Polish contemporary classical music, Karol Szymanowski unequivocally expresses homoeroticism in his music.

One of the greatest composers in the history of music, Pyotr Ilich Tchaikovsky inspired a cult of gay admirers who detected in his work themes of forbidden love.

Critic and composer Virgil Thomson was a pioneer in creating a specifically American form of classical music that is at once "serious" yet whimsically sardonic.

One of the most prominent American conductors of his generation, Michael Tilson Thomas may be the first gay conductor to achieve such eminence without masking his sexuality.

English composer Sir Michael Tippett became one of the most respected figures in British classical music despite his pacifism, unabashed homosexuality, and incorporation of homosexual themes in his operas.

Concerned with the music, theoretical writings, political ideas, and aesthetics of the German composer Richard Wagner, Wagnerism had a profound influence on late nineteenth-century European culture, including the expression of same-sex desire.

Siegfried Wagner, the son of composer Richard Wagner, was himself a prolific composer and conductor; his bisexuality was the source of both scandal and also of elaborate attempts to erase it from histories of the Wagner family.

Composers and lyricists Robert Wright and George "Chet" Forrest, partners in life and art, specialized in adapting themes from classical music into engaging tunes for movie scores and stage musicals.
<urn:uuid:251d67d3-b58d-4059-8a4d-2489fe94e6b2>
CC-MAIN-2013-20
http://www.glbtq.com/topic/arts_49_3.html
2013-05-22T08:32:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.974159
519
Benjamin Franklin was inspired to create his own version of the armonica after listening to a concert of Handel's Water Music which was played on tuned wine glasses. Benjamin Franklin's armonica, created in 1761, was smaller than the originals and did not require water tuning. His design used glasses that were blown in the proper size and thickness to produce the proper pitch without having to be filled with water. The glasses were nested in each other, which made the instrument more compact and playable, and were mounted on a spindle which was turned by a foot treadle.

His armonica won popularity in England and on the Continent. Beethoven and Mozart composed music for it. Benjamin Franklin, an avid musician, kept the armonica in the blue room on the third floor of his house. He enjoyed playing armonica/harpsichord duets with his daughter Sally and bringing the armonica to get-togethers at his friends' homes.
<urn:uuid:9ad03e5d-1e26-4879-9507-178479baf60b>
CC-MAIN-2013-20
http://inventors.about.com/od/fstartinventors/ss/Franklin_invent.htm
2013-05-26T03:08:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.990261
203
Kittlitz’s Murrelet (Brachyramphus brevirostris)

This decline is thought to be related to the recession of glaciers and the associated loss of preferred prey that use glacially-influenced waters, perhaps due to climate change.

References:
- Alaska Department of Fish and Game. 2006. A Wealth Maintained: A Strategy for Conserving Alaska's Diverse Wildlife and Fish Resources. Alaska Department of Fish and Game, Juneau, AK. Retrieved June 2010 from http://www.sf.adfg.state.ak.us/statewide/ngplan/NG_outline.cfm.
- BirdLife International. 2010. Species factsheet: Brachyramphus brevirostris, Kittlitz's Murrelet. Retrieved June 2010 from http://www.birdlife.org/datazone/species/index.html?action=
- Day, R. H., K. J. Kuletz, and D. A. Nigro. 1999. Kittlitz's Murrelet (Brachyramphus brevirostris). The Birds of North America Online (A. Poole, Ed.). Ithaca: Cornell Lab of Ornithology. Retrieved June 2010 from http://bna.birds.cornell.edu/bna/species/435.

[Photo: Kittlitz's Murrelet, by Gerald A. Sanger]

Action Plan in development
<urn:uuid:735144c9-1ba2-41a5-95f3-0d230232007a>
CC-MAIN-2013-20
http://www.fws.gov/migratorybirds/CurrentBirdIssues/Management/FocalSpecies/KittlitzsMurrelet.html
2013-05-20T12:08:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.768189
315
We look the obesity epidemic in the face every day.

Our youth are facing a health crisis which may eventually overcome their aspirations. Overweight and obesity among American children is a national epidemic with dire health and economic consequences that jeopardizes the quality of life and life expectancy of our youth. Ethnic minorities, when compared to non-minorities, have twice the level of obesity and obesity-related illnesses such as diabetes, heart disease and high blood pressure. In addition to this racial disparity, environmental barriers such as limited access to healthy foods lead to poor health outcomes related to poverty. It is imperative that our youth learn the value of nutrition, exercise, and healthy lifestyle choices before it’s too late.

We believe that healthy children will live longer, healthier lives, will improve academically, and will espouse a more positive self-image, self-confidence, and a sense of empowerment and control over their own lives. Health Masters Club is helping: we provide evidence-based community programs designed to improve the well-being of children and their families by reducing obesity and obesity-related illnesses. The vision pioneers a wellness promotion model for health care providers, linking community partnerships to promote lifelong healthy nutrition and physical activity.
<urn:uuid:54dde02a-9a4d-439b-a851-fa133207159d>
CC-MAIN-2013-20
http://healthmastersclub.org/index.php?option=com_content&view=article&id=52&Itemid=64
2013-06-19T12:32:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94342
256
See also the Dr. Math FAQ: 3D and higher

Browse Middle School Triangles and Other Polygons

Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Pythagorean theorem proofs.

- Rectangle to Parallelogram [06/28/2002] As you change a rectangle to a parallelogram, what happens to the area and the perimeter?
- Reflecting a Triangle [9/14/1996] A right triangle is reflected about its hypotenuse. What is the new geometric figure that is formed?
- Regular and Non-regular Polygon Areas [03/10/1999] Given a regular and a non-regular polygon with the same perimeter, prove that the area of the regular polygon will always be greater.
- Regular Decagon [03/25/2002] Can you show me a picture of a regular decagon?
- Regular vs. Equilateral Polygons [07/24/2003] What is the difference between a regular polygon and an equilateral one?
- The Relation of Perimeter to Area [10/2/1995] I'm puzzled by the ability of two fences with the same perimeter to have very different areas inside them. I realize by LxW an 8' x 10' fence will have more area than a 6' x 12' fence, but WHY? Both fences have 18' surrounding them but different areas. Also does a circle or a square conserve more area with identical perimeters?
- Remembering Area Formulas [12/23/2001] Is there a good way to help me memorize the formulas for areas of…
- Rhombus and Square Comparison [01/14/2004] Comparison of the definitions of rhombus and square as a way to answer the questions, 'Is a square a rhombus?' and 'Is a rhombus a square?'.
- Rhombus vs. Rhomboid [08/27/2002] What is the difference between a rhombus and a rhomboid?
- Right Angles in a Triangle [3/8/1996] How many right angles (90 degrees) can a triangle have?
- A Right Triangle of Points [01/14/1999] Determine the values of x that would make the points (x,0), (-2,1), and (3,4) the vertices of a right triangle.
- Right Triangles [3/30/1996] Is there an easy way to remember the different right triangles and how to find the length of missing sides? I already know that in any right triangle: (a*a) + (b*b) = (c*c).
- Scale Factor of Similar Shapes [05/12/2000] Find the scale factor, the ratio of the perimeters, and the ratio of the areas of two regular octagons that have sides of lengths 21 and 28.
- Scalene Triangle [8/20/1996] Construct a triangle PQR: PQ = 9cm, angle PQR = 38 degrees, and angle QPR = 67 degrees...
- Ship's Bearing [8/22/1996] A ship travels 8km due east and then 8km due north. What is the bearing of the ship from its initial point?
- Sides of an Octagon [04/13/1997] What is the formula for the length of the sides of an octagon whose diameter is 15 feet?
- Sides of Similar Triangles [06/11/1998] The sides of a triangle are 24, 16, and 12. The shortest side of a similar triangle is 6. Find the longest side of this triangle.
- Similar Rectangles [05/03/1997] The outside boundary of an unfolded card is similar to its boundary when it is folded. Find the width of the card if the open length is 8 and the folded length is 4.
- Similar Triangles [1/22/1996] For triangle ABC whose vertices are A(6,3), B(1,5), C(-1,4), what are the vertices of a similar triangle whose perimeter is 5 times that of ABC?
- Similar Triangles and Area [11/17/1998] P is a point on the segment joining midpoints D, E of the sides AB, AC of a triangle ABC. Prove that BPC has twice the area of ADE.
- Similar Triangles and Ratios [12/04/2002] A man who is 54.4 inches tall casts a shadow that is 69.7 inches. His son's shadow is 41 inches. What is the height of the man's son?
- Sine, Co-sine, and Tangent: SOHCAHTOA [03/28/1999] I am having trouble figuring out what to use when solving a triangle.
- Six Lines, 4 Triangles [8/19/1996] How can you form four triangles from six toothpicks?
- The Six Quadrilaterals [2/2/1996] My daughter forgot her textbook and needs to know the 6 types of quadrilaterals.
- The Spider and the Fly [12/23/1999] A spider and a fly are on opposite walls of a rectangular room... Does the spider get the fly?
- Square Inscribed in a Circle [09/28/1997] What percent of the circle is contained within the square?
- Square Inside a Square [01/30/2001] Imagine a square with eight compass points marked at each corner and midpoints of the sides. Create a smaller square inside... How do the areas of the two squares compare, and why?
- SSA Theorem: Valid or Invalid? [12/19/2001] Why can't the SSA Theorem be used to prove congruence?
- Stars in a Flag [04/15/1999] Find the area of the stars in the American Flag.
- Straightedge and Compass Constructions [12/14/1998] Can you help me with these constructions, using only a straightedge and a compass? A 30, 60, 90 triangle, the three medians of a scalene triangle.
- Subsets of Shapes [01/27/2004] What is the relationship between a square and a rectangle?
- Summing Odd Numbers Geometrically [10/30/1999] Can you prove that 1 + 3 + 5 + ... + (2n-1) = n*n by using a simple…
- Sum of Angles inside a Polygon [2/18/1996] What is the sum of the angles inside a 10-sided polygon?
- Sum of Degrees in a Triangle [03/03/1999] Four proofs that the degrees in a triangle sum to 180.
- Sum of the Angles in an N-Pointed Star [11/29/1999] Can you tell me how to find an equation for the sum of the angles in the tips of an n-pointed star?
- Sum of the Angles in a Star [09/21/1999] How can I find the sum of the measures of the five acute angles that make up a star?
- Supplementary Angles in a Parallelogram [10/23/1995] Are all parallelograms supplementary?
- Teaching about Bearings [06/08/2000] What are bearings? Do you have any ideas on how I can present bearings to my math class in an interesting fashion?
- Teaching Area of Triangles [9/15/1996] When I gave a Unit Assessment, all but one student got area of a triangle wrong. Where did I fail?
- Thinking about the Maximum Area Enclosed by a Fence [04/15/2004] You have 2000 meters of fencing. What is the largest area you can enclose with it using various shapes?
<urn:uuid:58de8c3c-785c-41af-809d-f4bcad798409>
CC-MAIN-2013-20
http://mathforum.org/library/drmath/sets/mid_triangles.html?start_at=201&num_to_see=40&s_keyid=38892118&f_keyid=38892119
2013-05-20T12:10:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.846741
1,727
Pneumoencephalography (sometimes abbreviated PEG) was a medical procedure in which most of the cerebrospinal fluid is drained from around the brain and replaced with air, oxygen, or helium to allow the structure of the brain to show up more clearly on an X-ray image. It was derived from ventriculography, an earlier and more primitive method in which the air is injected through holes drilled in the skull.

Pneumoencephalography was performed extensively throughout the early 20th century, but it was extremely painful, and the test was generally not well tolerated by patients. Headaches and severe vomiting were common side effects. Replacement of the drained spinal fluid is by slow natural production, and therefore required recovery for as long as 2–3 months before normal fluid volumes were restored. Video of the procedure is documented in a BBC documentary of an early EMI installation.

Modern imaging techniques such as MRI and computed tomography have rendered pneumoencephalography obsolete. Today, pneumoencephalography is limited to the research field and is used under rare circumstances. A related procedure is pneumomyelography, where gas is used similarly to investigate the spinal canal.

Pneumoencephalography appears in popular culture in the movie The Exorcist (1973), when Linda Blair's Regan MacNeil character undergoes the procedure. It is also referred to in Episode 7, Season 7 of House M.D. as an example of a dangerous procedure.
<urn:uuid:a8b2ed9f-32a5-4814-82e8-9901e03c81f8>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Pneumoencephalography
2013-05-25T06:01:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944153
349
"THAT'S FOR THE BIRDS!" "That's for the birds!" the eagle quipped to himself. He was offended by the intimidating verbal threats and the simple-minded brainwashing of the wrinkled old hen. The young eagle prized his ability to solve problems. He valued his freedom and ability to reason. He had been taught that all problems have understandable causes and even relationship conflicts can be negotiated successfully, if they are analyzed in win-win contexts of mutual respect! PROFESSIONAL LIBRARIAN'S CODE = "ASSERT AND DEFEND YOUR VALUES WITHOUT FEAR!" Return to: Eagle Allegory Go to: Interactive Ideas
<urn:uuid:1f3fecda-a8af-4de9-8e4e-1795bf68adc7>
CC-MAIN-2013-20
http://home.earthlink.net/~denmartin/eagle-birds.html
2013-05-26T02:41:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960156
141
Geometry, difficulty level 3.

AOD is a diameter of the circle with center O. B is any point on the circle that isn't A or D. A tangent is drawn to the circle at point B. A line is drawn through O parallel to AB, meeting the tangent at P. Prove that PD is a tangent to the circle.
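One standard synthetic argument, sketched here for orientation (this is my sketch, not the Forum's posted solution), uses the isosceles triangle OAB, the parallel line OP, and a pair of congruent triangles:

```latex
% Sketch of one possible proof (not the Forum's official solution).
% Given: centre O, diameter AOD, B on the circle, PB tangent at B,
% and OP drawn through O parallel to AB, meeting the tangent at P.
\begin{enumerate}
  \item $OA = OB$ (radii), so $\triangle OAB$ is isosceles and
        $\angle OAB = \angle OBA$.
  \item $OP \parallel AB$ with transversal $AD$ gives
        $\angle POD = \angle OAB$ (corresponding angles).
  \item $OP \parallel AB$ with transversal $OB$ gives
        $\angle POB = \angle OBA$ (alternate angles).
  \item Hence $\angle POD = \angle POB$.
  \item In $\triangle POB$ and $\triangle POD$: $OB = OD$ (radii),
        $OP$ is common, and the included angles are equal,
        so $\triangle POB \cong \triangle POD$ (SAS).
  \item Therefore $\angle PDO = \angle PBO = 90^\circ$, since the
        tangent $PB$ is perpendicular to the radius $OB$.
  \item $PD$ is perpendicular to the radius $OD$ at a point on the
        circle, so $PD$ is tangent to the circle. $\blacksquare$
\end{enumerate}
```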
<urn:uuid:40c75c9f-b88e-4d67-9a52-647e2a826829>
CC-MAIN-2013-20
http://mathforum.org/library/problems/more_info/66960
2013-05-18T05:20:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.847582
178
Updated on Friday 1 March 2013

If a classroom full of children is asked, "What do you want to be when you grow up?" the children will shout out numerous, common answers. Children are likely to yell out occupations such as a doctor, a football star, and on a rare occasion, even, a teacher. What children will never say they want to be when they grow up is a philosopher. As a matter of fact, if that same classroom full of children is asked, "What do philosophers do?" those same excited voices will immediately hush. Even adults are unlikely to know what philosophers do.

So why study philosophy? Why study something that no one seems to have an idea about? The deep answer is, "Because philosophy is the study of the most important stuff that humans can think of." But, what is the practical payoff, especially for an international student? Well, the payoff is a good one. Studying philosophy is likely to help a student succeed in their career and find happiness.

To really answer the question, "Why study philosophy?" one should first answer, "What is philosophy?" Philosophy is the study of the deepest questions humanity has been able to conceive. Philosophy has been around for at least 3,000 years, before the studies of science, math, and literature. The reason why philosophy existed before the study of science, math, and literature is because philosophy is where all of those studies started. Science began with philosophers who asked questions about the nature of the universe. For example, Democritus started physics when he asked, "What is the smallest anything can be?" He then came up with the idea of the atom. Trying to understand the nature of math, and what makes a story a story, also started with philosophers. While they were not necessarily the first people to do math or write a story, they were the first people to ask what math and literature are and what uses they have. Philosophers are the people who articulate the ideas that all of our technology, science, and inquiry are based upon.

The fundamental questions that philosophers ask include the following: what is the nature of the Universe, what is my place in the Universe, who am I, what is right and wrong, and how can I learn the answers to those questions. Some of those questions lead to the exploration of religion, others to the exploration of science, and still others to the exploration of literature. Philosophy is open to all questions, as long as one is willing to consider all possible answers, logically. The only true rule in philosophy is that philosophers must use reason and logic to try to come to the best possible answers. To look at the questions more closely, philosophers study the following: is abortion wrong, how do we know God exists, what is art, should animals have rights, what is the purpose of education, and what is happiness.

This leads us to one important answer to the question, "Why study philosophy?" Because philosophers study those things that are essential to knowing who one is, what one should do, and what makes one happy. To be a happy person, it is probably a good idea to learn as much about oneself and one's place in the world as possible. That is the career of a philosopher, to know as much about everything as possible. "But how could students, especially international students, make money by studying philosophy?" one might reply. Why study philosophy when there is no immediate philosophy career that comes to mind when we talk about philosophy?
How can a student make the money necessary to eat and have a nice life when he or she is just thinking about ideas all the time? Consider the famous story about an ancient Greek philosopher named Thales. Thales was famous for being one of those philosophers who always had his head in the clouds. As a matter of fact, once he was so busy thinking that he fell down a well. After a while Thales got tired of people telling him that he would never make money as a philosopher, so he made some very wise investments, so wise, in fact, that he made a tremendous amount of money, and his friends found themselves having to ask him for financial help. After a short while, Thales gave the money away, arguing that money really didn't make anyone happy, and he was content that he had proven his point: philosophers can make money; it just isn't that important to them.

It is true, philosophers can make money. Although nearly the only job posting that explicitly asks for philosophers is that of philosophy professor, many career fields seek students with degrees in philosophy. Law schools tend to value applicants with philosophy degrees because philosophers have spent so much time with logical thought, argumentation, and intellectual rigor. Philosophers are also in demand as diplomats, writers, journalists, and policy-makers. Philosophers find themselves well suited for careers in business, computer science, healthcare, communications, and public relations. Whatever career it is that a student with a philosophy degree pursues, he or she is likely to do well in it because philosophy programs are focused on helping students learn to think well, logically, and rigorously. As a matter of fact, the GRE, LSAT, and GMAT all report that philosophy majors test in the highest percentiles on those exams.

International students are likely to benefit by studying philosophy. Philosophy focuses primarily on critical thinking skills, reading, and writing. Everyone is likely to benefit from improving their critical thinking skills. International students who have trained their minds to be extra-sharp will find themselves in a better position than many to find jobs and succeed in their careers. Many fields are looking for people who can think quickly, creatively, and from many perspectives, and that is exactly what philosophers are trained to do. International students will also find themselves benefiting from philosophy's focus on writing and reading. Students of philosophy learn how to read critically, and are taught to write with a focused and well-thought-out style. Any student, international or otherwise, who needs to brush up on their writing will find that philosophy programs are an excellent place to improve and hone their skills. Not only will these writing skills help students in their chosen fields, they will help them get the jobs they seek, as they will be given the skills to write stronger resumes and cover letters.

The best answer to why international students, and all students, should study philosophy is that it is the one field focused completely on helping the student better understand his or her place in the world. While most fields train students to do something outside of themselves, philosophy is about learning to do something for yourself. The skills philosophy teaches do not always seem as if they will lead to money-making immediately. But what students find is that their self-reflection, critical thinking, and improved literary and communication skills make getting and excelling at a job much easier.
Students find that, once they have studied philosophy, they look at the world with a new wonder and have a much better idea of what really makes them happy.
<urn:uuid:2562d39a-6bc9-4a71-ab0c-897b597e8600>
CC-MAIN-2013-20
http://www.internationalstudent.com/study-philosophy/why-study-philosophy/
2013-05-19T02:17:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.979459
1,421
|Image Credit: Markos Possel Mapos|

A few weeks ago word spread of the passing of F. Sherwood Rowland. The first notice was a press release on the UC Irvine website. He had an illustrious career as a chemist and won a Nobel Prize in Chemistry. I never had the pleasure of meeting him, but there are few in meteorology that didn't know of him. Those who knew him can tell you more about his career and they can be found at Real Climate and Climate Progress. Rowland was a member of the National Academy of Sciences, where there is a wonderful tribute to him.

Rowland, along with post-doctoral student Mario Molina, found that chlorofluorocarbons (CFCs), a man-made substance, could be highly destructive to ozone. One CFC molecule could destroy up to 100,000 ozone molecules. This could be damaging to the ozone layer even at concentrations on the order of parts per billion. The discovery led to their landmark paper published in Nature in 1974. There were no observations at the time to confirm this, but it would not take long.

The first sign of trouble was reported by British scientists making measurements in the Antarctic. Very low readings were being reported, but NASA could not confirm the observations from its satellite record. A software glitch was found to be preventing NASA from seeing the low readings. It turns out that the software was simply ignoring readings below 180 Dobson units, a measure of ozone concentration. NASA was then able to confirm the British observations once the glitch was corrected and the data re-examined. What they found was a tremendous hole in the ozone layer over the southern polar region.

The discovery sent shock waves through the scientific community and an international effort to study the phenomenon was organized. The scientific team included Susan Solomon, an atmospheric chemist, who proposed that CFCs combined with an extremely cold stratosphere were in fact destroying the ozone layer. It wasn't that the ozone was all gone, just severely depleted. This heightened concerns, and an effort to stop CFC production led to the Montreal Protocol in 1987. The ozone continued to decline even though CFC production came to a halt in 2000. This work led to the Nobel Prize for Rowland and his colleagues in 1995. It was an example of discovery in the laboratory applied to the real world.

The story above is a much simplified version of reality, and reality is never simple or nice. Sherry Rowland endured much criticism for his findings, often from those who either knew nothing about atmospheric chemistry or who belonged to the industry producing the chemical. I hear some of these myths even today, and this is where I became familiar with his work. One of the most frequent myths is that CFCs are too heavy to exist high in the atmosphere. Yes, CFC molecules are heavier than oxygen or nitrogen molecules. However, the atmosphere is not stratified by molecular weight. It is well mixed due to convection in the troposphere, and any chemical released at the surface can make it high into the atmosphere. Molecules are no match for air currents. Much heavier substances like dust can make it into the stratosphere. There are still a few scientists today that deny that CFCs cause ozone depletion. However, they have never substantiated their claim in peer-reviewed journals and are not taken seriously by scientists "in the know". The ninth lowest measurement for ozone over the Antarctic was observed in 2011.
There was also an ozone hole observed over the Arctic region for the first time.

[Antarctic ozone hole in 2011. Image credit: NASA]
[The Arctic ozone hole in 2011. Note the comparison with 2010, both taken on March 19. Image credit: NASA]

I hear some of these same arguments today relating to carbon dioxide and climate change. The argument goes that CO2 is too heavy to be the cause of global warming in the troposphere. Again, the facts above dispel this myth. Or how about the recent claim that CO2 is not well mixed and so cannot be the cause of global warming? There may be temporary local concentrations of molecules, especially near point sources. However, the atmosphere is well mixed throughout the troposphere. In fact, the concentrations are homogeneous up to the ozone layer.

Sherry Rowland had begun to study the effects of increasing greenhouse gases in recent decades. The news release from the National Academy of Sciences mentioned this. They went on to write:

Speaking to a 1997 White House roundtable on climate change, Rowland asked: "Is it enough for a scientist simply to publish a paper? Isn't it the responsibility of scientists, if you believe that you have found something that can affect the environment, isn't it your responsibility to actually do something about it, enough so that action actually takes place? …If not us, who? If not now, when?"

In 2008 he sat down with Andrew Revkin of Dot Earth and made the following comments:

During a break, I asked Dr. Rowland two quick questions. The first: Given the nature of the climate and energy challenges, what is his best guess for the peak concentration of carbon dioxide? (Keep in mind that various experts and groups have said risks of centuries of ecological and economic disruption rise with every step toward and beyond 450 parts per million, with some scientists, most notably James Hansen of NASA, saying the long-term goal should be returning the atmospheric concentration to 350 parts per million, a level passed in 1988.) His answer? "1,000 parts per million," he said. My second question was, what will that look like? "I have no idea," Dr. Rowland said. He was not smiling.

Joe Romm of Climate Progress has an idea. He points out that "readers of Climate Progress have an idea, since I have done my best to describe this grim future that scientists rarely model because they can't believe humanity would be so self-destructive as to let it happen: At 800 to 1000 ppm, the world faces multiple miseries, including:
- Sea level rise of 80 feet to 250 feet at a rate of 6 inches a decade (or more).
- Desertification of one third the planet and drought over half the planet, plus the loss of all inland glaciers.
- More than 70% of all species going extinct, plus extreme ocean acidification."

F. Sherwood Rowland spent much of his career studying the chemistry of the atmosphere and raising the alarm about what the science said. In that he was much like James Hansen and the multitude of climate scientists today trying to warn the world that the path we are on is unsustainable and destructive. Some call them alarmists, but their concern is backed up by the facts and the science. They are not only scientists, but also heroes.
Each year in British Columbia over 26,000 poisonings are reported to the B.C. Poison Control Centre. These include both unintentional and intentional poisonings and overdoses. The types of poisons and the approach to preventing a poisoning depend on the age group involved.

More than half of all poisonings involve young children, with children between one and three years of age at highest risk. These poisonings are mostly unintentional and are a function of the child's developmental stage. Young children constantly explore and investigate the world around them. The poisons in this age group are often things that they encounter in their environment. Unintentional poisonings in adolescents and adults can occur when product label instructions are not read and followed, or when products are not stored properly.

Each season presents unique hazards and spring is no exception. We are often busy spring cleaning and working in the yard, using chemicals and cleaners that we haven't used all year. By increasing our general awareness of some of the springtime hazards we can prevent poisonings in both young children and ourselves.

Lime products
1. Dolomite lime contains calcium carbonate and magnesium carbonate.
2. Rapid lime contains magnesium carbonate.
DO NOT confuse these products with:
- Slaked lime, which contains calcium hydroxide or calcium magnesium hydroxide. Another name for slaked lime is hydrated lime.
- Unslaked lime, which is calcium oxide.
- Agricultural lime, which may contain calcium oxide (unslaked lime) or calcium hydroxide (slaked lime).
Clinical effects:
1. Dolomite lime and rapid lime have a low order of toxicity. They may be irritating to skin.
2. Agricultural lime ingredients are caustic alkalis and can cause burns.
Recommendations: The lawn can be watered immediately after application. Water again before allowing children on the area.

Moss killers
These usually contain ferrous sulfate as well as ammonium sulfate or possibly zinc sulfate (e.g. roof moss killers).
Clinical effects: May be irritating to skin. Can cause vomiting if the powder is swallowed.
Recommendations: Allow 48 hours after application prior to watering and allowing children on the area.

Herbicides
1. Glyphosate (Roundup)
2. 2,4-D and derivatives (e.g. mecoprop, MCPA)
Clinical effects: Possible skin irritation.
Recommendations:
1. Glyphosate – wait a minimum of 6 hours prior to watering (longer time preferred).
2. 2,4-D and derivatives – wait a minimum of 24 hours prior to watering (48 hours preferred).

Insecticides
Some insecticides may be applied to the lawn (e.g. diazinon).
Clinical effects: Possible skin and eye irritation. May be irritating if inhaled. Possible serious effects if ingested.
Recommendations: Allow a minimum of 24 hours prior to watering and allowing children on the area.

Dormant oil spray
This product is usually used in combination with lime sulfur.
Clinical effects: Low order of toxicity. Can be irritating to the eyes and skin.
Recommendations: A simple mask (e.g. a dust mask) is recommended when spraying to avoid inhalation of mists. Children should be kept indoors during spraying and can be allowed outside once droplets have dried.

Spring plants
Azaleas/Rhododendrons – All parts are considered toxic. Azaleas are less toxic than rhododendrons. Symptoms include burning in the mouth, salivation, nausea, vomiting and diarrhea.
Crocus – The spring crocus is nontoxic. DO NOT confuse it with the autumn crocus.
Daffodils/Narcissus – All parts of the plant are considered toxic (especially the bulb). May cause nausea, vomiting, abdominal pain and diarrhea.
Iris – All parts of the plant are considered toxic. May cause mouth, stomach or skin irritation.
Mushrooms – Ingestion of a small mushroom or part of a large one may be toxic.
Prunus species, including flowering plum and cherry trees – Cyanogenic glycosides are contained in the seeds of the fruit. Ingestion of 1-2 pits is not considered toxic. Cherry laurel also contains cyanogenic glycosides; all parts of the plant except the flesh of the berry are considered toxic. The pits of cherry and cherry laurel resist chewing and digestion and are not a problem if swallowed.
Tulips – The bulb is nontoxic but may cause dermatitis.

For 24-hour poison first aid and treatment information:
BC Poison Control Centre, 604-682-5050 or 1-800-567-8911
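Where the wait times above need to be applied consistently (say, in a garden-scheduling checklist), they can be reduced to a small lookup table. The sketch below is our own illustration in Python, not an official DPIC tool; the product keys, the helper function, and the cautious default are all assumptions.

```python
# Minimum hours to wait after application before watering the lawn and
# letting children back on the area, as listed in this fact sheet.
# (The product keys and this helper are illustrative, not an official tool.)
REENTRY_HOURS = {
    "lime (dolomite/rapid)": 0,   # water immediately; water again before re-entry
    "moss killer": 48,
    "glyphosate": 6,              # longer time preferred
    "2,4-d and derivatives": 24,  # 48 hours preferred
    "lawn insecticide": 24,
}

def hours_to_wait(product: str) -> int:
    """Return the minimum recommended wait, erring on the side of caution."""
    # Unknown products fall back to the longest wait listed in the sheet.
    return REENTRY_HOURS.get(product.lower(), 48)

print(hours_to_wait("Glyphosate"))  # 6
```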
The field survey form rated battlefield integrity through the eyes of the survey team. While these observations are valuable in terms of how the battlefield landscapes are perceived subjectively, for the purposes of this study it was deemed important to find a more objective measure of the loss or retention of integrity. For this we turned to the Cultural Resources GIS Facility for a computer analysis of land use within the battlefield study and core areas. Computer mapping and analysis software, collectively known as Geographic Information Systems (GIS), were used in this study to create a mosaic map by combining many different modern and historic maps; to document study and core areas; to assess current land uses within defined areas; and to calculate statistics for land parcels. The battlefield study and core areas were reduced to a computer format to enable various comparisons.

The average size of the study areas of the fifteen battlefields was 5,727 acres, ranging from 3,082 acres at Front Royal to 22,274 acres at Second Winchester. The size of the Front Royal study area accurately reflects the smaller numbers of troops engaged and their restricted deployment along the main roads. Second Winchester, on the other hand, involved a larger force, a network of Union entrenchments, two sweeping flank marches by Confederate forces that literally encircled the town of Winchester, and three days of fighting and maneuvering. The study areas of the Valley's two major battles (in terms of forces engaged and casualties) at Opequon and Cedar Creek were 11,670 acres and 15,607 acres respectively. Because the study areas of several battlefields overlap, the total acreage for the study areas of the fifteen battlefields was 85,909 acres, 3.4 percent of the area of the Shenandoah Valley under consideration. Battlefield core areas ranged in size from 944 acres at Front Royal to 6,252 acres at Cedar Creek. The mean size of the core areas was 2,415 acres. Total acreage included in the battlefield core areas was 33,844 acres, 1.4 percent of the Valley's land area.

Figure 14 presents the integrity of the battlefields as determined by the GIS analysis. The percentage of built-up lands was computed for the battlefield study and core areas, using available 1973 land use data. These figures were then updated by on-site field inspections. In general, built-up lands, new roads, and quarries were subtracted from study and core area acreage to achieve an integrity rating. One exception was built-up areas that were residential at the time of the Civil War and that still retain a similar scale and density, such as the old towns of Winchester, New Market, and McDowell. These districts were felt to support battlefield integrity. Retention of 75-100 percent natural and agricultural lands rated "Good," 50-74 percent rated "Fair," 25-49 percent rated "Poor," and less than 25 percent rated "Lost." As presented, the GIS analysis reflects the relative integrity of the battlefields as of 1991.

Figure 15 compares the findings of the field survey with the GIS integrity assessment. The field surveyors were more critical of visual intrusions, particularly of highways, bridges, powerlines, and construction within the battlefield cores. Four battlefields ranked good by GIS were ranked fair by the field survey: Cedar Creek, Fisher's Hill, Cool Spring, and Tom's Brook. Battlefields ranked fair by the GIS methodology but poor by the field survey included Second Winchester, Second Kernstown, and New Market.
Both methods agreed on the good integrity of McDowell, Cross Keys, Piedmont, and Port Republic; on the fair integrity of First Kernstown; on the poor integrity of Opequon and Front Royal; and on the lost condition of First Winchester. Although the integrity ranking derived through GIS differed in these instances from the field survey rating, both methods cluster the battlefields similarly toward the top and bottom of the scale. The GIS method generates a gross ratio between land of high and low integrity and does not measure many visual intrusions that are apparent in the field. A minor intrusion in terms of acreage might appear as a major visual intrusion, depending on the location and setting. In this sense, the computer is more forgiving than the critical observer. This reference data is crucial, however, for obtaining a more objective view of the current status of the battlefields. Where the GIS rating is considerably higher than the field survey rating, visual intrusions could perhaps be removed or masked to improve integrity. The GIS assessment will provide a reference point for monitoring further loss of integrity.

Several interesting facts emerged from a regional analysis of the battlefield study areas. The study areas contain a higher proportion of agricultural land (63 percent) than is the case for the Valley as a whole (37 percent). Because of this, changes in agricultural patterns or loss of agricultural land tend to have a higher impact on the battlefields than on the overall Valley landscape. Forests make up more than 56 percent of the Valley's acreage but only about 21 percent of battlefield acreage. This is accounted for by the fact that the Valley's forests are concentrated at the higher elevations, while battles typically were fought on lower, flatter ground. In addition, built-up lands are more concentrated in the battlefield study areas (14 percent) than in the Valley as a whole (6 percent), reflecting the location of battlefields on or near important towns and transportation nodes. A relatively high level of existing residential development within a battlefield study area indicates that further development in the vicinity is probable due to current zoning and continued growth. Figure 5 shows the pattern of agricultural land use in the Shenandoah Valley.
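The four-level scale used in the GIS assessment is a simple threshold classification. The sketch below is a hypothetical illustration of that rule, not the study's actual GIS procedure; the function name and the example acreages are invented.

```python
def integrity_rating(total_acres: float, built_up_acres: float) -> str:
    """Classify battlefield integrity from the share of natural and
    agricultural land retained, using the study's four-level scale."""
    retained = 100.0 * (total_acres - built_up_acres) / total_acres
    if retained >= 75:
        return "Good"
    elif retained >= 50:
        return "Fair"
    elif retained >= 25:
        return "Poor"
    else:
        return "Lost"

# Hypothetical example: a 5,000-acre study area with 1,800 built-up acres
print(integrity_rating(5000, 1800))  # 64% retained -> "Fair"
```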
Natura 2000 represents a milestone for nature conservation in Europe. It is one of the largest ecological networks in the world, consisting of more than 26,000 protected areas distributed over the whole European Union (cf. the Newsletter on Nature and Biodiversity of the European Commission). Natura 2000 is based on two directives aimed at preserving biological diversity in the member states.

- On the one hand there is the Habitats Directive (Council Directive 92/43/EEC on the conservation of natural habitats and of wild fauna and flora). Annexes I and II to this directive list the types of habitats and species whose conservation requires the designation of special areas of conservation (SACs). In these SACs the member states must take all necessary measures to guarantee the conservation of the habitats and species of European importance and to avoid their deterioration and the significant disturbance of species. The Directive provides for co-financing of conservation measures by the Community; for this purpose the funding instrument LIFE has been established.
- A further directive, the Wild Birds Directive, refers specifically to the protection of birds and their habitats. Again the member states shall create special protection areas (SPAs) for threatened birds and for migratory birds. These areas are to be situated in the birds' natural areas of distribution and may include wintering and nesting grounds or staging posts along migration routes.

The primary aim of these directives is to protect and re-establish habitats and communities which are endangered throughout Europe. This means nothing less than creating a basis for the preservation of biological diversity in Europe. Here you can read which areas in Germany and Baden-Württemberg are part of the Natura 2000 network, what this means, and what happens in these areas.

The project "Floodplains of the river Rhine near Rastatt" contains one SAC (according to the Habitats Directive) and two SPAs (according to the Wild Birds Directive). You can find their boundaries on our online maps:
- "Rhine valley between Wintersdorf and Karlsruhe" (SAC)
- "Rhine valley between Rench and Murg outlet" (SPA)
- "Rhine valley Elchesheim Karlsruhe" (SPA)

Additionally, the project area is situated within the large cross-border Ramsar site "Oberrhein - Rhin supérieur", comprising the Rhine floodplain between Basel and Karlsruhe in Germany and France. The Ramsar Convention, the oldest global nature conservation agreement, aims to protect wetlands of worldwide importance and to achieve a sustainable and balanced land use, aptly described in English as "wise use".
Chapter 1: Genome Sequence Acquisition and Analysis

Section 1.1 Defining Genomes
- What is genomics?
- How are whole genomes sequenced?
- Why do databases contain so many partial sequences?
- How do we make sense of all these bases?
- Can we predict protein functions?
- How well are genes conserved in diverse species?
- How do you know which bases form a gene?
- How many proteins can one gene make?
1.1 What is an E-value?
1.1 Which draft sequence is better?

Section 1.2 What have we learned from the human genome draft sequences?
- Overview of the human genome first draft.
- Can we describe a typical human gene?
- When are the data sufficient?
- Can the genome alter gene expression without changing the DNA sequence?
1.2 Whose DNA did we sequence?

Chapter 2: Genome Sequence Acquisition and Analysis

Section 2.1 Evolution
- How did eukaryotes evolve?
- What is the origin of our species?
2.1 Are the hit numbers significantly different?
2.2 How do you know if the tree is right?

Section 2.2 Genomic Identifications
- How can we identify biological weapons?
- How long can DNA survive?
- How did tuberculosis reach North America?
- How are newly emerging diseases identified?

Section 2.3 Biomedical
- Can we use genomic sequences to make new vaccines?
- Can we make new types of antibiotics?
- Can we invent new types of medications?
- How can E. coli be lethal and in our intestines at the same time?
2.3 How can you tell if base compositions are different?
More climate change targets missed: act now or 2-degree rise likely

CO2 emissions were at a record high in 2010, according to the International Energy Agency (IEA), as growing economies pump more emissions into the atmosphere and threaten to derail efforts to limit global warming to the agreed UN target of 2 degrees Celsius. The IEA says 2010's CO2 emissions were the highest in history and that 80% of projected emissions for 2020 are already locked in.

While the financial crisis of 2009 was bad for economies, it was good for the environment; the recovery - largely powered by emerging economies like China and India - has seen emissions jump 5% over the previous record year, 2008, to 30.6 gigatonnes (Gt). The power industry will continue to grow, according to the IEA, with 80% of emissions for 2020 likely to come from existing or planned power production.

"This significant increase in CO2 emissions and the locking in of future emissions due to infrastructure investments represent a serious setback to our hopes of limiting the global rise in temperature to no more than 2ºC," said Dr Fatih Birol, Chief Economist at the IEA, who oversees the annual World Energy Outlook, the Agency's flagship publication. The 2-degree target was agreed by governments at the 2010 UN climate change conference in Cancun, Mexico. To hit that target would mean limiting the increase in CO2 to 5% over the levels in 2000.

"Our latest estimates are another wake-up call," said Dr Birol. "The world has edged incredibly close to the level of emissions that should not be reached until 2020 if the 2ºC target is to be attained. Given the shrinking room for manoeuvre in 2020, unless bold and decisive decisions are made very soon, it will be extremely challenging to succeed in achieving this global goal agreed in Cancun."

Coal accounted for 44% of the estimated CO2 emissions in 2010, oil for 36% and natural gas for 20%. While more of the growth came from emerging economies, the developed world still emits almost twice as much CO2 per head as China and nearly five times as much as India.
These games teach valuable skills and have a high fun and educational rating.

- Your child develops design and logic skills by learning the process of creating a product through research, design and testing. Your child will create a cell phone for the senior citizen population.
- Your child develops skills in physical science and mechanics by going through a course and answering questions about physics.
- Your child develops skills in making predictions and understanding weather by learning about weather in different parts of the country and making decisions about outcomes.
- Your child learns about different causes and effects of automobile accidents.
We are living in the information age. The frontiers between different paper formats (text, map, photograph) and traditional analog electronic formats (sound recording, film) are becoming increasingly blurred, because digital technology can combine all the above formats in a single record. Also, the Treasury Board Secretariat has indicated that electronic commerce is the preferred means for the government to conduct its business. Those are two of the reasons why organizations are moving towards automating their information systems and creating more electronic documents. Other reasons are speedier access to the information, the facility to share it worldwide, and the decreasing cost of storing the information electronically. Does this mean paper records will disappear? Of course not. But we now live in a hybrid world in which electronic records will become more and more dominant.

Electronic media are used for storing information in different formats (text, image, sound), just like "paper" is a medium for storing information in different formats (text, map, photograph). Before we define what an electronic record is, let's go back to the basics. There is only one criterion which makes a record an electronic one: an electronic record contains machine-readable information, as opposed to a paper file, which contains human-readable information. Machine-readable records cannot be read without the proper hardware and software. A coding process (converting the data into an electronic signal) makes the record machine-readable. Once an electronic document has been printed, the print-out is not an electronic record, since the information is now in human-readable form.

There are two methods of information coding: analog and digital. Since computer technology predominates, the trend is to label as "analog" everything that is not produced by computers or digital electronics.

There are three widespread electronic media: magnetic, optical and magneto-optical. The development of less costly recordable technologies for optical disks has reduced the usage of magneto-optical technologies as a medium. There is a fourth electronic medium that is now obsolete: paper - key-punched cards and punched paper tape (even though the medium is paper, the information is in machine-readable form).

Is microform an electronic medium? Is a microfiche an electronic record? Microfilms and microfiches are not electronic records, because the information is not coded (converted into an electronic signal); the information is just reduced in size. Even though we still need a microform reader (or a magnifying glass) to read the data, it is considered human-readable information. We do not need special equipment to convert the data from machine-readable back to human-readable form.

From this point on, we will be talking about the digital method of coding information, also referred to as digital electronics or computer technology.

There are four main types of electronic files, based on the type of information they contain. With each file type, the information may be recorded in a proprietary or non-proprietary format. Proprietary format is also referred to as "native format". One of the major non-proprietary formats for text files is ASCII. An ASCII text file is also referred to as a flat file because it contains no text attributes or formats. Most word processing programs use the ASCII text file format as the base for a document, and apply their own proprietary format to the text. The file extension identifies the proprietary format of a specific application (e.g.
document.wpd, where the extension identifies the file as having the proprietary format of WordPerfect). Major word processing software comes with special import/export filters to display documents created with other word processing applications. On the Internet, text editors and web authoring software (e.g. FrontPage 98) apply the HyperText Markup Language (HTML) to an ASCII file (flat file) to generate a document with standardized format codes, so that any browser (Netscape Navigator, MS Internet Explorer, Mosaic, etc.) can display the document in the same fashion.

The information elements relate to the medium and the format used to record or store the information:
- What coding method is used to create the electronic record? 1. Analog 2. Digital
- What medium is used to store the information?
- What kinds of files are we dealing with?
* Initially most logical formats are proprietary. Once they become widely used, they become a de facto industry standard; such is the case for the TIFF, bitmap, MPEG, GIF, and other formats.

The technology elements relate to the hardware and software used to read and manipulate the information recorded or stored on a given medium.

Hardware refers to the electronic equipment used for producing information in bits and bytes (1 byte = 8 bits). There are four categories of computer systems, relating to their physical size and computing/processing power: supercomputers, mainframes, minicomputers and microcomputers.

Software refers to the different programs (sets of instructions) used to process the information created in bits and bytes. Software falls into four broad categories.

It is not the goal of this document to cover the different network topologies and protocols. From an information management point of view, we need to know where the records are stored: online, nearline or offline (see Information Retrieval). These are the three major levels of information retrieval.

First of all, a record is a record regardless of the medium. Records management concepts and principles developed for paper records apply equally to electronic records. Electronic records, like their paper counterparts, must be organized for timely retrieval, effective storage, and proper protection. Electronic records are also subject to the life cycle of information, from creation, to distribution, use, maintenance, storage, and disposition or preservation.

The first step in a records management program is to perform a complete inventory of the organization's records. Electronic records must be part of the inventory process. They must be identified, described comprehensively, and linked or associated with the other records. The best and most widely used method to conduct an inventory of electronic records is to identify and analyze the automated information system with which the records are associated. Electronic records are typically inventoried at the records series level, and then by program unit. At the inventory stage, identification and protection of essential electronic records is imperative. For organizations whose technology infrastructure is not part of their Essential Records program, it is a prudent practice to print their essential records, to have them in human-readable format, and to store them off-site.

Electronic records must have retention periods established. The retention schedule must list the key characteristics of the electronic records. Properly formulated retention schedules ensure the availability and use of electronic records for appropriate periods of time, while preventing the accumulation of obsolete records.
Retention schedules also promote the efficient use of electronic storage media. Product obsolescence and the discontinuation of technologies imperil future access to, and use of, electronic records. File compression software is used to reduce storage space on hard drives or back-up units, and bandwidth (the amount of information transferred in a unit of time) on network systems. However, data compression adds another layer of software dependency to the management of electronic records. To minimize future problems, file compression should not be used for electronic records intended for long-term retention. ASCII text files minimize software dependence and provide some protection against product obsolescence. Records managers should always consider the ASCII text format, instead of or in addition to proprietary formats, for electronic records needing a long retention period.

Email messages are usually written in a less formal fashion than letters or memos; even so, they are potentially important records for the organization. Email is increasingly used to circulate draft documents for review, or to disseminate official documents. This kind of documentation constitutes a record. Typically, the IT people have the responsibility for making decisions about email use. The IM and IT people have to work together to create and revise the organization's email policy, so it contains sound records management principles. The National Archives will not accept email messages from electronic mail programs; it will accept email messages only if they are part of a computer-assisted records management system (CARMS) such as ForeMost or RIMS.

The above statements should help us deal with the techno-jargon and the ever-changing technology associated with electronic records.
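To illustrate the recommendation above about ASCII flat files, here is a minimal sketch in Python. The file name and record text are invented examples; the point is only that a flat file needs no proprietary software to read back, now or decades from now.

```python
# A minimal sketch of the "export to ASCII" practice recommended above.
# The record content and file name are hypothetical examples.

record_text = "Memorandum: retention period for series FIN-04 is 7 years."

# Proprietary formats embed codes the original software must decode;
# an ASCII flat file carries only plain character codes.
with open("memo.txt", "w", encoding="ascii") as flat_file:
    flat_file.write(record_text)

# Reading it back requires nothing but a text editor or a few lines of code.
with open("memo.txt", encoding="ascii") as flat_file:
    print(flat_file.read())
```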
Geerts Hopes to Answer Mysteries of Cloud Seeding Through Supercomputing Model

June 6, 2012 — Bart Geerts likes to chase storms high in the mountains. And, oftentimes, he helps contribute to them.

Geerts, a University of Wyoming professor of atmospheric science, studies cloud seeding and how different nuclei can affect and enhance snowfall. During cloud seeding, a form of artificial weather modification, silver iodide is released into the clouds through generators that have been strategically placed upwind of the ridges of the Medicine Bow and Sierra Madre mountains in southern Wyoming. The silver iodide facilitates ice crystal formation in super-cooled water clouds.

During snowstorms, Geerts says, it has been difficult to assess how much snowfall happens naturally and how much is artificially induced. Geerts uses the Wyoming Cloud Radar and the UW King Air aircraft as tools to help him. But he needs more.

Geerts hopes that his use of the NCAR-Wyoming Supercomputing Center (NWSC) will provide snowfall models as detailed as, or more detailed than, those currently captured on radar -- and perhaps answer the following questions: "When measuring snowfall, what amount is natural snowfall, and what is artificially induced through cloud seeding?" and "Where are the most effective regions to use cloud seeding?"

"We do not have a good understanding of the effectiveness of cloud seeding," Geerts says. "We don't yet know which clouds can be most effectively seeded. Through testing and observation, we can test the efficiency for seeding clouds in order to enhance snowfall."

Geerts uses lidar and radar to collect precipitation data. Lidar, an acronym for light detection and ranging, is an optical remote sensing technology that can detect and measure cloud droplets in the atmosphere. Snow is detected by radar. From the aircraft, Geerts observes the effects of ground-based seeding. At the silver iodide generators located on the mountain ridges, propane is used to burn a stick of silver iodide, which releases the iodide crystals into the air. The radar and lidar map out the precipitation and clouds along the flight track in very fine detail, at a pixel resolution of about 100 feet.

"With the supercomputer, we want to simulate the airflow and cloud down to the same resolution, about 100 feet. We want to see if the model can reproduce what our radar sees," Geerts says. "In radar and the models, we want to see what cloud seeding really does."

Still, Geerts notes, radar observations are limited in time and space: flight time is limited by the high cost of operating a research aircraft for four hours, and because "the radar only captures transects of weather" below the flight level of the aircraft, the data recorded are limited and are not continuous in time or space. A computer model, by contrast, can run for the entire duration of a flight, in three dimensions. "It provides very rich data," he says.

In his research, Geerts has been trying to represent unresolved features in cloud seeding with parameterization, which is essentially trying to explain an effect you cannot resolve. "You know it's happening, but you can't see it. Modeling is intended to resolve these processes," he says. "These nuclei are microscopic to begin with. The airflow over mountains is very complex. That's why you really need high resolution. You can only get that through supercomputing."
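To see roughly why a 100-foot grid pushes the problem onto a supercomputer, consider a back-of-the-envelope count of model grid cells. The domain dimensions below are our own assumptions for illustration, not figures from Geerts's project.

```python
# Back-of-the-envelope grid count for a cloud-resolving simulation at
# roughly 100-foot (about 30 m) grid spacing. Domain dimensions are
# hypothetical, chosen only to illustrate the scale of the problem.
dx = 30.0              # grid spacing in metres (~100 feet)
domain_x = 50_000.0    # 50 km across the mountain range
domain_y = 50_000.0    # 50 km along it
domain_z = 5_000.0     # a 5 km deep layer of atmosphere

cells = (domain_x / dx) * (domain_y / dx) * (domain_z / dx)
print(f"{cells:.2e} grid cells")  # ~4.6e8 cells

# With dozens of variables per cell and thousands of time steps per
# simulated hour, the arithmetic quickly reaches supercomputer territory.
```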
Due to water shortages and droughts in some states and in countries around the world, cloud seeding is seen as a potential way to increase water supplies for communities and to irrigate crops. Cloud seeding is typically paid for by water resource managers, power companies (hydropower) and agricultural interests. "Water is almost as important as oil is in the western United States. Water is quite a valuable commodity," Geerts says. Geerts, who teaches courses on weather analysis and forecasting, as well as an introduction to meteorology at UW, says he has always been fascinated by the weather. The NWSC is the result of a partnership among the National Center for Atmospheric Research (NCAR); the University of Wyoming; the state of Wyoming; Cheyenne LEADS; the Wyoming Business Council; Cheyenne Light, Fuel and Power; and the University Corporation for Atmospheric Research. NCAR is sponsored by the National Science Foundation (NSF). The NWSC will contain some of the world's most powerful supercomputers (1.5 petaflops, which is equal to 1.5 quadrillion computer operations per second) dedicated to improving scientific understanding of climate change, severe weather, air quality and other vital atmospheric science and geo-science topics. The center also will house a premier data storage (11 petabytes) and archival facility that holds irreplaceable historical climate records and other information. Bart Geerts, a UW professor of atmospheric science, plans to use the supercomputer in Cheyenne to better understand cloud seeding. He's particularly interested in determining how much snowfall is created due to the artificial inducement as well as figure out the best locations to use cloud seeding.
Back in chapter 2 I offered a simple argument for laissez-faire. If you leave people free to exchange goods on mutually acceptable terms, the result is to move all goods to their highest valued uses, producing the efficient allocation of existing goods. If a good is worth more to a potential consumer than it costs a potential producer to produce it, the latter will find it in his interest to produce the good and sell it to the former, with the result that goods get produced if and only if they are worth producing.

This argument assumes that the only cost to me of selling one more unit of a good is the cost of producing it. But what if one effect of trying to sell more units is to drive down the price? Suppose, for example, that I own the only grocery store in a small town. The cost to me of selling an extra ten gallons a week of milk is not only what it costs me to get and sell the additional milk but also the lost revenue on milk I could have sold at a higher price if I had been content with selling less.

A numerical example may make the argument clearer. I am choosing whether to sell a hundred gallons a week at $2 a gallon or a hundred and ten at $1.90. The cost to me of the extra ten gallons is $1.50/gallon. Expanding output gives me $19 of revenue, ten gallons of milk at $1.90/gallon, at a cost of only $15. That sounds like a good deal until it occurs to me that it also costs me $10 of lost revenue on the hundred gallons that I could have sold at the higher price and must now sell at the lower.

If I considered only the cost and revenue directly associated with the additional gallons, I would keep selling more milk as long as the price was above my cost, since as long as price is above cost I am making money on each additional gallon. That describes the behavior of a seller in a perfectly competitive market—one with so many firms that his sales have no significant effect on price. But a monopoly, more generally a firm with some market power, knows that it can only sell more by selling at a lower price—which costs it revenue on the units that it could have sold at the higher price if it had been content with a lower volume. For me, that lost revenue is a cost. For everyone put together, it is only a transfer; what I lose on milk that my customers were willing to buy at the higher price, they gain.

To see why selling at a price above cost is inefficient, consider a customer to whom an additional gallon of milk is worth $1.80—less than the price I am selling milk for but more than the cost to me of providing it. If I produced the extra gallon and gave it to him, I would lose $1.50, he would gain $1.80, for a net gain of $.30. The net gain would be the same if I produced the milk and sold it to him for $1.50, although the division of the gain between us would be different. As long as I am selling milk for more than its cost to me, there will be customers who value the milk at more than its cost but are not getting it, which is inefficient.

This simple point, usually illustrated with a diagram showing the monopoly's cost curves, demand curve, and profit maximizing price, is the standard economic argument for the inefficiency of a monopoly seller. The failure to sell to everyone who values milk at more than it costs me is a problem for me as well as for economic efficiency; I am missing out on potential profits.
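For readers who like to check the arithmetic, here is the milk example restated as a short calculation (a sketch of the numbers in the text, not code from the book):

```python
# Marginal revenue vs. marginal cost in the milk example above.
price_high, qty_high = 2.00, 100   # sell 100 gallons at $2.00
price_low,  qty_low  = 1.90, 110   # or 110 gallons at $1.90
marginal_cost = 1.50               # cost per extra gallon

extra_revenue = price_low * qty_low - price_high * qty_high  # $9.00
extra_cost    = marginal_cost * (qty_low - qty_high)         # $15.00

# The extra ten gallons bring in $19 at the new price, but cutting the
# price loses $10 on the hundred gallons that would have sold at $2.00,
# so net revenue rises only $9 -- less than the $15 production cost.
print(f"extra revenue: ${extra_revenue:.2f}, extra cost: ${extra_cost:.2f}")
print("expand output" if extra_revenue > extra_cost else "stay at 100 gallons")
```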
One solution is to find some way of selling milk at different prices to different customers, charging a high price to those willing to pay it and a lower price to those who will only buy at a low price. In some cases such price discrimination is a practical option, but in many others it is not, either because I cannot tell who will or will not pay a high price or because I cannot prevent people to whom I charge a low price from reselling to people to whom I am trying to charge a high price.

If I somehow solve these problems, and do it well enough so that I can sell to every customer at the highest price he is willing to pay, the standard argument against monopoly vanishes. As long as there is a customer who values milk at more than it costs me, it is in my interest to sell it to him. The result is that with perfect price discrimination it is in my private interest to sell the efficient quantity of milk, the quantity such that everyone who values it at more than its cost of production gets it. Perfect price discrimination is perfectly efficient, at least so far as quantity is concerned, although that is not true for the imperfect price discrimination that is usually the best a seller can do.

Perfect price discrimination also results in all of the gain from the transaction going to the seller and none to the buyer, like a bilateral monopoly bargain where one side is a much better bargainer than the other. But since economic efficiency is concerned with the size of the net gain, not who gets it, that should be irrelevant to efficiency. Or perhaps not. Stay tuned for late breaking updates.

In a competitive industry, above normal profits, profits that more than pay the normal rate of return on capital, make it in the interest of someone to start a new firm, driving output up and prices and profits down. So in a competitive industry in long-run equilibrium, firms sell at a price that just covers all of their costs, including a market return on the stockholders' capital. Economic profit, defined net of the normal cost of capital, including the capital of the stockholders, is zero. In a monopoly industry, on the other hand, there is only room for one firm. The result is at least the possibility of monopoly profit.

The year is 1870. Somewhere west of civilization is a valley of fertile farmland, into which it will some day be worth building a rail line. Whoever builds the first line will have a monopoly; it will never pay to build a second. If the line is built in 1900, the total profit it will eventually produce after paying all costs, including a normal market return on the capital used to build it, will be $20 million. If the railroad is built before 1900, it will lose a million dollars a year until 1900 because there will not be enough people in the valley to support the cost of maintaining the rails.

I, knowing these facts, propose to build the railroad in 1900. I am forestalled by someone who plans to build in 1899; $19 million is better than nothing, which is what he will get if he waits for me to build first. He is forestalled by someone willing to build still earlier. The railroad is built in 1880. The builder receives only the normal return on his capital for building it. The logic of the situation is identical to the logic of inefficiently early homesteading in chapter 10 and inefficient theft in chapter 3. In such a situation, monopoly profit ends up not as a transfer to the firm from its customers but a net loss.
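The railroad story reduces to one line of arithmetic; the sketch below simply restates the example's numbers:

```python
# Rent dissipation in the railroad example: building earlier than 1900
# costs $1 million per year in losses, so entry is pushed back until
# the $20 million monopoly profit is entirely burned up.
monopoly_profit = 20_000_000   # profit if the line is built in 1900
loss_per_year   = 1_000_000    # operating loss for each year before 1900

years_early = monopoly_profit // loss_per_year  # 20 years
build_year  = 1900 - years_early                # 1880

print(f"competition pushes construction back to {build_year}")
# Net gain to the builder: $20M profit - 20 years x $1M losses = $0
```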
The higher the monopoly profit, the more resources firms burn up competing to be the one that gets it. If so, perfect discriminatory pricing is not the best solution to the problem of monopoly but the worst. Since each customer is buying at the highest price he is willing to pay, all of the gain from the firm's production is transferred to the firm as monopoly profit—and all of the monopoly profit is burned up in the cost of acquiring the monopoly. So the distribution of the gains from trade does matter after all, not as part of the definition of efficiency but, in this situation at least, as an incentive to inefficient behavior.

We now have two arguments for the inefficiency of monopoly—each presented in a simplified scenario but with more general application. One is that a monopoly, in the process of maximizing its profit, sells too low a quantity at too high a price; if it expanded output, its customers would gain more than it would lose. The second is that the opportunity to acquire monopoly profits creates an incentive for inefficient rent seeking, for spending resources making sure that your firm, rather than someone else's, ends up with the monopoly.

If monopoly is inefficient, what can we do about it? Before attempting that question, there is another we must answer first: where do monopolies come from?

The only grocery store in a small town is a natural monopoly. If someone tried to start a second store, one of the two would eventually go out of business because one store selling to everyone has lower average costs, and so can afford to sell at a lower price, than two stores each selling to only some of the customers. The same thing could happen on a larger scale in an industry where a large part of production cost was independent of how much was produced—the cost of designing a product, writing a computer program, tooling up a factory. The more units the fixed cost is divided among, the lower the average cost, so big firms can undersell small ones. Almost all firms have some fixed costs, so why isn't every industry a natural monopoly?

One reason is that economies of scale in production are balanced by administrative diseconomies of scale. The bigger a firm is, the more layers there are between the president and the factory floor and the harder it is for the former to control what is happening on the latter. Once a firm has gotten big enough to take advantage of most of the potential economies in production cost, further growth may cost it more in higher administrative costs and less carefully managed factories than it gains in lower production costs. That is one reason that most industries consist of many firms.

One reason there may be only one firm in an industry is natural monopoly. Another is that if someone tries to compete with him, the monopolist will call the police. The original meaning of "monopoly" was an exclusive right to sell something. Typically such monopolies were either sold by the government as a way of raising money or given to people the government liked, such as relatives of the king's mistresses. Monopolies of this sort are still common. An example is the Post Office: the Private Express Statutes make direct competition illegal.

For a third source of monopoly, and one that brings us closer to legal issues associated with antitrust, consider an industry made up of only five large firms. It occurs to the president of one of them that if they all reduce production, prices will rise; they will gain more from higher prices than they lose from lower sales.
The result is a cartel—a group of firms coordinating their behavior to hold output down and prices up as if they were a single monopoly. One problem for the cartel members is that while each of them is in favor of the others keeping output down and price up, each would like to expand its own output to take advantage of the high price. It can do so by chiseling on the cartel price, selling additional output for a little less to favored customers, defined as customers who can be lured away from a competitor and trusted not to tell anyone about the deal they are getting. Each cartel member gains by chiseling—at the expense of the others. If they do enough of it, they drive the price back down to what it was before the cartel was formed.

One solution is for the members of the cartel to sign a contract agreeing to keep their output down and prevent chiseling by doing all of their selling through a common agent. In much of the world, such contracts are both legal and enforceable. In the United States, they are neither. They have long been unenforceable as contracts in restraint of trade; under current antitrust law they are also illegal. That is not true of a cartel agreement enforced by the government, such as the airline industry prior to deregulation. The agency enforcing the domestic airline cartel was the Civil Aeronautics Board, which had veto power over fare changes, including fare cuts. A similar function was served in the international market by IATA, most of whose members were government airlines whose owners could enforce the agreement by denying non-members landing rights in their countries.

Another solution is for the firms to merge into a monopoly. This may raise costs somewhat, since the reason there were originally five firms instead of one was that the firms were already up to the size at which average cost was minimized. But it may also raise profits if the merged firm has enough of the market to be able to restrict output and drive up price. In the United States, such mergers are subject to disapproval by the antitrust division of the justice department.

A friend of mine has suggested an ingenious way in which the antitrust division could distinguish "procompetitive" mergers, mergers designed to join firms that can produce more cheaply by combining their assets, from "anticompetitive" mergers designed to create a monopoly. A procompetitive merger makes things worse for other firms in the industry, since it produces a more efficient competitor. An anticompetitive merger makes things better for other firms in the industry, since they will benefit when it restricts output in order to drive up the price at which it—and they—can sell. All the antitrust division has to do, when a new merger is proposed, is see who objects. If the other firms in the industry object, it approves the merger; if they don't object, it rejects it. Unfortunately, this only works until the other firms catch on and revise their tactics accordingly.

Cartels and anticompetitive mergers both result in both of the sorts of inefficiency described earlier. In either case, one consequence of the monopoly is to push prices above marginal cost, reducing output below its efficient level. In the case of a cartel, rent seeking takes the form of expenditures by the cartel members to enforce, and evade, the cartel restrictions, as well as bargaining costs over creating and maintaining the cartel.
In the case of merger, the inefficiency of having a firm too big to minimize average cost is a rent-seeking expenditure, paid by the merging firms in the process of getting a monopoly in order to transfer money from their customers to themselves.

Suppose that in some industry economies and diseconomies of scale roughly balance; over a wide range of output, big firms and small firms can produce at about the same cost. It is widely believed that such a situation is likely to lead to an artificial monopoly; the usual example is the Standard Oil Trust under John D. Rockefeller.

I am Rockefeller and have somehow gotten control of 90 percent of the petroleum industry. My firm, Standard Oil, has immense revenues, from which it accumulates great wealth; its resources are far larger than the resources of any smaller oil company or even all of them put together. As long as other firms exist and compete with me, I can earn only the normal market return on my capital. I decide to engage in predatory pricing, driving out my competitors by cutting my prices below my (and their) average cost. Both I and my competitors lose money; since I have more money to lose, they go under first. I now raise prices to a monopoly level. If any new firm considers entering the market to take advantage of the high prices, I point out what happened to my previous competitors and threaten to repeat the performance if necessary.

This argument is an example of the careless use of verbal analysis. "Both I and my competitors are losing money . . ." sounds as though we are losing the same amount of money. We are not. If I am selling 90 percent of all petroleum and a particular competitor is selling 1 percent, and we both sell at the same price and have the same average cost, I lose $90 for every $1 he loses.

My situation is worse than that. By cutting prices, I have caused the quantity demanded to increase; if I want to keep the price down, I must increase my production—and losses—accordingly. I lose (say) $95 for every $1 my competitor loses. My competitor, who is not trying to hold down the price, may be able to reduce his losses and increase mine by cutting his production, forcing me to sell still more oil at a loss. He can cut his losses by mothballing older refineries, running some plants half time, and failing to replace employees who move or retire. For every $95 I lose, he loses (say) $0.50. But although I am bigger and richer than he is, I am not infinitely bigger and richer; I am 90 times as big and about 90 times as rich. I am losing money more than 90 times as fast as he is; if I keep trying to drive him out by selling below cost, it is I, not he, who will go bankrupt first.

Despite the widespread belief that Rockefeller maintained his position by selling oil below cost in order to drive competitors out of business, a careful study of the record of the antitrust case that led to the breaking up of Standard Oil found no evidence that he had ever done so. The story appears to be the historian's equivalent of an urban myth.

In one incident, a Standard Oil official threatened to cut prices if a smaller firm, Cornplanter Oil, did not stop expanding and cutting into Standard's business. Here is the reply Cornplanter's manager gave, according to his own testimony:

Well, I says, "Mr.
Moffett, I am very glad you put it that way, because if it is up to you the only way you can get it is to cut the market, and if you cut the market I will cut you for 200 miles around, and I will make you sell the stuff," and I says, "I don't want a bigger picnic than that; sell it if you want to," and I bid him good day and left. That was the end of that.

—quoted in John S. McGee, "Predatory Price Cutting: The Standard Oil (NJ) Case," Journal of Law and Economics, Vol. 2 (October 1958), p. 137.

Predatory pricing is not logically impossible; if Rockefeller can convince potential competitors that he is willing to lose an almost unlimited amount of money keeping them out, it is possible that no one will ever call his bluff, in which case it will cost him nothing. But the advantage in such a game seems to lie with the small firm, not the large, and the evidence suggests that the artificial monopoly is primarily a work of fiction. It exists in history books and antitrust law but is and always has been rare in the real world, possibly because most of the tactics it is supposed to use to maintain its monopoly do not work.

Since competition is efficient, one might think that the solution to the inefficiency of monopoly is to break up the monopoly firm. But if a natural monopoly is broken up into smaller firms, average cost will go up—that is why it is a natural monopoly. Since average cost falls as output increases, one of the firms will expand, driving (or buying) out the others. We end up where we started, with a single monopoly firm. The inefficiency of monopoly is an argument for breaking up artificial monopolies or preventing their formation by laws against predatory pricing, but I have just argued that artificial monopolies created by predatory pricing are for the most part mythical. It is also an argument for breaking up monopolies created by government regulation of naturally competitive industries. But in the case of natural monopoly, perfect competition is simply not an option. We don't want every small town to have ten grocery stores.

The cure that economics textbooks traditionally offered for the efficiency problems of natural monopoly was government regulation or ownership. One problem with this approach is that a regulator, or an official running a government monopoly, has objectives of his own—some combination of private benefit to himself and political gains for the administration that appointed him. A sensible policy for the regulator might be (on the historical evidence often is) to help the monopoly maximize profits in exchange for campaign contributions to the incumbent administration and a well paid future job for the regulator.

Suppose we somehow solve that problem and put a natural monopoly under regulators who have only the best of intentions. After reading the first half of this chapter, they conclude that the solution is to force firms to charge marginal cost, to sell a gallon of milk, or, more realistically, a kilowatt-hour of electricity, at exactly what it costs to produce. This leads to several problems.

The first is finding out what the firm's costs are—real monopolies, outside of textbooks, do not come equipped with a diagram showing their cost curves. One approach is to simply watch, see what it costs to produce each unit of output, and set prices accordingly. But relating costs to output is not a simple matter of observation.
To determine marginal cost, we have to know not only the cost of the quantity the firm is producing but also what it would cost to produce other quantities. A second problem is that the regulator observes what the firm does, not what it could do—and the firm knows the regulator is watching. It may occur to the firm's managers that if they arrange to produce the last few units in as expensive a fashion as possible, the regulators will observe a high marginal cost and permit them to charge a high price.
Suppose the regulators see through any such deceits, correctly measure marginal cost, and set price equal to it. A natural monopoly exists because the cost of producing additional units decreases as output increases, giving a larger firm a cost advantage over a smaller firm. But if marginal cost is falling, then average cost, which includes the cost of the earlier and more expensive units, is higher than marginal cost. So if a natural monopoly is forced to sell at marginal cost it will eventually go broke or, if the regulation is anticipated, never come into existence. To prevent that, the regulator must find some way of making up the difference between price and average cost.
One solution might be a subsidy paid for by the taxpayers. While this arguably makes economic sense, it is in many cases not a practical option, since regulatory agencies are rarely provided, by Congress or state legislatures, with a blank check on the treasury. The usual alternative is to get the money from the monopoly's customers. Instead of requiring it to charge marginal cost, the regulators require it to charge average cost, a less efficient outcome but still better than the price the monopoly would set for itself.
How does the regulator find out what average cost is? If he simply asks the firm's accountants to calculate how much it spent this year and sets next year's prices accordingly, the management of the firm has no incentive to hold down costs, especially the cost of things that make the life of management easier. Here again, management knows that the regulator is watching and modifies what it does accordingly.
The real-world version of this approach to controlling natural monopolies is called "rate of return" regulation. The idea is to set a price that gives the stockholders of the regulated utility—the most common example of a regulated natural monopoly in the United States at present—a "fair rate of return" on their investment. The cost of inputs other than the stockholders' capital is set at what the regulatory commission thinks it ought to be, based on the experience of past years.
How much do investors have to get to make it worth investing in utilities? The obvious answer is "the market rate of return"—but on how much capital? If regulators measure the size of the investment by how much investors initially put in, investors in new utilities face an unattractive gamble: if they guess wrong, the company goes bankrupt and they lose everything; if they guess right, they get only the market return on their investment. So a regulator who bases rate of return on historical costs must somehow add in a guesstimate of the risk premium that investors would have required to compensate them for the chance of losing their money.
What about measuring the current value of the investment by the market value of the utility's stock and allowing the utility to set a price that gives a market return on that value? Unfortunately, this ends up as a circular argument.
The value of the stock depends on how much money investors think the company will make, which depends on what price they think the regulators will permit it to charge. Whatever amount the regulators allow the utility to make will be the market return on the value of the stock, once the value of the stock has adjusted to the amount the utility is making.
Regulatory commissions exist in the real world, hold hearings, and publish press releases describing what a fine job they are doing in protecting customers from greedy monopolies. What they really do, however, and what effect they really have, are far from clear. In a famous early article on the economics of regulation, George Stigler and Claire Friedland tried to determine the effect of utility regulation empirically, by looking at the returns to utilities in states where regulation came in at different times. So far as they could tell, there was no effect.
One issue that antitrust law has paid a good deal of attention to is the possibility of a firm that has a monopoly in one market using it to somehow get a monopoly in another. A prominent recent example is the controversy over charges that Microsoft is trying to use its near monopoly in the market for desktop operating systems to get a second monopoly in the market for web browsers. This issue appears in at least three different legal contexts: vertical integration, retail price maintenance, and tie-in sales. In all three contexts, as we will see, the legal analysis that has been widely accepted by the courts is inconsistent with the relevant economic theory. And in all three cases, the result of showing that is to leave us with a puzzle. Having shown that the court's explanation of these practices is wrong, we have to explain why they nonetheless exist.
Suppose steel production happens to be a natural monopoly and I have it. It occurs to me that making cars requires steel, and I am the only source. I accordingly buy up a car firm, refuse to sell steel to its competitors, and soon have a monopoly in cars as well. I am now collecting monopoly profit on both the steel industry and the auto industry, so I, and my stockholders, are happy.
What is wrong with this strategy is not that it will not work but that it is unnecessary. If I want to drive the price of cars up, I don't need a car company to do it. All I have to do is raise the price at which I sell steel to the existing companies. The car companies will pay the higher price to me, pass the increase on to their customers, and so provide me with my monopoly return without any need for me to get into the car business.
The reason this argument matters to the law is that one of the things antitrust law regulates is vertical mergers, regarded as suspect on the theory that they make it possible for the monopolist to expand his monopoly. The argument so far suggests not only that vertical mergers should not be suspect but that they should not happen, leaving us with the question of why they do. One reason, of course, is that it is sometimes cheaper for a firm to make its own inputs or sell its own output, with the result that, even where no question of monopoly is involved, we observe quite a lot of vertical integration. A more interesting reason, where a firm does have a monopoly at one stage of the production process, is that vertical merger is a way of reducing the inefficiency due to its monopoly and, in the process, increasing the firm's profits.
When my steel monopoly pushes up the price of the steel it sells to auto companies, they respond by using less steel and more aluminum and plastic. To the extent that the substitution is driven by my monopoly price, it is inefficient. The car company is using a hundred dollars of aluminum to substitute for steel that costs it a hundred and twenty dollars to buy but costs me only eighty dollars to produce. That represents a net loss of twenty dollars to efficiency and, potentially, to profit.
One solution is for me to buy the car company. I then instruct its managers that in deciding when it is cheaper to use steel they should base their calculations on its real cost of eighty dollars, but that in pricing cars they should do their best to extract as much monopoly profit as possible. I thus eliminate one of the inefficiencies due to my monopoly price on steel, while still selling autos at a monopoly price and collecting the corresponding monopoly profits.
Retail price maintenance is the practice of a producer controlling the price at which retailers are permitted to sell his products. For many years, federal law permitted states to decide whether or not such contracts were permitted and enforced. Under current law, explicit contracts of that sort are illegal everywhere, although in practice that rule is widely evaded—as you can easily check by a little on-line price comparison of (for example) the latest models of Macintosh computer.
One argument for banning retail price maintenance agreements is that they are a way in which the producer, who has a "monopoly" of selling his own products to retailers, extends that monopoly to the retail market, presumably in exchange for a share of the monopoly profits that doing so produces for the retailers. Here again, the problem with the argument is not that the strategy would not work but that it is unnecessary. A producer is free to charge retailers whatever price they are willing to pay. If he wants to raise the retail price, all he has to do is raise the wholesale price. Without any price maintenance agreement, the retailers will compete down their margin until it just covers their costs. Instead of getting a share of the revenue from the higher retail price, the producer gets all of it.
Having explained why retail price maintenance does not exist, we are left with the puzzle of explaining why it does, why some producers attempt, where it is legal, to make and enforce agreements controlling the price at which their goods may be sold.
I am a retailer of expensive high-fidelity audio equipment. In order to sell it, I spend a considerable sum maintaining a showroom where potential customers can listen to different producers' equipment, consult with my expert salesmen, and so decide which products to buy. Judged by the state of my showroom, all is going well; my salesmen hardly have a free moment. Judging by my books, however, something is wrong; lots of people are looking but almost nobody is buying. While trying to solve this puzzle, I decide I am in need of some fresh air, so go out for a stroll. Just around the corner, I find the explanation—a new catalog discount store, with a small office and no showroom, selling the same products I sell at eighty percent of my price. Taped to the door is a map showing the location of my showroom. This is a problem both for me and for the producers of the audio equipment I sell.
Since customers can get my expensive presale services for free and then buy from my lower-cost competitor, I stop offering the presale services. I close down the showroom, fire most of my salesmen, and cut prices to match the competition. Customers no longer have the option of trying my goods before they buy. They respond by going to competing retailers selling different brands of equipment, brands whose manufacturers insist on a minimum price for their equipment sufficient to cover the cost of salesmen and showroom. Those retailers provide the presale services, secure in the knowledge that nobody can undercut their prices.
Long ago, when computers required rooms instead of desktops and belonged only to large firms and governments, there was a company called IBM. Floppy disks had not yet been invented. To get information into a computer you punched it into a large deck of paper cards and ran it through a card-reading machine. IBM had something close to a monopoly on selling and leasing large computers. One term in their agreement, one eventually declared illegal, was that customers had to use IBM punch cards. Why?
Here again, the obvious answer is in order to extend the monopoly from computers to punch cards. Here again, that answer does not work. Punch cards are not exactly high-tech items; lots of firms could produce them and did. IBM could require its customers to use its punch cards but had no control over what cards were used by people using other computers. If IBM took advantage of its monopoly on punch cards used with IBM computers by raising their price, the result would be to make using IBM computers more expensive. But they could have done that much more easily by simply raising their prices. Insisting that their customers use expensive punch cards instead of cheap ones is an indirect way of raising the price of the computer.
It is tempting to reply that IBM can get away with expensive punch cards because their customers have nowhere else to go. But that is wrong. To begin with, their customers have the option of not using a computer at all, an option many firms took. They also have the option of using computers made by other firms—and will take it if IBM gets too expensive. The more fundamental response is that if IBM can insist on expensive punch cards without losing any customers, that is evidence that they could also have raised the price of their computers without losing any customers, in which case they should have done so. Once they have gotten to the profit-maximizing price, the price at which further increases lose them more in sales than they gain in revenue per sale, any further increase, whether per computer or per card, makes profits lower, not higher.
Again, I have explained too much. Having shown that IBM had no reason to insist on a tie-in between cards and computer, I must now explain why they did. One mundane explanation is that IBM cared about the quality of the punch cards. If something went wrong, they might have to service the machine, and if too many things went wrong, their reputation might suffer. One way of controlling quality was by making the cards. A similar explanation has been offered for an earlier round of antitrust cases involving a giant company, the IBM of its day, making shoe manufacturing machinery. [case link: United Shoe]
A more interesting explanation is that IBM was engaged in a clever form of discriminatory pricing.
The value of the same computer is different to different customers; ideally, IBM would like to charge a high price to a firm that gets a lot of use out of the computer, and is therefore willing to pay a high price, while charging a lower price—but enough to more than cover production cost—to more marginal users. Customers willing to pay a high price are unlikely to mention that fact to IBM. But, on average, high-value customers are also high-use customers. High-use customers use a lot of punch cards. By requiring all customers to use IBM cards and charging a high price for them, IBM is, in effect, making the same computer more expensive to customers who use it more. Combining expensive cards with somewhat less expensive computers lets it keep the low-use users, who are compensated for the high price of one with the low price of the other, while milking the high-use users, who are the ones least likely to abandon their computers.
I do not know when this explanation for tie-in sales was first offered by an economist, but I suspect that a lawyer beat us to it. The earliest tie-in case I have come across involved not a computer but a printing press. The tie-in was with the paper the press used. The attorney defending the company's right to require a tie-in offered a simple explanation. If the company covered all of its costs, fixed and variable, in the price of the press, small printers would be unable to afford it. By charging a lower price for the press and a higher price for the paper, the company made the combination affordable for small printers, who didn't use all that much paper, while covering its fixed cost with the extra money it made from big printers, who did. It was precisely the economist's explanation of tie-in sales as a form of discriminatory pricing—presented, as favorably as possible, from the monopolist's point of view. And it correctly pointed out the efficiency advantage produced by price discrimination—a larger quantity of output, due to the ability to cut prices for some customers, in this case indirectly, without cutting them for others.
While these arguments imply that tie-in sales are sometimes efficient, it does not follow that they always are. One cost of requiring customers to buy expensive punch cards is that they will take expensive precautions to avoid using any more of them than necessary. That is inefficient if the cost of the precautions to the user is higher than the cost of the cards saved to IBM. The inefficiency due to overpricing the cards must be balanced against the efficiency gain due to making computers available to the lower-use customers who would otherwise be priced out of the market. There are no theoretical grounds on which we can predict what the net effect will be; it might go either way.
This chapter has been devoted mostly to explaining issues rather than analyzing legal alternatives. One reason is that both antitrust theory and antitrust law are complicated areas and I have done little work in either. But it is worth, at this point, at least trying to summarize the subject from a legal rather than a purely economic point of view. Antitrust law ultimately involves three different approaches to reducing the costs associated with monopoly: controlling the formation of monopolies, regulating monopolies, and controlling efforts to misuse a legal monopoly.
The formation of monopolies is controlled in three different ways.
One is by restrictions on the mergers of large firms, where the antitrust division believes that the merger will have an "anticompetitive" effect. If two firms wish to merge, one of which controls forty percent of the pickle market and one fifty percent, the antitrust division may decide that ninety percent of pickles is too near a monopoly for comfort. In principle, their decision is based not only on the percentage of the market but also on how easy it is, if the merged firm tries to exploit the consumers of pickles with high prices, for other pickle producers to expand or for new firms to enter the market, and on how willing consumers are to substitute other things for pickles if pickles become too expensive. If the conclusion goes against the merger, the firms have the choice of either remaining separate or having one of them spin off its pickle business before merging.
A second way in which formation of monopolies is controlled is by restrictions on behavior believed to create them, in particular on predatory pricing, selling below cost in order to drive competitors out and establish a monopoly. I argued earlier that such restrictions are a cure for an imaginary disease, but the antitrust division may not always agree. Similar arguments apply to controls over tie-in sales and retail price maintenance agreements.
A final, and perhaps most important, way of controlling the formation of monopolies is by making it harder for firms in concentrated industries to cooperate, to form a virtual monopoly, a de facto cartel, by jointly holding quantity down and price up. One way of preventing that is by refusing to enforce cartel agreements, as is done in the United States and (more recently) the U.K. Another is by making such agreements, including secret price-fixing agreements, illegal.
Regulated monopolies in the United States are mostly public utilities—electricity, natural gas, water, telephones—and regulation is mostly at the state level. For reasons I have already discussed, it is unclear whether utility commissions can be trusted to try to produce efficient outcomes, whether they can do so if they try, and even whether they have any significant effect on the industries they regulate. The theoretical rule—set price equal to marginal cost, and find the money somewhere to cover the difference between that and average cost—is straightforward. The practical application is not.
Controlling attempts to misuse a legal monopoly gets us into the topics I discussed under the general subject of extending monopolies. My arguments suggest that, while the behaviors in question may be a result of monopoly and may increase the monopoly's profits, it is not clear that they make the rest of us worse off. It is therefore also unclear whether there is any good reason to restrict them.
For about a decade, roughly the eighties, federal antitrust activity was at a relatively low level, in part due to the influence of the sort of arguments I have just offered, arguments which suggest that antitrust activity often does more harm than good. More recently, it has revived again, largely targeted at the computer industry. The current showpiece is the Microsoft antitrust trial. One reason may be that software provides a particularly striking example of a natural monopoly. Once a computer program is written, the cost of producing additional copies is close to zero, so the more copies you sell the lower the average cost.
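That falling-average-cost arithmetic is easy to make concrete. The sketch below is only an illustration; the fixed and per-copy cost figures are invented, not taken from any real software firm.

```python
# Toy illustration of software as a natural monopoly: one large fixed
# cost to write the program, a near-zero marginal cost per copy.
# Average cost falls toward marginal cost as sales grow, so the firm
# with the most copies sold always has the lowest average cost.
# All figures are invented for illustration.

FIXED_COST = 10_000_000  # hypothetical cost of writing the program ($)
MARGINAL_COST = 1        # hypothetical cost of one additional copy ($)

def average_cost(copies_sold: int) -> float:
    """Total cost spread over every copy sold."""
    return (FIXED_COST + MARGINAL_COST * copies_sold) / copies_sold

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>12,} copies -> average cost ${average_cost(n):,.2f}")

# Prints average cost falling from $1,001.00 per copy at 10,000 copies
# to $2.00 per copy at 10,000,000: the bigger seller can always undersell.
```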
One result is that, at any given time, there is likely to be a single dominant product in each niche—one dominant word processor, one dominant photo editing program. Current champions are Microsoft Word and Adobe Photoshop.
Stanley Liebowitz, an economist studying these markets, tried an interesting experiment. He graphed market share in each of a variety of niches against average rating in computer magazine reviews. The pattern was striking. At any one time, there was usually a dominant product. It stayed dominant until one of its competitors started getting consistently better reviews, at which point the competitor rapidly took over the market. In an industry where, once a program is written, it costs relatively little to crank out another million copies, market share can change with startling speed. Just as the American form of marriage has been described as serial polygamy, so the software industry provides a striking example of serial competition. At any given instant there is a dominant product, but which one it is changes over time. During the relatively short history of the personal computer, the dominant spreadsheet on Intel machines has gone from VisiCalc, the original spreadsheet, to Lotus 1-2-3 to Microsoft Excel. The dominant word processor has gone from WordStar to WordPerfect to Microsoft Word.
You may think you see a pattern there. So did the antitrust division. Microsoft has not always been the winner; Adobe, for example, continues to dominate a group of related niches involving graphics and desktop publishing. But Microsoft's share of successful software applications is high and rising. One explanation offered by its competitors is that ownership of the operating system, first MS-DOS and later Windows, gave Microsoft an unfair advantage in writing software, since they knew more than anybody else about the underlying code with which that software interacts.
While that explanation sounds plausible, it is contradicted by a striking historical pattern. Insofar as Microsoft has such an advantage, it is limited to machines running their operating systems, so Microsoft applications ought to succeed only, or at least mostly, on Intel platforms. But Word and Excel are not only the dominant word processor and spreadsheet under Windows, they are the dominant ones on Macintosh computers as well.
An obvious explanation is that Microsoft used its operating system advantage to obtain a dominant position in the Intel world and then spread from there to the Macintosh, taking advantage of the desire of Macintosh owners to use products compatible with what other people were using. While that sounds plausible, it does not fit the historical facts. In the early years of the Macintosh, the dominant word processor on Intel machines was WordStar. The dominant word processor on Macs was Word. In both the word processor market and the spreadsheet market, Microsoft first obtained a dominant position in the Macintosh market, where it had no more access to the operating system than anyone else and less than Apple (which produced a competing word processor), and then extended that to the DOS/Windows world.
Network Externalities and the Qwerty/Dvorak Myth
The latest version of an economic theory to explain monopoly and justify antitrust action goes by the name of "network externalities." The underlying idea is that there can be economies of scale associated with consumption as well as production.
It is convenient for me to use the same word processor as people with whom I want to exchange documents, so the more people are using Word the greater the incentive for me to abandon my trusty WriteNow and go with the crowd. It is convenient for my telephone to be able to reach as many other people as possible, so the larger the size of the telephone network the greater the value it provides to each customer. As with economies of scale in production, the likely result is a natural monopoly.
The classic example offered for the real-world importance of this effect is the Qwerty keyboard, the arrangement of keys on a conventional typewriter. According to the widely accepted story, the Qwerty layout was originally designed to slow typists down, in order to reduce the problem of keys jamming in early typewriters. It achieved success as a result of being used by the world's only touch typist in a crucial early typing contest. Once established, it maintained its position through the power of network externalities despite the existence of a greatly superior alternative, the Dvorak keyboard. In a world dominated by Qwerty machines, practically nobody wanted to learn Dvorak.
Some years ago, Stanley Liebowitz and Stephen Margolis published an article, "The Fable of the Keys," demonstrating that every single fact in the above story was false. Qwerty was designed to prevent key jamming not by slowing typists but by putting pairs of letters that frequently followed each other on opposite sides of the keyboard, thus alternating between the two banks of keys in the early machines; that pattern is still a desirable one, since it means that typists tend to type with alternate hands, which is faster and less tiring. There were many early typing contests, different machines won different contests, and the recorded scores make it clear that there was no single typist with a large speed advantage over everyone else.
Perhaps their most damning result concerned not Qwerty but its competitor. It turned out that the great superiority of Dvorak was demonstrated only in tests run or supervised by August Dvorak, its inventor. The advocates of network externalities, in taking the Dvorak/Qwerty case as evidence for their theory, were treating advertising puffery as scientific data. Tests by independent third parties interested in the possibility of adopting the new layout showed it to be at most a few percent faster than the existing standard.
Liebowitz and Margolis did not claim that the network externality argument was impossible, although they have argued that it is mostly a relabelling of phenomena already familiar in the context of economies of scale and natural monopoly. What they claimed was that, at least in the typewriter case, its effects were unimportant. After all, while some typists need to be able to move from one typewriter to another, many others do not. Many writers preferred to do all their work on a single typewriter, especially back in the days of mechanical typewriters, which varied a good deal more than computer keyboards do today. The costs of modifying a typewriter to change the keyboard layout were never terribly high, and became much lower when IBM introduced the Selectric, a model which permitted multiple interchangeable type balls. They became lower still when the world switched from typewriters to word processors, since a computer's keyboard layout can be remapped in software. Every Apple IIc made came with a built-in switch that toggled the keyboard between Qwerty and Dvorak.
If Dvorak had been as much better as its advocates claimed, it should have rapidly established a dominant position among typists who did not have to be able to use other people's machines. Having demonstrated its superiority there, it should have spread. By now, Qwerty should have been relegated to the dustbin of history. It didn't happen that way.
A similar issue arises in the context of computer software, where compatibility is again of significant value. Here again, Liebowitz and Margolis offer evidence that while the fact that other people are using a word processor may increase its value to me, the effect does not seem to be very large. If the main question deciding what word processor I use is what word processor everyone else uses, and similarly with other products, then the first dominant product should also be the last, since no competitor will ever have a chance to compete. It follows that I must be currently running WordStar or MacWrite, and that VisiCalc still owns the world of spreadsheets. It didn't happen that way.
They also offer some less direct evidence. Suppose there are two equally good word processing programs, one with 95% of the market, one with 5%. If network externalities are important, the dominant program should be worth substantially more than its competitor to users—say a hundred dollars more. The rational monopolist should raise his price accordingly, to take advantage of his customers' willingness to pay. He won't raise it by the full hundred dollars, since at that price his competitor might start to expand, but raising it by somewhat less, say fifty dollars, permits him to both maintain his monopoly and exploit it. It follows that if externalities are important, the dominant product in each niche should cost more than its competitors. Empirically, that does not seem to be the case. Here again, the conclusion is not that network externalities do not exist but that they do not seem to matter very much, at least in this market.
I have said a good deal about the background to recent antitrust actions but very little about the case that is the current high-profile example. One reason is that by the time this book is published the Microsoft case will probably be over and something else will be occupying the headlines. Another reason is that I do not know enough of the detailed allegations in that case, or the evidence for and against them, to want to offer an opinion as to whether Microsoft has or has not been doing the various wicked things that its competitors accuse it of doing.
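As a closing illustration, the Liebowitz and Margolis pricing test described above can be restated numerically. This is only a sketch of the logic; the rival's base price is a made-up figure, and the $100 premium and $50 markup mirror the hypothetical numbers in the text rather than any real data.

```python
# Restatement of the network-externality pricing test: if the dominant
# word processor really is worth more to users because everyone else
# uses it, its seller should capture part of that premium as a higher
# price. The premium and captured fraction mirror the text's
# hypothetical numbers; the rival's price is an invented figure.

RIVAL_PRICE = 100.0      # hypothetical price of the 5%-share competitor ($)
NETWORK_PREMIUM = 100.0  # extra value of the 95%-share product, if
                         # network externalities matter ($)
CAPTURED_FRACTION = 0.5  # share of the premium the monopolist dares to
                         # capture without inviting the rival to expand

predicted_price = RIVAL_PRICE + CAPTURED_FRACTION * NETWORK_PREMIUM
print(f"Predicted dominant-product price: ${predicted_price:.2f}")
print(f"Predicted premium over the rival: ${CAPTURED_FRACTION * NETWORK_PREMIUM:.2f}")

# The testable prediction: dominant products should sell at a premium
# (here $50). Liebowitz and Margolis report that, empirically, they
# generally do not -- evidence that the effect is small in this market.
```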
Creating Student Success In School, Work, and Life
A child's education is not complete unless it includes the arts. In fact, the No Child Left Behind Act (NCLB) lists the arts among the core academic subjects, requiring schools to enable all students to achieve in the arts and to reap the full benefits of a comprehensive arts education. In spite of this federal direction, access to arts education in our schools is eroding. A report from the Center on Education Policy concludes that, since the enactment of NCLB, 22% of school districts surveyed have reduced instructional time for art and music. This is happening at a time when parents, employers, and civic leaders are demanding improvements to the learning environment that will make our schools places where all learners will access a complete education and opportunities to succeed. These demands cannot be met without comprehensive arts education in our nation's schools.
The Arts Prepare Students for School, Work, and Life
As this country works to strengthen our foothold in the global economy, the arts equip students with a creative, competitive edge. To succeed in today's economy of ideas, students must masterfully use words, images, sounds, and motion to communicate. The arts provide the skills and knowledge students need to develop the creativity and determination necessary for success in today's global information age.
The Arts Strengthen the Learning Environment
Where schools and communities are delivering high-quality learning opportunities in, through, and about the arts for children, extraordinary results occur. A recent study by the Arts Education Partnership, Third Space: When Learning Matters, finds that schools with large populations of students in economic poverty - too often places of frustration and failure for both students and teachers - can be transformed into vibrant hubs of learning when the arts are infused into their culture and curriculum.
The Arts Can Retain Teachers Who Love to Teach
The retention of our best teachers is a daunting challenge. It can be met, however, by ensuring schools embrace the arts. Schools, especially those struggling, can retain their best teachers by becoming havens for creativity and innovation; places where students want to learn and teachers want to teach. As we aim to improve the teaching environment, the arts can help us retain our best future and current educators in our nation's schools.
A comprehensive strategy for a complete education includes rigorous, sequential arts instruction in the classroom, as well as participation and learning in available community-based arts programs. Public schools have the responsibility for providing a complete education for all children, meeting the commitment put forth in NCLB. The federal commitment to arts education must be strengthened so that the arts are implemented as a part of the core curriculum of our nation's schools and are an integral part of every child's development.
Achievement in and through the Arts
Position: The Arts Help Close the Achievement Gap.
Argument: The arts make a tremendous impact on the developmental growth of every child, leveling the "learning field" across socio-economic boundaries. The arts reach students not otherwise engaged, uniquely bridging the broad spectrum of learning styles. Low achieving students often become high achievers in arts learning settings. Their success in the arts classroom often transfers to achievement in other subject areas.
Students who participate in the arts outperform those who do not on virtually every measure. Researchers found that sustained learning in music and theater correlates to greater success in math and reading, with students from lower socio-economic backgrounds reaping the greatest benefits.1 It is now accepted that the arts are uniquely able to boost learning and achievement for young children, students with disabilities, students from economically disadvantaged circumstances, and students needing remedial instruction.2 Students in high-poverty schools benefit dramatically from arts education. The arts teach children the skills necessary to succeed in life, including learning to solve problems and make decisions; learning to think creatively; building self-esteem and self-discipline; articulating a vision; developing the ability to imagine what might be; and accepting responsibility to complete tasks from start to finish.
Ask: Academic achievement for disadvantaged students should be strengthened by integrating successful arts education models into the schools. Urge high-poverty schools to use federal funds to ensure that a comprehensive arts education is available for all students and to integrate the arts into school curriculum to improve student achievement. Provide support for local, state, and national partnerships that promote standards and strategies in support of arts education.
Educational Equity in and through the Arts
Position: The Arts Are a Core Academic Subject and Must Reach All Children.
Argument: The federal government requires that a complete education for every child must include rigorous instruction in all "core academic subjects" - a designation given to the arts in the No Child Left Behind Act (NCLB). Unfortunately, national studies have shown that the implementation of NCLB has led to the erosion of art education in the schools, with 22% of surveyed school districts reporting a decrease in instructional time for art and music.1 U.S. Secretary of Education Margaret Spellings has said, "Many educators across the country have shown that a focus in NCLB on reading and math is not mutually exclusive of the arts and music. In fact, we all know that a well-rounded curriculum that includes the arts and music contributes to higher academic achievement."
A comprehensive arts education - fully integrated as a core subject of learning - fosters the creativity and innovation needed for a more competitive workforce. Department of Education Arts in Education (AIE) programs identify and disseminate successful models of arts instruction, integration, and professional development, and support the leadership initiatives of VSAarts and the John F. Kennedy Center for the Performing Arts. In addition, in-school and after-school learning partnerships with arts organizations, when teamed with rigorous instruction in the arts during the school day, provide students with opportunities to achieve arts literacy. These programs decrease the frequency of delinquent behavior and school truancy, and improve overall academic performance, communication skills, and the ability to complete work on tasks from start to finish.
Ask: Congress must address the unintended consequences of NCLB that have diminished the presence of arts education in our schools; as one of NCLB's core academic subjects, preserve and strengthen the arts and improve the implementation of the arts as a core academic subject at the state and local levels.
Congress should also continue and strengthen support for programs and partnerships that maximize the capacity of the arts to reach all students, including the Department's AIE program, the primary Federal initiative for developing national models in arts education and professional development.
1 Center on Education Policy. (2006). From the Capitol to the Classroom: Year 4 of the No Child Left Behind Act, March 2006. (p. xi).
2 Horowitz, R. & Webb-Dempsey, J. (2003). Promising signs of positive effects: Lessons from the multi-arts studies. In R. J. Deasy (Ed). Critical Links: Learning in the Arts and Student Academic and Social Development. Washington, DC: Arts Education Partnership. (p. 98-100). Mason, C.Y., Thormann, M. S., & Steedley, K. M. (2004). How Students with Disabilities Learn in and through the Arts. Washington, DC: VSAarts. (p. 19-25).
3 Center on Education Policy. (2006). From the Capitol to the Classroom: Year 4 of the No Child Left Behind Act, March 2006. (p. xi).
Teachers and the Arts
Position: The Retention of Arts Teachers Is Crucial to Creating Powerful Learning Communities and Maximizing
Argument: One-third of new teachers leave the profession within three years; half within five years.4 Most affected are urban, rural, and minority communities with large populations of students in economic poverty. But schools have the ability to retain their best teachers by transforming schools - especially those drowning in frustration and failure for students and teachers alike - with the infusion of the arts into their curriculum. When schools embrace the arts, they can become vibrant and successful centers of learning and community life - places where students want to learn and teachers want to teach.5 For schools to develop this sense of community and collaboration through the arts, arts instruction for every child must be delivered by teachers with specific and expert arts and education knowledge. To do otherwise dilutes both the benefits in student achievement and opportunities for schools to retain their best teachers.
Ask: To provide students with a complete education, the arts must be both comprehensively learned and rigorously taught as a core academic subject. In addition to providing students with essential skills to succeed in school, work, and life, rigorous arts education offers a methodology for learning that generates creative teaching solutions from which all teachers can benefit. Student learning will benefit by ensuring arts education specialists are the providers of rigorous arts instruction, continuing support for professional development of new and experienced teachers, and increasing federal support for the transformation of struggling schools into successful learning communities through the arts.
Improve National Measurements of the Arts
Position: The U.S. Department of Education Must Include the Arts in All Research and Data Collection Regarding the "Core Academic Subjects."
Argument: NCLB and current U.S. Department of Education policy make it clear that decisions regarding education should be made on the basis of research. Furthermore, as this nation crafts major policies regarding the future of public education, it is imperative that sound research is available on the status of learning and teaching in our schools. The U.S. Department of Education is the only entity in a position to collect essential national demographic information and to guide policy research of this kind.
In the past, influential data-gathering has taken place in a manner that excludes the collection of information on the arts. For example, the Department's January 1999 study on "Teacher Quality" specifically excluded arts teachers from the study sample. Meaningful research is needed to determine the status of dance, music, theater, and visual arts education. Since the arts are designated as a core academic subject, they should be included in all research and data collection efforts by the U.S. Department of Education. For example, the Fast Response Survey System (FRSS) report, Arts in Education in Public Elementary and Secondary Schools, is the only Department of Education-produced research report on the status of how arts education is delivered in America's public schools. The last FRSS report on arts education featured data collected in the 1999-2000 school year. An updated report with the next round of data collection is long overdue. The National Assessment of Educational Progress in the Arts (NAEP) - the national arts "report card" - provides critical information about the arts skills and knowledge of our nation's students. The next NAEP is scheduled to be administered in 2008, and must stay on track. The FRSS and NAEP are essential to studying and improving access to the arts as a core academic subject.
Ask: The U.S. Department of Education's research efforts must be strengthened by systematically including the arts in studies conducted on the condition of education, practices that improve academic achievement, and the effectiveness of Federal and other education programs.
4 Ingersoll, R. M. (2002). Teacher shortage: A case of wrong diagnosis and wrong prescription. NASSP Bulletin. 86. pp. 16-31.
5 Stevenson, L. M. & Deasy, R. J. (2005). Third Space: When Learning Matters. Washington, DC: Arts Education Partnership. (pp. 10-11).
Creating Student Success In School, Work, and Life
Alliance for Young Artists & Writers, Inc.
Music for All Foundation
American Alliance for Theatre and Education
American Art Therapy Association
Music Teachers National Association
American Arts Alliance
NAMM International Music Products Association
American Association of Family and Consumer Sciences
National A+ Schools Consortium
American Association of Museums
National Academy of Recording Arts & Sciences
American Federation of Musicians
National Art Education Association
American Institute for Conservation of Historic & Artistic Works
National Assembly of State Arts Agencies
National Association for Sport & Physical Education
American Library Association
National Association of Elementary School Principals
American Music Therapy Association
National Association of Secondary School Principals
American String Teachers Association
National Association of State Boards of Education
American Symphony Orchestra League
National Dance Association
Americans for the Arts
National Dance Education Organization
National Education Association
Association for Supervision & Curriculum Development
National Guild of Community Schools of the Arts
Association of Art Museum Directors
National Network for Folk Arts in Education
Association of Independent Colleges of Art and Design
National Parent Teacher Association
Association of Performing Arts Presenters
Association of Public Television Stations
Binney & Smith, Inc.
School Social Work Association of America
Service Employees International Union
Country Music Foundation
State Education Agency Directors of Arts Education
The American Architectural Foundation
Educational Theatre Association
The Grammy Foundation
Educators for Social Responsibility
The John F. Kennedy Center for the Performing Arts
International Alliance for Invitational Education
Theatre Communications Group
International Council of Fine Arts Deans
VH1 Save The Music Foundation
Lincoln Center Institute for the Arts in Education
Wolf Trap Foundation for the Performing Arts
Young Audiences, Inc.
MENC-The National Association for Music Education
November 7, 2006
William T. Coleman, Jr. (1975–1977): Secretary of Transportation
President Gerald Ford appointed William Thaddeus Coleman, Jr., to serve as the nation's fourth secretary of the Department of Transportation on March 7, 1975, replacing Claude Brinegar, who had resigned.
Coleman was born in Philadelphia, Pennsylvania, in 1920, and attended local public schools. He graduated summa cum laude from the University of Pennsylvania in 1941 and magna cum laude from Harvard Law School in 1946. He began his legal career in 1947, serving as law clerk to Judge Herbert F. Goodrich of the Court of Appeals for the Third Circuit, and clerked for Supreme Court Justice Felix Frankfurter in 1948.
Coleman was one of the lead strategists and coauthor of the legal brief in Brown v. Board of Education (1954), in which the Supreme Court outlawed segregation in public schools. He served as a member of the NAACP's national legal committee, director and member of its executive committee, and president of the NAACP Legal Defense and Educational Fund. Coleman was also a member of President Dwight Eisenhower's Committee on Government Employment Policy (1959-1961), a senior consultant and assistant counsel to the President's Commission on the Assassination of President John F. Kennedy (1964), and a consultant to the U.S. Arms Control and Disarmament Agency (1963-1975). In 1969, he was a member of the U.S. delegation to the twenty-fourth session of the United Nations General Assembly. Coleman was also a member of the National Commission on Productivity from 1971 to 1972. He was senior partner in the law firm of Dilworth, Paxson, Kalish, Levy & Coleman at the time of his appointment to the Ford administration.
When President Ford appointed him as secretary of Transportation, Coleman became only the second African American to serve in a cabinet post. At Transportation, Coleman was the point man for the administration's changes to the regulations governing the transportation industry. His most controversial decision was allowing limited transatlantic service for the supersonic transport plane, the Concorde, a decision which angered the majority of environmental groups, concerned largely with the effects of noise pollution. Close on the heels of the Concorde decision in terms of controversy was Coleman's decision to defer the mandatory installation of airbags in all new automobiles. At the end of the Ford administration, Coleman returned to practicing law.
Babyhood – it's a time of face-pulling, baby talk (or parentese) and smiling. Babies spend their days looking at faces, watching parents and siblings, and refining emotional skills. This is the way they get the love, attention and stimulation they need to develop and feel safe in the world.
During this time, your baby begins to learn what emotions are and what they're for. By watching how you react when they express emotions, and by seeing you express your own feelings, babies start to know when they feel specific things, such as happiness, sadness, excitement or fearfulness. After about three months, babies also begin to learn that certain actions – such as smiling, cooing, crying or suddenly yelling louder than the television – can bring about emotional responses from grown-ups.
Tips to help you connect with baby
- When your baby first starts to deliberately catch your eye, look back into baby's eyes.
- When baby makes noises, show you're listening. Try smiling, nodding, widening your eyes, lifting your brows and touching. You can also say things like, 'What did you say?' or 'Aren't you talking well!'. This all encourages your baby to keep communicating.
- Help your baby to calm down after any emotional excitement. You can do this by stroking baby, saying gentle words and playing soothing music. This all helps baby to develop emotional control.
- Maintain a regular routine. This helps your baby to feel comfortable and make sense of all the new sights, sounds, smells and tastes around.
Researchers believe that seeing the way your face reacts when your baby does or says something helps baby to understand the world and form relationships. This is a period of rapid development and brain growth. By nine months, your baby's brain has undergone a growth spurt that helps form connections between what baby sees, hears, tastes and feels.
Increasingly independent and with improved motor skills, babies at this age can sit by themselves for short periods and might start crawling. As they begin to understand who they are, memory improves too. You'll find your baby begins to get attached to people and objects.
Separation anxiety often comes with attachment. To cope with this, your baby needs to learn that when things disappear, they also reappear. You can help baby by:
- giving lots of physical affection, cuddly toys, and verbal reminders of where you are as you move around a room
- playing fun games, such as peekaboo – these also give baby the experience of taking turns
- encouraging time with other carers – this might also give you the chance to go out occasionally and leave baby in somebody else's care.
As your baby moves closer to 12 months, baby will become increasingly vocal. When baby begins to make sounds – 'ba ba ba', 'da da da' – repeat them back. Repetition in speech – 'Are you hungry?' 'You're hungry aren't you?' 'Ohhh, I'm hungry' – teaches babies the meaning of words and leads to the development of speech and language. It is never too early to start talking to your baby. Hearing lots of words helps your child's intellectual development later on.
By this age, your baby's ability to experience different emotions and moods has developed considerably. Baby is also learning how to recognise when other people have emotions. Respond to emotional expressions – 'Yes, I know you're cranky, I'm coming back soon'. This helps your baby to identify emotions and understand the process of feeling better and worse.
As the front part of the brain develops, babies at this age are better able to entertain and reassure themselves with familiar objects and people. They can move more and better, which means they can get away from things that upset or annoy them. You might find that your baby is also starting to want more independence!
Keep baby involved and alert by:
- doing things that make baby happy
- changing activities when boredom or stress sets in
- trying play ideas like reading books, playing with toys and walking around the park pointing at things.
Research tells us that parents who talk about emotions with their babies help their children to understand and respond to other people's emotions.
Scottish Gaelic literature
Before 1200
Gaidhlig was spoken in Scotland at least as early as the sixth century, when settlers from Ireland moved to the west of Scotland. There has been some debate on when the Gaidhlig language spoken in Scotland had become sufficiently distinct from that spoken in Ireland to justify calling it Scottish Gaelic. For much of the Middle Ages, the learned Gaelic elites of both Scotland and Ireland maintained close contacts and shared a literary form of Gaelic, which diverged from the spoken form.
The bulk of early Gaelic verse to which Scottish origins can be ascribed was produced by the monastic community (familia) of St Columba at Iona. Dallán Forgaill (fl. late 6th century) was responsible for a eulogy of Columba, Amra Choluim Chille, which takes pride of place as one of the earliest literary works produced in Irish, and Beccán mac Luigdech (fl. 7th century) composed at least two poems in praise of the patron saint. Of the many vernacular poems written about Columba or attributed to him, only a few can be claimed to be of Scottish origin. The Betha Adamnáin ("Life of Adomnán") incorporates anecdotal material which has been shown to come from Iona. A Scottish background has been suggested for the story related in the 9th/10th-century prose text Scéla Cano meic Gartnáin, about the wanderings of the exiled Scottish king Cano mac Gartnáin. The Lebor Bretnach, an 11th-century Gaelic translation of the Historia Brittonum, has been regarded as the product of a flourishing Gaelic literary establishment at the monastery of Abernethy.
It is possible that more Middle Gaelic literature was written in medieval Scotland than is often thought, but has not survived because the Gaelic literary establishment of eastern Scotland died out before the 14th century. Some Gaelic texts written in Scotland have survived in Irish sources.
There survives a small body of medieval Scottish poetry. There seems to have been some patronage of Gaelic poetry by the later Pictish kings. In the thirteenth century, Muireadhach Albanach, an Irish poet of the O'Dálaigh clan of poets, wrote eulogies for the Mormaers of Lennox. He founded the MacMhuirich bardic family, a Scottish dynasty of poets. Muireadhach may have played a large role in introducing the new "reformed" style of poetry which had been developing in Ireland in the twelfth century. Muireadhach's friend, Gille Brighde Albanach, was perhaps the native Scottish poet with the largest surviving body of work. About 1218, Gille Brighde wrote a poem - Heading for Damietta - on his experiences of the Fifth Crusade.
High Middle Ages
Gaelic has a rich oral (beul-aithris) and written tradition, having been the language of the bardic culture of the Highland clans. However, according to Peter Berresford Ellis, the only extant manuscripts preceding the 16th-century Book of the Dean of Lismore are some notes in the Book of Deer, one 11th-century poem and the Islay Charter of 1408, presumably due to the rest having been "destroyed by the anti-Gaelic administrators of the country". It is clear from John Barbour (d. 1395), and a plethora of other evidence, that the Fenian Cycle flourished in Scotland. There are allusions to Gaelic legendary characters in later Anglo-Scottish literature (oral and written).
Reign of James IV
The Book of Common Order was translated into Scottish Gaelic by Séon Carsuel (John Carswell), Bishop of the Isles, and printed in 1567. This is considered the first printed book in Scottish Gaelic, though the language resembles classical Irish.
Seventeenth century
Mary Macleod (Mairi Nighean Alasdair Ruaidh) was a notable poetess of the 17th century. Iain Lom (c. 1624–c. 1710) was a Royalist Scottish Gaelic poet appointed poet laureate in Scotland by Charles II at the Restoration. He delivered a eulogy for the coronation, and remained loyal to the Stuarts after 1688, opposing the Williamites and later, in his vituperative Oran an Aghaidh an Aonaidh, the 1707 Union of the Parliaments.
Eighteenth century
The Scottish Gaelic Enlightenment figure Alasdair mac Mhaighstir Alasdair compiled the first secular book in Scottish Gaelic to be printed: Leabhar a Theagasc Ainminnin (1741), a Gaelic-English glossary. The second secular book in Scottish Gaelic to be published was his poetry collection Ais-Eiridh na Sean Chánoin Albannaich (The Resurrection of the Ancient Scottish Language). His lexicography and poetry were informed by his study of old Gaelic manuscripts, an antiquarian interest which also influenced the orthography he employed. As an observer of the natural world of Scotland and a Jacobite rebel, Alasdair mac Mhaighstir Alasdair was the most overtly nationalist poet in Gaelic of the 18th century. His Ais-Eiridh na Sean Chánoin Albannaich was reported to have been burned in public by the hangman in Edinburgh. He was influenced by James Thomson's The Seasons as well as by Gaelic "village poets" such as Iain Mac Fhearchair (John MacCodrum). As part of the oral literature of the Highlands, few of the works of such village poets were published at the time, although some have been collected since.
Scottish Gaelic poets produced laments on the Jacobite defeats of 1715 and 1745. Mairghread nighean Lachlainn and Christina Ferguson are among the women poets who reflected on the crushing effects on traditional Gaelic culture of the aftermath of the Jacobite uprisings. A consequent sense of desolation pervaded the works of Scottish Gaelic writers such as Dughall Bochanan, which mirrored many of the themes of the graveyard poets writing in England. A legacy of Jacobite verse was later compiled (and adapted) by James Hogg in his Jacobite Reliques (1819).
Donnchadh Bàn Mac an t-Saoir (usually Duncan Ban MacIntyre in English; 20 March 1724 – 14 May 1812) is one of the most renowned of Scottish Gaelic poets and formed an integral part of one of the golden ages of Gaelic poetry in Scotland during the 18th century. He is best known for his poem about Beinn Dorain, "Moladh Beinn Dòbhrain" (English: "Praise of Ben Doran"). Most of his poetry is descriptive, and the influence of Alasdair MacMhaighstir Alasdair is notable in much of it. Despite the Jacobite upheavals during his lifetime, it was his experience as a gamekeeper in Argyll and Perthshire in the employ of the Duke of Argyll which had the greatest impact upon his poetry. Moladh Beinn Dòbhrain stems from this period. The significance of Duncan Bàn's nature-themed poetry is such that it has, along with that of MacMhaighstir Alasdair, been described as "the zenith of Gaelic nature poetry".
The Ossian of James Macpherson
Bible translation
An Irish Gaelic translation of the Bible dating from the Elizabethan period, but revised in the 1680s, was in use until the Bible was translated into Scottish Gaelic. Author David Ross notes in his 2002 history of Scotland that a Scottish Gaelic version of the Bible was published in London in 1690 by the Rev. Robert Kirk, minister of Aberfoyle; however it was not widely circulated.
The first well-known translation of the Bible into modern Scottish Gaelic was begun in 1767, when Dr James Stuart of Killin and Dugald Buchanan of Rannoch produced a translation of the New Testament. Very few European languages have made the transition to a modern literary language without an early modern translation of the Bible. The lack of a well-known translation until the late 18th century may have contributed to the decline of Scottish Gaelic.

19th century

Ewen MacLachlan translated the first eight books of Homer's Iliad into Scottish Gaelic. He also composed and published his own Gaelic Attempts in Verse (1807) and Metrical Effusions (1816), and contributed greatly to the 1828 Gaelic–English Dictionary.

The poetry of Allan MacDonald (1859–1905) is mainly religious in nature. He composed hymns and verse in honour of the Blessed Virgin, the Christ Child, and the Eucharist, but he also composed several secular poems and songs. In some of these, MacDonald praises the beauty of Eriskay and its people. In his verse drama Parlamaid nan Cailleach (The Old Wives' Parliament), he lampoons the gossiping of his female parishioners and local marriage customs.

20th century

Since about 1900, plays have been written and performed in Scottish Gaelic. The first novel in Scottish Gaelic was John MacCormick's Dùn-Àluinn, no an t-Oighre 'na Dhìobarach, which was serialised in the People's Journal in 1910, before publication in book form in 1912. The publication of a second Scottish Gaelic novel, An t-Ogha Mòr by Angus Robertson, followed within a year.

Dòmhnall Ruadh Chorùna was a Scottish Gaelic poet who served in the First World War, and as a war poet described the use of poison gas in his poem Òran a' Phuinnsuin ("Song of the Poison"). His poetry is part of oral literature, as he himself never learnt to read and write in his native language.

As part of the Scottish Gaelic Renaissance, Sorley MacLean's work in Scottish Gaelic in the 1930s gave new value to modern literature in that language. Iain Crichton Smith was more prolific in English but also produced much Gaelic poetry and prose, and translated some of the work of Sorley MacLean from Gaelic to English, as well as some of his own poems originally composed in Gaelic. Much of his English-language work was related to, or translated from, Gaelic equivalents.

Modern Gaelic poetry has been most influenced by Symbolism, transmitted via poetry in English, and by Scots poetry. Traditional Gaelic poetry utilised an elaborate system of metres, which modern poets have adapted to their own ends. George Campbell Hay looks back beyond the popular metres of the 19th and 20th centuries to forms of early Gaelic poetry. Donald MacAuley's poetry is concerned with place and community. The following generation of Gaelic poets, writing at the end of the 20th century, lived in a bilingual world to a greater extent than any other generation, with their work most often accompanied in publication by a facing text in English. Such confrontation has inspired semantic experimentation, seeking new contexts for words, and going as far as the explosive and neologistic verse of Fearghas MacFhionnlaigh (1948–).

Scottish Gaelic poetry has been the subject of translation not only into English, but also into other Celtic languages: Maoilios Caimbeul and Màiri NicGumaraid have been translated into Irish, and John Stoddart has produced anthologies of Gaelic poetry translated into Welsh. Scottish Gaelic literature is currently experiencing a revival.
With regard to Gaelic poetry this includes the Great Book of Gaelic, An Leabhar Mòr, a Scottish Gaelic, English and Irish language collaboration featuring the work of 150 poets, visual artists and calligraphers. Established contemporary poets in Scottish Gaelic include Meg Bateman, Maoilios Caimbeul, Rody Gorman, Aonghas MacNeacail and Angus Peter Campbell. Gaelic prose has also expanded, particularly with the development since 2003 of the Ùr-sgeul series published by CLÀR, which encourages new works of Gaelic fiction from both established and new writers. Angus Peter Campbell, besides three Scottish Gaelic poetry collections, has produced five Gaelic novels: An Oidhche Mus Do Sheol Sinn (2003), Là a' Deanamh Sgeil Do Là (2004), An Taigh-Samhraidh (2006), Tilleadh Dhachaigh (2009) and Fuaran Ceann an t-Saoghail (2011). Other established fiction writers include Alasdair Caimbeul and his brother Tormod Caimbeul, Catriona Lexy Campbell, Alison Lang, Dr Finlay MacLeod, Iain F. MacLeod, Norma MacLeod, Mary Anne MacDonald and Duncan Gillies. New fiction writers include Mairi E. MacLeod and the writers of the An Claigeann Damien Hirst (Ùr-sgeul, 2009) and Saorsa (Ùr-sgeul, 2011) anthologies. Most recently, the Gaelic drama group Tog-I, established by Arthur Donald, has attempted to revive the sector.

See also

- Scottish literature
- Book of Deer
- Islay Charter
- Book of the Dean of Lismore
- Glenmasan manuscript
- Fernaig manuscript
- Alasdair MacMhaighstir Alasdair
- James Macpherson
- Ewen MacLachlan

References

- Clancy, Thomas Owen (2006). "Scottish Gaelic literature (to c. 1200)". In John T. Koch (ed.), Celtic Culture: A Historical Encyclopedia. 5 volumes. Vol. 4. Santa Barbara, Denver and Oxford: ABC-Clio. pp. 1276–7.
- Clancy, Thomas Owen and G. Márkus (eds.) (1995). Iona: The Earliest Poetry of a Celtic Monastery. Edinburgh. pp. 96–128 (Amra Choluimb Chille).
- Ellis, Peter Berresford (1980). MacBeth, High King of Scotland, 1040-57.
- Heal, Felicity (2005). Reformation in Britain and Ireland. p. 282: "In Irish the catechism long preceded the printing of the New Testament, while in Scottish Gaelic the Form of Common Order was printed in 1567, the full Bible not until 1801. Manx Gaelic had no Bible until the eighteenth century."
- Watson, Roderick (2007). The Literature of Scotland. Houndmills: Palgrave Macmillan. ISBN 9780333666647.
- Crawford, Robert (2007). Scotland's Books. London: Penguin. ISBN 9780140299403.
- Calder, George (editor and translator) (1912). The Gaelic Songs of Duncan MacIntyre. Edinburgh: John Grant.
- Gaelic Song - An Introduction
- Mackenzie, Donald W. (1990-92). "The Worthy Translator: How the Scottish Gaels got the Scriptures in their own Tongue". Transactions of the Gaelic Society of Inverness 57: 168–202.
- Ross, David (2002). Scotland: History of a Nation. Geddes & Grosset.
- "The Forgotten First: John MacCormick's Dùn-Àluinn"
- MacAuley, Donald (1976). Nua-bhàrdachd Ghàidhlig - Modern Scottish Gaelic Poems. Southside.
- Whyte, Christopher (1991). An Aghaidh na Sìorraidheachd - In the Face of Eternity. Edinburgh: Polygon. ISBN 0748660917.
- Poetry in the British Isles: Non-Metropolitan Perspectives (1995). University of Wales Press. ISBN 0708312667.
- Leabhar Mor website summary of book
- Simon MacKenzie, Scotsman obituary

Further reading

- Black, Ronald I.M. (ed.). An Lasair: an anthology of 18th-century Scottish Gaelic verse. Edinburgh, 2001.
- Black, Ronald I.M. (ed.). An Tuil: an anthology of 20th-century Scottish Gaelic verse. Edinburgh, 1999.
- Bruford, Alan. Gaelic folktales and medieval romances: a study of the early modern Irish romantic tales and their oral derivatives. Dublin, 1969.
- Campbell, J.F. (ed.). Leabhar na Féinne: heroic Gaelic ballads collected in Scotland chiefly from 1512 to 1871. London, 1872. PDF available from the Internet Archive.
- Clancy, Thomas Owen. "King-making and images of kingship in medieval Gaelic literature." In The Stone of Destiny: artefact and icon, edited by R. Welander, D.J. Breeze and T.O. Clancy. Society of Antiquaries of Scotland Monograph Series 22. Edinburgh: Society of Antiquaries of Scotland, 2003. pp. 85–105.
- MacLachlan, Ewen. Ewen MacLachlan's Gaelic Verse. Aberdeen University Studies 114. 2nd ed. Aberdeen: Dept. of Celtic, 1980 (1937).
- Ó Baoill, Colm and Donald MacAulay. Scottish Gaelic vernacular verse to 1730: a checklist. Revised edition. Aberdeen: Department of Celtic, University of Aberdeen, 2001.
- Ó Baoill, Colm. Mairghread nighean Lachlainn: song-maker of Mull. An edition and study of the extant corpus of her verse in praise of the Jacobite Maclean leaders of her time. Edinburgh: Scottish Gaelic Text Society, 2009.
- Ó Háinle, Cathal and Donald E. Meek. Unity in diversity: studies in Irish and Scottish Gaelic language, literature and history. Dublin, 2004.
- Storey, John. "Ùr-Sgeul: Ag Ùrachadh Litreachas is Cultar na Gàidhlig . . . Dè an Ath Cheum?" Edinburgh: Celtic and Scottish Studies, 2007. PDF available from the University of Edinburgh.
- Storey, John. "Contemporary Gaelic fiction: development, challenge and opportunity" in Lainnir a' Bhùirn' - The Gleaming Water: Essays on Modern Gaelic Literature, edited by Emma Dymock & Wilson McLeod. Edinburgh: Dunedin Academic Press, 2011.
- Watson, Moray. An Introduction to Gaelic Fiction. Edinburgh: Edinburgh University Press, 2011.
- Watson, William J. (ed.). Bardachd Albannach: Scottish verse from the Book of the Dean of Lismore. Edinburgh: The Scottish Gaelic Texts Society, 1937.
On Friday, January 6th, The June Buchanan School's Middle School Science students traveled to the East Kentucky Science Center in Prestonsburg, KY for the beginning of their science class unit, "Amazing Space." The students have begun a collaborative unit designed by Middle School Science instructor Karen Bailey and Spanish instructor Tamara Kunkel to teach key earth-space concepts. JBS students are learning the various properties of planet Earth, motions of celestial bodies, characteristics of the moon, moon phases, and key facts about the planets, stars, and much more, in English and in Spanish! Students have been doing, and will continue to do, a variety of learning activities, including the design of a Wikispace. The Wikispace will include students' self-made virtual tours of the solar system and student work from throughout the unit. In addition, students will be researching topics of interest and blogging about their findings concerning black holes, life on other planets, current space missions, and more! To kick off this unit of study, students visited the Science Center to engage in a series of programs, such as the Energy Exhibit, Comets in the Classroom, the Oasis in Space planetarium show, and a laser show titled Pink Floyd's Dark Side of the Moon. At the end of the unit, students will travel to the Challenger Learning Center in Hazard, KY to take on the role of astronauts in a simulated mission. Instructor Karen Bailey states, "Hands-on learning and cross curriculum connections truly make the content we are learning stick with students. I am thrilled that students are blessed to be so close to incredible space resources in this region."
What is plastic surgery?

Plastic surgery is a medical specialty focused on the reconstruction of injured, impaired and defective parts of the face and body. Reconstructive plastic surgery may be required as a result of birth defects or developmental disorders, injuries, burns and tumors or other diseases. Although reconstructive plastic surgery may restore or improve a patient's appearance, its primary focus is the restoration of the body. The term "plastic" derives from the Greek term "plastikos", which means "able to be molded".

What procedures does a plastic surgeon perform?

Generally speaking, reconstructive plastic surgery involves the removal of tumors, repair of fractured bones and lacerations on the face and body, surgery on the hands, breast reduction (in women and men) and reconstruction, repair of cleft palates and other congenital abnormalities, and skin grafts to repair severe burns. Plastic surgeons can also create new body parts, such as an ear, from tissue and skin and attach them to a patient; they also reattach amputated extremities. Plastic surgery is an extensive field that has been broken down into the following subspecialties:

- Microsurgery – Microsurgery entails the transfer of tissue, skin, muscle, bone or fat to other areas of the body where needed on the face and body, as well as the reconnection of blood vessels. Popular procedures include breast reconstruction, head and neck reconstruction and hand surgery.
- Hand surgery – A part of microsurgery, hand surgery is practiced by hand surgeons, plastic surgeons, orthopedic surgeons and general surgeons. It focuses on the reconstruction of acute and chronic diseases of the hand and wrist, nerve problems (carpal tunnel syndrome) and the correction of congenital malformations of the hands, as well as the reattachment of amputated extremities.
- Burn surgery – Acute burn surgery is the treatment immediately after a burn; reconstructive burn surgery takes place after the burn wounds have healed. Common burn surgery procedures are skin grafts, in which areas of skin are transplanted to cover a burned area of skin.
- Craniofacial surgery – Pediatric craniofacial surgery focuses on the reconstruction of congenital anomalies of the hard and soft tissues of the face, such as cleft lip and palate. Adult craniofacial surgery focuses on the reconstruction of facial fractures, as well as orbital reconstruction (the area of the face around the eyes) and orthognathic surgery (surgery on the jaw and face to correct conditions such as sleep apnea or TMJ).
- Pediatric surgery – Pediatric surgery is a blend of hand surgery and craniofacial surgery, dealing with deformities and defects in children.

Is cosmetic surgery different from plastic surgery?

Cosmetic surgery is a subspecialty of plastic surgery. However, unlike plastic surgery, cosmetic surgery is primarily focused on enhancing a patient's appearance. Cosmetic surgery can be performed on all areas of the head, neck and body, and is typically elective (i.e. the necessity is at the discretion of the patient). Some of the most common cosmetic procedures include breast surgery (either augmentation, such as with implants, sometimes desired after surgery for removal of cancers of the breast, or breast reduction desired for proportion or to improve comfort), rhinoplasty or "nose job", blepharoplasty or "eyelid surgery", abdominoplasty or "tummy tuck" and liposuction. Popular nonsurgical procedures include Botox injections and laser hair removal.
A group of artificial satellites working in concert is known as a satellite constellation. Such a constellation can be considered to be a number of satellites with coordinated ground coverage, operating together under shared control, synchronised so that they overlap well in coverage and complement rather than interfere with one another's coverage.

Low Earth orbiting satellites (LEOs) are often deployed in satellite constellations, because the coverage area provided by a single LEO satellite only covers a small area that moves as the satellite travels at the high angular velocity needed to maintain its orbit. Many LEO satellites are needed to maintain continuous coverage over an area. This contrasts with geostationary satellites, where a single satellite, moving at the same angular velocity as the rotation of the Earth's surface, provides permanent coverage over a large area.

Examples of satellite constellations include the Global Positioning System (GPS), Galileo and GLONASS constellations for navigation and geodesy, the Iridium and Globalstar satellite telephony services, the Disaster Monitoring Constellation and RapidEye for remote sensing, the Orbcomm messaging service, the Russian elliptic-orbit Molniya and Tundra constellations, the large-scale Teledesic and Skybridge broadband constellation proposals of the 1990s, and the proposed LEO global backhaul constellation named COMMStellation™.

Broadband applications benefit from low-latency communications, so LEO satellite constellations provide an advantage over a geostationary satellite, where the minimum theoretical latency is about 125 milliseconds, compared to 1–4 milliseconds for a LEO satellite. A LEO satellite constellation can also provide more system capacity by frequency reuse across its coverage, with spot-beam frequency use being analogous to the frequency reuse of cellular radio towers. A group of formation-flying satellites very close together and moving in almost identical orbits is known as a satellite cluster or satellite formation flying.

Walker Constellation

There are a large number of constellations that may satisfy a particular mission. Usually constellations are designed so that the satellites have similar orbits, eccentricity and inclination, so that any perturbations affect each satellite in approximately the same way. In this way, the geometry can be preserved without excessive station keeping, thereby reducing fuel usage and hence increasing the life of the satellites. Another consideration is that the phasing of each satellite in an orbital plane must maintain sufficient separation to avoid collisions or interference at orbit plane intersections. Circular orbits are popular, because then the satellite is at a constant altitude, requiring a constant strength signal to communicate.

A class of circular orbit geometries that has become popular is the Walker Delta Pattern constellation. This has an associated notation, proposed by John Walker: i: t/p/f, where i is the inclination, t is the total number of satellites, p is the number of equally spaced planes, and f is the relative spacing between satellites in adjacent planes. The change in true anomaly (in degrees) for equivalent satellites in neighbouring planes is equal to f*360/t. For example, the Galileo Navigation system is a Walker Delta 56°:27/3/1 constellation: 27 satellites in 3 planes inclined at 56 degrees, spanning the 360 degrees around the equator, with the "1" defining the phasing between the planes and how they are spaced.
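To make the notation concrete, here is a minimal Python sketch (not part of the original article; the function name and angle conventions are my own) that expands a Walker Delta pattern i: t/p/f into per-satellite orbital angles, using the f*360/t phasing rule quoted above:

    def walker_delta(inclination_deg, t, p, f):
        """Expand Walker Delta notation i: t/p/f into per-satellite angles.

        Returns a list of (raan_deg, mean_anomaly_deg, inclination_deg)
        tuples: the plane's right ascension of ascending node, the
        satellite's position within the plane, and the shared inclination.
        """
        if t % p != 0:
            raise ValueError("t satellites must divide evenly into p planes")
        per_plane = t // p
        satellites = []
        for plane in range(p):
            raan = plane * 360.0 / p          # planes equally spaced over 360 degrees
            for slot in range(per_plane):
                # in-plane spacing, plus the inter-plane phasing offset f*360/t
                anomaly = (slot * 360.0 / per_plane + plane * f * 360.0 / t) % 360.0
                satellites.append((raan, anomaly, inclination_deg))
        return satellites

    # Galileo as described above: Walker Delta 56°:27/3/1
    for sat in walker_delta(56.0, 27, 3, 1)[:4]:
        print(sat)

Each plane holds t/p satellites spaced 360/(t/p) degrees apart, and moving one plane east shifts every satellite by f*360/t degrees of anomaly, which is what keeps the ground coverage evenly staggered.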
The "1" defines the phasing between the planes, and how they are spaced. The Walker Delta is also known as the Ballard rosette, after A. H. Ballard's similar earlier work. Ballard's notation is (t,p,m) where m is a multiple of the fractional offset between planes. Another popular constellation type is the near-polar Walker Star, which is used by Iridium. Here, the satellites are in near-polar circular orbits across approximately 180 degrees, travelling north on one side of the Earth, and south on the other. The active satellites in the full Iridium constellation form a Walker Star of 86.4°:66/6/2, i.e. the phasing repeats every two planes. Walker uses similar notation for stars and deltas, which can be confusing. See also Example satellite constellations In use - A-train (satellite constellation) - Compass navigation system - Disaster Monitoring Constellation - Global Positioning System - Iridium satellite constellation - Sirius Satellite Radio - XM Satellite Radio Satellite constellation simulation tools: - AVM Dynamics Satellite Constellation Modeler - SaVi Satellite Constellation Visualization - Transfinite Visualyse Professional - J. G. Walker, Satellite constellations, Journal of the British Interplanetary Society, vol. 37, pp. 559-571, 1984 - A. H. Ballard, Rosette Constellations of Earth Satellites, IEEE Transactions on Aerospace and Electronic Systems, Vol 16 No. 5, Sep. 1980. - J. G. Walker, Comments on "Rosette constellations of earth satellites", IEEE Transactions on Aerospace and Electronic Systems, vol. 18 no. 4, pp. 723-724, November 1982.
Every row and column in the diagram contains skyscrapers of different heights - exactly those heights indicated at the side of the puzzle (so sometimes some places remain empty, for example when heights 1-4 are used in a 6x6 puzzle). No two skyscrapers of the same height are in the same row or column. The numbers around the diagram denote how many skyscrapers are visible from that direction: higher skyscrapers block lower ones.

In this example, we need to fill in skyscrapers of heights 1 to 4. In the top right there is a 4 next to the diagram, which means that all skyscrapers in the top row are visible from the right. This is only possible if they have decreasing heights from left to right, so the top row reads 4-3-2-1. In the bottom right, there is a 1 next to the diagram. This means only one skyscraper is visible from the right, so this must be the highest one: the 4.

In the bottom row, there needs to be a skyscraper of height 3. It cannot be in the leftmost column, since in that column three skyscrapers need to be visible from below, and a 3 would block both the 1 and the 2. It also cannot be in the second column, since there already is a 3 in that column. So it must be in the third column. To complete the bottom row, look at the 3 to the left of it: if the 1 were in the first column and the 2 in the second, then all four skyscrapers would be visible from the left, so it must be the other way around.

In the second row, five skyscrapers are visible in total from left and right combined. Since the highest one is the only one that can be seen from both sides, all other skyscrapers must be visible. This means that the 4 must be in the third position. The 3 cannot be in the first position, because it would block the second position from view; it cannot be in the second position, because there already is a 3 in that column; so it must be in the fourth position. The 1 and the 2 must both be visible from the left, so the 1 comes first.

To complete the puzzle, we just need to fill in the missing number in every column.

Puzzles in this genre
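The visibility rule is easy to express in code. Below is a small Python sketch (mine, not from the puzzle site) that counts visible skyscrapers from one end of a row; the solved grid reconstructed from the walkthrough above is checked against three of its clues:

    def visible_count(row):
        """Count skyscrapers visible from the left end of a row.

        A skyscraper is visible when it is taller than every one before it.
        """
        count, tallest = 0, 0
        for height in row:
            if height > tallest:
                count += 1
                tallest = height
        return count

    # The example solution reconstructed from the walkthrough above.
    grid = [
        [4, 3, 2, 1],  # top row
        [1, 2, 4, 3],  # second row
        [3, 4, 1, 2],  # third row (filled in column by column)
        [2, 1, 3, 4],  # bottom row
    ]

    # Visibility from the right is visibility from the left of the reversed row.
    print(visible_count(grid[0][::-1]))                           # 4: top-right clue
    print(visible_count(grid[1]) + visible_count(grid[1][::-1]))  # 5: second row, both sides
    print(visible_count(grid[3][::-1]))                           # 1: bottom-right clue

The same helper is all a brute-force solver needs: generate candidate rows, and reject any whose left and right counts disagree with the clues.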
Just when you thought you'd figured out all the ways to blow up, nature reveals a new way. This latest class of explosion is called a hybrid gamma-ray burst, and it was discovered by NASA's Swift satellite. As with most gamma-ray bursts, this explosion probably indicates the birth of a new black hole in the Universe; however, the explosion itself was different from what astronomers have seen before.

First, a little about gamma-ray bursts. They come in two varieties: long and short. The long bursts can go on for more than two seconds (yeah, that's the long variety), and appear to be caused when the core of a massive star collapses into a black hole. Two seconds from star to black hole. The short variety can last mere milliseconds, and appears to be caused by the merger of two compact objects. For example, if you have two neutron stars orbiting one another, their orbits will eventually decay to the point that they merge. It doesn't have to be two neutron stars, though: you could have a black hole and a neutron star.

This new explosion detected by Swift lasted for 102 seconds. That's in long-burst territory; however, the light curve better matched the characteristics of a short explosion. It was as if neutron stars were merging for nearly two minutes, when the process should have taken only milliseconds. Unfortunately, astronomers have no idea what caused this. "This is brand new territory; we have no theories to guide us," noted one of the astronomers, Neil Gehrels at NASA's Goddard Space Flight Center.

One interesting theory is that it was actually the merger of a neutron star or a black hole with a white dwarf. Instead of an instantaneous collision, the white dwarf took a full 102 seconds to be torn apart.

Thanks to David Alexander Kann for additional details on this.
IN the past two decades, the African continent has experienced violent civil conflicts that have taken a toll on the socioeconomic development of the affected states. However, the biggest losers could be the women who were abducted and raped by rebels in the conflicts. Some of the women have been held against their will and forced into marriages, according to research on the prolonged conflicts that affected Sierra Leone, Liberia, Uganda, Rwanda and the Democratic Republic of Congo, which was launched in Nairobi on Thursday.

In exchange for the marriage, the women normally get protection from the rebels as well as food and shelter, said the researchers. They said it is one of the forms of violence perpetrated against women, alongside rape, losing land and witnessing family members being killed. It not only violates international human rights law as well as many national domestic laws; some scholars also regard forced marriage as a form of sexual slavery.

Canadian-based York University Professor of Socio-legal Studies Annie Bunting was the author of the report. She said that fighting soldiers normally need the women for sex, labour, taking care of children and cultivating, as well as for supporting the strategy for war. Her three-year study on forced marriages found that these unions are also not recognised under African customary laws.

In Uganda, the Lord's Resistance Army (LRA), which fought the government in Northern Uganda for over 20 years, used forced conjugal associations in its armed conflict. The LRA is said to have developed a system to force the kidnapped women to act as wives for the soldiers. The women also played a critical role in boosting the morale of the militias.

Sierra Leone was engulfed in a bloody ten-year civil war between 1992 and 2002, during which kidnapped women were enlisted to provide labour in the fight against government troops. In Liberia, the Truth and Reconciliation Commission report indicated that gender-based violence, including forced marriages, was a common practice. In addition, children born in these marriages were later taken from the women after the end of the conflict.

The women and girls often experienced difficulties in getting society to accept them back. Bunting added that activists and survivors of gender violence are themselves at risk. The treatment of children born in captivity varied from country to country: in some of the conflict countries they were fully embraced by the militias, while in others they were considered outcasts.

Bunting said that in order to discourage such practices, the rebels who perpetrate the crimes should be subject to criminal sanctions. "You have to hold the commanders responsible so as to act as deterrence for future crimes," she said.

During the genocide in Rwanda, however, cases of forced marriage were few, probably due to the relatively short duration of the conflict.

Director of the British East African Institute Dr Ambreena Manji said that victims of gender violence should get reparations. In Sierra Leone, over 230 women survivors have received direct financial benefits, while another 650 were trained in various skills. The UN General Assembly in October 2005 emphasised the need for reparations if human rights are violated. Manji added that the international community is now focusing on this new crime. "In order to prevent history from repeating itself, some form of compensation to victims is necessary," she said.
The authorities of post-conflict nations should therefore consider reparations both for communities and for individual victims. Kenya's Kisii University College law lecturer Wycliffe Otiso said that forced marriage is prevalent in conflicts due to certain factors. "Rebels in wars normally wield a lot of power and therefore have a chance of abducting women. There is therefore a need to put in place legal mechanisms so as to bring justice to victims," he said.
Looking at Structures: Structure Factors and Electron Density

In a typical crystallographic experiment, a crystal is subjected to a narrow beam of intense X-rays, and the diffraction pattern is observed with a detector or a sheet of film. This pattern forms a characteristic array of spots, commonly referred to as reflections. Crystallographers measure the intensity of these reflections and use the information to determine the distribution of electrons in the crystal. The result is a map of the crystal that shows the distribution of electrons at each point, which may then be interpreted to find coordinates for each atom in the crystallized molecules.

Two pieces of information are needed to create an electron density map: the amplitude of X-rays in each reflection and the phase of X-rays in each reflection. Together, this information is used to define a complex number, termed the structure factor, which is used to calculate the electron density map. In a typical experiment, the amplitudes of the structure factors are obtained by measurement of the reflection intensities. The phases, however, are trickier to measure, and crystallographers have developed several methods to estimate them.

The traditional method for estimating phases, termed isomorphous replacement, is to add a few electron-dense atoms, such as metal ions, to the crystal, and compare the diffraction pattern with that of similar crystals that do not include the heavy atoms. Looking at the differences, researchers can find the location of the heavy atoms, and then estimate phases based on their locations. Molecular replacement is also commonly used to estimate phases. In this case, the researcher uses a previously solved structure of the molecule as a starting model, and calculates phases based on it. More recently, anomalous scattering of X-rays has become a common method for determining phases. In these cases, special atoms like selenium or bromine are added to the molecules, and the wavelength of the X-rays is carefully tuned to give anomalous scattering. By looking at small differences in symmetrical reflections in the diffraction pattern, the phases may be estimated directly.

For many of the structures in the PDB, the authors have deposited the primary crystallographic data along with the atomic model that was solved using the data. These data files may be found in the "Experimental Details" box on the structure summary page. The files include a list of all of the reflections that were used in the structure determination. A typical file includes the h, k, and l indices for each reflection, a measure of the amplitude or intensity of the reflection, and often a measure of the standard uncertainty (sigma) of the reflection. The file may also include other pieces of information, such as a flag to identify reflections used for free R-value calculations or other details of the experiment.

Tip: The Astex Viewer may be used to create an interactive visualization of the electron density based on the experimental data provided by the authors. It may be reached through the "EDS" link in the "Experimental Details" panel of the structure summary page.

Tip: You will find selenomethionine amino acids in many recent structures of proteins. This is a common way that researchers add selenium to proteins for use in determining phases by anomalous scattering. Since selenium is chemically similar to sulfur, we expect that the protein structure will be similar to the form with the normal methionine amino acids.
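Reflection data are distributed in several concrete formats (such as mmCIF and MTZ), but the logical content is the table just described. As a toy illustration only - the whitespace-separated column layout below is hypothetical, not a real PDB format - reading such a table into memory might look like this in Python:

    def read_reflections(path):
        """Read a toy reflection table: h k l amplitude sigma [free-R flag].

        Returns a list of dicts, one per reflection. The column layout is
        illustrative; real files (mmCIF, MTZ) need a crystallography library.
        """
        reflections = []
        with open(path) as handle:
            for line in handle:
                fields = line.split()
                if not fields or fields[0].startswith("#"):
                    continue  # skip blank and comment lines
                reflections.append({
                    "hkl": (int(fields[0]), int(fields[1]), int(fields[2])),
                    "amplitude": float(fields[3]),
                    "sigma": float(fields[4]),
                    "free_r": len(fields) > 5 and fields[5] == "1",
                })
        return reflections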
The left image shows one plane through the three-dimensional diffraction pattern of a DNA crystal. Each spot has a characteristic intensity that is related to the distribution of electrons in the crystal. For instance, the row of dark spots 10 rows above and below the center are characteristic of the stacking of bases in DNA. The right image shows the electron density derived from the diffraction pattern of PDB entry 6bna, created using the Astex Viewer. The view shows one base pair with a guanine and a bromocytosine. The blue contours enclose most of the electrons, and show the overall shape of the bases, and the yellow contours enclose only regions with high electron density, such as the electron-dense bromine atom.
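The step from structure factors to an electron density map is a Fourier synthesis: each reflection contributes one complex term, and the inverse transform of the whole set gives the density. Here is a schematic numpy sketch reusing the toy reflection records from the sketch above (my own illustration; sign and scaling conventions vary between programs, and the 1/V cell-volume factor is omitted):

    import numpy as np

    def electron_density(reflections, phases_deg, grid=(64, 64, 64)):
        """Fourier synthesis of an electron density map.

        reflections -- list of dicts with "hkl" and "amplitude" keys
        phases_deg  -- one estimated phase (in degrees) per reflection
        Returns a real-valued density array sampled over one unit cell.
        Assumes only one member of each Friedel pair is listed.
        """
        nx, ny, nz = grid
        F = np.zeros(grid, dtype=complex)
        for refl, phase in zip(reflections, np.radians(phases_deg)):
            h, k, l = refl["hkl"]
            F[h % nx, k % ny, l % nz] = refl["amplitude"] * np.exp(1j * phase)
            # The Friedel mate F(-h,-k,-l) is the complex conjugate;
            # including it keeps the synthesized map real-valued.
            F[-h % nx, -k % ny, -l % nz] = refl["amplitude"] * np.exp(-1j * phase)
        return np.fft.ifftn(F).real

This is why both amplitudes and phases are needed: the amplitude sets the weight of each term, while the phase sets where its contribution peaks in the cell.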
Loudness

A person who emits noise (with the voice or otherwise) either loudly or much of the time can be described as loud. Whether this is an insult or a compliment is a matter of personal preference: some people self-describe as "loud", while many others consider "loud" people to be intensely irritating.

Units used to measure loudness:
- sone (a unit of subjective loudness)
- phon (a unit of loudness level)
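The two units are linked by a standard rule of thumb: perceived loudness roughly doubles for every 10-phon increase above the 40-phon reference. A small Python sketch (illustrative; the relation holds only above about 40 phon):

    def phon_to_sone(phon):
        """Convert loudness level (phon) to perceived loudness (sone)."""
        return 2.0 ** ((phon - 40.0) / 10.0)

    for level in (40, 50, 60, 70):
        print(level, "phon =", phon_to_sone(level), "sone")
    # 40 phon = 1 sone, 50 phon = 2 sones, 60 phon = 4 sones, 70 phon = 8 sones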
Civilians protected under international humanitarian law

During the past 60 years the main victims of war have been civilians. The protection of civilians during armed conflict is therefore a cornerstone of international humanitarian law. This protection extends to their public and private property. IHL also identifies and protects particularly vulnerable civilian groups such as women, children and the displaced.

During World War II, and in many of the conflicts since, civilians have been the main victims of armed conflict. Civilians have always suffered in war, but the brutal impact of World War II, which included mass extermination, indiscriminate attacks, deportations, hostage-taking, pillage and internment, took a heavy toll of civilian life. The response of the international community was the Fourth Geneva Convention, adopted in 1949. Before 1949 the Geneva Conventions protected wounded, sick, shipwrecked and captured combatants. The "civilians' convention" recognized the changing nature of warfare and established legal protection for any person not belonging to armed forces or armed groups. The protection also included civilian property. Such protection was later reinforced with the adoption of the Additional Protocols to the Geneva Conventions in 1977.

IHL provides that civilians under the power of enemy forces must be treated humanely in all circumstances, without any adverse distinction. They must be protected against all forms of violence and degrading treatment, including murder and torture. Moreover, in case of prosecution, they are entitled to a fair trial affording all essential judicial guarantees.

The protection of civilians extends to those trying to help them, in particular medical units and humanitarian or relief bodies providing essentials such as food, clothing and medical supplies. The warring parties are required to allow access to such organizations. The Fourth Geneva Convention and Additional Protocol I specifically require belligerents to facilitate the work of the ICRC.

While IHL protects all civilians without discrimination, certain groups are singled out for special mention. Women and children, the aged and the sick are highly vulnerable during armed conflict. So too are those who flee their homes and become internally displaced or refugees. IHL prohibits forced displacement by intimidation, violence or starvation. Families are often separated in armed conflict. States must take all appropriate steps to prevent this and take action to re-establish family contact by providing information and facilitating tracing activities.

The protection of civilians provided by the Geneva Conventions and Additional Protocols is extensive. The problem of the past 50 years has been application. Neither States nor non-State armed groups have respected their obligations adequately. Civilians have continued to suffer excessively in almost every armed conflict. In some conflicts civilians have been specifically targeted and subjected to terrible atrocities, ignoring the very basis of the Geneva Conventions: respect for the human person. It is for this reason that the ICRC continues to press States to respect and ensure respect for the principles of IHL, especially the protection of civilians.
January 2, 2013

Who Owns Outer Space?

Rand Simberg looks at the basic issues:

Despite the progress in technology, and the appeal of valuable resources, space settlement has been hampered by the lack of a clearly defined legal regime for recognizing property rights in space under current U.S. and international law. There is in fact some slight internationally recognized legal precedent for retaining ownership of resources mined in space, as lunar samples returned to Earth on both U.S. and Soviet missions (the latter robotically) have been exchanged for other tokens of value. But actually owning the portion of the celestial body from which the resources are harvested — as in a traditional mining claim — is more problematic. Without legally recognized rights to buy, own, and sell titled property, it is difficult if not impossible to raise capital to develop land or extract the resources it holds. Property rights have long been considered one of the pillars of prosperity in the modern world, and their absence in space — due to the contingencies of the history of international law during the early space age — partly explains why we have not yet developed that final frontier.

(HT: Andrew Sullivan)
Sea Level is Not Rising
Written by Professor Nils-Axel Mörner
Friday, 07 December 2012

- At most, global average sea level is rising at a rate equivalent to 2-3 inches per century. It is probably not rising at all.
- Sea level is measured both by tide gauges and, since 1992, by satellite altimetry. One of the keepers of the satellite record told Professor Mörner that the record had been interfered with to show sea level rising, because the raw data from the satellites showed no increase in global sea level at all.
- The raw data from the TOPEX/POSEIDON sea-level satellites, which operated from 1993-2000, shows a slight uptrend in sea level. However, after exclusion of the distorting effects of the Great El Niño Southern Oscillation of 1997/1998, a naturally-occurring event, the sea-level trend is zero.
- The GRACE gravitational-anomaly satellites are able to measure ocean mass, from which sea-level change can be directly calculated. The GRACE data show that sea level fell slightly from 2002-2007.
- These two distinct satellite systems, using very different measurement methods, produced raw data reaching identical conclusions: sea level is barely rising, if at all.
- Sea level is not rising at all in the Maldives, the Laccadives, Tuvalu, India, Bangladesh, French Guyana, Venice, Cuxhaven, Korsør, Saint Paul Island, Qatar, etc.
- In the Maldives, a group of Australian environmental scientists uprooted a 50-year-old tree by the shoreline, aiming to conceal the fact that its location indicated that sea level had not been rising. This is a further indication of political tampering with scientific evidence about sea level.
- Modelling is not a suitable method of determining global sea-level changes, since a proper evaluation depends upon detailed research in multiple locations with widely-differing characteristics. The true facts are to be found in nature itself.
- Since sea level is not rising, the chief ground of concern at the potential effects of anthropogenic "global warming" - that millions of shore-dwellers the world over may be displaced as the oceans expand - is baseless.
- We are facing a very grave, unethical "sea-level-gate".

Observational facts indicate that sea level is by no means rapidly rising. It is quite stable. This is the case in key sites like the Maldives, Bangladesh, Tuvalu, Vanuatu, Saint Paul Island, Qatar, French Guyana, Venice, and northwest Europe. Tide gauges tend to exaggerate rising trends because of subsidence and compaction. Full stability over the last 30-50 years is indicated in sites like Tuvalu, India, the Maldives (and also the Laccadives to the north of the Maldives), Venice (after subtracting the subsidence factor), Cuxhaven (after subtracting the subsidence factor), and Korsør (a stable hinge for the last 8,000 years).

Satellite altimetry is shown to record variations around a stable zero level for the entire period 1992-2010. Reported trends in the order of 3 mm/year represent "interpretational records," after the application of subjective "personal calibrations" which cannot be substantiated by observational facts. Therefore, we can now return to Fig. 1 and claim that the "models" (upper curve) provide an illusory picture of a strong sea-level rise and that the "observations" (lower curve) provide a good reconstruction of the actual changes in sea level over the last 170 years, with stability over the last 40 years.
We can now return to the spectrum of present-day sea level rates (Fig. 2) and evaluate the various values proposed. This is illustrated in Fig. 16. Only rates in the order of 0.0 mm/year to a maximum of 0.7 mm/year seem realistic. This fits well with the values proposed for the year 2100 by INQUA (2000) and Mörner (2004), but differs significantly from the values proposed by the IPCC (2001, 2007). If sea level is not rising fast, and is not going to rise fast, then the greatest threat imagined by the IPCC disappears. The idea of an ever-rising sea drowning tens of thousands of people and forcing hundreds of thousands or even millions of people to become sea-level refugees is simply a grave error, hereby revealed as an illusion.

The true facts are to be found in nature itself. They are certainly not to be found at the modelling consoles. Some data depend heavily on interpretation. Other evidence, however, is clear and straightforward. Consider trees. I have often said that "trees don't lie" (see e.g. Mörner, 2007c). In that paper, I described the significance of the lonely tree by the shore in the Maldives which indicated that sea level had been stable for 50-60 years. A group of Australian environmental "scientists", realizing that the location of the tree was fatal to their notion of ever-rising sea level, uprooted it and left it, still in leaf, lying on the strand. There are also the trees on the beach in Sundarban, indicating significant coastal erosion (caused in part by the clearance of mangroves to make way for shrimp farms) but no sea level rise at all (Mörner, 2007c, 2010a).

I hope that by this research we can free the world from the artificial crisis to which the IPCC has condemned it. There will be no extensive or disastrous global sea-level rise in the near future. That was the main threat in the IPCC's arsenal of bugaboos, and now it is gone.
Build Your Family Tree at the Library

Published on Thursday, February 18, 2010 - 3:42pm

Ever wonder about your ancestors? The DC Public Library can help. Check out Heritage Quest, a free database that includes U.S. Census records from 1790 to 1930 and Freedman's Bank records. Incorporated in 1865 to benefit freed slaves, the Freedman's Bank left records containing information about its depositors and their families from 1865 to 1874. In some cases, the names of former slave owners are included. Accessing this database only requires a DC Public Library card.

Before starting database research, a few tips will help you get better results:

- Decide who to research. Collect the real names of ancestors and their siblings, how old they would be, and where they lived in 1930.
- Search broadly. Enter the last name, year of birth, and city/state in Heritage Quest's census database.
- Work backwards. Review the 1930 Census information, then plan to search the 1920 Census. Continue stepping back through the available census data.
- Explore Freedman's Bank records. Find the surname of an ancestor and where they may have lived from 1865 to 1874.

Heritage Quest is one of many resources available for researching family history. To learn more about the Library's databases, visit dclibrary.org or any Library location.
Rooted at the base of Sinai, the Israelites grow restive as they wait for Moshe to descend from the mountain’s summit. Turning to Aharon, they demand, “Rise up, make for us gods who will go before us, for Moshe – this man who brought us out of the land of Egypt – we do not know what has become of him!” Aharon responds by instructing the people to contribute gold, which he fashions into a molten calf. He then proclaims, “A festival for the Lord tomorrow!” Rising early the next morning, people bring offerings and celebrate with food, drink and revelry. Even before Moshe descends from the mountain, God informs him of the sin of the golden calf and threatens the nation with immediate extinction, only relenting after Moshe’s impassioned pleas. The perpetrators of the sin are punished and the rest of the nation earns forgiveness through repentance. The sin of the golden calf remains, however, according to rabbinic thought, a seminal transgression that continues to affect the Jewish people in countless ways across the centuries. No event within Jewish history is more puzzling or more frightening than the chet ha’egel. How could the people who experienced the Exodus from Egypt, the parting of the Reed Sea, the defeat of Amalek, the gift of the manna and the powerful Revelation at Mount Sinai fail so completely in the very shadow of that mountain? Forty days earlier, against the dramatic backdrop of God’s manifestation at Sinai, the Israelites heard the clear commandment against idol worship. How could they now, at the first sign of difficulty, create and deify a golden calf? In a different vein, the rabbis maintain that the sin of the golden calf reverberates across the ages, affecting each era of Jewish history. And yet, the chet ha’egel seems irrelevant to our lives – an ancient event rooted in idolatrous practices distant from our experience. What possible eternal message might be contained in what the rabbis clearly perceive to be a formative, instructive tragedy? In spite of the apparent disconnect between the chet ha’egel and the backdrop against which it occurs, initial sources do view and identify this sin as an outright case of idol worship. “By worshiping the calf, the Israelites clearly indicated their acceptance of idolatry,” the Talmud proclaims, mirroring a position which finds even earlier voice in a passage of Tehillim: “They exchanged their glory for the image of a bull that feeds on grass.” Similar opinions are found in the Midrash, as well. A powerfully insightful approach to the behavior of the Israelites at the foot of Sinai can be gleaned from the writings of the Rambam. In his Guide to the Perplexed, this great scholar develops the principle that human behavior does not change abruptly and that a people cannot journey immediately from one extreme to the other: “It is not in man’s nature to be reared in slavery…and then ‘wash his hands’ and suddenly be able to fight the descendents of giants [the inhabitants of the land of Canaan].” The Rambam goes on to explain that the full transformation of the Israelites eventually requires a forty-year period of wandering and “schooling” in the wilderness – a period during which they acquire the traits necessary for successful nationhood. Abrupt events, no matter how miraculous and awe-inspiring, do not carry the power to make fundamental changes to human nature. True behavioral change is gradual. 
In spite of all they had seen and experienced, the Israelites standing at the foot of Sinai were unable to make the leap beyond their idolatrous origins. Battered by the fearful forces surrounding them, bewildered by Moshe’s apparent disappearance, they return to the comfort of the familiar – and create an idol of gold. In stark contrast to those who view the actions of the Israelites at Sinai as classically idolatrous, numerous scholars offer radically different approaches to the chet ha’egel. Rabbi Yehuda Halevi, for example, maintains that the Israelites are actually motivated by a desire to worship God effectively. Reared among religions that make extensive use of physical images, the Israelites feel unable to approach their God in the absence of a tangible symbol towards which to focus their devotion. The people fully expect that Moshe, with his descent from Mount Sinai, will bring such a symbol: the Tablets of Testimony (inscribed with the Ten Declarations). When they conclude that Moshe has failed to return with the tablets, the Israelites turn to Aharon and demand a substitute. Rabbi Yehuda goes on to explain that the nation’s transgression lies not in their fundamental intent or assumptions, but in their methods. Symbols are certainly critical to Judaism, as can be seen from the extensive use of symbolic ritual in the building and operation of the Mishkan. Only symbols that flow from God’s law, however, are acceptable. The Israelites have no right to devise and create their own mechanism through which to approach God. Their sin can be compared, says Rabbi Yehuda, to an individual who enters a doctor’s dispensary and prescribes drugs – thereby killing the patients who would have been saved had they been given the proper dosage by the doctor himself. Numerous later authorities follow in the footsteps of Rabbi Yehuda Halevi’s interpretation, some with attribution and some without. In his work the Beis Halevi, Rabbi Yosef Dov Halevi Soloveitchik offers a slightly variant approach. The Israelites know that the ritual service will be performed by a specific individual, Aharon, and will be conducted in a specific location, the Mishkan. They therefore believe that they have the right to create their own “Tabernacle” as they see fit. They fail to realize, however, that each detail of the Sanctuary is purposeful, filled with divinely ordained mystery and meaning. Other commentaries, including the Ramban, Ibn Ezra and Rabbi Shimshon Raphael Hirsch, focus on the wording of the Israelites’ demand of Aharon: “Rise up, make for us gods who will go before us, for Moshe – this man who brought us out of the land of Egypt – we do not know what has become of him! ” The Israelites, they say, are not attempting to replace God. They are, instead, attempting to replace Moshe. Deeply frightened by Moshe’s apparent disappearance (their fear exaggerated, the rabbis say, by an error they make in computing the days of Moshe’s absence), the people feel unable to approach God without the benefit of the only leader they have known. They therefore demand of Aharon that he create a new “leader.” The sin of the Israelites, says Hirsch, lies in the “erroneous idea that man can make, may make, must make a ‘Moses’ for himself…” The grave error in their thinking is their belief that in order to bridge the unimaginable chasm between man and the Divine, an intermediary is required. 
This suggestion is diametrically opposed to the fundamental Jewish belief in man’s ability to forge his own direct and personal relationship with God. Adapted from one of the multiple essays on this parsha in Unlocking the Torah Text by Rabbi Shmuel Goldin.
How today's fiscal austerity is reminiscent of World War I's economic misunderstandings

When World War I broke out in August 1914, economists on both sides forecast that hostilities could not last more than about six months. Wars had grown so expensive that governments quickly would run out of money. It seemed that if Germany could not defeat France by springtime, the Allied and Central Powers would run out of savings, reach what today is called a fiscal cliff, and be forced to negotiate a peace agreement.

But the Great War dragged on for four destructive years. European governments did what the United States had done after the Civil War broke out in 1861, when the Treasury printed greenbacks. They paid for more fighting simply by printing their own money. Their economies did not buckle, and there was no major inflation. That would happen only after the war ended, as a result of Germany trying to pay reparations in foreign currency. This is what caused its exchange rate to plunge, raising import prices and hence domestic prices. The culprit was not government spending on the war itself (much less on social programs).

But history is written by the victors, and the past generation has seen the banks and financial sector emerge victorious. Holding the bottom 99% in debt, the top 1% are now in the process of subsidizing a deceptive economic theory to persuade voters to pursue policies that benefit the financial sector at the expense of labor, industry, and democratic government as we know it.

Wall Street lobbyists blame unemployment and the loss of industrial competitiveness on government spending and budget deficits – especially on social programs – and on labor's demand to share in the economy's rising productivity. The myth (perhaps we should call it junk economics) is that (1) governments should not run deficits (at least, not by printing their own money), because (2) public money creation and high taxes (at least on the wealthy) cause prices to rise. The cure for economic malaise (which they themselves have caused) is said to be less public spending, along with more tax cuts for the wealthy, who euphemize themselves as "job creators."

Demanding budget surpluses, bank lobbyists promise that banks can provide the economy with enough purchasing power to grow. Then, when this ends in crisis, they insist that austerity can squeeze out enough income to enable private-sector debts to be paid. The reality is that when banks load the economy down with debt, this leaves less to spend on domestic goods and services while driving up housing prices (and hence the cost of living) with reckless credit creation on looser lending terms. Yet on top of this debt deflation, bank lobbyists urge fiscal deflation: budget surpluses rather than pump-priming deficits. The effect is to further reduce private-sector market demand, shrinking markets and employment. Governments fall deeper into distress, and are told to sell off land and natural resources, public enterprises, and other assets. This creates a lucrative market for bank loans to finance privatization on credit. This explains why financial lobbyists back the new buyers' right to raise the prices they charge for basic needs, creating a united front to endorse rent extraction. The effect is to enrich the financial sector owned by the 1% in ways that indebt and privatize the economy at large – individuals, business and the government itself.
This policy was exposed as destructive in the late 1920s and early 1930s, when John Maynard Keynes, Harold Moulton and a few others countered the claims of Jacques Rueff and Bertil Ohlin that debts of any magnitude could be paid if governments would impose deep enough austerity and suffering. It is the doctrine the International Monetary Fund has imposed on Third World debtors since the 1960s, and that European neoliberals invoke in defending creditors imposing austerity on Ireland, Greece, Spain and Portugal.

This pro-austerity mythology aims to distract the public from asking why peacetime governments can’t simply print the money they need. Given the option of printing money instead of levying taxes, why do politicians only create new spending power for the purpose of waging war and destroying property, not to build or repair bridges, roads and other public infrastructure? Why should the government tax employees for future retirement payouts, but not Wall Street for similar user fees and financial insurance to build up a fund to pay for future bank over-lending crises? For that matter, why doesn’t the U.S. Government print the money to pay for Social Security and medical care, just as it created new debt for the $13 trillion post-2008 bank bailout? (I will return to this question below.)

The answer to these questions has little to do with markets, or with monetary and tax theory. Bankers claim that if they have to pay more user fees to pre-fund future bad-loan claims and deposit insurance – saving the Treasury or taxpayers from being stuck with the bill – they will have to charge customers more, despite their current record profits, which already grab everything they can get. Yet they support a double standard when it comes to taxing labor.

Shifting the tax burden onto labor and industry is achieved most easily by cutting back public spending on the 99%. That is the root of the December 2012 showdown over whether to impose the anti-deficit policies proposed by the Bowles-Simpson commission of budget cutters whom President Obama appointed in 2010. Shedding crocodile tears over the government’s failure to balance the budget, banks insist that today’s 15.3% FICA wage withholding be raised – as if this would not raise the break-even cost of living and drain the consumer economy of purchasing power.

Employers and their work force are told to save in advance for Social Security or other public programs. This is a disguised income tax on the bottom 99%, whose proceeds are used to reduce the budget deficit so that taxes can be cut on finance and the 1%. To paraphrase Leona Helmsley’s quip that “Only the little people pay taxes,” the post-2008 motto is that only the 99% have to suffer losses, not the 1%, as debt deflation plunges real estate and stock market prices to inaugurate a Negative Equity economy while unemployment rates soar.

There is no more need to save in advance for Social Security than there is to save in advance to pay for war. Selling Treasury bonds to pay for retirees has the identical monetary and fiscal effect as selling newly printed securities. The pre-saving rhetoric is a charade, designed to shift the tax burden onto labor and industry.

Governments need to provide the economy with money and credit to expand markets and employment. They do this by running budget deficits, and this can be done by creating their own money. That is what banks oppose, accusing deficits of leading to hyperinflation rather than helping economies grow. Their motivation for this wrong accusation is self-serving, and their logic is deceptive.
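The accounting behind that last claim can be put in a minimal sketch (the figures are hypothetical, chosen only for illustration): every dollar of deficit spending adds a dollar of net financial assets to the non-government sector, and every dollar of surplus drains one away.

    # Minimal sectoral-balance sketch: the government's deficit is the
    # non-government sector's surplus, dollar for dollar. Hypothetical figures.

    def run_budget(spending, taxes, private_net_assets):
        """Return private net financial assets after one budget period."""
        deficit = spending - taxes      # positive = deficit, negative = surplus
        return private_net_assets + deficit

    assets = 0.0
    assets = run_budget(spending=100.0, taxes=80.0, private_net_assets=assets)
    print("after a deficit year:", assets)   # 20.0 added to the economy
    assets = run_budget(spending=80.0, taxes=100.0, private_net_assets=assets)
    print("after a surplus year:", assets)   # 0.0 - the surplus drained it back out

On this bookkeeping view, a budget surplus is not “saving” in any national sense; it withdraws the net money that deficit years supplied.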
Bankers always have fought to block government from creating its own money – at least under normal peacetime conditions. For many centuries, government bonds were the largest and most secure investment for the financial elites that hold most savings. Investment bankers and brokers monopolized public finance, at substantial underwriting commissions. The market for stocks and corporate bonds was rife with fraud, dominated by insiders for the railroads and great trusts being organized by Wall Street, and by the canal ventures organized by French and British stockbrokers.

However, there was little alternative to governments creating their own money when the costs of waging an international war far exceeded the volume of national savings or tax revenue available. This obvious need quieted the usual opposition mounted by bankers to limit the public monetary option. It shows that governments can do more under force majeure emergencies than under normal conditions.

And the September 2008 financial crisis provided an opportunity for the U.S. and European governments to create new debt for bank bailouts. This turned out to be as expensive as waging a war. It was indeed a financial war. Banks already had captured the regulatory agencies to engage in reckless lending and a wave of fraud and corruption not seen since the 1920s. And now they were holding economies hostage to a break in the chain of payments if they were not bailed out for their speculative gambles, junk mortgages and fraudulent loan packaging.

Their first victory was to disable the ability – or at least the willingness – of the Treasury, Federal Reserve and Comptroller of the Currency to regulate the financial sector. Goldman Sachs, Citicorp and their fellow Wall Street giants hold veto power over the appointment of key administrators at these agencies. They used this beachhead to weed out nominees who might not favor their interests, preferring ideological deregulators of the stripe of Alan Greenspan and Tim Geithner. As John Kenneth Galbraith quipped, a precondition for obtaining a central bank post is tunnel vision when it comes to understanding that governments can create their credit as readily as banks can. What is necessary is for one’s political loyalties to lie with the banks.

In the post-2008 financial wreckage it took only a series of computer keystrokes for the U.S. Government to create $13 trillion in debt to save banks from suffering losses on their reckless real estate loans (which computer models pretended would make banks so rich that they could pay their managers enormous salaries, bonuses and stock options), insurance bets gone bad (underpricing risk to win business, again to pay managers enormous salaries and bonuses), arbitrage gambles and outright fraud (to give the illusion of earnings justifying enormous salaries, bonuses and stock options). The $700 billion Troubled Asset Relief Program (TARP) and $2 trillion of Federal Reserve “cash for trash” swaps enabled the banks to continue remunerating their executives and bondholders with hardly a hiccup – while incomes and wealth plunged for the remaining 99% of Americans.

A new term, Casino Capitalism, was coined to describe the transformation that finance capitalism was undergoing in the post-1980 era of deregulation, which opened the gates for banks to do what governments hitherto did in time of war: create money and new public debt simply by “printing it” – in this case, electronically on their computer keyboards.
Taking the insolvent Fannie Mae and Freddie Mac mortgage financing agencies onto the public balance sheet for $5.2 trillion accounted for over a third of the $13 trillion bailout. This saved their bondholders from having to suffer losses from the fraudulent appraisals on the junk mortgages with which Countrywide, Bank of America, Citibank and other “too big to fail” banks had stuck them.

This enormous debt increase was done without raising taxes. In fact, the Bush administration cut taxes, giving the largest cuts to the highest income and wealth brackets, who were its major campaign contributors. Special tax privileges were given to banks so that they could “earn their way out of debt” (and indeed, out of negative equity). The Federal Reserve gave a free line of credit (Quantitative Easing) to the banking system at only 0.25% annual interest by 2011 – that is, one quarter of a percentage point – with no questions asked about the quality of the junk mortgages and other securities pledged as collateral at their full face value, far above market price.

This $13 trillion debt creation to save banks from having to suffer a loss was not accused of threatening economic stability. It enabled them to resume paying exorbitant salaries and bonuses, dividends to bondholders, and counterparties on casino-capitalist arbitrage bets. These payments have helped the 1% receive a reported 93% of the gains in income since 2008. The bailout thus polarized the economy, giving the financial sector more power over labor and consumers, industry and the government than has been the case since the late 19th-century Gilded Age.

All this makes today’s financial war much like the aftermath of World War I and countless earlier wars. The effect is to impoverish the losers, appropriate hitherto public assets for the victors, and impose debt service and taxes much like levying tribute. The financial crisis, Bank of England official Andrew Haldane recently observed, has been as economically devastating as a world war and may still be a burden on “our grandchildren”: “In terms of the loss of incomes and outputs, this is as bad as a world war.” The rise in government debt has prompted calls for austerity – on the part of those who did not receive the giveaway. “It would be astonishing if people weren’t asking big questions about where finance has gone wrong.”

But as long as the financial sector is winning its war against the economy at large, it prefers that people believe There Is No Alternative. Having captured mainstream economics as well as government policy, finance seeks to deter students, voters and the media from questioning whether the financial system really needs to be organized in the way it is. Once such a line of questioning is pursued, people may realize that banking, pension and Social Security systems and public deficit financing do not have to be organized in the way they are. There are better alternatives to today’s road to austerity and debt peonage.

Today’s financial war against the economy at large

Today’s economic warfare is not the kind waged a century ago between labor and its industrial employers. Finance has moved to capture the economy at large: industry and mining, public infrastructure (via privatization) and now even the educational system. (At over $1 trillion, U.S. student loan debt came to exceed credit-card debt in 2012.) The weapon in this financial warfare is no longer military force.
The tactic is to load economies (governments, companies and families) with debt, siphon off their income as debt service, and then foreclose when debtors lack the means to pay. Indebting government gives creditors a lever to pry away land, public infrastructure and other property in the public domain. Indebting companies enables creditors to seize employee pension savings. And indebting labor means that it no longer is necessary to hire strikebreakers to attack union organizers and strikers.

Workers have become so deeply indebted on their home mortgages, credit cards and other bank debt that they fear to strike or even to complain about working conditions. Losing work means missing payments on their monthly bills, enabling banks to jack up interest rates to levels that used to be deemed usurious. So debt peonage and unemployment loom on top of the wage slavery that was the main focus of class warfare a century ago. And to cap matters, credit-card bank lobbyists have rewritten the bankruptcy laws to curtail debtor rights, while the referees appointed to adjudicate disputes brought by debtors and consumers are subject to veto from the banks and businesses that are mainly responsible for inflicting injury.

The aim of financial warfare is not merely to acquire land, natural resources and key infrastructure rents, as in military warfare; it is to centralize creditor control over society. In contrast to the promise of democratic reform nurturing a middle class a century ago, we are witnessing a regression to a world of special privilege in which one must inherit wealth in order to avoid debt and job dependency.

The emerging financial oligarchy seeks to shift taxes off banks and their major customers (real estate, natural resources and monopolies) onto labor. Given the need to win voter acquiescence, this aim is best achieved by rolling back everyone’s taxes. The easiest way to do this is to shrink government spending, headed by Social Security, Medicare and Medicaid. Yet these are the programs that enjoy the strongest voter support. This fact has inspired what may be called the Big Lie of our epoch: the pretense that governments can only create money to pay the financial sector, and that the beneficiaries of social programs should be entirely responsible for paying for Social Security, Medicare and Medicaid – not the wealthy. This Big Lie is used to reverse the concept of progressive taxation, turning the tax system into a ploy of the financial sector to levy tribute on the economy at large.

Financial lobbyists quickly discovered that the easiest way to shift the cost of social programs onto labor is to conceal new taxes as user fees, using the proceeds to cut taxes for the elite 1%. This fiscal sleight-of-hand was the aim of the 1983 Greenspan Commission. It confused people into thinking that government budgets are like family budgets, concealing the fact that governments can finance their spending by creating their own money. They do not have to borrow, or even to tax (at least, not tax mainly the 99%).

The Greenspan tax shift played on the fact that most people see the need to save for their own retirement. The carefully crafted and well-subsidized deception at work is that Social Security requires a similar pre-funding – by raising wage withholding. The trick is to convince wage earners that it is fair to tax them more to pay for government social spending, yet not also to ask the banking sector to pay a similar user fee to pre-save for the next time it will need bailouts to cover its losses.
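To see how regressive a capped wage levy is, here is a minimal sketch using the 15.3% rate and the roughly $105,000 wage ceiling cited elsewhere in this article; the sample incomes are hypothetical.

    # Effective burden of a capped payroll tax. The rate and cap are the
    # figures cited in the text; the sample incomes are hypothetical.

    FICA_RATE = 0.153      # combined withholding rate cited in the text
    WAGE_CAP = 105_000     # approximate ceiling on taxed wages

    def effective_rate(wages):
        """The tax applies only up to the cap, so the effective rate
        falls as income rises above it."""
        return FICA_RATE * min(wages, WAGE_CAP) / wages

    for wages in (50_000, 105_000, 500_000, 5_000_000):
        print(f"${wages:>9,}: {effective_rate(wages):6.2%}")
    # $   50,000: 15.30%
    # $  105,000: 15.30%
    # $  500,000:  3.21%
    # $5,000,000:  0.32%

A flat rate with a ceiling thus taxes a $50,000 wage at nearly fifty times the effective rate of a $5 million income, before even counting the exemption of capital gains and other property income.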
Also asymmetrical is the fact that nobody suggests that the government set up a fund to pay for future wars, so that future adventures such as Iraq or Afghanistan will not “run a deficit” to burden the budget. So the first deception is to treat only Social Security and medical care as user fees. The second is to aggravate matters by insisting that such fees be paid long in advance, by pre-saving. There is no inherent need to single out any particular area of public spending as causing a budget deficit if it is not pre-funded.

It is a travesty of progressive tax policy to oblige only workers whose wages are less than (at present) $105,000 to pay this FICA wage withholding, exempting higher earnings, capital gains, rental income and profits. The raison d’être for taxing the 99% for Social Security and Medicare is simply to avoid taxing wealth, by falling on low wage income at a much higher rate than on the income of the wealthy. This is not how the original U.S. income tax worked at its inception in 1913. During its early years only the wealthiest 1% of the population had to file a return. There were few loopholes, and capital gains were taxed at the same rate as earned income.

The government’s seashore insurance program, for instance, recently incurred a $1 trillion liability to rebuild the private beaches and homes that Hurricane Sandy washed out. Why should this insurance subsidy at below-commercial rates for the wealthy minority who live in this scenic high-risk property be treated as normal spending, but not Social Security? Why save in advance by a special wage tax to pay for programs that benefit the general population, but not levy a similar “user fee” tax to pay for flood insurance for beachfront homes or for war? And while we are at it, why not save another $13 trillion in advance to pay for the next bailout of Wall Street when debt deflation causes another crisis to drain the budget?

But on whom should we levy these taxes? To impose user fees for the beachfront reconstruction would require a tax falling mainly on the wealthy owners of such properties. Their dominant role in funding the election campaigns of the Congressmen and Senators who draw up the tax code suggests why they are able to avoid prepaying for the cost of rebuilding their seashore property. Such taxation is only for wage earners on their retirement income, not for the 1% on their own vacation and retirement homes.

By not raising taxes on the wealthy, and by not using the central bank to monetize spending on anything except bailing out the banks and subsidizing the financial sector, the government follows a pro-creditor policy. Tax favoritism for the wealthy deepens the budget deficit, forcing governments to borrow more. Paying interest on this debt diverts revenue from being spent on goods and services. This fiscal austerity shrinks markets, reducing tax revenue to the brink of default. That enables bondholders to treat the government in the same way that banks treat a bankrupt family, forcing the debtor to sell off assets – in this case the public domain, as if it were the family silver, as Britain’s former Prime Minister Harold Macmillan characterized Margaret Thatcher’s privatization sell-offs. In an Orwellian doublethink twist, this privatization is done in the name of free markets, despite being imposed by global financial institutions whose administrators are not democratically elected.
The International Monetary Fund (IMF), European Central Bank (ECB) and EU bureaucracy treat governments the way banks treat homeowners unable to pay their mortgage: by foreclosing. Greece, for example, has been told to start selling off prime tourist sites, ports, islands, offshore gas rights, water and sewer systems, roads and other property.

Sovereign governments are, in principle, free of such pressure. That is what makes them sovereign. They are not obliged to settle public debts and budget deficits by asset selloffs. They do not need to borrow more domestic currency; they can create it. This self-financing keeps the national patrimony in public hands rather than turning assets over to private buyers, or having to borrow from banks and bondholders.

Why today’s fiscal squeeze adds to the economy’s costs and imposes needless austerity

The financial sector promises that privatizing roads and ports, water and sewer systems, bus and railroad lines (on credit, of course) is more efficient and will lower the prices charged for their services. The reality is that the new buyers put up rent-extracting tollbooths on the infrastructure being sold. Their break-even costs include the high salaries and bonuses they pay themselves, as well as interest and dividends to their creditors and backers, and spending on stock buy-backs and political lobbying.

Public borrowing creates a dependency that shifts economic planning to Wall Street and other financial centers. When voters resist, it is time to replace democracy with oligarchy. “Technocratic” rule replaces that of elected officials. In Europe the IMF, ECB and EU troika insists that all debts must be paid, even at the cost of austerity, depression, unemployment, emigration and bankruptcy. This is to be done without violence where possible, but with police-state practices when the grabbers find it necessary to quell popular opposition.

Financializing the economy is depicted as a natural way to gain wealth – by taking on more debt. Yet it is hard to think of a more highly politicized policy, shaped as it is by tax rules that favor bankers. It also is self-terminating, because when public debt grows to the point where investors (“the market”) no longer believe that it can be repaid, creditors mount a raid (the military analogy is appropriate) by “going on strike” and not rolling over existing bonds as they fall due. Bond prices fall, yielding higher interest rates, until governments agree to balance the budget by voluntary pre-bankruptcy privatizations.

Selling saved-up Treasury bonds to fund public programs is like new deficit borrowing

If the aim of America’s military spending around the world is to prepare for future warfare, why not aim at saving up a fund of $10 trillion or even $30 trillion in advance, as with Social Security, so that we will have the money to pay for it? The answer is that selling saved-up Treasury securities to finance Social Security, military spending or any other program has the same monetary and price effect as issuing new ones. Paying Social Security out of past savings – that is, selling the Treasury securities in which Social Security funds are invested – affects financial markets and the private sector’s holdings of government debt in much the same way as borrowing by selling new securities. It makes little difference whether the Treasury sells newly printed IOUs or sells bonds that it has been accumulating in a special fund. The effect is to increase public debt owed to the financial sector.
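A toy balance-sheet comparison makes the equivalence concrete (all figures are hypothetical): whether the Treasury issues a new bond or sells one accumulated in a trust fund, the market ends up holding the same additional government debt.

    # Toy comparison: funding $50 of benefits by (a) issuing a new Treasury
    # bond or (b) selling bonds held in a trust fund. Hypothetical figures.

    def debt_held_by_market(new_issue, market_debt=1000.0, trust_fund=200.0):
        benefits = 50.0
        if new_issue:
            market_debt += benefits    # newly printed IOUs sold to investors
        else:
            trust_fund -= benefits     # trust-fund bonds sold off...
            market_debt += benefits    # ...into the same investors' hands
        return market_debt

    print("new issue :", debt_held_by_market(new_issue=True))    # 1050.0
    print("trust fund:", debt_held_by_market(new_issue=False))   # 1050.0
    # Either way, debt owed to the financial sector rises by the same $50.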
If the savings are to be invested in Treasury bonds (as is the case with Social Security), will this pay for tax cuts elsewhere in the budget? If so, will these cuts be for the wealthy 1% or for the 99%? Or will the savings be invested in infrastructure, or turned over to states and cities to help balance their budget shortfalls and underfunded pension plans?

Another problem concerns who should pay for this pre-saving. The taxes needed to pre-fund a savings build-up siphon off income from somewhere in the economy. How much will the economy shrink by diverting income from being spent on goods and services? And whose income will be taxed? These questions illustrate how politically self-interested it is to single out taxing wages to save for Social Security, in contrast to war-making and beach-house rebuilding.

Government budgets usually are designed to be in balance under normal peacetime conditions, so most public debt has been brought into being by war (prior to today’s financial war of slashing taxes on the wealthy). Adam Smith’s Wealth of Nations (Book V) traced how each new British bond issue to raise funds for a military action had a dedicated tax to pay its interest charges. The accumulation of such war debts thus raised the cost of living and hence the break-even price of labor. To prevent this from undercutting British competitiveness, Smith urged that wars be waged on a pay-as-you-go basis – by full taxation rather than by borrowing that entailed interest payments and taxes (as the debt itself rarely was amortized). Smith thought that populations should feel the cost of war directly and immediately, presumably leading them to be vigilant in checking grandiose projects of empire.

The United States issued fiat greenback currency to pay for much of its Civil War, but also issued bonds. In analyzing this war finance, the Canadian-American astronomer and monetary theorist Simon Newcomb pointed out that all wars must be paid for in the form of tangible material and lives by the generation that fights them. Paying for the war by borrowing from bondholders, he explained, involved levying taxes to pay the interest. The effect was to transfer income from the Western states (taxpayers) to bondholders in the East.

In the case of Social Security today, the beneficiary of government debt is still the financial sector. The economy must provide the housing, food, health care, transportation and clothing that enable retirees to live normal lives. This economic surplus can be paid for out of taxation, new money creation or borrowing. But instead of “the West,” the major payers of the Social Security tax are wage earners across the nation. Taxing labor shrinks markets and forces the economy into austerity.

Quantitative easing as free money creation – to subsidize the big banks

The Federal Reserve’s three waves of Quantitative Easing since 2008 show how easy it is to create free money. Yet this has been provided only to the largest banks, not to strapped homeowners or industry. An immediate $2 trillion in “cash for trash” took the form of the Fed creating new bank-reserve credit in exchange for mortgage-backed securities valued far above market prices. QE2 provided another $600 billion in 2010-11. The banks used this injection of credit for interest-rate arbitrage and exchange-rate speculation on the currencies of Brazil, Australia and other high-interest-rate economies. So nearly all the Fed’s new money went abroad rather than being lent out for investment or employment at home.
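The arbitrage involved is simple arithmetic: borrow at the Fed’s near-zero rate and lend in a higher-rate currency. In the sketch below only the 0.25% borrowing cost comes from the text; the foreign yield and exchange-rate gain are hypothetical placeholders.

    # Carry-trade sketch: borrow cheap dollars, buy a high-yield currency.
    # Only the 0.25% funding rate is from the text; the rest is hypothetical.

    def carry_profit(principal, borrow_rate, foreign_rate, fx_gain):
        """One-year gross profit on a funded carry position."""
        funding_cost = principal * borrow_rate
        foreign_return = principal * (1 + foreign_rate) * (1 + fx_gain) - principal
        return foreign_return - funding_cost

    profit = carry_profit(principal=1_000_000, borrow_rate=0.0025,
                          foreign_rate=0.10, fx_gain=0.05)
    print(f"gross profit on $1m: ${profit:,.0f}")   # $152,500

With funding costs this close to zero, almost any positive interest-rate spread abroad yields a profit, which is why the credit flowed into speculation rather than domestic lending.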
U.S. Government debt was run up mainly to re-inflate prices for packaged bank mortgages, and hence real estate prices. Instead of alleviating private-sector debt by writing down mortgages in line with homeowners’ ability to pay, the Federal Reserve and Treasury created money to support property prices – to push the banking system’s balance sheets back above negative net worth. The Fed’s QE3 program in 2012-13 created money each month to buy mortgage-backed securities, providing banks with money to lend to new property buyers.

For the economy at large, the debts were left in place. Yet commentators focused only on government debt. In a double standard, they accused budget deficits of inflating wages and consumer prices, while the explicit aim of quantitative easing was to support asset prices. Inflating asset prices on credit is deemed good for the economy, despite loading it down with debt. But public spending into the “real” economy, raising employment levels and sustaining consumer spending, is deemed bad – except when it is financed by personal borrowing from the banks. In each case, increasing bank profits is the standard by which fiscal policy is to be judged! The result is a policy asymmetry that is the opposite of what most epochs have deemed fair or helpful to economic growth.

Bankers and bondholders insist that the public sector borrow from them, blocking the government’s power to self-finance its operations – with one glaring exception. That exception occurs when the banks themselves need free money creation. The Fed provided nearly free credit to the banks under QE2, and Chairman Ben Bernanke promised to continue this policy until the unemployment rate drops to 6.5%. The pretense is that low interest rates spur employment, but the more pressing aim is to provide easy credit to revive borrowing and bid asset prices back up.

Fiscal deflation on top of debt deflation

The main financial problem with funding war occurs after the return to normalcy, when creditors press for budget surpluses to roll back the public debt that has been run up. This imposes fiscal austerity, reducing wages and commodity prices relative to the debts that are owed. Consumer spending shrinks and prices decline as governments spend less, while higher taxes withdraw revenue. This is what is occurring in today’s financial war, much as it did in past postwar returns to peace.

Governments have the power to resist this deflationary policy. Like commercial banks, they can create money on their computer keyboards. Indeed, since 2008 the government has created debt to support the Finance, Insurance and Real Estate (FIRE) sector more than the “real” production and consumption economy. In contrast to public spending on goods and services (or social programs that increase market demand), most of the bank credit that led to the 2008 financial collapse was created to finance the purchase of property already in place, stocks and bonds already issued, or companies already in existence. The effect has been to load the economy down with mortgages, bonds and bank debt whose carrying charges eat into spending on current output.

The $13 trillion bank subsidy since 2008 (to enable banks to earn their way out of negative equity) brings us back to the question of why taxes should be levied on the 99% to pre-save for Social Security and Medicare, but not for the bank bailout. Current tax policy encourages the financial and rent extraction that has become the major economic problem of our epoch.
Industrial productivity continues to rise, but debt is growing even more inexorably. Instead of fueling economic growth, this growth of credit/debt threatens to absorb the economic surplus, plunging the economy into austerity, debt deflation and negative equity. So despite the fact that the financial system is broken, it has gained control over public policy to sustain itself and even obtain tax favoritism for a dysfunctional overgrowth of bank credit.

Unlike the progress of science and technology, this debt is not part of nature. It is a social construct. The financial sector has politicized it by pressing to privatize economic rent rather than collect it as the tax base. This financialization of rent-extracting opportunities does not reflect a natural or inevitable evolution of “the market.” It is a capture of market structures and fiscal policy. Bank lobbyists have campaigned to shift the economic arena to the political sphere of lawmaking and tax policy, with side battlegrounds in the mass media and universities, to capture the hearts and minds of voters into believing that the quickest and most efficient way to build up wealth is by bank credit and debt leverage.

Budget deficits as an antidote to austerity

Public debts everywhere are growing, as taxes cover only part of public spending. The least costly way to finance this expenditure is to issue money – the paper currency and coins we carry in our pockets. Holders of this currency technically are creditors to the government – and to society, which accepts this money in payment. Yet despite being nominally a form of public debt, this money serves as public capital inasmuch as it is not normally expected to be repaid. This government money does not bear interest, and may be thought of as “equity capital” or “equity money,” and hence part of the economy’s net worth.

If taxes fully covered government spending, there would be no budget deficit – or new public money creation. Government budget deficits pump money into the economy. Conversely, running a budget surplus retires the public debt or currency outstanding. This deflationary effect occurred in the late 19th century, causing a monetary deflation that plunged the U.S. economy into depression. Likewise, when President Bill Clinton ran a budget surplus late in his administration, the economy had to rely on commercial banks to supply credit to use as the means of payment, charging interest for this service. As Stephanie Kelton summarizes this historical experience:

The federal government has achieved fiscal balance (even surpluses) in just seven periods since 1776, bringing in enough revenue to cover all of its spending during 1817-21, 1823-36, 1852-57, 1867-73, 1880-93, 1920-30 and 1998-2001. We have also experienced six depressions. They began in 1819, 1837, 1857, 1873, 1893 and 1929. Do you see the correlation? The one exception to this pattern occurred in the late 1990s and early 2000s, when the dot-com and housing bubbles fueled a consumption binge that delayed the harmful effects of the Clinton surpluses until the Great Recession of 2007-09.

When taxpayers pay more to the government than the economy receives in public spending, the effect is like paying banks more than they provide in new credit. The debt volume is reduced (increasing the reported savings rate). The resulting austerity is favorable to the financial sector but harmful to the rest of the economy.

Most people think of money as a pure asset (like a coin or a $10 bill), not as being simultaneously a public debt. But to an accountant, a balance sheet always balances: Assets = Liabilities + Net Worth.
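A minimal double-entry sketch shows the mirror-image bookkeeping (the balances are hypothetical): the same $10 appears once as a government liability and once as a private asset.

    # Double-entry sketch: currency is simultaneously the holder's asset
    # and a government liability. Hypothetical balances.

    government = {"liabilities": 0.0}
    household = {"assets": 0.0}

    def issue_currency(amount):
        """Government deficit-spends new currency to a household."""
        government["liabilities"] += amount    # a public IOU...
        household["assets"] += amount          # ...and a private asset

    issue_currency(10.0)
    print("government liabilities:", government["liabilities"])   # 10.0
    print("household assets      :", household["assets"])         # 10.0
    # The entries are mirror images: the money in our pockets is the
    # government's debt to us, and the books always balance.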
This liability-side ambivalence is confusing to most people. It takes some time to think of offsetting assets and liabilities as mirror images of each other. Much as cosmologists assume that the universe is symmetrical – with positively charged matter having an anti-matter counterpart somewhere at the other end – so accountants view the money in our pockets as being created by the government’s deficit spending. Holders of the Federal Reserve’s paper currency technically can redeem it, but they will simply get paid in other denominations of the same currency.

The word “redeem” comes from settling debts. This was the purpose for which money first came into being. Governments redeem money by accepting it in tax payment. In addition to issuing paper currency, the Federal Reserve injects money into the economy by writing checks electronically. The recipients (usually banks selling Treasury bonds or, more recently, packages of mortgage loans) gain a deposit at the central bank. This is the kind of deposit that was created by the above-mentioned $13 trillion in new debt that the government turned over to Wall Street after the September 2008 crisis. The price impact was felt in financial asset markets, not in prices for goods and services or labor’s wages. This Federal Reserve and Treasury credit was not counted as part of the government’s operating deficit. Yet it increased public debt, without being spent on “real” GDP. The banks used this money mainly to gamble on foreign exchange and interest-rate arbitrage as noted above, to buy smaller banks (helping make themselves Too Big To Fail), and to keep paying their managers high salaries and bonuses.

This monetization of debt shows how different government budgets are from family budgets. Individuals must save to pay for retirement or other spending. They cannot print their own money or tax others. But governments do not need to “save” (or tax) to pay for their spending. Their ability to create money means that they do not need to save in advance to pay for wars, Social Security or other needs.

Keynesian deficit spending vs. bailing out Wall Street to keep the debt overhead in place

There are two kinds of markets: hiring labor to produce goods and services in the “real” economy, and transactions in financial assets and property claims in the FIRE sector. Governments can run budget deficits by financing either of these two spheres. Since President Franklin Roosevelt’s WPA programs in the 1930s, along with his public infrastructure investment in roads, dams and other construction – and military arms spending after World War II broke out – “Keynesian” spending on goods and services has been used to hire labor or pay for social programs. This pumps money into the economy via the GDP-type transactions that appear in the National Income and Product Accounts. It is not inflationary when unemployment exists.

However, the debt created in the Paulson-Geithner bailout of Wall Street was run up not to spend on goods and services, but to buy (or take liability for) mortgages and bank loans, insurance default bets and arbitrage gambles. The aim was to subsidize financial losses while keeping the debt overhead in place, so that banks and other financial institutions could “earn their way” out of negative net worth, at the economy’s expense.
The idea was that the banks could start lending again to prevent real estate prices from falling further, saving them from having to write down their debt claims to bring debt levels back within the ability to pay.

Why tax the economy at all? And why financial and tax reform should go together

Taxes pay for the cost of government by withdrawing income from the parties being taxed. From Adam Smith through John Stuart Mill to the Progressive Era, general agreement emerged that the most appropriate taxes should not fall on labor, on capital or on sales of basic consumer needs. Such taxes raise the break-even cost of employing labor. In today’s world, FICA wage withholding for Social Security raises the price that employers must pay their work force to maintain living standards and buy the products they produce.

However, these economists singled out one kind of tax that does not increase prices: taxes on the land’s rental value, natural resource rents and monopoly rents. These payments for rent-extraction rights are not a return to “factors of production,” but are a privatized levy reflecting privileges that have no ongoing cost of production. They are rentier rake-offs.

Land is the economy’s largest asset. A site’s rental value is set by market conditions – what people pay for being able to live in a good location. People pay more to live in prestigious and convenient neighborhoods. They pay more if there is local investment in roads and public transportation, and if there are parks, museums and cultural centers nearby, or nice shopping districts. People also pay more as the economy grows more prosperous, because one of the first things they desire is status, and in today’s world this is defined largely by where one lives.

Landlords do not create this site value. But speculators may seek to ride the wave by buying property on credit where the rate of land-price gain exceeds the interest rate. This “capital” gain is the proverbial free lunch. It is created by public investment, by the general level of prosperity, and by the terms on which banks extend credit. In a nutshell, a property is worth whatever a bank will lend, because that is the price that new buyers will be able to pay for it. This logic was more familiar to the public a century ago than it is today.

A property tax to collect this “free lunch” rent is paid out of the rent. This leaves less to be capitalized into new interest-bearing loans – while freeing the government from having to tax labor and industrial capital. So this tax not only is “less bad” than others; it is actively desirable for reducing the debt overhead. Rent levels are not affected, but the government collects the rent instead of the property owner – or, at one remove, the mortgage banker who turns this rent into a flow of interest by advancing the purchase price of rent-yielding properties to new buyers.

Real estate was the major source of rising net worth and wealth for America’s middle class for over sixty years, from the return to peace in 1945 until the 2008 financial collapse. Rising property prices were fueled largely by banks providing mortgage credit on easier terms. But by 2008 these terms had reached their limit.
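The capitalization arithmetic behind “a property is worth whatever a bank will lend” fits in one line: the price equals the rent divided by the lending rate. The rent and interest figures below are hypothetical.

    # Rent capitalization: the same rental flow supports a higher price
    # the lower the interest rate at which banks will lend against it.
    # Hypothetical figures.

    def capitalized_price(annual_rent, interest_rate):
        return annual_rent / interest_rate

    for rate in (0.08, 0.06, 0.04):
        print(f"at {rate:.0%}: ${capitalized_price(12_000, rate):,.0f}")
    # at 8%: $150,000
    # at 6%: $200,000
    # at 4%: $300,000
    # Easier credit capitalizes the same rent into higher prices; a rent tax
    # would shrink the flow left over to capitalize into new bank loans.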
Interest rates were seemingly as low as they could go. So were down payments (zero down) and amortization rates (zero, with interest-only loans). Property values were becoming fictitious as a result of a tidal wave of fraud by the banking system’s property appraisers, while borrowers’ income statements also were becoming fictitious (“liars’ loans,” with the main liars being the mortgage writers).

If the rise in real estate prices (mainly site values) had been taxed, there would have been no financial overgrowth, because this price gain would have been collected as the tax base. The government would not have needed to tax labor either, via income tax, FICA wage withholding or consumer sales taxes. And taken in conjunction with the government’s money-creating power, there would have been little need for public debt to grow. Taxing rent-extraction privileges thus would minimize debt levels and taxes on the 99%.

The next leading form of economic rent is taken by oil, gas and mining companies from the mineral deposits created by nature, as well as by owners or leasers of forests and other natural resources. Classical economics from David Ricardo onward defined such income received by landlords, mining companies, forestry and fisheries as “economic rent.” It is not profit on capital investment, because nature has provided the resource, not human labor or expenditure on capital – except for tangible capital investment in the buildings erected on the land, saws to cut down trees, earth-moving equipment to do the mining, and so forth.

The basic contrast is between a productive industrial economy and a rent-extracting one in which special privileges, monopoly pricing and economic rents divert spending away from tangible capital investment and real output. Classical economists defined economic rent generically as “empty” pricing in excess of technologically necessary costs of production. This would include payments to pharmaceutical companies, health management organizations (HMOs) and monopolies above their necessary cost of doing business. Much like paying debt service, such economic rent siphons market revenue away from tangible production and consumption. It was to demonstrate this that François Quesnay developed the first national income statistics, the Tableau Économique. His aim was to show that the landed aristocracy’s rental rake-offs, rather than the excise taxes that were burdening industry and making it uncompetitive, should form the basis for taxation.

But for the past hundred years commercial banks have opposed property taxes, because taxing the land’s rent would mean less left over to pay interest. Some 80 percent of bank loans are for real estate, mainly to capitalize the rental value left untaxed. A property and wealth tax would reduce this market – along with the government’s need to borrow, and hence to pay interest to bondholders. And without a fiscal squeeze there would have been less opportunity for the financial sector to push to privatize what remains of the public domain.

Today’s central financial problem is that the banking system lends mainly for rent-extraction opportunities rather than for tangible capital investment and economic growth to raise living standards. To maximize rent, it has lobbied to untax land and natural resources. At issue in today’s tax and financial crisis is thus whether the world is going to have an economy based on progressive industrial democracy or a financialized and polarizing rent-extracting society.
The ideological crisis underlying today’s tax and financial policy

From antiquity and for thousands of years, land, natural resources and monopolies, seaports and roads were kept in the public domain. In more recent times railroads, subway lines, airlines, and gas and electric utilities were made public. The aim was to provide their basic services at cost or at subsidized prices rather than letting them be privatized into rent-extracting opportunities. The Progressive Era capped this transition to a more equitable economy by enacting progressive income and wealth taxes.

Economies were liberating themselves from the special privileges that European feudalism and colonialism had granted to favored insiders. The aim of ending these privileges – or taxing away economic rent where it occurs naturally, as in the land’s site value and natural resource rent – was to lower the costs of living and doing business. This was expected to make progressive economies more competitive, obliging other countries to follow suit or be rendered obsolete. The era of what was considered to be socialism in one form or another seemed to be at hand – a rising role for the public sector as part and parcel of the evolution of technology and prosperity.

But the landowning and financial classes fought back, seeking to expunge the central policy conclusion of classical economics: the doctrine that free-lunch economic rent should serve as the tax base for economies seeking to be most efficient and fair. Imbued with academic legitimacy by the University of Chicago (which Upton Sinclair aptly named the University of Standard Oil), the new post-classical economics has adopted Milton Friedman’s motto: “There Is No Such Thing As A Free Lunch” (TINSTAAFL). If the free lunch is not seen, after all, it has less likelihood of being taxed.

The political problem faced by rentiers – the “idle rich” siphoning off most of the economy’s gains for themselves – is to convince voters that labor and consumers should be taxed rather than the financial gains of the wealthiest 1%. How long can they deter people from seeing that making interest tax-exempt pushes the government’s budget further into deficit?

To free financial wealth and asset-price gains from taxes – while blocking the government from financing its deficits through its own public option for money creation – the academics sponsored by financial lobbyists hijacked monetary theory, fiscal policy and economic theory in general. On seeming grounds of efficiency they claimed that government should no longer regulate Wall Street and its corporate clients. Instead of criticizing rent seeking as earlier centuries had done, they depicted government as an oppressive Leviathan for using its power to protect markets from monopolies, crooked drug companies, health insurance companies and predatory finance.

This idea that a “free market” is one free for Wall Street to act without regulation can be popularized only by censoring the history of economic thought. It would not do for people to read what Adam Smith and subsequent economists actually taught about rent, taxes and the need for regulation or public ownership. Academic economics has been turned into an Orwellian exercise in doublethink, designed to convince the population that the bottom 99% should pay taxes rather than the 1% who obtain most interest, dividends and capital gains.
By denying that a free lunch exists, and by confusing the relationship between money and taxes, they have turned the economics discipline and much political discourse into a lobbying effort for the 1%. Lobbyists for the 1% frame the fiscal question in terms of “How can we make the 99% pay for their own social programs?” The implicit follow-up is, “so that we (the 1%) don’t have to pay.” This is how the Social Security system came to be “funded,” and then “underfunded.” The most regressive tax of all is the FICA payroll tax of 15.3% on wages up to about $105,000. Above that, the rich don’t have to contribute. This payroll tax exceeds the income tax paid by many blue-collar families.

The pretense is that not taxing these free lunchers will make economies more competitive and pull them out of depression. The reality is the opposite: instead of the wealthy being taxed on their free lunch, the tax burden raises the cost of living and of doing business. This is a major reason why the U.S. economy is being de-industrialized today.

The key question is what the 1% do with their revenue “freed” from taxes. The answer is that they lend it out to indebt the 99%. This polarizes the economy between creditors and debtors. Over the past generation the wealthiest 1% have rewritten the tax laws to the point where they now receive an estimated 66% – two thirds – of all returns to wealth (interest, dividends, rents and capital gains), and a reported 93% of all income gains since the Wall Street bailout of September 2008.

They have used this money to finance the election campaigns of politicians committed to shifting taxes onto the 99%. They also have bought control of the major news media that shape people’s understanding of what is happening. And as Thorstein Veblen described nearly a century ago, businessmen have become the heads of most universities and have directed the curriculum along “business friendly” lines.

The clearest way to analyze any financial system is to ask Who/Whom. That is because financial systems are basically a set of debts owed to creditors. In today’s neo-rentier economy the bottom 99% (labor and consumers) owe the 1% (bondholders, stockholders and property owners). Corporate business and government bodies also are indebted to this 1%. The degree of financial polarization has sharply accelerated as the 1% make their move to indebt the 99% – along with industry and state, local and federal government – to the point where the entire economic surplus is owed as debt service. The aim is to monopolize the economy, above all the money-creating privilege of supplying the credit that the economy needs to grow and transact business, enabling them to extract interest and other fees for this privilege.

The top 1% have nearly succeeded in siphoning off the entire surplus for themselves, receiving 93% of U.S. income growth since September 2008. Their control over the political process has enabled them to use each new financial crisis to strengthen their position by forcing companies, states and localities to relinquish property to creditors and financial investors. So after monopolizing the economic surplus, they now are seeking to transfer to themselves the economic infrastructure, land and natural resources, and any other asset on which a rent-extracting tollbooth can be placed. The situation is akin to that of medieval Europe in the wake of the Nordic invasions.
The supra-national force of Rome in feudal times is now situated in Washington, with Christianity replaced by the Washington Consensus wielded via the IMF, World Bank, WTO and their satellite institutions such as the European Central Bank, backed by the moral and ideological role of academic economists rather than the Church. And on the new financial battlefield, Wall Street underwriters have used the crisis as an opportunity to press for privatization. Chicago’s strong Democratic political machine sold rights to install parking meters on its sidewalks, and has tried to turn its public roads into privatized toll roads. And the city’s Mayor Rahm Emanuel has used privatization of its airport services to break labor unionization, Thatcher-style. The class war is back in business, with financial tactics playing a leading role barely anticipated a century ago.

This monopolization of property is what Europe’s medieval military conquests sought to achieve, and what its colonization of foreign continents replicated. But whereas it was achieved originally by military conquest of the land, today’s 1% do it by financializing the economy (although the military arm of force is not absent, to be sure, as the world saw in Chile after 1973).

The financial quandary confronting us

The economy’s debt overhead has grown so large that not everyone can be paid. Rising default rates pose the age-old question of Who/Whom. The answer almost always is that big fish eat little fish. Big banks (too big to fail) are eating little banks, while the 1% try to take the lion’s share for themselves by annulling public and corporate debts owed to the 99%. Their plan is to downgrade Social Security and Medicare savings to “entitlements,” as if it were merely a matter of sound fiscal choice not to pay these programs’ low-income beneficiaries, while rentiers at the top re-christen themselves “job creators” – as if they have made their gains by helping wage earners rather than by waging war against them.

The problem is not Social Security, which can be paid out of normal tax revenue, as in Germany’s pay-as-you-go system. The fiscal problem – untaxing real estate, oil and gas, natural resources, monopolies and the banks – has been depicted as a financial one, as if one needs to save in advance by a special tax to lend to the government so that it can cut taxes on the 99%.

The real pension cliff is with corporate, state and local pension plans, which are being underfunded and looted by financial managers. The shortfall is getting worse as the downturn reduces local tax revenues, leaving states and cities unable to fund their programs, to invest in new public infrastructure, or even to maintain and repair existing investments. Public transportation in particular is suffering, raising user fees to riders in order to pay bondholders. But it is mainly retirees who are being told to sacrifice. (The sanctimonious verb is to “share” in the sacrifice, although this evidently does not apply to the 1%.)

The bank lobby would like the economy to keep trying to borrow its way out of debt, digging itself deeper into a financial hole that puts yet more private and public property at risk of default and foreclosure.
The idea is for the government to “stabilize” the financial system by bailing out the banks – that is, doing for them what it has not been willing to do for recipients of Social Security and Medicare, or for states and localities no longer receiving revenue sharing, or for homeowners in negative equity suffering from exploding interest rates even while banks’ own borrowing costs from the Fed have plunged. The dream is that the happy Greenspan financial bubble can be recovered, making everyone rich again, if only they will debt-leverage to bid up real estate, stock and bond prices and create new capital gains.

Realizing this dream is the only way that pension funds can pay retirees. They will be insolvent if they cannot make their scheduled 8+% returns, giving new meaning to the term “fictitious capital.” And in the real estate market, prices will not soar again until speculators jump back in as they did prior to 2008.

If student loans are not annulled, graduates face a lifetime of indentured servitude. That is how much of colonial America was settled, after all – immigrants working off the price of their liberty, only to be plunged into the cauldron of the vast real estate speculations and fortunes-by-theft on which the Republic (or at least the greatest American fortunes) was founded. It was imagined that such bondage belonged only to a bygone era, not to the future of the West. But we may now look back to that era for a snapshot of our own future.

The financial plan is for the government to supply nearly free credit to the banks, so that they can lend debtors enough – at the widest interest-rate markups in recent memory (what banks charge borrowers and credit-card users over their own less-than-1% borrowing costs) – to pay down the debts that were run up before 2008. This is not a program to increase market demand for the products of labor. It is not the kind of circular flow that economists have described as the essence of industrial capitalism. It is a financial rake-off of a magnitude that has not existed since medieval European times and the last stifling days of the oligarchic Roman Empire two thousand years ago.

Imagining that an economy can be grounded on these policies will further destabilize it rather than alleviate today’s debt deflation. But if the economy is saved, the banks cannot be. This is why the Obama Administration has chosen to save the banks, not the economy. The Fed’s prime directive is to keep interest rates low – to revive lending, not to finance new business investment to produce more, but simply to inflate the asset prices that back the bank loans that constitute bank reserves. It is the convoluted dream of a new Bubble Economy – or, more accurately, a new Great Giveaway.

Here is the quandary: if the Fed keeps interest rates low, how are corporate, state and local pension plans to make the 8+% returns needed to pay their scheduled pensions? Are they to gamble more with hedge funds playing Casino Capitalism? On the other hand, if interest rates rise, this will reduce the capitalization multiple at which banks lend against current rental income and profits. Higher interest rates will lower prices for real estate, corporate stocks and bonds, pushing the banks (and pension funds) even deeper into negative equity.
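The quandary can be put in numbers. In this sketch only the 8% target comes from the text; the 2% stand-in for low-rate returns and the ten-year horizon are hypothetical.

    # The pension arithmetic: funds assume 8% compound returns, but a
    # low-rate regime yields far less. Only the 8% figure is from the text.

    def fund_value(principal, rate, years):
        return principal * (1 + rate) ** years

    needed = fund_value(100.0, 0.08, 10)   # what the actuarial tables assume
    earned = fund_value(100.0, 0.02, 10)   # a hypothetical low-rate outcome
    print(f"needed per 100 invested: {needed:.1f}")   # ~215.9
    print(f"earned per 100 invested: {earned:.1f}")   # ~121.9
    print(f"shortfall              : {needed - earned:.1f}")

Compounding makes the gap widen every year the low-rate regime persists, which is why the funds are pushed toward riskier bets.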
So something has to give. Either way, the financial system cannot continue along its present path. Only debt write-offs will “free” markets to resume spending on goods and services. And only a shift of taxes onto rent-yielding property and tollbooths, finance and monopolies will save prices from being loaded down with extractive overhead charges, and refocus lending on financing production and employment. Unless this is done, there is no way the U.S. economy can become competitive in international markets – except of course in military hardware and intellectual property rights for escapist cultural artifacts.

The solution for Social Security, Medicare and Medicaid is to de-financialize them. Treat them like government programs for military spending, beachfront rebuilding and bank subsidies, and pay their costs out of current tax revenue and new money creation by central banks doing what they were founded to do. Politicians shy away from confronting this solution mainly because the financial sector has sponsored a tunnel vision that ignores the role of debt and money, and the phenomena of economic rent, debt leverage and asset-price inflation that have become the defining characteristics of today’s financial crisis.

Government policy has been captured to try to save – or at least subsidize – a financial system that cannot be saved more than temporarily. It is being kept on life support at the cost of shrinking the economy – while true medical spending for real life support is being cut back for much of the population. The economy is dying from a financial respiratory disease, or what the Physiocrats would have called a circulatory disorder. Instead of the economy being freed from debt, income is diverted to pay credit-card and mortgage debts. Students without jobs remain burdened with over $1 trillion of student debt, with the time-honored safety valve of bankruptcy closed off to them. Many graduates must live with their parents as marriage rates and family formation (and hence new house-buying) decline. The economy is dying. That is what neoliberalism does.

Now that the debt build-up has run its course, the banking sector has put its hope in gambling on mathematical probabilities via hedge-fund capitalism. This Casino Capitalism has become the stage of finance capitalism following Pension Fund capitalism – and preceding the insolvency stage of austerity and property seizures. The open question now is whether neofeudalism will be the end stage.

Austerity deepens rather than cures public budget deficits. Unlike past centuries, these deficits are not being incurred to wage war, but to pay a financial system that has become predatory on the “real” economy of production and consumption. The collapse of this system is what caused today’s budget deficit. Instead of recognizing this, the Obama Administration is trying to make labor pay. Pushing wage earners over the “fiscal cliff” to make them pay for Wall Street’s financial bailout (sanctimoniously calling their taxes “user fees”) can only shrink the market more, pushing the economy into a fatal combination of tax-ridden and debt-ridden fiscal and financial austerity.

The whistling in the intellectual dark that central bankers call by the technocratic term “deleveraging” (paying off the debts that have been run up) means diverting yet more income to pay the financial sector. This is antithetical to resuming economic growth and restoring employment levels. The recent lesson of European experience is that despite austerity, debt has risen from 381% of GDP in mid-2007 to 417% in mid-2012. That is what happens when economies shrink: debts mount up in arrears (and with stiff financial penalties).
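The arithmetic behind that European figure is mechanical. In this sketch the 381%-of-GDP starting point comes from the text; the contraction and arrears rates are hypothetical, but any shrinkage pushes the ratio up even with no new net borrowing.

    # Debt-ratio arithmetic under austerity: GDP shrinks while arrears
    # compound, so debt/GDP rises without new net borrowing.
    # Only the 381% starting ratio is from the text.

    debt, gdp = 3.81, 1.00
    for year in range(1, 6):
        gdp *= 0.99     # hypothetical 1% annual contraction
        debt *= 1.02    # hypothetical 2% of unpaid interest accruing
        print(f"year {year}: debt/GDP = {debt / gdp:.0%}")
    # year 5: debt/GDP = 442%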
But even as economies shrink, the financial sector enriches itself by turning its debt claims – what 19th-century economists called “fictitious capital” before it was called finance capital – into a property grab. This makes an unrealistic debt overhead – unrealistic because there is no way it can be paid under existing property relations and income distribution – into a living nightmare. That is what is happening in Europe, and it is the aim of the Obama Administration of Tim Geithner, Ben Bernanke, Eric Holder et al. They would make America look like Europe: wracked by rising unemployment, falling markets and the related syndrome of adverse social and political consequences that follow from the financial warfare waged against labor, industry and government together.

The alternative to the road to serfdom – governments strong enough to protect populations against predatory finance – turns out to be a detour along the road to debt peonage and neofeudalism.

So we are experiencing the end of a myth, or at least the end of an Orwellian rhetorical patter about what free markets really are. Markets are not free if they pay rent-extractors rather than the producers who cover the actual costs of production. Financial markets are not free if fraudsters are not punished for writing fictitious junk mortgages and paying ratings agencies to sell “opinions” that their clients’ predatory finance is sound wealth creation. A free market needs to be protected from fraud and from rent-seeking.

The other myth is that it is inflationary for central banks to monetize public spending. What increases prices is building interest and debt service, economic rent and financial charges into the cost of living and doing business. Debt-leveraging the price of housing, education and health care – making wage-earners pay over two-thirds of their income to the FIRE sector, FICA wage withholding and other taxes falling on labor – is responsible for de-industrializing the economy and making it uncompetitive.

Central bank money creation is not inflationary if it funds new production and employment. But that is not what is happening today. Monetary policy has been hijacked to inflate asset prices, or at least to stem their decline, or simply to give the banks money to gamble with. “The economy” is less and less a sphere of production, consumption and employment; it is more and more a sphere of credit creation to buy assets, turning profits and income into interest payments until the entire economic surplus and repertory of property is pledged for debt service. To celebrate this as a “postindustrial society,” as if it were a new kind of universe in which everyone can get rich on debt leveraging, is a deception.

The road leading into this trap has been baited with billions of dollars of subsidized junk economics to entice voters to act against their interests. The post-classical pro-rentier financial narrative is false – intentionally so. The purpose of its economic model is to make people see the world, and act (or invest their money), in a way that lets its backers make money off those who follow the illusion being subsidized. It remains the task of a new economics to revive the classical distinctions between wealth and overhead, earned and unearned income, profit and rentier income – and ultimately between capitalism and feudalism.

No such benefits were given to homeowners whose real estate fell into negative equity. For the few who received debt write-downs to current market value, the credit was treated as normal income and taxed!
Philip Aldrick, “Loss of income caused by banks as bad as a ‘world war’, says BoE’s Andrew Haldane,” The Telegraph, December 3, 2012. Mr. Haldane is the Bank’s executive director for financial stability.

Stephanie Kelton, “The ‘Fiscal Cliff’ Hoax,” Los Angeles Times, December 21, 2012, http://www.latimes.com/news/opinion/commentary/la-oe-kelton-fiscal-cliff-economy-20121221,0,2129176.story.
Under the collaboration, Diversa will use its proprietary technologies to extract DNA from environmental samples and make gene libraries, while JGI will perform DNA sequencing. All DNA sequence data from the collaboration will be provided to Diversa and deposited in GenBank within six months of the completion of sequencing, allowing public access by scientists around the world.

"The microbial world is the next genomic frontier," said JGI Director Eddy Rubin, M.D., Ph.D. "The human genome has been sequenced, and now we're ready to tackle the larger and more complex challenge of sequencing microbial diversity."

"We believe the scientific, environmental, and commercial benefits from this project will be considerable," Rubin continued, "and we're pleased to be working with Diversa, a company that has clearly demonstrated leadership in legally and efficiently accessing the vast microbial diversity present in the environment."

"There are more genes in a handful of soil than in the entire human genome," said Jay M. Short, Ph.D., President and Chief Executive Officer of Diversa. "At Diversa, we are committed to developing products from the rich genomic resource of uncultured microbes living in nearly every environment on earth. We believe that our sequencing collaboration with JGI will contribute greatly to our understanding and utilization of microbial genes."

Microbes, the oldest form of life on Earth, inhabit nearly every environment and can thrive under extreme conditions of heat, cold, pressure, and radiation. Although microbes represent the vast majority of life on the planet, more than 99% have not been cultured, and consequently their genomic diversity has been largely unexplored.

Contact: Charles Osolin, DOE/Joint Genome Institute
Speaking from the technology perspective, the naming depends on the device's characteristics, the two main categories being landline and mobile. A landline is a device that receives its signal through a fixed phone line (which is not always a wired circuit; sometimes the device is a fairly large phone with a SIM card, making it quite mobile, sometimes even handheld, yet it is still considered a landline phone). Mobile phones come in a couple of technology-dependent types:

- cellular (or cell for short) devices receive their signal through a "cellular" network
- satellite devices are powered by a satellite network

These terms describe your device with respect to differences implied by the phone networks, but all non-landline phones together are "mobile." Then there are smartphones. This term distinguishes the device along a somewhat different dimension: it describes capabilities, as opposed to older handheld devices (smartphones are devices that combine a microcomputer and a telephone).

So, strictly speaking, if you want to be specific about different types of devices, you should use different terms in different cases. That would make a lot of difference if you wanted, say, to sell software for a particular kind of device.
Native. Indigenous. Non-native. Exotic. Invasive. What do these words mean to you? No matter the context, these terms carry strong connotations. It's no wonder that volunteers sometimes balk when we use these hot-button terms to describe plant species in the Golden Gate National Parks. We also all make the mistake of using these terms interchangeably when they in fact mean very different things.

"I wasn't born in this country," a high school student once said during a work day. "Does that mean that I'm a non-native? Does that mean that I'm viewed as a bad thing that needs to be removed?"

Since that incident and my participation in a UC Davis workshop on educating the public about invasive species, I have been committed to clearing up the misconceptions about what these words mean and the contexts in which they should be used. I think environmental educators, myself included, need to be more careful, sensitive, and consistent in their explanations of habitat restoration. Lastly, I believe it is imperative that we give our volunteers the ecological background they need to understand why the plant removal they perform is important to maintaining their parks.

Let's take a closer look at the words we use.

Native or indigenous species are organisms that have evolved and existed in an area for hundreds or thousands (or millions!) of years, without the help of humans. They are adapted to the ecosystem's geology, hydrology, and climate. They fit into a community food web in which they gain enough energy and nutrients to survive and serve as a food source for other organisms in the community. A native plant and animal community is in balance with its environment and can sustain itself for many years, until abiotic conditions change.

Non-native, exotic, or introduced species are organisms that have been brought to an area, intentionally or unintentionally, by humans. Some species are brought to a new area for their beauty or beneficial uses. Others hitch a ride on cargo ships or on the bottoms of our shoes. However, it is key to understand that only a small fraction of these species are harmful to native/indigenous communities. In other words, most of these species either do not survive in native communities or naturalize in them without negatively affecting the plants and animals already there.

The small fraction of non-native/exotic species that do harm native ecosystems are called "invasive" species. These species enter native ecosystems and survive so well that they take up space normally inhabited by native species. They unbalance those communities and upset the food web that had long been established there.

How do they do this? First, it's important to remember that invasive species in one area are native species in another. In their native community, these species have predators or bio-controls that keep their numbers in check; they are part of a stable, balanced food web. When they are introduced to a new community, however, they may no longer have natural predators or controls on their population growth. When times are good, what do living organisms do? They successfully reproduce! It's easy for these invasive species to outcompete native species that do have predators and bio-controls.

And so the picture becomes a little clearer. According to the California Invasive Plant Council, California has 4,200 native plant species and 1,800 non-native/exotic species. Of those 1,800, only 200 are considered invasive.
Case in point: few non-natives are actually invasive.

When a high school student says, "I wasn't born in this country; does that mean that I'm a non-native? Does that mean that I'm viewed as a bad thing that needs to be removed?", we have already failed that student. We should have defined our words and created a context of understanding, so that those questions would never need to be asked.

Parks For All Forever is the motto of the Conservancy. It is vital that we choose our words so that all people feel invited and welcome in these parks, and that we thoroughly explain how the habitat restoration we are doing is a way of preserving these parks forever.

Learn how we control invasives in the Golden Gate National Parks by registering for an upcoming Park Academy class.

By Elise Hinman, Community Outreach and Restoration Intern
Join the Endangered Wildlife Trust and SANParks in a photographic survey of Saddle-billed Storks in the Kruger National Park. The survey started on 1 September 2009 and will run for a full calendar year. It forms part of a research project that will be conducted over the next three years on the population status of the Saddle-billed Stork, one of Kruger's rarities and one of the "Big Six" birds.

"Census operations on any species within the boundaries of the Kruger National Park are important to help us get an idea of that species' status within the context of biodiversity management," says Marcelle van Hoven, the project's coordinator. "The last Saddle-billed Stork survey, conducted in 1993, suggested that there were fewer than 60 of these birds left in the Park."

Saddle-billed Storks (Ephippiorhynchus senegalensis) are distinctly identifiable by their large size (they stand about 150 cm tall), sharply contrasting black-and-white plumage and the yellow lappet (saddle-like structure) on the bill. Males have a dark eye with two small yellow wattles at the base of the bill, while females have a yellow eye. Individual birds can also be recognised by the details of the front edge of the black band across the red bill. Side-on photographs of all the birds, from both the left and the right, will be used for identification during the survey.

Saddle-billed Storks are classified as Endangered in South Africa. They breed slowly and are dependent on extensive wetland habitats, which are under increasing pressure from humans. The flow regimes of rivers passing through the Kruger National Park are expected to change in response to catchment developments outside the Park, and this, together with the removal of artificial water impoundments within the Park, may have a negative impact on the species.

In South Africa, Saddle-billed Storks are largely confined to the north-eastern tropical lowland, with the majority of the population residing along riverine habitat in the Kruger National Park. They normally occur in pairs, are strongly territorial and remain in the same area for years.

Visitors who spot a Saddle-billed Stork are asked to take a clear photograph of both sides of the bird's face and bill and to record information about the sighting, including the date, time, location, name of the nearby water source, the bird's gender, any juveniles present and any other notes that might be relevant. A Saddle-billed Stork census weekend is also planned in the Kruger National Park for later this year, where photographers with powerful lenses can contribute to the project. Send all sighting details and photographs to email@example.com.

This project is sponsored by Tinga Private Game Lodge and Custom African Tours & Safaris.
Heinrich Himmler
World War II Figure / Military Leader
Born: 7 October 1900
Died: 23 May 1945 (suicide by cyanide)
Birthplace: Munich, Germany
Best known as: Hitler's head of the Gestapo during World War II
Name at birth: Heinrich Luitpold Himmler

Heinrich Himmler was a high-ranking Nazi during World War II, a member of Adolf Hitler's inner circle and one of the primary architects of the Holocaust. Himmler was a Bavarian chicken farmer who joined the Nazi party early on, participated in the 1923 attempt to overthrow the government (Munich's "Beer Hall Putsch") and by 1929 had become head of Hitler's personal bodyguard, the SS (Schutzstaffel). From 1936 until the end of the war in 1945, Himmler was the head of the Gestapo (Geheime Staatspolizei), a national police force that had absorbed the SS and the national security service, the SD (Sicherheitsdienst). Adolf Hitler named him Minister of the Interior in 1943, and Himmler controlled the civil service and the courts as well as the national police and the secret police. Even other Nazis were wary of the Gestapo's broad powers, which included the ability to execute disloyal Germans.

Himmler is said to have been the driving force behind the "Final Solution," the organized attempt to exterminate the Jews. He set up labor camps and concentration camps and organized the incarceration and execution of political enemies, homosexuals and non-Nordics such as Poles, Jews and Romani. He is said to be responsible for the murder of as many as 6 million Jews.

As Germany's defeat seemed certain, Himmler approached the Western Allies in April of 1945 in an attempt to negotiate a surrender. Word got back to Hitler, who ordered his arrest. Himmler tried to escape to Bavaria under the name Heinrich Hitzinger, but was arrested by the British on 22 May 1945. He committed suicide with a cyanide pill the next day. The British arrest report of 22 May 1945 described Himmler as "an unimpressive figure with several days growth of beard, long hair, no glasses and a patch over one eye. He was dressed in an odd collection of civilian garments, with a blue raincoat on top."