| text (stringlengths 174–640k) | id (stringlengths 47–47) | dump (stringclasses 17 values) | url (stringlengths 14–1.94k) | file_path (stringlengths 125–142) | language (stringclasses 1 value) | language_score (float64 0.65–1) | token_count (int64 43–156k) | score (float64 2.52–5.34) | int_score (int64 3–5) |
|---|---|---|---|---|---|---|---|---|---|
Balay Rehabilitation Center, Inc. is a non-governmental organization that works for the psychosocial relief and rehabilitation of survivors of human rights violations. It provides services primarily to the survivors of torture and organized violence, as well as to those who are displaced by armed conflict (IDPs).
The word balay, in many dialects in the Philippines, means a house, a shelter or a home. The name itself connotes protection, safety, and the nurturance of well-being. In the course of the organization’s existence, the word balay has also come to signify a space where people can work towards empowerment and development.
The massive human rights violations during the martial law period led to the founding of BALAY on September 27, 1985. One of its prime movers was the late Senator Jose Diokno, a distinguished champion of human rights who became the chairperson of the Presidential Committee on Human Rights following the downfall of the Marcos dictatorship in 1986. Other founders were Dr. Mita Pardo de Tavera, who served as the first Secretary of the Department of Social Welfare and Development (DSWD) under the Aquino administration, and Dr. Mariano Caparas, a physician who was an ardent proponent of health and human rights.
During its early years, its staff and volunteers helped document cases of torture and provided political prisoners and their families with their immediate needs. Balay also served as a half-way house for human rights defenders who were released from detention. Its psychosocial programs addressed the individual, group and family dimensions. Not long after, it developed a program for youth and children and offered livelihood assistance to its partners as part of psychosocial support.
The unprecedented rise of internal displacement due to armed conflict and development aggression in the 1990s challenged BALAY to broaden the scope of its work. Around the mid-1990s, its general assembly decided to serve another target population – peoples uprooted and traumatized by armed conflict, particularly in Mindanao. From then on, Balay embarked on a holistic community-based psychosocial approach in its project sites while continuing its therapeutic partnerships with political prisoners, torture victims and other survivors of human rights violations.
As the country made its transition from the authoritarian Marcos regime to democracy, torture remained pervasive. However, it may be argued that torture has assumed a different form in post-Marcos Philippines. Whereas torture was previously directed at political activists – torture as a form of political repression – it is now on the rise in urban centers, increasingly used to fight criminality and rid society of its “scum”. This brutal practice is often directed against young people, as manifested in countless reports of “salvaging” and ill treatment perpetrated by the police against those suspected of committing a crime. This does not mean that torture against political activists no longer exists or is declining. It does suggest, however, that torture (1) as an urban phenomenon and (2) as an instrument to fight crime is increasingly relevant. With this context in mind, Balay in 2003 launched a psychosocial program in Bagong Silang, Caloocan (an open community, unlike jails and prisons, which are closed institutions) to help prevent torture and mitigate its effects on survivors. The psychosocial intervention targeted young people, who were considered the most vulnerable to violence and torture.
At present, the organization consists of an interdisciplinary staff and volunteers who have academic backgrounds and experience in the fields of psychology, social work, community development, popular education, social enterprise development, and peace and human rights advocacy.
Balay envisions a free, just, peaceful and human society where individuals, families and communities have the opportunity to develop their potential to the fullest for their own well-being and for society. It aspires for a society where human rights are respected and fulfilled and where human dignity and equality are upheld.
BALAY facilitates holistic psychosocial development responses and advocates for the rights of IDPs, political prisoners and torture survivors, as well as other victims of human rights violations.
To develop partnerships among victims of human rights violations regardless of political beliefs, race, religion, culture, age and gender to help them regain their capacities for active participation in the family, community and societal affairs.
To help develop youth and children into “zones of peace” and as peace-builders in families, communities and in the wider society.
Psychosocial Development Program for Internally Displaced Persons (PDP-IDPs)
BALAY’s program in Mindanao combines projects and activities geared towards the healing, empowerment and development of IDPs, with special focus on the most vulnerable sectors particularly the children and young people. At present, BALAY’s project sites cover more than 45 villages in different municipalities in Mindanao.
Space for Peace
The Space for Peace in Pikit, North Cotabato consists of seven villages populated by Maguindanaons, Manobos, Bisaya and other families of migrants and settlers. Current projects include establishing child-friendly spaces for psychosocial activities, peace camps, counseling and life-skills training for young people, peace advocacy, and the promotion of natural farming systems and cooperatives as livelihood undertakings. The activities aim to help consolidate, preserve and expand the gains of the Space for Peace in conflict prevention, community rehabilitation and multi-cultural cooperation to address the social trauma arising from war.
Peace Education, Psychosocial Training and Risk Reduction Project
Peace Education, Psychosocial Training and Risk Reduction Project in Datu Paglas and Paglat in Maguindanao, Tulunan in North Cotabato, and Columbio in Sultan Kudarat Province. The project partners are young people, Barangay officials and other community duty bearers in more than 30 villages and local government units. The goal is to build a network of young people with basic knowledge and skills in peace-building, human rights and peer counseling, and to help community institutions and local governments adopt disaster preparedness and risk management mechanisms that reduce their vulnerability to complex emergency situations.
Promoting Children as “Zones of Peace”
Promoting Children as “Zones of Peace” in a Lumad community in Brgy. Angga-an in Damulog, Bukidnon. It introduces the idea of “child-oriented” governance in the community through the revival of the “school of living tradition,” psychosocial activities for children, primary health and nutrition component, youth participation in community activities, and awareness raising on the rights of the child and peace education. The activities are intended to strengthen the community resources to promote their area as a zone of peace.
Psychosocial Development Program for Survivors of Torture and Organized Violence (PDP-STOV)
Torture is a crime under domestic (RA 9745) and international law and is prohibited in all circumstances, even in situations of armed conflict. Though legal instruments and treaties (i.e. RA 9745 and the UNCAT) are already in place to prevent torture, the practice remains pervasive in the Philippines. Torture is routinely employed by State authorities to quash legitimate political dissent or as a shortcut to address criminality and keep peace and order. In any case, the practice of torture is deeply rooted in the worsening problem of poverty and inequality within Philippine society. Torture destroys the will and spirit of the victim, alters his or her relationship with others (traumatization) and instills fear and anxiety in the victim’s family and community (collective traumatization).
Balay provides a comprehensive psychosocial and developmental intervention to around 100 inmates who have been deprived of their freedom and tortured under political circumstances. The objective is to manage and reduce the suffering caused by their torture experience, improve the condition of prisons (through jail advocacy), and enable them to build positive coping abilities and resources until they obtain justice. The support includes counseling and related therapeutic activities, psycho-education, life-skills enhancement, food and non-food support, as well as legal and health assistance not covered by other service providers. Psychosocial support is also extended to family members of detainees and prisoners, including the facilitation of prison visits. The focus jails are the Metro Manila District Jail (MMDJ) and the National Bilibid Prison (NBP) in the National Capital Region, and the Compostela Valley Provincial Jail and Davao City Jail in Mindanao.
These jails, located in Luzon, Visayas and Mindanao, are visited quarterly by BALAY staff to deliver relief and welfare assistance to political prisoners, document reports of torture, and monitor prison conditions. Counseling is also facilitated for individuals and groups in need. In line with respecting the rights and promoting humane treatment of detainees, BALAY recommends transformative actions to the authorities.
Community-based Program in Bagong Silang, Caloocan City
The community-based activities in Bagong Silang – the largest Barangay in terms of land area – are intended to provide rehabilitation assistance to young people who have suffered torture. They are also geared toward influencing duty bearers (i.e. parents, Police and Barangay officials) to pursue programs that will enhance the well-being of the Salinlahi. Activities include individual and group counseling and other related therapeutic activities, family enrichment sessions, psycho-education, life-skills training, human rights education and paralegal training. BALAY works with the local social service office and the council for the protection of children.
Freedom from Torture Advocacy
BALAY is one of the conveners of the United Against Torture Coalition (UATC), composed of more than 20 organizations working for the effective implementation of the Anti-Torture Act of 2009 (RA 9745). For years, it has been supporting the observance of the international Day in Support of Torture Victims. It also campaigns for the ratification of the Optional Protocol to the Convention Against Torture (OPCAT) and pushes for the creation of a National Preventive Mechanism (NPM) to monitor prison conditions and dissuade state agents from committing torture and other forms of organized violence. As the secretariat of the Inter-Agency Committee for Prison Reforms (IACPR), BALAY works with the Commission on Human Rights (CHR), prison authorities (i.e. BJMP) and other executive departments for the improvement of jail conditions.
Advocacy for Peace and Humanitarian Protection
BALAY works for the passage of a law to protect internally displaced persons and for the establishment of a national mechanism for the humanitarian protection of civilians affected by armed conflict. It is a founding member of the Mindanao People’s Caucus (MPC), a peace group that brings the voices of affected civilians into the peace talks between the Moro Islamic Liberation Front (MILF) and the Philippine Government. It is also one of the conveners of the Mindanao Solidarity Network (MSN) and the Civil Society Initiatives for International Humanitarian Law (CSI-IHL), which contribute to promoting humanitarian protection while building a peace constituency in support of the peace initiative in the Philippines.
Youth and Children Development Program (YCDP)
A major project under the YCDP is building a network of young people who can articulate their views and participate in the peace process. Guided by the idea of “children as zones of peace,” BALAY runs a mobile peace training program for young people in Mindanao. It also organizes peace camps and leadership seminars to develop young peace builders and human rights defenders. It works with the Department in mainstreaming peace education in schools and promotes child-oriented governance in communities covered by its projects.
| <urn:uuid:b78a19ef-7640-4c0e-b837-1d1fc2e0c202> | CC-MAIN-2013-20 | http://www.balayph.net/about-us.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948026 | 2,495 | 2.5625 | 3 |
Hot Dog History
In honor of National Hot Dog Month, some answers to those dogging questions
July is National Hot Dog Month, and according to the National Hot Dog and Sausage Council, Americans will be consuming the infamous little red tubes of "meat" in record numbers this summer.
The Council estimates that over seven billion hot dogs will be eaten by Americans between Memorial Day and Labor Day. During the July 4th weekend alone (the biggest hot-dog holiday of the year), 155 million will be downed.
Every year, Americans eat an average of 60 hot dogs each. They are clearly one of the country's most loved, but most misunderstood, comfort foods. Below you'll find some frequently asked questions regarding the hot dog. For more information, visit the Council's website at www.hot-dog.org. Bon appétit.
How did the hot dog get its name?
The term "hot dog" is credited to sports cartoonist Tad Dorgan. At a 1901 baseball game at the Polo Grounds in New York, vendors began selling hot dachsund sausages in rolls.
From the press box, Dorgan could hear the vendors yelling, "Get your dachshund sausages while they're red hot!" He sketched a cartoon depicting the scene but wasn't sure how to spell "dachshund" so he called them simply, "hot dogs." And the rest is history.
What exactly is a hot dog made of?
Nope. You're not allowed to ask that one. And do you really want to know anyway? For the record, the Council refers to the actual meat as "specially selected meat trimmings." They would like to point out, however, that thanks to stricter U.S. Department of Agriculture rules, hot-dog meat has become much leaner and, unless otherwise indicated, must be made from muscle (as most meat found in supermarkets is).
Most supermarket hot dogs use cellulose casings, which are removed before packaging. Some, however, still use the traditional natural casings, made from animal intestines.
By law, a hot dog can contain up to 3.5 percent of "non-meat ingredients." Don't be scared. This is usually just some type of milk or soy product used to add to the nutritional value. Many hot dogs may be relatively high in fat and sodium, but they are also a good source of protein, iron, and other necessary vitamins.
What is the most popular condiment for a hot dog?
Council research shows that for adults, mustard is the condiment of choice, while children prefer ketchup. That said, preferences do change from region to region. For instance, hot dogs in New York are generally served with a lighter mustard and steamed onions, while Chicago hot dogs can come with mustard, relish, onions, tomato slices, or pretty much anything at all.
Kids were also asked what condiment they would use "if their moms weren't watching," and 25 percent opted for chocolate sauce.
Do I spread my condiment on the meat or on the bread?
Always dress the dog and not the bun. The Council also recommends the following order for condiment application: first wet (mustard for example), then chunky (relish or onions), then cheese if desired, then any spices.
What type of wine should I have with my hot dog?
OK, so maybe this question wouldn't fall into the "frequently asked" category. But the Council does suggest darker red wines for a spicier hot dog/sausage and dry white wines for milder dogs. They also recommend that wine never be brought to a hot dog barbecue, but beer is OK. For kids, lemonade and iced tea are the recommended beverages.
Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
| <urn:uuid:90893774-f16c-461d-adbf-ba1f04e75a86> | CC-MAIN-2013-20 | http://www.infoplease.com/spot/hotdog1.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.954768 | 798 | 2.625 | 3 |
Medical billing and medical coding go hand in hand, and are often performed by the same professionals. But although they work together as vital pieces in the business of health care, they are distinct professions with their own responsibilities.
Medical Coding: Translation
Medical coding is the process of using specific codes to identify medical procedures and services for billing and reimbursement by the patients’ insurance companies.
A medical coder reads the patient’s medical file — including medical history, current diagnosis, medicine prescribed and services performed — and assigns the appropriate code based on their coding knowledge. The codes are entered into the computer and are universally recognized by other health care professionals to accurately reflect the patient’s case. (See: medical coder job description.)
Medical Billing: Correspondence
Once the procedure and service codes are determined, the medical biller transmits the claim to the insurance company for payment. Medical billing is the process of submitting and following up on claims to insurance companies in order to receive payment for services rendered by a health care provider.
A medical biller ensures that the patient and health insurance company are properly billed for all procedures. Approved claims are reimbursed, while rejected claims are researched and amended. (See: medical biller job description.)
Since they work so closely, billers are also familiar with the medical codes that coders use in their job, and in smaller offices the same person may fill both functions.
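As a rough illustration of how the two roles hand off to each other, here is a minimal sketch of the coder-then-biller workflow described above; the code values, data structures, and function names are invented for this example and do not represent any real code set or billing system.

```python
from dataclasses import dataclass, field

# Hypothetical lookup table: real coders work from standardized code sets,
# not a hard-coded dictionary like this.
PROCEDURE_CODES = {"office visit": "99213", "chest x-ray": "71045"}

@dataclass
class Claim:
    patient_id: str
    codes: list = field(default_factory=list)
    status: str = "draft"

def code_encounter(patient_id, services):
    """Coder's step: translate the documented services into billing codes."""
    return Claim(patient_id, [PROCEDURE_CODES[s] for s in services])

def submit(claim, insurer_accepts):
    """Biller's step: submit the claim; rejected claims go back for review and amendment."""
    claim.status = "approved" if insurer_accepts(claim) else "rejected - needs amendment"
    return claim

claim = code_encounter("P-001", ["office visit", "chest x-ray"])
print(submit(claim, insurer_accepts=lambda c: len(c.codes) > 0).status)
```

In a small office, as noted above, the same person might run both steps end to end.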
| <urn:uuid:57f6c33d-ec11-49fd-b69a-1a00ae264b71> | CC-MAIN-2013-20 | http://www.medicalbillingschool.org/articles/medical-billing-vs-medical-coding-what-s-the-difference | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.952441 | 291 | 3.265625 | 3 |
Nowadays it's obvious that nature is the most uncontrollable thing in our universe. During all years of constant studying and knowledge acquisition we have managed just to predict some natural events, but not to control them. Among the great amount of sciences dealing with natural phenomena we'll talk about phenology, namely plant phenology.
So what is phenology? There are many definitions, but broadly phenology is the study of the relationship between climate and the timing of periodic natural phenomena such as the migration of birds, bud bursting, or the flowering of plants.
Phenology is an old scientific discipline. Centuries ago people already recognized that the timing of life cycle events could provide information concerning the development of plants and animals. It was useful most of all for agricultural purposes. To get a better understanding of the timing of life cycle events, several monitoring networks were set up all over the world. In the last three decades of the 20th century, however, many phenological networks were reduced in size or stopped because of a decrease in agricultural importance of these networks.
However, the importance of phenology as a science remains an undeniable fact.
Why do we need phenology, and where does its importance lie? As mentioned above, phenology is the science of recording regularly occurring natural events, and it has already provided some of the longest written biological records. Continuing to gather such valuable information on seasonal occurrences will make it possible to demonstrate how climate change is affecting wildlife habitats.
Plant phenology investigates the changes in vegetative cover taking place today and compares them with previous years. Such investigation enables understanding and prediction of possible future changes. Tree phenology scrutinizes the timing of periodic biological phases, the causes of their timing in terms of biotic and abiotic forces, and the interrelation among phases of the same or different species.
The subject of study in plant phenology is the "phases", or "phenophases", such as the date of first flowering, budbreak, or the unfolding of the first leaf. The timing of phenophases is very important in biological systems and processes, as it influences factors like the length of the growing season, frost damage, the timing and duration of pests and diseases, water fluxes, nutrient budgets, carbon sequestration and food availability.
With global warming, phenophases have changed greatly. Plant phenology offers real evidence that climate change is happening now and that it is already having a significant effect on our wildlife. Trees are coming into leaf sooner, and some typical spring flowers are increasingly seen coming into bloom in November and December. The same is happening to animals, namely birds and butterflies, which appear earlier.
Changes in climate and calendar dates are not a reliable basis for establishing rules and management decisions. Heat accumulated over time provides a physiological time scale that is biologically more accurate than calendar days. In describing the influence of temperature on the growth and development of organisms, phenologists use two parameters: the lower developmental threshold and the upper developmental threshold.
The lower developmental threshold for a species is the temperature below which development stops. The upper developmental threshold is the temperature at which the rate of growth or development begins to decrease. Both thresholds are determined through thorough research and are unique to each organism. The amount of heat needed by an organism to develop is known as physiological time.
Physiological time is often expressed in units called degree-days. For instance, if a species has a lower developmental threshold of 52°F and the temperature stays one degree above that threshold (53°F) for 24 hours, that day contributes one degree-day.
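To make the degree-day arithmetic concrete, here is a minimal sketch of the widely used simple-average method; the threshold values and daily temperatures are hypothetical and are not tied to any particular species.

```python
def degree_days(daily_min_f, daily_max_f, lower_f=52.0, upper_f=90.0):
    """Accumulate degree-days using the simple average method.

    Each day contributes the amount by which its mean temperature exceeds the
    lower developmental threshold, with growth assumed not to speed up above
    the upper threshold.
    """
    total = 0.0
    for t_min, t_max in zip(daily_min_f, daily_max_f):
        mean_temp = (t_min + t_max) / 2.0
        effective = min(mean_temp, upper_f)     # cap at the upper developmental threshold
        total += max(0.0, effective - lower_f)  # no development below the lower threshold
    return total

# One hypothetical week of daily minima and maxima in degrees Fahrenheit
mins = [48, 50, 51, 53, 55, 54, 52]
maxs = [58, 60, 63, 65, 70, 68, 64]
print(degree_days(mins, maxs))  # total degree-days accumulated over the week
```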
It's impossible to speak about the exact timing of the development of plants and animals. It can vary quite markedly from year to year.
The most obvious reason for such variability and irregularity is the weather. After a warm start of the year, the growing season can easily begin one month earlier than after a cold start of the year.
| <urn:uuid:5222e801-7b98-43f2-94b6-1de6f9f66023> | CC-MAIN-2013-20 | http://www.syl.com/travel/plantphenologythebackroundofthisfascinatingscienceyesterdayandtoday.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.960738 | 802 | 3.515625 | 4 |
|
American Heritage® Dictionary of the English Language, Fourth Edition
- Pickering, Edward Charles 1846-1919. American astronomer noted for his work on stellar photometry. His brother William Henry Pickering (1858-1938) discovered Phoebe, the ninth moon of Saturn (1899), and predicted the existence of Pluto (1919).
Century Dictionary and Cyclopedia
- n. A pickerel.
- n. A percoid fish, the sauger, Stizostedion canadense.
- n. A town in North Yorkshire, England.
- n. A city in Ontario, Canada.
- n. A topographic surname from the town.
- n. Timothy Pickering US statesman.
GNU Webster's 1913
- n. (Zoöl.) The sauger of the St. Lawrence River.
“PICKERING -- A Durham police officer was assisted by civilians when he rescued a child trapped in a vehicle on Hwy. 407 in Pickering Sept. 8.”
“Straight North of Pickering is the town of Claremont, Ontario.”
“Joe Pickering is a Software Engineer and an eBay hobbyist.”
“Gas price watchdog Dan McTeague, a federal Liberal candidate and incumbent in Pickering-Scarborough East, said motorists in Toronto and London, Ont., are seeing the biggest decrease.”
“Yes, of course one can find a private tutor in Spanish, even in Tacoma, Washington where Ron Pickering is listed as being.”
“Rachel sensed an impending drama in the President's eyes and recalled Pickering's hunch that the White House had something up its sleeve.”
“I told Betty to call Pickering, and when he came in I related my story.”
“Polly, Polly!" called Pickering quite distinctly, in a tone of anguish.”
“In a town called Pickering, about 45 minutes East of Toronto, there's a small blustery hill near the lake topped with weatherworn wooden poles.”
“Pickering," a craft carrying a battery of sixteen guns, and a crew of forty-seven men.”
| <urn:uuid:3f7aa403-f2c8-448a-8a2b-d5ab7670af13> | CC-MAIN-2013-20 | http://www.wordnik.com/words/Pickering | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.886609 | 486 | 2.90625 | 3 |
|
The Assumption is a puzzle to many Catholics. It's one of the mysteries of the rosary, but scriptural rosary books struggle to find quotes to go along with it. It's a holy day of obligation (Aug. 15), but even the most devout Catholics don't seem to know a lot about it. Herewith, some questions and answers.
Why is it called “the Assumption” to start with?
The word “assume” comes from the Latin verb “to take.” Mary is “taken” into heaven. We use the word assume to mean “to take” also: to take a certain meaning, to take on a certain form, to take on a responsibility. In the Assumption, Christ assumes Mary into heaven, body and soul.
Anyway, in the Eastern Church it's not called the Assumption. It's called the Dormition or “Falling Asleep and Departure.”
Isn't it a new dogma?
It's old and new.
Old, because the feast of the Dormition of Mary was celebrated in the Byzantine Church before the year 500. St. Gregory of Tours wrote about the Assumption in the sixth century. The theology of the Assumption was articulated in fine theological detail by the 700s, in the three sermons St. John Damascene preached for this feast. In his second sermon, he states that belief in Mary's Assumption comes from long-standing tradition, which he was merely handing down.
New, because the formal declaration of this dogma only occurred in 1950, the most recent use of a pope's formal, ex cathedra authority. At that time, Pope Pius XII issued a bull formally defining, as part of the deposit of faith, the fact that “the Immaculate Mother of God, the ever-virgin Mary, was on the completion of her earthly life assumed body and soul into the glory of heaven” (Munificentissimus Deus, 1950.)
Is it mentioned in the Bible?
There are, in fact, clear scriptural supports for Mary's Assumption. Two Old Testament figures, Elijah and Enoch, were taken into the next life without dying (Genesis 5:24; 2 Kings 2:11). Matthew's Gospel relates that, after Christ's death, “many bodies of the saints who had fallen asleep were raised.”
Jimmy Akin, director of apologetics and evangelization at Catholic Answers in El Cajon, Calif., notes that it would be odd to think this resurrection was only temporary — surely they were taken to heaven a short while later. So there is scriptural precedent for some people receiving the gift of resurrection before the end of the world. That Christ would grant this privilege to his immaculate mother is quite believable.
A more obvious support from Scripture occurs in the second reading for the feast, says Akin. “The Church traditionally has seen an allusion to Mary's Assumption in Revelation 12, where John sees the sign of the woman in heaven,” he says. “While there is an allusion here, it is not an explicit statement.”
But if the Bible doesn't explicitly mention it, how can we believe it?
Most Protestants reject belief in Mary's Assumption because it seems to lack a scriptural “proof text.” This attitude points to a basic divergence between Catholics and Protestants that is deeper than the issue of Marian devotion.
Protestants hold that the Bible alone is to determine what Christians should believe. Not so in the Catholic Church, Akin points out.
“Doctrines don't have to be found in Scripture to be true,” Akin points out. “Scripture does not teach that it is the source of all doctrine. As a result, the best sources for some teachings can be the traditions recorded in the early Church Fathers, as is the case with the Assumption. Pope Pius XII drew upon these early Christian traditions when he infallibly proclaimed this dogma. This was another case of the pope using his ability to engage the Church's infallibility to confirm particular traditions that had been passed down from the Apostles.”
What evidence do we have of the Assumption?
Well, it's hard to find evidence that someone left the earth — but one bit of evidence that Mary's body is in heaven is found in the fact that no church or city ever laid claim to the relics of Mary. In the early ages of Christianity, the bones of an apostle or martyr were considered prized possessions. There were often bitter disputes over which church had the better claim to various relics, and sometimes less-than-virtuous actions were taken to obtain possession.
If there was ever any question as to what happened to the body of Our Lady, we can be sure that someone would have proudly claimed her mortal remains. Indeed, there are rival claims to the location of her tomb — Ephesus and Jerusalem. But both tombs are empty.
If everyone was so certain about the Assumption from early times, why did the Pope have to make a special dogmatic declaration about it? And why define the Assumption in the middle of the Space Age? Doesn't it make the Church look out of touch with the modern world?
Father Christopher Armstrong has a doctorate in sacred theology from the International Marian Research Institute in Dayton, Ohio (the U.S. branch of Rome's Marianum). He is a pastor and former chancellor of the Archdiocese of Cincinnati. And he thinks the definition of the Assumption did indeed answer a need of the times.
“It was very opportune [to define the dogma], when you see where the world was in 1950,” says Father Armstrong.
“At that time most of the world's Catholics lived in Europe, which was still reeling from the carnage and human degradation of the Second World War. It was still witnessing the horrors of totalitarian ideology and atheism. Declaring the Assumption of Mary was a reaffirmation of the dignity of the human person — that there is a real value to the human body,” he said.
“And at the same time, it was a reaffirmation of the human person as body and soul. The punishment for Adam and Eve was death,” he said. “The body became corruptible. The Assumption is a reminder that we are destined to follow the pattern of the Resurrection, that body and soul are meant to be incorruptible, impassable and immortal.”
What does the Assumption teach us about ourselves?
It is a sign of hope for our own future resurrection from the dead and assumption into heaven. “Mary is both an icon of the Church and of the individual believer,” says Akin. “As a special grace, God allowed her to share in the benefits of following Christ early. Her Immaculate Conception points to the fact that God will one day free all of the elect from every trace of sin, and her Assumption points to the fact that one day all of the elect will be caught up body and soul to be with Christ (1 Thessalonians 4:15-17). For us, this will happen at the end of the world but God has allowed us a glimpse of our destiny by giving this gift to Mary early.”
“You might say she was carried away by love, the love of her Son,” adds Franciscan Father Patrick Greenough, national director of the Militia Immaculata, the Marian movement founded by St. Maximilian Kolbe, and guardian of Marytown in Libertyville, Ill. “She could not remain separate from him in any way. He had dwelt, body and soul, in her womb, so she was to dwell with him in heaven, body and soul. With us, our bodies and spirits are often at war — just think how hard it is to get up for Mass on Sunday morning or to refrain from overstuffing yourself at a buffet. But Mary did not have that division within her. Her body and soul were always united. It is only fitting that they remain that way into eternity.”
Okay, so it reminds us of heaven. How should it affect our lives now?
Father Armstrong believes that the meaning of the Assumption of Mary is best expressed in the preface of the Mass for the feast: “Today the Virgin Mother of God was taken up into heaven to be the beginning and pattern of the Church in its perfection and a sign of hope and comfort for your people on their pilgrim way,” he reads. Then he adds: “What happened to Mary is going to happen to every faithful Christian.”
Daria Sockey writes from Cincinnati.
| <urn:uuid:d32ea2f5-04b9-4463-817e-cc6d410016b1> | CC-MAIN-2013-20 | http://www.ncregister.com/site/article/the_assumption_questions_and_answers1/ | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.969379 | 1,844 | 3.21875 | 3 |
|
Cable that is run in the plenum spaces of buildings. In building construction, the plenum (pronounced PLEH-nuhm) is the space used for air circulation in heating and air conditioning systems, typically between the structural ceiling and the suspended ceiling or under a raised floor. The plenum space is typically used to house the communication cables for the building's computer and telephone networks. However, use of plenum areas for cabling poses a serious hazard in the event of a fire: once the fire reaches the plenum space, there are few barriers to contain the smoke and flames. Plenum cable is coated with a fire-retardant coating (usually Teflon) so that in case of a fire it does not give off toxic gases and smoke as it burns. Both twisted-pair and coaxial cable are made in plenum versions.
| <urn:uuid:38f4c23c-5f18-413c-bd52-9ef2d1f0e0d8> | CC-MAIN-2013-20 | http://www.webopedia.com/TERM/P/plenum_cable.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920178 | 297 | 2.875 | 3 |
|
A Raspberry Pi computer you can buy today costs $25. It has a 700 MHz CPU with 256 MB of RAM. In 2001, the Power Mac G4 Cube, with a 450 MHz CPU and 64 MB of RAM, cost $1,799. That is how far hardware prices have fallen. Meanwhile, a LEGO X-Wing costs $59.99.
So for $25 anyone can work on a project that uses computers at its heart, and if something breaks, they can just go buy a new one. This makes small Linux computers like the Raspberry Pi and Arduino boards the hardware DIYers’ new LEGO bricks. Last month, tens of thousands of makers from around the world came together at Maker Faire. Kids were begging their parents to help them build RC planes, buy them kits with Arduino boards and learning how to solder.
Will the DIY movement produce the next Apple?
Many of the kits these kids were using weren’t made by billion dollar corporations – they were made by cottage industry electronics businesses, hobbyists, and “fantrepreneurs.” Yes, as Chris Anderson says in his new book “Makers”, we are at the start of a hardware revolution – led from the ground up, in your home.
We have come full circle – back to April 1, 1976 when Steve Jobs, Steve Wozniak, and Ronald Wayne started selling the Apple 1 computer kit. Today’s kit owes its creation to the Arduino project which pioneered this space. The Arduino board is a small, basic, almost disposable piece of hardware that integrated with a simple development environment. Originally intended for university-student projects, it quickly exploded into mainstream DIY culture – today Radioshack even stocks them.
Raspberry Pi, on the other hand, is a full Linux computer for basically the same price. And as such it has a vast library of existing building blocks that hackers can call upon. Raspberry Pi's original stated goal is to help kids learn how to program on a computer without fear of breaking it. But at $25 its allure is irresistible to hackers and inventors – people have been using them for a wider range of ideas – like building a supercomputer out of LEGOs.
Raspberry Pi only went on sale in February and has sold hundreds of thousands of units since then. Here are a few examples of the explosion of projects the Pi is enabling (a minimal getting-started sketch follows the list):
- An open source disaster relief drone;
- A Quadcopter Raspberry Pi;
- A voice controlled robot; and
- An XBMC Media Center for managing streaming media.
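To give a sense of how low the barrier to entry is, here is the kind of minimal LED-blink program beginners often start with on a Raspberry Pi; the pin number is an arbitrary choice, and the sketch is only an illustration, not code from any of the projects listed above. It assumes the RPi.GPIO Python library, which is commonly available on the Pi.

```python
import time
import RPi.GPIO as GPIO

LED_PIN = 18  # arbitrary free GPIO pin wired to an LED through a resistor

GPIO.setmode(GPIO.BCM)         # address pins by Broadcom GPIO number
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pin when the program exits
```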
The rise of these Arduino and Raspberry Pi projects is a symptom of a larger change. Because of the many niches, the low cost of production, and the speed of innovation, it isn't the big companies that make these kits and parts. It is small one-person hardware companies and hobbyists around the world. A few examples are:
- Jason Huggins in Chicago, who makes the Robot that plays Angry Birds;
- LogicalZero in Boston which makes GAMBY, an Arduino Retro Gaming Shield; and
- Electronic Laboratory in the UK, which makes MiniStylophone Kits.
The result of this movement will be the innovation that our kids build on top of it. At the Maker Faire, while I waited in line for a hotdog, I overheard two banker types behind me. “It is amazing how many people are here,” one said. The other countered with, “What’s great is seeing all of the kids.”
As the internet was for my generation, hardware is for the current generation. The Maker movement proves this, and every day more and more small business pop up selling the kits, parts, and gadgets to support them. I may be a bit biased as I run tindie, a marketplace for people to buy and sell homemade technology, but the success of Arduino & Raspberry Pi only reinforce my bet on the maker trend.
Recently Jay Goldberg wrote that "hardware is dead" – arguing that the drop in hardware prices is killing margins to the point where it is impossible for the large producers to make revenue off commodity technology. It is true – prices are falling more quickly than the large companies can innovate. However, that price drop has opened an entirely new marketplace for smaller companies to emerge. Hardware isn't dead – it's moving back into the garages where it started.
Emile Petrone is the CEO of Tindie, a site that sells hardware kits.
| <urn:uuid:45c95552-6972-4a1f-9ffb-43c3996b11c4> | CC-MAIN-2013-20 | http://gigaom.com/2012/10/12/what-happens-when-computers-are-cheaper-than-lego-blocks/?utm_source=social&utm_medium=twitter&utm_campaign=gigaom | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950428 | 935 | 3.015625 | 3 |
|
In 1911 the McClungs and their 4 children moved to Winnipeg, where their fifth child was born. The Winnipeg women's rights and reform movement welcomed Nellie as an effective speaker who won audiences with humorous arguments. She played a leading role in the 1914 Liberal campaign against Sir Rodmond ROBLIN's Conservative government, which had refused women suffrage, but moved to Edmonton before the Liberals won in Manitoba in 1915.
In Alberta she continued the fight for female suffrage and for PROHIBITION, dower rights for women, factory safety legislation and many other reforms. She gained wide prominence from addresses in Britain at the Methodist Ecumenical Conference and elsewhere (1921) and from speaking tours throughout Canada and the US, and was a Liberal MLA for Edmonton, 1921-26.
In 1933 the McClungs moved to Vancouver Island, where Nellie completed the first volume of her autobiography, Clearing in the West: My Own Story (1935, repr 1976), and wrote short stories and a syndicated column. In all, she published 16 books, including In Times Like These (1915, repr 1975). Her active life continued: in the Canadian Authors Association, on the CBC's first board of governors, as a delegate to the League of Nations in 1938 and as a public lecturer.
Forgotten for a decade, she was rediscovered by feminists in the 1960s. Although some criticized her maternalistic support of the traditional family structure, most credited her with advancing the feminist cause in her day and recognizing the need for further progress such as the economic independence of women.
See also WOMEN'S MOVEMENT.
Author M.E. HALLETT
Links to Other Sites
View Historica’s Heritage Minute devoted to Nellie McClung.
The Famous 5
This website focuses on the Famous 5 and their struggle to advance the legal rights of Canadian women. From the Alberta Online Encyclopedia.
The “Persons” Case
A brief overview of the historic “Persons Case” from the Parliament of Canada website.
Are Women Persons? The “Persons” Case
An online feature about the legal implications of the "Persons" Case. From Library and Archives Canada.
A profile of Nellie McClung, Canadian writer, suffragette, and activist. From the Calgary Herald feature "Best of Alberta."
Charlotte Gray - Nellie McClung
Watch a video of Allan Gregg interviewing Charlotte Gray about Nellie McClung and the "mock parliament" episode. From the TVO website.
Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism
See online excerpts from Cecily Devereux's book that provides a historical context for Nellie McClung's views on the sensitive issue of eugenics. From Google Books.
Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism (review)
See an excerpt of a review of Cecily Devereux's book "Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism." From the Project MUSE website.
| <urn:uuid:2ce0ca38-3ddf-4cef-aec5-97e0ccadf6de> | CC-MAIN-2013-20 | http://www.thecanadianencyclopedia.com/articles/nellie-letitia-mcclung | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939528 | 697 | 2.734375 | 3 |
|
2012: 9th Warmest Year Since 1880, NASA Scientists Find
17 January, 2013
2012 was the ninth warmest year since 1880, NASA scientists said. On the current course of greenhouse gas (GHG) increases, they expect each successive decade to be warmer than the previous decade.
The scientists said 2012 continued a long-term trend of rising global temperatures. With the exception of 1998, the nine warmest years in the 132-year record have all occurred since 2000, with 2010 and 2005 ranking as the hottest years on record.
The NASA news said:
NASA's Goddard Institute for Space Studies (GISS) in New York, which monitors global surface temperatures on an ongoing basis, released an updated analysis January 15, 2013 that compares temperatures around the globe in 2012 to the average global temperature from the mid-20th century. The comparison shows how Earth continues to experience warmer temperatures than several decades ago.
The average temperature in 2012 was about 58.3 degrees Fahrenheit (14.6 Celsius), which is 1.0 F (0.6 C) warmer than the mid-20th century baseline. The average global temperature has risen about 1.4 degrees F (0.8 C) since 1880, according to the new analysis.
Scientists emphasize that weather patterns always will cause fluctuations in average temperature from year to year, but the continued increase in greenhouse gas levels in Earth's atmosphere assures a long-term rise in global temperatures. Each successive year will not necessarily be warmer than the year before, but on the current course of greenhouse gas increases, scientists expect each successive decade to be warmer than the previous decade.
"One more year of numbers isn't in itself significant," GISS climatologist Gavin Schmidt said. "What matters is this decade is warmer than the last decade, and that decade was warmer than the decade before. The planet is warming. The reason it's warming is because we are pumping increasing amounts of carbon dioxide into the atmosphere."
Driven by increasing man-made emissions, the level of carbon dioxide in Earth's atmosphere has been rising consistently for decades.
The carbon dioxide level in the atmosphere was about 285 parts per million in 1880, the first year in the GISS temperature record. By 1960, the atmospheric carbon dioxide concentration, measured at NOAA's Mauna Loa Observatory, was about 315 parts per million. Today, that measurement exceeds 390 parts per million.
"The U.S. temperatures in the summer of 2012 are an example of a new trend of outlying seasonal extremes that are warmer than the hottest seasonal temperatures of the mid-20th century," GISS director James E. Hansen said. "The climate dice are now loaded. Some seasons still will be cooler than the long-term average, but the perceptive person should notice that the frequency of unusually warm extremes is increasing. It is the extremes that have the most impact on people and other life on the planet."
The temperature analysis produced at GISS is compiled from weather data from more than 1,000 meteorological stations around the world, satellite observations of sea-surface temperature, and Antarctic research station measurements. A publicly available computer program is used to calculate the difference between surface temperature in a given month and the average temperature for the same place during 1951 to 1980. This three-decade period functions as a baseline for the analysis. The last year that experienced cooler temperatures than the 1951 to 1980 average was 1976.
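The baseline idea described above is easy to illustrate. The sketch below computes a single monthly anomaly against a 1951-1980 baseline from a made-up dictionary of station readings; it is only a toy illustration of the concept, not the actual GISS analysis program or its data.

```python
from statistics import mean

def monthly_anomaly(records, station, year, month, base_years=range(1951, 1981)):
    """Difference between one month's temperature and that station's 1951-1980 mean for the same month.

    `records` maps (station, year, month) to a temperature in degrees Celsius.
    """
    baseline = [records[(station, y, month)]
                for y in base_years
                if (station, y, month) in records]
    return records[(station, year, month)] - mean(baseline)

# Hypothetical July series for one station, warming slowly over time
records = {("STATION_A", y, 7): 20.0 + 0.01 * (y - 1950) for y in range(1951, 2013)}
print(monthly_anomaly(records, "STATION_A", 2012, 7))
```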
The GISS temperature record is one of several global temperature analyses, along with those produced by the Met Office Hadley Centre in the United Kingdom and the National Oceanic and Atmospheric Administration's National Climatic Data Center in Asheville, N.C. These three primary records use slightly different methods, but overall, their trends show close agreement.
In another news report, Suzanne Goldenberg, US environment correspondent for guardian.co.uk, said:
NOAA scientists say 2012 global temperature records further consolidate a pattern of global warming.
2012 was among the 10 warmest years on record, rising above the long-term average for the 36th year in a row, according to data released on January 15, 2013.
Temperature records compiled separately by NASA and the National Oceanic and Atmospheric Administration (NOAA) found global surface temperatures rose 1.03F above the long-term average last year, but did not match America's record-breaking heat.
By NASA's records, that makes 2012 the ninth hottest year on record globally. NOAA's data set put it at the 10th hottest year. The agencies use different methods to analyze data.
In both cases, scientists said the 2012 global temperature records further consolidate a pattern of global warming. Each year of the 21st century has ranked among the 14 hottest since record keeping began in 1880.
With 36 years of above-average temperatures, nobody born since 1976 has lived through a colder than average year.
Tom Karl, director of NOAA's national climatic data centre, told a reporters' conference call the US temperatures were "remarkable".
According to an ongoing temperature analysis conducted by scientists at NASA, the average global temperature on Earth has increased by about 0.8°C (1.4°F) since 1880 (left: 1880-1889) compared to today (right: 2000-2009). Photograph: GISS/NASA
"The planet is out of balance and therefore we can predict with confidence that the next decade is going to be warmer," James Hansen said.
Aside from the US, and South America, most of Europe, Africa, western, southern, and far north-eastern Asia experienced above-average temperatures.
Other parts of the world were unusually cooler than average, including most of Alaska, far western Canada, and central Asia, NOAA said.
Britain also experienced slightly below average temperatures, at 0.2°F below the 1981-2010 average, which was attributed to the cool summer and autumn. Britain also experienced its second wettest year since records began in 1910.
Other records highlighted by NOAA included the extreme drought across the mid-western United States, and other important farming regions including parts of Russia and Ukraine.
The Arctic experienced record low sea ice throughout the year, with sea ice cover dropping to 1.32m square miles, the lowest value ever recorded, in September 2012.
Sources: NASA Global Climate Change news, Jan 15, 2013, "NASA Finds 2012 Sustained Long-Term Climate Warming Trend"; guardian.co.uk, Jan 16, 2013, "2012 among the 10 warmest years on record, figures show".
| <urn:uuid:1d68f414-3956-401f-bdde-bedbd89dccf6> | CC-MAIN-2013-20 | http://www.countercurrents.org/cc170113A.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.926227 | 1,334 | 3.1875 | 3 |
|
Men and women unequal in life expectancy
Eurostat data | Thursday 19 April 2012
Women in all EU member states have a longer life expectancy than men at age 65, while healthy life expectancy at age 65 is higher for men than for women in ten states, according to statistics published by Eurostat on 19 April. The figures were released in connection with the first meeting of the European Joint Action on Healthy Life Years(1), organised as part of the European Year for Active Ageing and Solidarity between Generations 2012.
For the population aged 65, life expectancy was estimated at 21 years for women and 17.4 years for men in the EU27 in 2010 and the number of healthy life years at 8.8 for women and 8.7 for men.
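One way to read these figures is to subtract healthy life years from total life expectancy at 65, which gives the expected remaining years lived with activity limitations. The short sketch below does that arithmetic for the EU27 averages quoted above; the input figures come from this article, while the calculation and rounding are mine.

```python
# EU27 averages for 2010 quoted above (remaining years at age 65)
life_expectancy = {"women": 21.0, "men": 17.4}
healthy_life_years = {"women": 8.8, "men": 8.7}

for sex in ("women", "men"):
    limited = life_expectancy[sex] - healthy_life_years[sex]
    print(f"{sex}: about {limited:.1f} of {life_expectancy[sex]} remaining years expected with activity limitations")
```

On those averages, women's longer lives come with noticeably more years of limitation (about 12.2 versus 8.7 for men), which is the inequality the headline refers to.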
In 2010, the longest life expectancy at age 65 was observed in France (23.4 years), Spain (22.7) and Italy (22.1) for women, and in France (18.9), Spain (18.6) and Greece (18.5) for men. The shortest life expectancy at age 65 was registered in Bulgaria (17), Romania (17.2) and Slovakia (18) for women, and in Latvia (13.3), Lithuania (13.5) and Bulgaria (13.6) for men.
In 2010, the largest number of healthy life years after age 65 was registered in Sweden (15.5), Denmark (12.8), Luxembourg (12.4), Malta (11.9) and the United Kingdom (11.8) for women, and in Sweden (14.1), Malta (12), Denmark (11.8), Ireland (11.1) and the United Kingdom (10.8) for men. The lowest healthy life expectancy for women and men was observed in Slovakia (2.8 and 3.3).

(1) Further information is available at ec.europa.eu/social/main.jsp?langId=fr&catId=88&eventsId=459&furtherEvents=yes
| <urn:uuid:e99ae6f3-d381-4168-8bf3-b3ca88b58d49> | CC-MAIN-2013-20 | http://www.europolitics.info/social/men-and-women-unequal-in-life-expectancy-art332084-26.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.924195 | 421 | 2.671875 | 3 |
|
09.05.2013 | Climate History of the Arctic as a Key to the Future
Analyses of the longest continental sediment core ever collected in the Arctic have provided an almost continuous archive of information on arctic climate dynamics for the period from 3.6 to 2.2 million years ago. It was during this period that a transition took place from the warm Pliocene to the Quaternary, the so-called Ice Age in which we live today, characterized in the polar regions by glacial/interglacial cycles with varying ice coverage.
18.03.2013 | Queen Elizabeth Prize for the Inventors of the Internet – "Nobel Prize for Engineering Sciences"
Outstanding achievements of global significance in engineering science will, for the first time, be awarded today, 18 March 2013. With prize money of one million pounds, the Royal Academy of Engineering this year honors the inventors of the Internet for their revolutionizing accomplishment. With this, the Queen Elizabeth Prize is the most highly endowed award in the field of engineering science worldwide.
13.03.2013 | Extreme water
The earthly, omnipresent compound water has very unusual properties that become particularly evident when subjected to high pressure and high temperatures. In the latest issue of the Proceedings of the National Academy of Science (PNAS), a German-Finnish-French team, including GFZ scientists Dr. Max Wilke, Dr. Christian Schmidt and Dr. Sandro Jahn, published what happens when water is subjected to pressure and temperature conditions such as those found in the deep Earth.
| <urn:uuid:8a7ac341-3b38-4e61-ab86-373d8fdf88af> | CC-MAIN-2013-20 | http://www.gfz-potsdam.de/portal/gfz/Public+Relations/Pressemitteilungen | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00040-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.930809 | 316 | 3.140625 | 3 |
|
Against human dignity: the development of the Transatlantic Slavery Gallery at Merseyside Maritime Museum
Anthony Tibbles, 1996
From 'Proceedings, IXth International Congress of Maritime Museums', edited Adrian Jarvis, Roger Knight and Michael Stammers, 1996.
The decision to create a Transatlantic Slavery gallery in Liverpool
The history of Transatlantic Slavery is intimately bound up with the history of Liverpool, particularly in the 18th century. David Richardson has written
“it is clear that the traffic in enslaved Africans was the corner-stone of Liverpool overseas trade from about 1730 to 1807... the African and related trades may have occupied at least a third and possibly up to a half of Liverpool shipping tonnage before 1807.”
Even after abolition of the trade, Liverpool merchants continued to trade along two of the sides of the triangle - to West Africa primarily for palm oil, and to North America and the Caribbean, mainly for cotton, sugar and tobacco produced by slave labour.
The Merseyside Maritime Museum had covered the history of the port of Liverpool until 1857 in one of its first galleries, opened in 1987. The slave trade was placed in the context of the overall trade of the port and, because of this, its significance was underplayed. We had also hurried the brief and were unaware of recent research. On reflection our treatment was woefully inadequate and, not surprisingly, we were criticised for it - not least in the report by Lord Gifford which looked at race relations in the city. By 1989-90 we were looking at ways of improving this gallery and in particular of fully recognising the importance of Liverpool's role in the slave trade.
It was at this point that the Peter Moores Foundation approached us with the suggestion of creating a separate display about the slave trade. You may think that this is an unlikely source for such an idea. The Peter Moores Foundation is a private charity funded by Peter Moores, until recently a major shareholder in the family’s football pools and retail empire. The proposal to develop some form of display about the slave trade came directly from Peter Moores and I can do no better than quote his own words
“During forty years of work and travel in Europe and America, it became increasingly clear to me that slavery was a taboo subject, both to white and to black people. Forty years ago, most Europeans had managed to suppress any acknowledgement of their connection with the slave trade. In the United States, where it was impossible to ignore the results of the slave trade, there was segregation, later bussing and recently something like integration, but never any mention of how black people came to be in America in the first place. We can come to terms with our past only by accepting it, and in order to be able to accept it we need knowledge of what actually happened. We need to make sense of our history.
It seemed to me that the taboo should be exorcised, and black friends agreed with me.”
After several months of discussion on how we could do it, where we could do it, how much it would cost etc, we came to an agreement whereby the Foundation would make available nearly £550,000 for the development of a 400 square metre gallery devoted to the transatlantic slave trade in the basement of the Merseyside Maritime Museum. The scheme was publicly announced in December 1991 and the development process began.
Advisory committee and guest curators
How did we organise it? Our first task was to establish an advisory committee under the chairmanship of the late Lord Pitt. A former Chairman of the Greater London Council, the British Medical Association and a doyen of the campaign against racial discrimination, he was also a consummate politician - a skill which came to our aid on more than one occasion. As well as representatives of National Museums Liverpool (known as NMGM at the time) and Peter Moores Foundation, we had people from the Black community in this country, including Liverpool, and from abroad. The role of the committee was to advise and guide the project team and to act as a means of communication. They gave valuable advice ranging from organisational issues, such as consultation, procedures for appointments, to the educational aspects and also the overall approach and matters such as the use of illustrative material.
On the academic front we began by hosting a two-day seminar at the museum in January 1992. We invited scholars who had researched and written about the transatlantic slave trade, about slavery and about related issues, including people working in this country and abroad, particularly the United States and Canada. We examined the themes we thought we ought to cover in a series of sessions and asked for advice. It was an invaluable session, though one participant concluded that it was impractical and impolitic to develop such a gallery at the Merseyside Maritime Museum! Others were more optimistic. As a result of the seminar we appointed a group of six people - which later grew to eleven - to help us in the role of guest curators - principally to advise on the story line and the text.
Whilst it was important to have academic and official support, it was also clear from the beginning that our consultations on this gallery had to be much wider - particularly with the Black community and especially with people in Liverpool. We had a difficult public launch for the project and a difficult first meeting with people from the community, which coincided with the first guest curators meeting. The discussions brought out a lot of concerns and some hostility. Why was National Museums Liverpool doing this? What were Peter Moores' motives? What were local Black people going to get out of the project in terms of work or jobs? Was National Museums Liverpool going to make a profit out of this? There was criticism of the composition of the advisory committee and the guest curators’ group. There were also people with entirely different agendas. In general there was suspicion of an institution which was seen to have a poor record of addressing Black issues and Black concerns suddenly undertaking a project so central to the history of Black people.
We were aware of the problems which other institutions had experienced and with the help of the guest curators and the advisory committee began to address some of these concerns. We adopted a mission statement. We took steps to explain our role and the way we saw the gallery developing and crucially the role others could play in that process. We made further appointments to the advisory committee and guest curators’ group - specifically to take account of concerns that not enough women were involved and not enough Africans. Over a period we shared our ideas on the brief and discussed methods of approach and interpretation. We sought advice on what the gallery should be called. We sought advice from individuals, held further meetings, organised a focus group and asked our own visitors about the project. We also issued a couple of newsletters. We did not resolve all the problems and all the concerns but we did listen to what people had to say. It was a challenging experience and the degree of discussion and consultation with individuals and groups outside National Museums Liverpool was quite unlike anything else that we had previously undertaken.
Museum staff involved in the project
On the museum side I acted as the project leader. We also appointed a project curator, Alison Taubman. In the early stages much of her work was linked to making contacts with people whether in museums or in black community groups to get as much information and feedback as possible. A key part of her role was locating objects and illustrative material, and this then extended to organising loans, photography, conservation requirements etc. We had the support of an in-house project group which included other curatorial colleagues, design, education and public relations. The composition of this group varied and various ad hoc and sub groups were also necessary to deal with particular aspects eg the opening. The design of the gallery was undertaken by Ivor Heal Design, a design consultancy with wide museum experience.
Another key appointment was that of Garry Morris as the outreach worker for the gallery. His role was to go out into the community, and in particular the Black community, to stimulate interest in the gallery and to develop activities and programmes that would extend the traditional role of the museum. He began work in November 1993, a year before the opening, and built on the contacts made by the curatorial team. He organised events in the museum and outside such as a workshop on women in slavery and a poetry reading on South Africa’s National Day. He also organised a major performance on the day before the official opening, which included a procession and memorial event for all those who suffered as a result of the slave trade.
Remembering the human stories
The story line was obviously the crucial element and immediately begs the question - what is the approach? Do we see this from a European point of view or an African one? A white or a Black? Is African the same as Black? White European? Unfortunately in a case like this, there is no easy middle way, no obvious compromise. At our first guest curators’ meeting we formulated a mission statement:
“The aim of the gallery is to increase public understanding of the experience of Black people in Britain and the modern world through an examination of the Atlantic slave trade and the African diaspora.”
One thing that was very clear was the different perceptions of the slave trade and what it means to different people. There is a perceptive comment on this matter by Stephen Small, a member of both the advisory committee and the guest curators’ group -
"To most white people, slavery and colonialism are just part of a distant memory of nothing in particular. For whites, slavery did not last particularly long, its benefits accrued only to a tiny proportion of white people and the evils of slavery are overshadowed by the role played by British abolitionists. In any case, the rise of Western nations, Britain, and the United States in particular, as the industrial supremos of the world, is explicable to them simply in terms of English innate genius. Poverty and penury in Africa, and racial inequality in the West, is explained in terms of black inability, incompetence or laziness.
To black people, though, slavery and colonialism reiterate themselves in our everyday lives, and evoke poignant and immediate memories of suffering, brutalisation and terror. For black people, Western nations achieved their industrial growth and economic prosperity on the backs of slaves, abolished slavery primarily for economic reasons, have discriminated against black people ever since, and are unrepentant about any of it. African under-development and racial inequality in the West is understood primarily in terms of racism and racist hostility of whites.”
One of the dangers of the European view is that it is very easy to get obsessed by the mechanics of the trade - the ships, the methods of trading, the numbers, the economics - and thus dehumanise it all. This was one of the principal and sustained criticisms of the initial working title of the gallery - the Atlantic Slave Trade Gallery - and why we agreed to change it. The Afrocentric perspective reminded us very forcibly that this is a story above all about people. We could have begun the display in Liverpool with fitting out a slave ship and followed the triangular route; instead after a brief introduction explaining what the slave trade was and how it came about, the gallery goes straight to Africa and only later picks up on the European involvement - the traders and their ships. This means the visitor is almost immediately plunged into Africa and reinforces the point that the slave trade was about Africans. We have tried to sustain this throughout the gallery and wherever possible make use of personal witness, whether by illustrations, audio or by an interpretative tool of 'inventing' four Africans to introduce at key points throughout the gallery.
Decisions about the content of the gallery
As a gallery in a museum we were very keen that it should be rich in objects and not rely only on illustrations, reconstructions and other interpretative devices - important though those are. So what objects do you use to tell the story of the transatlantic slave trade? Everyone immediately thinks of chains and shackles, the instruments of torture, punishment and restraint, but these are hardly sufficient to provide a full picture. We needed to adopt a more lateral approach to find the items which provided the context and which helped flesh out the story.
This can be illustrated in the first section of the gallery. One of the main intentions was to get across the point that Africa should not be portrayed only as a place where Europeans got 'slaves'. To remind visitors that Africa - and we are talking particularly of West Africa and West Central Africa - had a diversity of states, societies and cultures. That other things went on and that there were influences other than European. There was, therefore, an opportunity to use a range of objects from African cultures to make this point and we are fortunate within National Museums Liverpool that the Liverpool Museum (the former name for World Museum) has substantial African collections. You will, therefore, find a small but crucial group of artefacts which are intended to represent the strength of these cultures.
We were also anxious that we did not use too many European images of Africans but it is not easy to find African material. One of the few examples where Africans depict themselves - and, even more rarely, Europeans - is the famous, or perhaps I should say infamous, Benin Bronzes. These date from the 16th and 17th centuries and are thus exceptionally valuable evidence. Here we not only have depictions of Bini soldiers but also of the Portuguese, bearing manillas. Another emotive plaque shows a European soldier armed with a sword and more importantly a gun. These are invaluable images and a necessary counter-balance to the European visual record.
The inclusion of these plaques is not without its problems. We all know that they were looted in a punitive raid on Benin by the British Navy in 1897. The African Reparations Movement has argued that this is a clear case for restitution of cultural property. In the legal context this is obviously a matter for the British Museum but the ethical case is wider. We have taken the view that whilst these plaques are in this country it is better that they should be on display to the public and we feel that it is particularly appropriate that they should feature in this context and fulfil the purpose that I have described.
There can, of course, be dangers with visual evidence. For instance, almost all three-dimensional material connected with abolition relates to the European humanitarian and moral campaigns to abolish the slave trade and slavery. Although historians recognise the contribution of these campaigns, they also draw attention to other factors. The enslaved themselves played a significant part, through various forms of resistance - revolts on board ship, the large-scale uprisings in the Americas, the passive resistance of go-slows. We have tried to reflect this not only in the text but in the visual impact of the display.
The Middle Passage
The most demanding part of the gallery was how to deal with the Middle Passage. It was a subject we discussed in outline at consultative meetings and in more depth with a focus group. Everyone recognised the centrality of the Middle Passage - it was the one common experience of all Africans who were enslaved and was of profound psychological significance. Views varied. Some wanted us to construct an emotive but authentic hold to walk through with manacled bodies covered in excrement, groans, smells - the full works. At the other end of the spectrum, some advocated an accurate illustrative approach or a model.
After a lot of discussion we agreed certain parameters - a walk-through experience was essential, visitors needed to experience the dislocation, but we did not want something that frightened people (particularly children) and we did not want to sensationalise.
The solution we adopted was to recreate part of the hold of a slave ship that visitors walk through. It is authentic in that it is based on the dimensions of a known Liverpool slaver - the Brooks - give or take a few inches in height. It is dark. There are some atmospheric noises but the principal sound is alternate readings from the log of John Newton, being extracts from his daily entries on voyages made between 1752 and 1754, and readings from the memoirs of Equiano, who made his enforced voyage at about the same time. The matter-of-fact entries by Newton contrast dramatically with the emotional response of Equiano. We also realised that movement was important and we project images representing shackled human beings, but slightly dislocated, moving in the constricted space. We wanted visitors to use their imaginations and hoped to provide them with enough information and experience to do so.
I have to be honest and say that this solution is not a 100% success. But I hasten to add that I don’t think any solution would be perfect. How could it be? Some people do find it a moving and emotional experience; for others the bareness of the interpretation leaves them unmoved. I suspect that visitors’ responses depend on what they bring with them. For some people - particularly Black people who carry with them the collective memory of generations - it has been very emotional: Maya Angelou, who opened the gallery, would not go in there alone. Others, again including Black people, think it is unemotional and have used words like “sanitised.” The limited research we have done does not suggest it is a major failure and I hesitate to tamper when there is no clear direction to go.
There are, of course, other visual stimuli - the dioramas, the models, the interactive elements and videos. We are also growing live sugar cane in the gallery - a first in this sort of situation as far as I am aware. Sound is also important - we have traditional music in the African section, work songs in the sugar cane display and I have mentioned the readings in the ship. At a number of places in the gallery visitors can also pick up soundstiks and hear audio extracts. These allow a voice from the past to speak, as it were, directly to the visitor - Equiano talking about life in Africa; an African chief 'ordering' his goods from Liverpool; Frederick Douglass and Sojourner Truth talking about their experiences of slavery. In these cases we have used authentic extracts and sought to have them read by actors of approximately the right age and with the right accent. All at union rates!!
The importance of the text in the gallery
I feel I should say a few words about the written word - the text. How do you decide what the text says? In our case this was a long and complex business, involving the guest curators, copywriters and ourselves. There was the physical challenge of reducing a complex and difficult story into just a few thousand words, but there were also considerations of language, approach and attitude. We were well aware, for instance, of the problems that the Royal Ontario Museum in Toronto had experienced with one of their exhibitions when people had demonstrated outside the museum.
The final text ran to some 4,500 words and was the result of about 17 separate stages or drafts. I slightly revised and reordered this text to produce a cheap (50p) gallery guide that visitors could take home with them.
What were the problems? To take a simple example from the introductory panel. The initial text produced read:
“Over more than four centuries millions of Africans were shipped westwards across the Atlantic in conditions of unimaginable cruelty.”
Four centuries is not specific; it could be any four centuries. “Africans were shipped” uses the agentless passive construction. Is the cruelty “unimaginable”?
The final version read:
"In the four hundred years between 1500 and 1900, European enslaved millions of Africans. They shipped them across the Atlantic in conditions of great cruelty.”
This has clear dates and clear actions. I know we still have a non-specific 'millions' but there is no easy way around this generalisation. The question of numbers is so fraught with dissension that any number of millions one chooses will be open to serious dispute.
Some wording contains hidden messages or rather certain words and phrases can reinforce attitudes that one does not wish to perpetuate. A case in point is the word 'slaves'. This carries with it all sorts of dehumanising messages. Africans were not slaves to begin with. We have consciously called people African or used their group names in the early sections and avoided phraseology such as 'Where did the slaves come from?' They were people who were enslaved and we have frequently preferred the term 'enslaved Africans'. We have generally reserved the term 'slaves' for the state of slavery in the Americas but have also used 'Blacks' and 'people of African descent'.
And there is the question of generalisation and balance: a problem for all museum displays. Inevitably in telling a story like this in simple clear statements one succumbs. For example, we have portrayed slave masters as cruel, repressive, murderous, exploitative - in a word 'bad'. But what of the 'good' masters? There were such things. Do you include a 'good' example to balance the 'bad' ones? We have not done so. The situation is so unequal that you end up with a balance that is in fact no such thing and the overall message is diluted.
With such complex issues involved it was important that we provided additional ancillary resources for visitors and developed an educational programme, particularly for schools. I have mentioned the gallery guide - priced so that most visitors could afford it - and on the academic front I edited a catalogue detailing the objects in the gallery accompanied by 16 essays by our guest curators. We have shied away from souvenirs but produced postcards and stocked a good range of books on issues raised by the gallery. Teachers’ notes were prepared and a variety of teacher training courses held.
The impact of the Transatlantic Slavery gallery
What has been the reaction to the gallery? Initially it was almost overwhelming. Our visitor numbers more than doubled for several weeks and were maintained at above average levels for twelve months. The gallery is still very popular with our visitors and has drawn a sustained interest from around the country and abroad. We commissioned independent formal evaluation in the spring of 1995 which confirmed the generally very positive comments we had received. For instance, the average rating of the gallery was 8.6 out of 10 and the evaluation concluded
“There was no evidence to suggest that visitors felt any aspects of the exhibition to be inappropriate or in poor taste. In our view, it is unusual for any exhibition to evoke such strong, but appropriate emotions.”
As a final word I think it is worth reflecting for a few moments on the impact of doing the gallery on the museum as a whole - not just the Merseyside Maritime Museum but National Museums Liverpool. People began asking questions: What is National Museums Liverpool’s Equal Opportunities Policy? How do the museums reflect Black issues? How many employees are Black or from minorities? What employment prospects are there for Black people?
The answers to some of these questions showed up serious weaknesses. For instance, we only had a handful of Black and minority employees. However, we were able to begin a limited programme of positive training, again with help from the Peter Moores Foundation, which has provided six and twelve month training placements in several different jobs across the institution.
As a result of a specific request at a community meeting, we have produced a career guide which gives information about the types of jobs and the qualifications and experience required. We brought forward racial awareness training for front-of-house staff. Getting involved with the project immediately raised the status of Equal Opportunities within the organisation. We now have an Equal Opportunities Working Group.
Future developments in our venues
One of the earliest concerns was the long-term commitment of National Museums Liverpool to Black and related issues. People did not want us to think that once we had opened the gallery we had done our bit and could sit back and bask in the glory! We had no intention of doing that but I have to say that in the current financial climate for public institutions it is very difficult. For instance, we have worked up proposals for a second phase of our Museum of Liverpool Life entitled 'Homes and Communities' which will include the Black community’s contribution to the city (please note that this venue has now closed and will be replaced by the Museum of Liverpool in 2010). But we are dependent on sponsorship and a lottery bid to go ahead. We have exciting ideas for developing the African collections at Liverpool Museum (now open as the World Cultures gallery at World Museum) but again need very substantial financial support.
But what of the future of the gallery? Any physical changes will be very limited and expanding its size and coverage is impractical in present circumstances. But we are developing the educational role and the outreach work. We have been successful in securing European money to fund a project which builds on the external elements of the gallery - last week we launched a guiding service for a Black history trail around Liverpool provided by four Black guides trained in conjunction with the Tourist Board. A self-guiding trail, video, small travelling exhibition and handling collection will follow later this year.
So we are still looking forward. The gallery is not the definitive statement on transatlantic slavery; it is not even intended to be the definitive museum display on the subject. But it is a beginning and an important one. It is an acknowledgement of the slave trade and transatlantic slavery and the part they played in the history of Liverpool and this country. Black people have rightly sought that acknowledgement for many years. I hope the gallery will continue to encourage debate and discussion and encourage others to take on similar challenges.
Anthony Tibbles, 1996
'Transatlantic Slavery: Against Human Dignity' by Anthony Tibbles is available to purchase from the online bookshop.
David Richardson 'Liverpool and the English Slave Trade' in Anthony Tibbles 'Transatlantic Slavery', HMSO, London, 1994, p 75
Lord Gifford, Wally Brown and Ruth Bundy 'Loosen The Shackles', Liverpool, 1989
Foreword in Anthony Tibbles 'Transatlantic Slavery', HMSO, London, 1994, p 9
Stephen Small 'The General Legacy of the Atlantic Slave Trade' in Anthony Tibbles 'Transatlantic Slavery', HMSO, London, 1994, p 123
For a fuller discussion of the text of the gallery see Helen Coxall 'Speaking Other Voices' in Eileen Hooper-Greenhill 'Cultural Diversity in Museums and Galleries in Britain', Leicester, 1996
J Cannizzo 'Into the Heart of Africa', Royal Ontario Museum, 1989
LOS ANGELES, California -- Going against the flow is always a challenge, but some waterfall-climbing fish have adapted to their extreme lifestyle by using the same set of muscles for both climbing and eating, according to research published January 4 in the open access journal PLOS ONE by Richard Blob and colleagues from Clemson University.
The Nopili rock-climbing goby is known to inch its way up waterfalls as tall as 100 meters by using a combination of two suckers; one of these is an oral sucker also used for feeding on algae. In this study, the researchers filmed jaw muscle movement in these fish while climbing and eating, and found that the overall movements were similar during both activities. The researchers note that with the current data it is difficult to determine whether feeding movements were adapted for climbing or vice versa, but the similarities are consistent with the idea that these fish have come to use the same muscles to meet two very different needs of their unique lifestyle.
"We found it fascinating that this extreme behavior of these fish, climbing waterfalls with their mouth, might have been coopted through evolution from a more basic behavior like feeding. The first step in testing this was to measure whether the two behaviors really were as similar as they looked" says Blob, lead author on the study.
citation: Evolutionary Novelty versus Exaptation: Oral Kinematics in Feeding versus Climbing in the Waterfall-Climbing Hawaiian Goby Sicyopterus stimpsoni
Do drinking giraffes have headaches?
Charles Darwin wrote in his Origin of Species that he had no difficulty in imagining that a long drought could have caused some hypothetical short-necked ancestors of the giraffe to stretch their necks continually higher to reach the diminishing supply of leaves. He had no fossil evidence, of course, for such an evolutionary history. He also apparently was not aware of certain problems peculiar to giraffes which make his easy assumption of giraffe evolution even more difficult to accept.
The giraffe heart is probably the most powerful in the animal kingdom, because about double normal pressure is required to pump blood up that long neck to the brain. But the brain is a very delicate structure which cannot stand high blood pressure. What happens when the giraffe bends down to take a drink? Does he ‘blow his mind’? Fortunately, three design features have been included in the giraffe to control this and related problems.
In the first place, the giraffe must spread his front legs apart in order to drink comfortably. This lowers the level of the heart somewhat and so reduces the difference in height from the heart to the head of the drinking animal. The result is that excess pressure in the brain is less than it would be if the legs were kept straight.
Second, the giraffe has in his jugular veins a series of one-way check valves which immediately close as the head is lowered, thus preventing blood from flowing back down into the brain.
But what of the blood flow through the carotid artery from the heart to the brain?
A third design feature is the ‘wonder net’, a spongy tissue filled with numerous small blood vessels located near the base of the brain. The arterial blood first flows through this net of vessels before it reaches the brain. It is believed that when the animal stoops to drink, the wonder net in some way controls the blood flow so that the full pressure is not exerted on the brain.
Scientists also believe that probably the cerebrospinal fluid which bathes the brain and spinal column produces a counter-pressure which prevents rupture or leakage from the brain capillaries. The effect is similar to that of a G-suit worn by fighter pilots and astronauts. The G-suit exerts pressure on the body and legs of the wearer under high acceleration and prevents blackout. Leakage from the capillaries in the giraffe’s legs, due to high blood pressure, is also probably prevented by a similar pressure of the tissue fluid outside the cells. In addition, the walls of the giraffe’s arteries are thicker than those in any other mammal.
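To get a rough sense of the scale of pressure these mechanisms have to manage, a simple hydrostatic estimate helps (the height and blood density used here are typical textbook values assumed for illustration, not figures from this article). The extra pressure needed just to lift blood through a vertical column of height h is

$$\Delta P = \rho g h \approx 1050\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2} \times 2\ \mathrm{m} \approx 21\ \mathrm{kPa} \approx 155\ \mathrm{mmHg},$$

which is comparable to the entire mean arterial pressure of a typical mammal (roughly 100 mmHg). A heart pumping to a brain about two metres above it therefore needs something on the order of twice the usual pressure, and when the head drops below the heart that same hydrostatic term is suddenly added to, rather than subtracted from, the pressure reaching the brain, which is exactly the situation the valves, the wonder net and the counter-pressure mechanisms described above are dealing with.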
Had Darwin known all these problems peculiar to giraffes, it surely would have given him a headache.
Some careful investigations and measurements of blood pressure have recently been made in live giraffes in action. However, the exact manner in which these various factors operate to enable the strange creature to live has still not been clearly demonstrated. Nevertheless, the giraffe is a great success. When he has finished his drink he stands up, the check valves open, the effects of the wonder net and the various counter-pressure mechanisms relax, and all is well. Not even a headache!
Let’s say you need a way to make a project wireless, but don’t have the scratch for a ZigBee or its ilk. You could use IR, but that has a limited range and can only work within a line of sight of the receiver. [Camilo] sent in a project (Spanish, translation) to connect two devices via a wireless serial connection. As a small bonus, his wireless setup is cheap enough to create a wireless network of dozens of sensors.
[Camilo] used the TLP434A transmitter/receiver combination to get his wireless project off the ground. These small devices only cost about $5, but being so inexpensive means the hardware designer needs to whip up their own communications protocol.
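As a sketch of what 'whipping up your own protocol' usually means for bare ASK/OOK modules like these, the transmitter can wrap each payload in a preamble, a sync byte, a length byte and a checksum. The C below is illustrative only, not [Camilo]'s actual code; send_byte() stands in for whatever UART or bit-banged routine drives the module's data pin.

```c
/* Minimal framing sketch for a cheap 433/434 MHz ASK transmitter.
 * Illustrative only -- not [Camilo]'s protocol. send_byte() is an
 * assumed, hardware-specific routine that pushes one byte out the TX pin. */
#include <stdint.h>

#define PREAMBLE_BYTE 0xAA   /* 10101010 pattern lets the receiver's data slicer settle */
#define SYNC_BYTE     0x7E   /* marks the start of a real frame */

extern void send_byte(uint8_t b);

/* Simple XOR checksum over the payload. */
static uint8_t frame_checksum(const uint8_t *data, uint8_t len)
{
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++)
        sum ^= data[i];
    return sum;
}

void send_frame(const uint8_t *payload, uint8_t len)
{
    for (uint8_t i = 0; i < 8; i++)   /* preamble: train the receiver */
        send_byte(PREAMBLE_BYTE);
    send_byte(SYNC_BYTE);             /* frame start marker */
    send_byte(len);                   /* so the receiver knows when to stop reading */
    for (uint8_t i = 0; i < len; i++)
        send_byte(payload[i]);
    send_byte(frame_checksum(payload, len)); /* receiver drops frames that fail this */
}
```

The receiver does the inverse: wait for the sync byte, read the length, then verify the checksum before acting on the payload, which is usually enough to reject the noise these bare receivers put out when nobody is transmitting.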
For a microcontroller, [Camilo] chose a Freescale MC9S08QC, a pleasant change from the AVR or PIC we normally see. After making a small board for his transmitter, [Camilo] had a very small remote control, able to send button presses or other data to a remote receiver.
After the break, you can see a short demo video [Camilo] posted of his wireless transmitter turning on an LED attached to his receiver. Unfortunately, this video was filmed with a potato, but all the schematics and code are on his web site for your perusal.
Reducing the ability of certain bacteria to fix carbon dioxide can greatly increase their production of hydrogen gas that can be used as a biofuel. Researchers from the University of Washington, Seattle, report their findings in the current issue of the online journal mBio®.
"Hydrogen gas is a promising transportation fuel that can be used in hydrogen fuel cells to generate an electric current with water as the only waste product," says Caroline Harwood, who conducted the study with James McKinlay. "Phototrophic bacteria, like Rhodopseudomonas palustris obtain energy from light and carbon from organic compounds during anaerobic growth. Cells can naturally produce hydrogen gas biofuel as a way of disposing of excess electrons."
Feeding these bacteria more electron-rich organic compounds, though, does not always produce the logically expected result of increased hydrogen production. Harwood and McKinlay analyzed metabolic functions of R. palustris grown on four different compounds to better understand what other variables might be involved.
One factor involved appears to be the Calvin cycle, a series of biochemical reactions responsible for the process known as carbon dioxide fixation. The Calvin cycle converts carbon dioxide and electrons into organic compounds. Therefore, carbon dioxide fixation and hydrogen production naturally compete for electrons.
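A back-of-the-envelope illustration of that competition, using standard half-reactions rather than figures from the paper: reducing one molecule of carbon dioxide to the level of carbohydrate consumes four electrons, and those same four electrons could instead have been released as two molecules of hydrogen gas,

$$\mathrm{CO_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow (\mathrm{CH_2O}) + \mathrm{H_2O} \qquad \text{versus} \qquad 2\times\left(2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2}\right).$$

In principle, then, every carbon dioxide molecule that is not fixed frees up enough electrons for two molecules of hydrogen.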
When they tested a strain of the bacterium that had been genetically modified to block carbon dioxide fixation, they observed an increased output of hydrogen from all four substrates.
The Calvin cycle was not the only variable affecting hydrogen production that Harwood and McKinlay identified in the paper. They also determined that the metabolic route a growth substrate took on its way to becoming a building block for new cells played a role as well.
"Our work illustrates how an understanding of bacterial metabolism and physiology can be applied to engineer microbes for the production of sustainable biofuels," says Harwood.
mBio® is an open access online journal published by the American Society for Microbiology to make microbiology research broadly accessible. The focus of the journal is on rapid publication of cutting-edge research spanning the entire spectrum of microbiology and related fields. It can be found online at http://mbio.asm.org.
Goal 3: PROTECT THE RIGHTS AND INTERESTS OF THE AMERICAN PEOPLE BY LEGAL REPRESENTATION, ENFORCEMENT OF FEDERAL LAWS AND DEFENSE OF U.S. INTERESTS
The Department of Justice is the nation's chief litigator. We represent the United States Government in court. We enforce federal civil and criminal statutes, including those protecting our civil rights, safeguarding our environment, preserving a competitive market structure, and defending the public fisc against unwarranted claims. Carrying out these responsibilities is the primary task of the U.S. Attorneys (USAs), the Department's litigating divisions, and the Office of the Solicitor General. The USAs serve as the Attorney General's chief law enforcement officer in each of the 94 federal judicial districts, representing the United States in both civil and criminal matters. The litigating divisions are centralized staffs of attorneys with specialized expertise in particular areas of federal law, including civil rights, environmental law, antitrust, tax, civil justice and criminal law. The Office of the Solicitor General represents the interests of the United States before the U.S. Supreme Court and authorizes and monitors the government's activities in the nation's appellate courts. Together, these Justice components ensure that the Federal Government speaks with one voice with respect to the law.
Strategic Objective 3.1 CIVIL RIGHTS - - Uphold the civil rights of all Americans through enforcement of, and education about, federal civil rights laws and protections.
The Department of Justice promotes compliance with basic federal civil rights protections through a multifaceted program of criminal enforcement, civil enforcement, public education and outreach. The nation's civil rights laws influence a broad spectrum of conduct by both individuals and public and private institutions. They prohibit discriminatory conduct in such areas as the administration of justice, housing, employment, education, voting, lending, public accommodations, access to services and facilities, activities that receive federal financial assistance, and the treatment of juvenile and adult detainees and residents of public institutions. They also provide safeguards against criminal actions such as hate crimes, involuntary servitude and slavery and official misconduct.
Recent years have seen growth in the criminal civil rights enforcement area. In 1998, the Department concluded criminal civil rights prosecutions against 2,153 suspects, up 12 percent from 1,916 suspects in 1994. At the same time, the role of the Department has expanded during this period to issues that capture national attention, such as church arsons, clinic bombings, and hate crimes. The Department continues to investigate and prosecute cases involving the violent interference with liberties and rights defined in the Constitution or federal law.
The Department enforces several civil justice statutes designed to protect civil rights, including the Voting Rights Act of 1965 and the National Voter Registration Act. With the new population data available from the 2000 Census, states, counties, cities and school districts across the country will be adjusting their jurisdictional boundaries, i.e., redistricting. In our review of the redistricting plans of specially covered jurisdictions, we ensure that minorities will have a fair opportunity to elect candidates of their choice.
The Department works closely and effectively with the Equal Employment Opportunity Commission (EEOC) to enforce Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act (ADA), as amended. While the EEOC's enforcement efforts are generally focused on addressing discriminatory conduct by private actors, the Department is responsible for litigating charges of employment discrimination lodged against state and local governments where the EEOC, following an investigation, has determined that reasonable cause exists to believe that the charge has merit.
The Fair Housing Act of 1968, the Equal Credit Opportunity Act, and the Civil Rights Act (Title II) prohibit discrimination in housing, consumer credit and public accommodations (restaurants, hotels and motels, places of entertainment, etc.) regardless of race, sex, religion and national origin. Both the Department of Housing and Urban Development (HUD) and the Department of Justice have enforcement responsibilities under the Fair Housing Act. The Department focuses on a variety of pattern and practice situations to stop and/or deter the continuance of any discriminatory conduct or practice.
The ADA extends to people with disabilities the promise of equal access to everyday life. The Department enforces the ADA to make this promise a reality. Enforcement responsibilities cover a broad spectrum of potential actions to encourage individuals and entities to comply with ADA requirements, including removal of physical barriers, provision of auxiliary aids, and elimination of discriminatory policies. The Department also focuses on pattern or practice cases that involve issues of general public importance involving public accommodations and commercial facilities.
The Department enforces in federal court a number of statutes administered by the Department of Education prohibiting discrimination by recipients of federal funds. Additionally, the Department coordinates with the Department of Education with regard to enforcement in federal court of referrals under Title II of the ADA which prohibits discrimination against persons with disabilities by public school officials.
On the civil side, the Department is meeting a growing demand for pattern or practice investigations of major police departments for the presence of police misconduct in the discharge of operational responsibilities. The Department carries out these investigations pursuant to the pattern or practice authority of the Violent Crime Control and Law Enforcement Act of 1994.
Strategies to Achieve the Objective
Investigate and prosecute civil rights crimes.
The Department's most effective strategy to combat violations of individual civil rights is through aggressive investigation and timely criminal prosecutions remedying proven discrimination and punishing guilty actors. The publicity generated by the media resulting from successful prosecutions demonstrates the Department's commitment and ability to prosecute civil rights crimes, thus creating a deterrent for those who might otherwise commit these crimes.
Target key areas or forms of discrimination through pattern or practice litigation to uproot and remedy discriminatory public and private institutional conduct.
Pattern or practice litigation is used to address a wide variety of discrimination problems. For example, in the area of employment and workplace discrimination, such litigation often results in systematic changes to defendants' employment practices and serves as a signal to other employers to review voluntarily their practices to determine compliance. In the "official misconduct" area, pattern or practice investigations have been the catalyst for numerous state and local law enforcement agencies to conduct training and reviews of their own practices and procedures to curtail or prevent police misconduct.
Investigate and prosecute individuals for civil violations of federal laws.
The enforcement of civil violations against individuals is another critical aspect of the Department's civil rights enforcement strategy. The purpose of such prosecutions is to remedy discriminatory conduct and to make whole those who have been victimized.
Educate the American business community and state and local governments regarding federal civil rights laws and requirements.
Non-adversarial interaction to achieve voluntary change through education, outreach, and mediation has been an important strategy toward reducing and deterring civil rights violations. For example, the Civil Rights Division's Technical Assistance Program, mandated under Section 506 of the ADA, provides answers to questions and free publications to businesses, state and local governments, people with disabilities, and the general public.
Key Crosscutting Programs
Generally, the Department's civil rights enforcement and outreach are coordinated with all federal agencies which provide financial assistance, including grant funding to state, local and nonprofit agencies, and with the other federal agencies with civil rights enforcement responsibilities (e.g., the Departments of HUD, Education, Labor, Health and Human Services, and Transportation.) Our coordination includes both longstanding working relationships, such as jointly developing policy guidelines and jointly handling enforcement cases, and more short-term task forces created to address specific problems. Current task forces and agreements include:
Interagency Fair Lending Task Force. The bank regulatory agencies (Federal Reserve Board, Office of Thrift Supervision, Office of Comptroller of the Currency and Federal Deposit Insurance Corporation), HUD, and the Department are members of an interagency fair lending task force which meets regularly to consult on fair lending policy and periodically issues joint policy statements.
Worker Exploitation Task Force (WETF). The WETF brings together the Departments of Labor, State, and Agriculture; the EEOC; and several Justice components to address involuntary servitude and slavery and other violations involving undocumented workers. This comprehensive approach on both civil and criminal bases has enhanced the viability of prosecutions by prompt identification of potential violations as well as by ensuring that the victims are available and prepared as witnesses despite their frequent status as undocumented workers.
National Task Force on Violence Against Health Care Providers. The National Task Force on Violence Against Health Care Providers coordinates the investigation and prosecution of violations of the FACE Act (Freedom of Access to Clinic Entrances Act). The Treasury Department's Bureau of Alcohol, Tobacco and Firearms (ATF) and the FBI provide investigators and the Treasury Department helps to oversee this prosecutorial effort, which is staffed primarily by prosecutors from the Department's Civil Rights Division.
Memorandum of Understanding on Housing Rights. The Department's Civil Rights Division and HUD have a Memorandum of Understanding to ensure that criminal interference with housing rights is addressed through the most effective means. HUD refers all forcible interference reports to the Civil Rights Division which reviews and either pursues or defers back to HUD for further action. This allows those instances of provable criminal violations to be addressed through prosecution and then processed for civil remedies through HUD.
Strategic Objective 3.2 ENVIRONMENT - - Enforce and defend federal environmental laws and programs across our land, including Indian Country, by investigating and litigating environmental and natural resources violations and issues.
The Department enforces government pollution abatement laws and programs; defends against suits challenging environmental statutes, regulatory and permit actions, and decisions by federal agencies; preserves natural resources; and litigates on behalf of Indian tribes and individual Indians. We strive to obtain compliance with environmental statutes, obtain redress of past violations that harm the environment, establish credible deterrents against violations of those statutes, obtain monetary civil penalties for past violations, recoup federal funds spent to abate environmental contamination, and obtain money to restore or replace natural resources damaged through oil spills or the release of hazardous substances into the environment. (26)
Thirty-five years ago, Americans began to realize that we were losing an important part of the United States' heritage - - its natural beauty and resources. Smog blanketed our cities, rivers caught fire, and toxic wastes were being found everywhere, even in playgrounds. Since that time, we have made substantial progress in cleaning up and protecting our environment, but there is much left to do. High concentrations of toxic air pollutants linked with cancer, birth defects and other health problems such as asthma still affect millions in urban areas. Approximately 40 percent of the nation's waters are still not fit for swimming or fishing, and groundwater contamination is threatening our supply of drinking water. Suburban sprawl is gobbling up wetlands and other habitat for wildlife, including endangered species, and exacerbating air quality problems and water shortages. And, there continue to be hundreds of hazardous wastes sites around the country that need to be cleaned up.
A different aspect of the ongoing challenge to protect our environment involves the defense of rules that regulate polluters and place appropriate restrictions on the use of natural resources, such as our forests and other public lands, and ensuring that decisions that will have significant environmental effects receive appropriate review. Such rules and decisions are often attacked in ways that - - were the attacks successful - - would undermine important environmental protections, and, hence, require vigorous defense. Environmentally sensitive lands sometimes also require protection through purchase or condemnation of those lands.
The Department faces a growing caseload in such natural resource areas as: defending U.S. interests in "general stream adjudication" involving thousands of parties and tens of thousands of claims in the Western states; restoring and maintaining federally-managed lands, waters, and renewable resources; bringing suits to reclaim abandoned mine sites; managing endangered species on federal lands (wolves, bison); coordinating land exchanges between the government and private developers to protect environmentally sensitive lands, including habitats for endangered species; ensuring that the government receives appropriate royalties and income due from leasing and mining activities on federally-managed lands and waters; battling the environmental consequences of sprawl around urban areas, particularly habitat degradation; and defending ecosystem management programs.
A related concern is the trust relationship that the United States has with Indians and Indian tribes through numerous treaties, statutes, and Executive Orders. Under these authorities, the government is obligated to perform a number of functions on behalf of these tribes, including litigation by the Department to establish and defend their rights. Among other things, this means developing, investigating and litigating environmental issues that arise on Indian reservations and securing tribal resources, including water rights, land, and treaty-based hunting and fishing rights.
Strategies to Achieve the Objective
Pursue cases against those who violate laws that protect public health, the environment and natural resources.
The Department will work closely with client agencies to develop enforcement strategies specifically targeted to achieve widespread deterrence and encourage effective compliance across whole industry sectors. This approach was particularly effective this past year when the Department achieved a landmark settlement with heavy-duty diesel manufacturers who violated the Clean Air Act by installing software that allowed engines to meet EPA standards during testing but disabled emission control standards during normal highway driving. In the coming years, the Department will focus enforcement on industrial and economic sectors that are major sources of pollution.
The Department will pursue affirmative civil litigation concerning enforcement of EPA statutes and rules which regulate discharges into our Nation's air and water and the storage and disposal of hazardous wastes. We will litigate natural resource damage actions on behalf of federal trustees, including the Departments of Commerce, the Interior and Agriculture, and claims for contribution against private parties for contamination of public lands and recoupment of monies spent to clean up oil spills on behalf of the Coast Guard.
The Department faces a growing workload in a wide variety of natural resource areas including water and watersheds, federally-managed lands and renewable resources, endangered species and sensitive habitats, land acquisition and exchanges, mineral activities, and urban sprawl and habitat degradation. Top departmental priorities include implementing the President's Forest Plan for the Pacific Northwest, restoring salmon runs in the Snake and Columbia River systems, and protecting and restoring the Everglades "river of grass." In addition, the Department will continue to focus on illegal occupancy of federal lands.
We will continue to emphasize the use of Alternative Dispute Resolution (ADR) and other litigation streamlining techniques to achieve faster and more comprehensive resolution of these complex cases in a cost-effective manner.
Defend U.S. interests against suits challenging statutes and agency actions that protect public health, the environment and natural resources.
The Department will focus on defending the largest and most complex Comprehensive Environmental Response, Compensation and Liabilities Act (CERCLA) matters involving hundreds of millions of dollars of claims against the public fisc; defending the Army's $15 billion Chemical Demilitarization Program for destroying the nation's stockpile of chemical weapons in eight domestic sites as mandated by Congress and an International Chemical Weapons Convention; protecting multibillion dollar Army and Department of Energy programs designed to store, transport and destroy hazardous materials, both chemical and nuclear, from complicated legal challenges in multiple emergency proceedings; defending standards for ozone (smog) and particulate matter (soot) which will provide hundreds of millions of Americans (including children and the elderly) with urgently needed health protection; and defending a wide range of programs, including those related to ecosystem management, national monument designations, and protection of roadless areas in national forests.
Develop constructive partnerships with other federal agencies (including especially EPA), state and local governments, community representatives, and international enforcement agencies to maximize environmental compliance.
The Department will work in close coordination with communities and other federal agencies such as HUD to enforce the Residential Lead-Based Paint Hazard Reduction Act, a new law designed to protect children from the hazards of lead paint, which causes IQ deficiencies, reading and learning disabilities, impaired hearing, hyperactivity and behavior problems. The Department will participate in interagency task forces and high visibility international agreements to ensure that trade and investment rules promote environmental protection and do not undermine our domestic regulatory authority. The Department will promote multiagency enforcement of Clean Water Action Plans, including regulating against polluted runoff from livestock and poultry feeding operations which foul rivers and coasts, harm marine life, and pollute the air. The Department will monitor cases for environmental justice concerns and work to ensure that affected communities are consulted as appropriate during settlement negotiations.
Act in accordance with U.S. trust responsibilities to individual Indians and Indian tribes in litigation involving Indian interests.
The United States has established trust relationships with Indians and Indian tribes through numerous treaties, statutes, and Executive Orders. Under these authorities, the government is obligated to perform a number of functions on behalf of these tribes, including litigation by the Department to establish and defend their rights. The work includes development, investigation and litigation of environmental issues that arise on Indian reservations (e.g., recognizing tribal government authority to set standards for air and water quality on Indian reservations much as states currently do under the Clean Air and Clean Water Acts) and pursuing land and water claims on behalf of tribes to resolve centuries old disputes. This approach is critical since many reservations lie in arid portions of the country where competition for water is fierce, and tribal rights to water must be established before reservation lands can be developed. More than 50 million acres of reservation lands and the rights to major water systems in dry western states are at stake. The Department is also charged with protecting tribal regulatory, adjudicatory, and tax jurisdiction, including tribal sovereignty to exercise jurisdiction in domestic relations cases involving tribal members and enforcement of gaming laws and state compacts and establishing and protecting treaty-based hunting and fishing rights, including rights of Indians to hunt and fish free of state regulation on off-reservation lands. In defending litigation against Indian tribes, the Department gives careful consideration to negotiation and the use of dispute resolution techniques to resolve the controversy.
Key Crosscutting Programs
Coordination and Enforcement on Environmental Health Hazards. The Department enforces the federal lead-based paint disclosure rule with HUD and EPA, provides assistance to local and state governments in enforcement of their own hazard control regulations, and supports the President's Task Force on Environmental Health Risks and Safety Risks to Children.
Mississippi River Environmental Quality Coordination and Enforcement. The Department works with other agencies in efforts to improve the environmental quality of the Mississippi River. Multiagency planning sessions and enforcement actions aim at keeping illegal pollution ranging from raw sewage to industrial waste out of the Mississippi River and restoring the river and its surrounding communities.
Enforcing National Ambient Air Quality Standards. The Department partners with the EPA, the Army Corps of Engineers, and the Departments of the Interior and Transportation to defend EPA's National Ambient Air Quality Standards and the CERCLA statute.
Policy Coordination on Ecosystem Management. The Department works closely with client agencies such as EPA and the Departments of the Interior and Agriculture on ecosystem management in an effort to enhance protection of wetlands, forests, public lands, and waterways by considering ecological systems on a broad scale.
Strategic Objective 3.3 ANTITRUST - - Promote competition in the United States economy through enforcement of, improvements to, and education about antitrust laws and principles.
The Department maintains and promotes competitive markets largely by enforcing federal civil and criminal antitrust laws. These laws affect virtually all industries and apply to every phase of business, including manufacturing, transportation, distribution, and marketing. They prohibit a variety of practices that restrain trade, such as mergers likely to reduce the competitive vigor of particular markets, predatory acts designed to maintain or achieve monopoly power, and per se illegal bid rigging. Successful enforcement of these laws - - which both decreases and deters anticompetitive behavior - - saves U.S. consumers billions of dollars, allows them to receive goods and services of the highest quality at the lowest price, and enables U.S. businesses to compete on a level playing field nationally and internationally.
Three key trends are affecting the Department's antitrust efforts: the globalization of trade, rapid technological change, and deregulation. All three have ramifications for the Department's antitrust work and workload.
The value of mergers occurring globally is on the increase, and large, cross-border mergers are no longer an anomaly. In addition, as markets become increasingly global, so do cartels. More of the Department's criminal investigations involve foreign companies than ever before. Whether they require more time to coordinate with foreign antitrust counterparts or more money to translate foreign documents, the Department's increasingly common investigations with international dimensions are significantly more complex than in previous years.
A number of our most important industries have been characterized recently by unprecedented levels of technological change. The accelerated flow of information means the collection and review of evidence has become more laborious. The greater technological sophistication of the marketplace means the methods to constrain competition have become more sophisticated, as well. New industries are created virtually overnight. The Department must stay on top of all these developments to effectively enforce the antitrust laws.
In recent decades, legislative and regulatory changes in the United States have reversed a generation of pervasive government regulation and deregulated such basic industries as telecommunications, energy, financial services, and transportation. Competition, with appropriate reliance upon antitrust laws, has again become the norm. This transition has meant an increased role for antitrust - - both working with various agencies to find ways to replace regulatory constraints with competitive incentives and following up with enforcement of the broader antitrust laws as necessary. Again, the Department is faced with more work that is more complex.
The Department has focused on three strategies to achieve our objective in the antitrust arena. These three strategies are complementary and provide the flexibility (among them all and within each of them) needed to respond to the key trends described above, effectively meet the challenges of today and tomorrow, and safeguard the competition that is the cornerstone of this country's economic foundation.
Strategies to Achieve the Objective
Investigate and litigate business arrangements and practices that encourage anticompetitive behavior and lessen competition.
The Department employs three distinct strategies to decrease and deter anticompetitive business behavior and practices. First is our merger enforcement strategy, which focuses on investigating and litigating instances in which monopoly power is sought, attained, or maintained through anticompetitive conduct, and on seeking injunctive relief against mergers and acquisitions that may tend substantially to lessen competition.
Second is our criminal enforcement strategy. (27) When businesses are found to be actively engaged in price fixing, bid rigging, and other market allocation schemes, the Department conducts criminal investigations and prosecutions. If the Department detects market collusion and successfully prosecutes, the Department may obtain criminal fines and/or injunctive relief.
Finally, our civil non-merger enforcement strategy investigates and prosecutes civil matters to suspend or deter anticompetitive behavior. It picks up, to some degree, where our criminal enforcement strategy leaves off, pursuing matters under Section 1 of the Sherman Act in instances in which the allegedly illegal behavior falls outside bid rigging, price fixing, and market allocation schemes. Other behavior, such as group boycotts or exclusive dealing arrangements, that constitutes "...contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce..." is also illegal under Section 1 of the Sherman Act. The civil non-merger enforcement strategy relies on a civil compulsory process to investigate alleged violations, obtaining civil damages or injunctive relief, as appropriate.
Advance procompetitive national and international laws, regulations and policies.
With a number of activities distinct in form and audience, the Department endeavors to promote competition through further improvement of the competitive landscape at all levels: inter- or intra-governmentally; nationally; and internationally. Departmental resources are devoted to participation in interagency regulatory processes, for example, to ensure that business practices conform with regulatory rules. In addition, Department officials routinely participate in interagency task forces related to competition issues. At the international level, Department membership in bodies such as the World Trade Organization (WTO) provides an opportunity for the promotion of "competition-friendly" policies and practices. In all cases, our goal remains the deterrence of anticompetitive behavior.
Educate businesses, consumers and counterpart agencies about antitrust law to increase their awareness and understanding.
Whether through direct contact and targeted communication with specific audiences, or via the development, publication and distribution of policy guidance, the Department seeks to increase the breadth and depth of awareness of antitrust law. One example of Departmental activity in this area is our Business Review Program, which provides timely information on antitrust law and how it applies under different situations, along with the likely reaction of the Department to a proposed business action or arrangement. Another example is tailored training provided to state antitrust attorneys and investigators. In all instances, by reaching as many individuals, companies, agencies, and other groups as possible, and by providing them with detailed and specific guidance on the law, the Department seeks to promote competitive behavior and deter anticompetitive behavior.
Key Crosscutting Programs
Antitrust Division and FTC Merger Clearance Process. Section 7 of the Clayton Act, as amended, requires certain enterprises that plan to merge or to enter into acquisition transactions to notify the Department's Antitrust Division and the FTC of their intention, and to submit certain information to those authorities. Once pre-merger notification has been made, the Department and the FTC employ a clearance process, based largely on complementary areas of expertise, in order to quickly determine which body will review and/or investigate a particular merger transaction. Following clearance, the transaction is reviewed to determine whether there are any competitive issues at stake. Throughout the clearance process the agencies maintain close communication in order to ensure that competitive concerns are addressed efficiently and effectively and that the process is undertaken without unduly burdening legitimate business interests.
Strategic Objective 3.4 TAX LAWS - - Promote the fair, correct and uniform enforcement of the federal tax laws and the collection of tax debts to protect the federal fisc from unjustified claims.
The Department strives to enforce the federal tax laws consistently and impartially and ensures that taxpayers are treated fairly. Enforcement plays an important role toward ensuring voluntary compliance and in realizing the maximum legal collection of tax revenues. The Internal Revenue Code is the major authorizing statute governing this area of activity. The Department assists the IRS with one of its key strategic objectives, "Increasing Voluntary Compliance." Referred from the IRS, the Department's work of enforcing federal tax laws includes: litigating all federal civil tax cases appealed to the United States courts of appeal and state appellate courts; investigating and prosecuting individuals and corporations for tax evasion; and litigating all civil tax lawsuits filed in federal district courts, bankruptcy courts, the Court of Federal Claims, and state courts. (28)
The Department assists with resolving a wide variety of federal tax issues and civil violations of the Internal Revenue Code through litigation and expert counsel. The federal tax laws and regulations are complicated and, as a nation, we depend upon individuals and corporations to voluntarily comply with the tax code. Given the complexity of the tax code, many disputes arise on the application of the Internal Revenue Code to a specific individual or business. When the disputes are not resolved through IRS administrative processes, they often become lawsuits in federal and state courts. The taxpayer may appeal an unfavorable lower court decision to a higher federal court of appeals or state appellate court. Department trial attorneys litigate these cases both in the lower courts and the appellate courts.
A significant portion of these suits are tax refund claims challenging the IRS's determination of a taxpayer's federal income, employment, excise, and/or estate tax liabilities. Defending federal tax claims and/or the feasibility of reorganization plans in bankruptcy proceedings represents another major portion of civil litigation. The Department's tax litigation docket also includes: enforcement of IRS administrative summonses that seek information essential to determine and collect taxpayers' liabilities; suits to collect taxes and other monies often hidden by fraudulent conveyances, sham entities, and alter egos; suits against IRS and other government officials for torts and constitutional violations allegedly committed in connection with tax collection activities; suits against the IRS brought pursuant to the Freedom of Information and Privacy Acts; and state and local intergovernmental tax immunity suits. The Department also defends the constitutionality of tax statutes and the validity of Treasury Department regulations. Civil enforcement of the tax laws can also arise from the Department's criminal enforcement initiatives. For example, the Department will be required to enforce an increasing number of administrative summonses as the IRS goes forward with its efforts to curb the problem of abusive trusts.
Strategies to Achieve the Objective
Litigate, both defensively and affirmatively, federal civil tax cases filed by and against taxpayers in federal courts.
Defensive litigation by the Department's civil trial attorneys often involves thousands of tax cases pending administratively at the IRS and generates significant revenue for the federal treasury. Defensive litigation also includes Department trial attorneys representing IRS officers against complaints made by taxpayers who allege misconduct by government officials for activities related to tax collection. These lawsuits can cripple morale if employees who have done nothing improper believe that they can be held personally liable for simply doing their jobs. The IRS workforce relies upon the Department for a vigorous defense against spurious lawsuits.
Approximately 10 percent of the Department's civil tax litigation docket involves responses to frivolous tax protest arguments. These resource-intensive cases are essential to keep illegal tax protest activities from further increasing. Honest taxpayers who perceive that individuals engaging in illegal tax protest activities have "gotten away with it" will themselves be discouraged from voluntarily paying their taxes. This litigation saves the Treasury millions of dollars annually.
Also important to the Department's strategy is its affirmative civil litigation program. Litigation activities include seeking judgments to enforce IRS assessments against taxpayers in cases involving fraudulent transfers made by delinquent taxpayers attempting to place their assets out of the reach of the IRS and the enforcement and foreclosure of federal tax liens. The Department is beginning to initiate more affirmative litigation against persons who employ increasingly sophisticated means to unlawfully shield their assets from collection. Affirmative litigation recovers or generates substantial revenues for the Treasury.
As part of their representation of the IRS in the courts, Department civil trial attorneys conduct, in each case, an independent review of the Service's administrative determinations. This review process often results in the Tax Division declining to bring certain affirmative litigation, and in defensive cases may result in some complete concessions, where Department attorneys determine that the IRS's administrative position cannot be legally and/or factually supported. This vital review function promotes the integrity of the federal tax system by ensuring that taxpayers and others involved in trial-level litigation are treated fairly and consistently nationwide. Additionally, Department trial attorneys monitor and review cases that are handled by the IRS and the U.S. Attorneys offices to ensure that the interests of the United States are appropriately represented and that the federal tax laws are enforced uniformly and correctly.
Provide expert counsel and litigation support to defend U.S. interests in federal civil tax cases appealed to federal appeals and state appellate courts.
Department trial attorneys provide expert counsel and litigation support on all federal civil tax cases that are appealed to the United States courts of appeal and state appellate courts. Defending the IRS against a wide variety of taxpayer appeals is critical for ensuring taxpayers are treated fairly as well as ensuring that the federal tax code is applied in a fair and impartial manner. The Department's work also ensures that the federal fisc is protected against unjustified claims. Many of the tax cases appealed involve millions, and in some cases, billions of dollars of potential tax revenue.
Key Crosscutting Programs
Joint Trust Task Force Working Group. Coordinated efforts between the IRS and the Department are necessary to combat abusive trusts, which pose a significant problem for our tax system. In that regard, the Tax Division and the IRS have established a Joint Trust Task Force Working Group to identify in advance, and to propose solutions for, issues which affect criminal and civil actions in this area.
Strategic Objective 3.5 CIVIL LAWS - - Effectively represent the United States in all civil matters for which the Department of Justice has jurisdiction.
The Department, through its Civil Division and the U.S. Attorneys, each year represents some 200 federal agencies in litigation arising from federal contracts or alleged government misconduct. We also defend challenges to the laws, policies, and programs of the United States.
Civil lawsuits involving large monetary claims are a fact of life. Plaintiffs advancing contract claims, allegations of negligence, claims of patent infringement, and the like seek to assign liability to the government in lawsuits where huge sums of money are at risk. The majority of civil suits handled by the Department are defensive. Over the last decade the number of cases involving multibillion dollar stakes has virtually doubled. Moreover, changes in the law have radically expanded the exposure of the United States as an employer and as an insurer of extra-governmental entities to potential liability. That expansion is reflected in case numbers, complexity and dollar amounts. It is the Department's job to ensure that only those claims with merit under the law are paid.
New laws, typically enacted only after a painstaking legislative process, are often attacked in court. Recent litigation challenging the laws and policies of the United States involves some of the most probing issues of our time. Examples include: gun control, pornography on cable television and the Internet, welfare reform, gays in the military, and tobacco regulation. Unlike the majority of civil suits handled by the Department which involve monetary claims, these lawsuits seek remedies that potentially affect vital aspects of our society - - how we respond to violence, poverty, and the emergence of the Information Age.
Other lawsuits take aim at various provisions of our entitlement programs and can profoundly affect federal expenditures. Reforms embodied in the Welfare Reform Act of 1996 and subsequent legislation will continue to generate broad class actions seeking millions of dollars in increased federal aid. It is likely that housing and health care reform legislation in the next few years will also be fertile areas for litigation. It is a near certainty that as the multiyear effort to reform the Social Security Administration's $58 billion disability benefits program reaches the implementation stage during the next few years, numerous and substantial broad-based challenges will be launched.
In a number of situations, through the implementation of specialized tort compensation systems, the Department has improved access to justice for the nation's citizens, leading to more efficient and effective resolution of disputes in the areas of occupational disease and vaccine injury. The National Vaccine Injury Compensation Program (NVICP) created an alternative to traditional product liability and medical malpractice litigation for persons alleging injury from vaccinations. Under the NVICP, individuals meeting the statutory criteria are compensated fairly and quickly, and non-meritorious cases are successfully defended, thereby preserving Program funds for those who are truly entitled to them. Under the Radiation Exposure Compensation Act (RECA), individuals who contracted certain diseases as a result of their exposure to radiation released during nuclear weapons tests or in underground uranium mines have received over $244 million in compensation since the Department's RECA Program began receiving claims in 1992. Through the RECA Program, individuals whose health was put at risk to serve the national security interests of the United States are provided an effective, efficient, non-adversarial forum in which to seek redress.
The Department must respond to a variety of immigration-related suits, mostly dealing with challenges targeting orders of exclusion, detention, and expulsion. Over the course of the past decade, this workload has tripled, coinciding with intensified enforcement efforts and the emergence of new laws. The lion's share of immigration litigation involves individual challenges and class action suits directed against the actions and determinations of INS, immigration judges, and the Board of Immigration Appeals.
While only a minority of immigration cases and matters involves suspected alien terrorists, antiterrorism efforts comprise a growing emphasis of the Department. The Antiterrorism and Effective Death Penalty Act and the Immigration Reform and Immigrant Responsibility Act have significantly expanded the Department's role in the fight against international terrorism. The Civil Division figures prominently in interagency efforts to designate foreign terrorist organizations for purposes of criminal and civil terrorism fund-raising laws, the defense of such designations, and the defense of the fund-raising provisions themselves against constitutional and other attacks. The Civil Division also heads the Alien Terrorist Removal Court litigation unit.
Hundreds of millions of dollars are lost to the U.S. Treasury each year as a result of procurement fraud, health care fraud, loan defaults, and bankruptcies. These losses reduce resources vital to a host of federally-funded programs, including Medicare. Efforts to recoup money owed to the United States have yielded huge collections in the past decade - - over $11 billion. Further, criminal prosecutions have resulted in court-ordered criminal restitution and fines, the collection of which is the responsibility of the Department of Justice. Today's docket includes a number of matters that are massive with respect to potential recoveries, the size of evidentiary collections, and the complexity of issues that underscore the government's case. As our adversaries enlist the help of top law firms and consultants, substantial government resources are required to achieve favorable settlements and judgments on behalf of the United States and victims of crime.
Finally, violations of the Food, Drug and Cosmetic Act, the Consumer Product Safety Act, and the Federal Trade Commission Act pose threats to the health and safety of millions of Americans. When such violations involve major patterns of fraud, illegal conduct, unfair credit and marketing practices, the Department pursues civil and criminal actions to stop and deter such activity. The emergence of the Internet has provided a new and extraordinarily powerful medium for marketing products and services. Contributors to the Internet have enjoyed a virtually free rein on marketing approaches. While this "open" approach has provided the public with an explosion of information, it has also created the means for large-scale fraud, deception, and criminal practices.
Strategies to Achieve the Objective
Assert the interests of the U.S. Treasury, prevailing against unwarranted monetary claims while resolving fairly those claims with merit.
Hundreds of millions of dollars are saved annually as a result of the Department's successes in defending national interests in major defensive lawsuits against unwarranted monetary claims on the public fisc. Such defensive litigation requires the diligence of Department staff who fight for and guard the financial interests of the United States at trial, at the settlement table, and at the highest levels of judicial review, asserting the government's interest in major disputes as they proceed through appellate stages.
Defend the laws, programs, and policies of the United States when challenged in court, including those which affect how sizeable portions of the federal budget are spent.
Defending the many and varied laws, programs and policies of the United States is a critical part of the Department's role in maintaining civil law and order. Many of these civil lawsuits threaten or affect our national security, public safety or social and moral codes.
Implement civil justice reform initiatives to resolve classes of claims for which traditional litigation has proven ineffective.
The Department must defend against thousands of plaintiff claims alleging government neglect or wrongful conduct. Such suits usually involve massive discovery requirements, protracted trial schedules, arcane subject matter and substantial damages at stake. When such traditional litigation has proven ineffective, Congress has created specialized programs (e.g., the National Childhood Vaccine Injury Act and the Radiation Exposure Compensation Act). When appropriate, the Department must continue to evaluate cases to determine whether they will benefit from use of ADR and, if necessary, engage in such processes to expedite case resolution and/or reduce costs.
Uphold the intent of Congress and the collective efforts of the immigration agencies by defending immigration laws and policies against both class action suits and individual challenges to immigration judgments.
The Department's heightened emphasis on immigration enforcement portends a rise in the related immigration caseload. This litigation ranges from individual challenges to federal enforcement actions to class action suits directed against federal immigration agencies, e.g., denial of visas and passports, political asylum, and administrative judgments on alien removal.
Recover monies owed to the United States and victims as a result of fraud, loan default, and bankruptcy.
The Department protects the public fisc through a variety of affirmative litigation to fight fraud, loan default and bankruptcy, focusing on matters involving widespread fraud and the potential for substantial recoveries. We investigate allegations brought forth by "whistle-blowers" and, where appropriate, pursue recoveries and civil penalties available under the False Claims Act, as amended. The Department emphasizes health care fraud enforcement, through collaborative efforts with other federal and state agencies to recover the billions of dollars lost from Medicare and other federally-funded programs. The Department actively pursues collection of federal and non-federal restitution and criminal fines.
Enforce consumer protection laws by seeking civil and criminal penalties available under existing statutes.
The existence of the Internet has placed new demands on law enforcement regarding the identification, investigation, and pursuit of consumer fraud. In particular, the relatively new phenomenon of Internet pharmacies - - which often dispense powerful prescription drugs without a valid prescription from a doctor - - poses a significant danger to consumers. To fight such trends, the Department will concentrate its activity on matters involving consumer law violations which pose the greatest potential threat to the public.
Key Crosscutting Programs
Civil Cases Involving National Childhood Vaccine Injury Act. The Civil Division will continue to work closely with HHS and the U.S. Court of Federal Claims in handling cases filed under the National Childhood Vaccine Injury Act. Managers at the respective agencies coordinate matters of policy, budget, case processing, and strategy. At the trial level, medical staff at HHS assist the Department in developing medical evidence and providing expert witness support. In conjunction with the Office of Special Masters at the U.S. Court of Federal Claims, HHS and the Department have strived to ensure just decisions in the thousands of cases filed since the inception of the program in 1988.
Coordination with the Department of State in Removing Aliens Posing National Security Risks. In resolving sensitive litigation involving aliens who pose a risk to national security (e.g., terrorists), the Department works closely with the State Department in efforts to remove such aliens to countries other than the alien's country of origin when that country is likely to torture or persecute the alien. Several Department components and the State Department have engaged in ongoing discussions regarding the application of the U.N. Convention on Torture, a treaty which can be expected to surface in many alien terrorist and criminal alien removal cases. The Department also reviews and assists in the production of sensitive documents in coordination with the Central Intelligence Agency, the State Department, and other members of the Intelligence Community.
The Department does not face any mission-critical management problems or challenges which would significantly hinder the Department from achieving this strategic goal.
FY 2000 -- 2005 Strategic Plan
U.S. Department of Justice
"Science of the Summer Olympics," the fourth and latest installment in the "Science of Sports" franchise, explores the science, engineering and technology that are helping athletes maximize their performance at the 2012 London Games.
How does swimmer Missy Franklin use the principles of fluid dynamics to move more quickly through water? What are the unique biomechanics that have helped make sprinter Usain Bolt the world’s fastest human? What does weightlifter Sarah Robles have in common with a high-tech robot? How do engineers build faster pools, stronger safety helmets, and specialized wheelchairs for disabled athletes? Explore these and many other engineering and technology concepts in this free 10-part educational video series.
"Science of the Summer Olympics: Engineering in Sports" is a partnership with NBC Learn, NBC Sports and NSF's Directorate for Engineering. The National Science Teachers Association (NSTA) will provide free lesson plans for each video.
U.S. swimmer Missy Franklin is one of the top medal contenders at the 2012 Summer Olympics. Just as engineers design planes and boats to be more aerodynamic, Franklin will need to master the basic principles of fluid dynamics in order to be the fastest swimmer in the pool.
View video (4:59)
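The fluid-dynamics idea behind this segment can be sketched with a quick calculation. The quadratic drag law below is standard, but the drag coefficient and frontal area are illustrative assumptions, not measured values for Franklin or any other swimmer.

```python
# Rough sketch of quadratic drag on a swimmer: F = 0.5 * rho * Cd * A * v^2
# Cd and A below are assumed, illustrative values, not measurements.

RHO_WATER = 1000.0   # density of fresh water, kg/m^3
CD = 0.30            # assumed drag coefficient for a streamlined swimmer
AREA = 0.07          # assumed effective frontal area, m^2

def drag_force(speed: float) -> float:
    """Drag force in newtons at a given speed (m/s)."""
    return 0.5 * RHO_WATER * CD * AREA * speed ** 2

def drag_power(speed: float) -> float:
    """Power in watts needed just to overcome drag at constant speed."""
    return drag_force(speed) * speed

for v in (1.5, 1.8, 2.0):   # speeds in m/s, roughly the racing range
    print(f"{v:.1f} m/s: drag {drag_force(v):5.1f} N, power {drag_power(v):5.1f} W")
```

Because drag grows with the square of speed and the power needed to overcome it with the cube, even small reductions in drag coefficient or frontal area pay off disproportionately at racing speeds.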
Many runners suffer injuries to their joints due to the repeated impact of their feet hitting the ground. U.S. runner Jenny Simpson relies on new treadmill technology to help rehabilitate from a stress fracture as she trains for the 2012 Summer Olympics.
View video (4:17)
For many athletes at the 2012 Summer Olympics, safety helmets will be an essential part of their athletic gear. Nikhil Gupta, a mechanical engineer at New York University's Polytechnic Institute, explains how safety helmets are designed, constructed and tested.
View video (5:35)
At the 2012 Summer Paralympics, elite athletes with disabilities will rely on strength, speed and skill as they go for the gold in 21 different sporting events. Rory Cooper, a biomechanical engineer at the University of Pittsburgh, demonstrates how engineering can help wheelchair athletes maximize their performance in such diverse sports as wheelchair rugby, basketball and racing.
View video (5:16)
Along with hosting the top swimmers from around the world, the London Aquatics Center at the 2012 Summer Olympics will feature one of the most technologically advanced pools ever built. Through advances in pool design, engineers are helping swimmers reach their maximum speed with technology designed to minimize waves.
View video (4:43)
Jamaican sprinter Usain Bolt holds the World and Olympic records for the fastest time in the 100-meter sprint. Bolt's stride, strength, and muscle coordination make him not just a biomechanical marvel, but also a gold medal favorite at the 2012 Summer Olympics.
View video (5:23)
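The segment itself gives no numbers, but a back-of-the-envelope calculation shows the scale involved. The 9.58-second time below is Bolt's widely reported 100-meter world record, quoted here only for illustration; it does not appear in the text above.

```python
# Simple sprint arithmetic; 9.58 s is Bolt's widely reported 100 m world
# record, used purely as an illustrative figure.

distance_m = 100.0
time_s = 9.58

avg_speed_m_s = distance_m / time_s      # about 10.4 m/s
avg_speed_km_h = avg_speed_m_s * 3.6     # about 37.6 km/h

print(f"Average speed: {avg_speed_m_s:.2f} m/s ({avg_speed_km_h:.1f} km/h)")
# Peak speed mid-race is higher still, since the first 30 m or so are spent accelerating.
```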
U.S. weightlifter Sarah Robles will rely on an athletic mix of strength, speed and timing to help create explosive power when she competes at the 2012 Summer Olympics. Robotics engineer Brian Zenowich compares Robles’ movements to those made by the WAM Arm, one of the world’s most advanced robotic arms.
View video (5:34)
South African sprinter Oscar Pistorius is the first double-amputee athlete to compete at the Olympics. At the 2012 Summer Olympics, Pistorius will compete in the 400 meters and the 4x400 meter relay using a pair of carbon fiber prosthetic legs engineered to store and release energy from the impact of his strides. "Science of the Summer Olympics" is a 10-part video series produced in partnership with the National Science Foundation.
View video (4:12)
The long jump is one of the most technically challenging events in the decathlon, a track and field competition consisting of 10 events held over two days. In order to maximize his performance, 2008 Olympic gold medalist Bryan Clay teamed up with engineers from BMW to improve measurement of the horizontal and vertical velocities of his long jumps.
View video (5:37)
Timing is everything, especially at the 2012 Summer Olympics where even a millisecond could mean the difference between victory and defeat. Linda Milor, an electrical engineer at Georgia Institute of Technology, explains why Olympic timekeeping technology must be able to measure an athlete's performance with both accuracy and precision.
View video (5:34)
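One way to see why both accuracy and precision matter is to look at what happens when raw finishing times are truncated to the resolution of the timing system. The 0.01-second resolution and the finishing times below are assumptions chosen for illustration, not figures from any actual event or timing rulebook.

```python
# Illustration of timing resolution: two finishers separated by less than the
# system's resolution receive the same official time.
# The 0.01 s resolution and the raw times are assumed, illustrative values.

def official_time(raw_seconds: float, resolution: float = 0.01) -> float:
    """Truncate a raw time to the timing system's resolution."""
    ticks = int(raw_seconds / resolution)   # drop anything finer than one tick
    return ticks * resolution

finisher_a = 47.5041   # hypothetical raw times in seconds
finisher_b = 47.5098

print(f"{official_time(finisher_a):.2f}  {official_time(finisher_b):.2f}")  # 47.50  47.50 -> dead heat
# A more precise clock could separate them, but only if touch pads, starting
# equipment, and venue construction are accurate enough to make the extra digits meaningful.
```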
Any opinions, findings, conclusions or recommendations presented in this material are only those of the presenter, grantee/researcher, author, or agency employee, and do not necessarily reflect the views of the National Science Foundation.
Water problems in many parts of the world are chronic and, without a crackdown on waste, will worsen as demand for food rises and climate change intensifies, the UN warned on Sunday.
In a massive report issued on the eve of a six-day gathering on world water issues, the United Nations said many daunting challenges lie ahead.
They include providing clean water and sanitation to the poor, feeding a world population set to rise from seven billion to nine billion by 2050 and coping with the impact of global warming.
“Pressures on freshwater are rising, from the expanding needs of agriculture, food production and energy consumption to pollution and the weaknesses of water management,” UN Secretary General Ban Ki-moon said in the report.
“Climate change is a real and growing threat. Without good planning and adaptation, hundreds of millions of people are at risk of hunger, disease, energy shortages and poverty.”
The World Water Development Report is issued every three years to coincide with the World Water Forum, opening in this southern French city on Monday.
Written by experts in hydrology, economics and social issues under the aegis of UNESCO, it aims to be the world’s reference manual for water.
The document, the fourth in the series, made these points:
– Population growth and a shift to more meat-intensive diets will drive up demand for food by some 70 percent by 2050. Using current methods, this will lead to a nearly 20 percent increase in global agricultural water consumption.
Farming today accounts for around 70 percent of water use, ranging from 44 percent in rich countries to more than 90 percent in least developed economies.
– Abstraction of water from aquifers has at least tripled in the past 50 years, and groundwater now supplies nearly half of all drinking water. “In some hotspots, the availability of non-renewable groundwater resources has reached critical limits,” says the report.
An aquifer is an underground layer of water-bearing rock or soil.
The report calls for an overhaul in water management and a massive effort to curb waste. Better irrigation systems, less thirsty crops and the use of “grey,” meaning used, water to flush toilets are among the options.
– The bill for coping with climate-induced water problems will be between 13.7 billion and 19.2 billion dollars annually between 2020 and 2050. This is based on the assumption that UN climate talks will limit global warming to two degrees Celsius (3.6 degrees Fahrenheit).
“The current areas with water stress will be suffering more,” said Olcay Unver, who coordinated the report, pointing as examples to the Middle East, South Asia and the southwestern United States.
– About 2.5 billion people have no access to decent sanitation, a figure meaning that a key Millennium Development Goal for 2015 is likely to be missed. In contrast, UN estimates last week said a goal for improving access to clean water would be met.
The report places the spotlight on competition for water between cities, farmers and ecosystems, and between countries as well. An estimated 148 states have international water basins within their territory and 21 countries lie entirely within them.
Even so, there seems to be no major risk of water wars, Unver told journalists in Paris last week. “Countries have shown greater success in cooperating over water resources than fighting over them.”
Emerging as a worrying phenomenon is the acquisition of farmland in Africa by western economies, Middle Eastern states and the emerging giants China and India to provide food or biofuels.
The risk is of simply transferring a wasteful water “footprint” elsewhere, possibly at the expense of a local ecosystem.
“The amount of water required for biofuel plantation could be particularly devastating to regions such as West Africa, where water is already scarce,” says the report.
Seattle: Booms and Busts
This piece by Emmett Shear was licensed to Wikipedia under the GFDL some time ago. We used it as a starting point for the English-language Wikipedia articles on the History of Seattle. At that time it was hosted on the site of Yale University; it has since been taken down. Unfortunately, Yale does not allow the Internet Archive to archive their site, and the piece was in RTF format. Emmett has been kind enough to pass us a new copy.
History of Seattle - Emmett Shear - Yale University
Climate and Geography
The Emerald City is located along the Puget Sound, in between two large mountain ranges, the Olympics and the Cascades. The climate is mild, with the temperature moderated by the sea and protected from winds and storms by the mountains. The area is hilly, though it flattens out as one moves out from the center of the city. The rain the city is famous for is actually unremarkable; at 35 inches of precipitation a year, it’s less than most major eastern seaboard cities.
What makes it seem so wet in Seattle is the cloudiness, which, except in summer, lasts most of the year, and the fact that most of that precipitation falls as light rain, not snow or heavy storms. There are two large lakes, Lake Washington and Lake Union, and many smaller ones. The rivers, forests, lakes, and fields were once rich enough to support one of the world’s few sedentary hunter-gatherer societies. Opportunities for sailing, skiing, bicycling, camping, and hiking are close by and accessible almost all of the year.
Traveling through Seattle, it’s hard to find an area that has nothing to recommend it. At the top of every hill there is a view of a lake or the ocean, and at the bottom of every hill is a shore. There is no definable nice part of town; though there are certainly relatively wealthy neighborhoods, they are small and interspersed with less well off ones. Though there are poor neighborhoods, there are few slums. The predominant building material is wood, and has been since Native Americans lived in long houses.
Cycles of Seattle
Seattle has had two essential types of periods throughout its history: booms as a company town, followed by quiescence as the industry subsides and infrastructure is rebuilt. Seattle has been successful when the period as a company town has been weakest and when the quiet period has seen thoughtful investment for the future. Seattle has almost been sent into permanent decline by its worst periods as a company town. There have been four such cycles: the lumber industry followed by an Olmsted-built park system, shipbuilding followed by the unused Bogue Plan, Boeing followed by infrastructure building, and most recently a boom with Microsoft and other software companies, which Seattle is just leaving now.
Early History: 1850-1900
Seattle’s history starts with Arthur Denny, the entrepreneur who, with a few other characters, would come to define the nature of Seattle until they passed away around the turn of the century. Arthur Denny came west seeking his fortune, and he wasn’t going to let anything or anyone get in his way. When he first arrived in Washington, he found that the spot he felt was the best land on which to start a city, Alki Point, was already staked out by another group of enterprising entrepreneurs, led by Charlie Terry. Arthur stayed at Alki for about a year, while he looked to find somewhere else to settle. The industry that they ran at Alki was timber, supplying lumber to rebuild San Francisco, which kept burning down every year or so. After about a year, Arthur wound up in the second best spot on the sound, Seattle, where there would be plenty of trees to build San Francisco and plenty of hills to slide them down to the water with.
At first, Alki was larger than Seattle. “It was platted into six blocks of eight lots…and most of them had buildings on them that were in use. There weren’t eight level, usable blocks in all of Seattle.” But when Henry Yesler brought his steam sawmill to Washington, the mill that would let whichever town got it dominate the lumber industry, he brought it to Seattle. He brought it there because there was a critical flaw with Alki as a port: “During the winter, the north wind, building up the tides in front of it, comes sweeping down the Sound out of Canada, piling mighty waves on Alki Point. Beginning with Terry…nobody has been able to build anything out in the water at Alki that will withstand those waves.” Terry sold out of Alki, Yesler put his mill in Seattle, and that was the end of Alki and the beginning of Seattle’s dominance of the Pacific Northwest.
After Terry sold out in Alki, he immediately moved in on Seattle and began acquiring land. He either owned or partially owned the first ships that moved the lumber and allowed Seattle’s main industry to exist. He gave a land grant for the University of Washington that today supplies more than $1 million a year. He worked in politics to establish street grades, a water system, and a host of other services (that incidentally benefited him as one of the city’s largest landholders).
Henry Yesler brought with him “financial backing from a Massillon capitalist, John E. McLain, to start a steam sawmill once he had isolated the perfect location for such a structure.” Yesler settled on Seattle because it had a good port, plenty of accessible timber, and there was plenty of available land for a growing city. Via the bargaining power of his mill, he wrangled about 20,220 square feet of prime land from some of the original settlers. Then he built the mill and made Seattle the premier city of the northwest. But what really made Yesler rich was his cookhouse. “It did more to ‘set’ the heart of the city in the middle of Yesler’s property holdings than anything else Henry did. Henry never did make a lot of money out of his mill. It was the strategic location of his land that made him a millionaire.” Incidentally, Henry Yesler was a very good developer, at least when it came to making money; he borrowed $30,000 at 8% interest to build the mill, and only repaid McLain after he took Yesler to court three times.
Arthur Denny had not been quiescent this whole time either. He was the second richest man in town, after Yesler, and got himself elected to the territorial legislature. From that position, he attempted to get the state capital moved to Seattle from its then temporary location in Olympia. The other potential federal money prizes were the state penitentiary or the University of Washington. When the politics all played out, Vancouver wound up with the capital, Port Townsend with the penitentiary, and Seattle with the University. The legislature had tacked on a requirement that 10 acres of land be donated before the university could be built, which they thought would be sufficient to prevent its construction. However, Denny wanted his town to grow and donated the land, creating what would be “one of the biggest and most effective central core properties in the United States.” The University of Washington was built, although there were only barely enough students to run it as a high school, let alone as a university.
The population relative to the largest competing city, Tacoma, clearly shows the nature of Seattle’s growth. Though both Seattle and Tacoma grew at a rapid rate from 1880 to 1890, Seattle’s growth continued for another two decades while Tacoma’s dropped to almost zero. The reason for this lies in Tacoma’s nature as a company town and Seattle’s successful avoidance of that condition.
Both Seattle and Tacoma were essentially lumber towns, built on the resulting export income. All over the Puget Sound there are communities with the same assets Seattle started with, lumber and a port. However, Seattle’s early lead with Yesler’s mill meant that its economy was based on manufacturing as well as lumber, and was thus far more diversified than Tacoma’s. Though Tacoma got the Northern Pacific Railroad terminus, the terminus only increased the lumber trade instead of diversifying the economy. Seattle built its own railroad to Walla Walla, which would come to reinforce Seattle’s place as a hub for the region. While both Seattle and Tacoma experienced huge booms from 1880 to 1890 based on the strength of their timber industries, only Seattle could continue growing as an exporter of services and manufacturing into the 1900s.
Leader of the Northwest: 1900 – 1915
When the gold rush of 1897 happened, Seattle was well positioned to take advantage of it. As the largest city and port in the area, it was natural that prospectors would head to Seattle to get outfitted. The downtown of Seattle was bustling with activity; as quickly as previous inhabitants moved out to newly created neighborhoods, new immigrants came in to take their place in the city core. The first influxes of immigrants after the Chinese who built the railroads began to enter: Japanese, Filipinos, and Jews, as well as more whites from back east.
Most of Seattle’s neighborhoods got their start around this time. “The most densely populated neighborhoods were on either side, to the north or the south, because it was easier to build parallel to the water, on grades less steep than those facing any development to the east.” However, the new rich soon developed the land on First Hill that overlooks downtown “because it was close to downtown without being a part of it, and because it occupied a commanding position.” Downtown, the easily developed properties along the water, and First Hill formed the nucleus for the city.
After the obvious geographical expansion from downtown, “all the other neighborhoods coming into existence…were the result of streetcar lines moving north and east from downtown and providing opportunities for settling that were obviously attractive to all but the poorest.” Several lines, running to most of central Seattle’s modern neighborhoods, created the communities of Capitol Hill, Queen Anne, Madrona Beach, Madison Park, and Leschi. All of the expansion was happening without zoning, leading to “different land uses and economic classes everywhere [being] mixed.”
At the same time as the city was expanding dramatically, the city planners began to put in parks. “Four million dollars worth of bonds were sold between 1905 and 1912 to develop the parks and build the boulevards designed by the Olmsteds to connect them.” Almost all of Seattle’s current parks were constructed during this period: Woodland Park (now the zoo), Volunteer Park, Green Lake, Washington Park (now the University of Washington Arboretum), Ravenna Park, Leschi Park, and Bailey Peninsula (now Seward Park). The Olmsted plan for boulevards was carried out in full, excepting a few minor pieces that were built in some substitute form or another. The form of the plan was “a winding parkway of about twenty miles which would link most of the existing and planned parks and greenbelts within the city limits”.
There was and still is no main park or particular area of Seattle that stands out above the rest. The whole of the city is filled with small parks, hills, and lakes, and this makes Seattle a very pleasant place to live in and visit.
World War One and the Bogue Plan: 1914 - 1920
In 1910, Seattle voters approved a referendum to create a plan for developing the whole city. The result was the Bogue plan. Virgil Bogue had worked for Olmsted, and was intimately familiar with the land in Seattle. The Bogue plan had at its heart a grand civic center, connected to the rest of the city by a rapid transit rail system, with a huge expansion of the park system crowned by the total conversion of 4,000-acre Mercer Island into parkland. Striking in Bogue’s plan is his grasp of the consequences of growth; he foresaw that the city’s residents would eventually number in the millions and that such a grand park or efficient transit system could be put in place early in the city’s development at much lower cost.
Unfortunately, the nature of politics of the time had the conservatives in the majority, and the money to fund this grand scheme was never appropriated. The Bogue plan sat on the shelf, never to be used. Ultimately a large number of the sites proposed for parks were developed as such, either by the public sector or the private as golf courses and such. The rail system was never emplaced, and Mercer Island is now full of pricey houses.
At the same time the government stopped investing for the future, private enterprise also began to stagnate. The war hid this, because it “boomed and expanded Seattle’s economy phenomenally, but in false ways.” The growth in GDP was unmatched, nearly increasing tenfold. However, it was almost all in wartime shipbuilding and lumber, and there was very little growth in “new industries”, the ones that were previously unestablished.
The Wait from Boats to Airplanes: 1919 - 1939
When the war ended, so did Seattle’s prosperity. Economic output crashed as the government stopped buying boats, and there were no new industries to pick up the slack. Seattle stopped being a place of explosive growth and opportunity. Of course, this was during the great depression, so times were rough all over the country, but Seattle was hit particularly hard because the manufacturing value-added industries had been crowded out by the war.
The Seattle between the wars was probably a pretty nice place to live, especially to grow up in. The city was still full of single-family wood houses and parks from the Olmsted development, but because of the crash they were affordable – at least to those who still had jobs. “[Seattle between the wars] is what suburbs try to be, but never achieve because they cannot stand things so jammed together, all for a family whose income could be well under two thousand dollars a year.” Seattle settled down into a kind of stasis between the wars, as growth subsided while those who lived in the city stayed.
WWII and the Boeing Era: 1945 - 1971
The airplane company grew out of the fortune of William Boeing’s boat company and out of his fascination with airplanes and flying. In 1917, before the wartime orders came in, Boeing employed only 28 people. But when the WWI orders started coming in, Boeing grew to “an enterprising firm with the one customer airplane builders had in those days, the federal government. Employing about four thousand people, with sales just under ten million dollars a year, it was a good if unspectacular business for Seattle.”
Though the company struggled throughout the period between the wars, and “began to build dressers, counters and furniture for a corset company and a confectioner's shop, as well as flat-bottomed boats called sea sleds”, when WWII started, the government suddenly desired tens of thousands of planes a year, and Boeing was in a position to provide them. Working under fixed-fee contract, Boeing churned out airplanes and became by far the largest employer in Seattle.
Unfortunately, Boeing did not spawn spin-off industries; only 5% of the subcontracted work was in the Puget Sound. Boeing was by intention a place where engineers designed the planes and line workers assembled parts that were imported from all over the world. Ostensibly, this would reduce the dependency of Seattle’s economy on the fortunes of the airline business. The problem was that Seattle was still dependent on the airline business, without enjoying any of the spin-off industries that might have diversified the economy. When the war ended, “The military canceled its bomber orders; Boeing factories shut down and 70,000 people lost their jobs.” So during this whole period, there was not much new development in the city. While the war was on, almost all production went towards producing either Boeing factories or Boeing planes. After the war, the crash ensured that no one would have the money for much new development.
This period of stasis soon ended with the rise of the jet airplane and Boeing’s reincarnation as the world's leading producer of commercial passenger planes. With the 707-120, Seattle became Boeing’s company town; “in 1947 Boeing employed about one out of every five of King County’s manufacturing workers, in 1957 about every other one.” As Boeing boomed, so did Seattle. During the war, from 1940 to 1950, the population increased 99,289 or 27% from 368,302 to 467,591. From 1950 to 1960, the population increased 89,496 or 20% to 557,087. All of those people had to live somewhere, and the fifties saw a huge boom in housing development. Population density all over Seattle exploded as people filled the boundaries of settlement in the city and began to move north. Most of the development was in single-family houses, since land was plentiful.
At the same time, the freeways were being built to compensate for all this new growth. The communities of “Mercer Island, Bryn Mawr, Newport, Bellevue, Clyde Hill, Hunt’s Point, Medina, Juanita, Kenmore, Lake Forest Park, Lake Hills” had all come into being during the Boeing boom. I-5 cut the city in half on a north-south axis, while I-90 crossed east-west. I-5 in Seattle went straight through the downtown, neatly cutting it off from the rest of the city. I-90 is less disruptive, since it tends to skirt the water and avoid slicing the city into a north half and south half. Freeway Park over I-5 was eventually built in 1976, which to some degree bridged the gap between the east and west sides, but generally did not have enough people on it to really do much good.
With all this postwar growth, there was growing pollution of the lakes and rivers that made Seattle the beautiful place it is. Also, despite the freeways the sprawl constantly demanded more roads, since the ones already built had terrible traffic. A group of Seattle natives, anxious to preserve the city they grew up in, got a committee called the Metropolitan Problems Committee, or Metro, created to manage and plan the metropolitan area. The driving force behind this movement was a man named Jim Ellis, who headed the committee after repeatedly bringing the issue to the voters and city governments. The logic was that a regional transit system would require a regional political body; the same held for regional sewage and pollution control or planning. Unfortunately for Seattle, Ellis was defeated in a vote by suburbanites whose train of logic was, essentially: there are no problems with pollution or transit or sprawl, regional planning won’t solve the problems anyway, and if Seattle would just build a few more bridges the traffic would get better.
Metro came back, after it was scaled back in size and reduced its plans to only a sewage treatment and transport organization, and prevailed with an overwhelming majority in Seattle and a decent showing in the suburbs. Metro never did manage to get authority for planning, and to this day there is no body responsible for planning the Seattle metropolitan area, or its transportation systems, despite repeated attempts by Jim Ellis and plenty of money that was used to build stadiums and a host of other public works. Even with an entrepreneur, time, a market, locations clearly suited for action, financing from the municipal governments, and fairly good designs for the public work, if no one wants to practice city planning then it will by definition be unsuccessful. There is never a positive widespread sustained reaction to nothing.
During this period, Seattle’s downtown was in decline along with many other downtowns across the nation, and for much the same reason: people shopped in the suburbs, not in the city. The market for goods in the city center was drying up. Seattle’s solution was to host the 1962 World’s Fair. The area directly north of downtown was slumping very badly, and the city owned a lot of property there. The fair, given a futuristic science theme, was based around a civic center, what is now called the Seattle Center. The United States Science Pavilion (now the Science Center) was one of the central attractions. Boeing performed one of its few acts of public engagement in Seattle history, as it “created and installed in the United States Science Pavilion a space age Spacearium, a permanent addition to the center and one of the most attractive features of the fair.”
A monorail was also constructed, running from the center of downtown to the fair, a distance of 0.9 miles; it was built at no cost to the city, was paid for out of ticket sales, and was then turned over to the city for $600,000. It is currently the only monorail in the United States to turn a profit. It is now used almost exclusively as a tourist attraction, since the distance covered is too small to be of much practical use unless you are living in a hotel downtown and visiting the Seattle Center. The World’s Fair also gave Seattle its trademark landmark, the Space Needle, which remains a major tourist attraction. Seattle also received the Opera House, the Coliseum, a refurbished Arena, and a great location for future carnivals and fairs. Today the Seattle Center hosts Bumbershoot, a music festival jam-packed with people every Labor Day weekend, and Folklife, a folk music festival that somehow manages to stay in the black. The Science Center to this day draws crowds, along with an amusement park that operates all summer. The World’s Fair reenergized the downtown of Seattle and was generally a smashing success, even finishing with a profit.
After the war, the University of Washington also took a step forward, finally fulfilling the promise of its name about half a century after its founding. Charles Odegaard was the president of the University, and used the office to press for the creation of community colleges and other four-year colleges in Washington, so that the University of Washington could concentrate on research. By the time Odegaard retired, the UW was second only to MIT in the size of its federal grants, and the number of students attending had swelled. Because the University of Washington campus is open, its impact on the University District as well as the rest of the city has been quite significant; “In remaining a largely commuter school, the university has diminished its ability to withdraw as a community in itself and has maintained thereby its [availability] to the larger and more amorphous community.” In short, after the war, Boeing was hiring, the economy was booming, and while there had been no successful regional planning, Seattle was doing quite well for itself, at least internally. In 1970, this all changed.
The Quiet Years: 1970 - 1985
Due to changing external demand, “the Boeing workforce was cut from 80,400 to 37,200 between early 1970 and October 1971.” Once again, Seattle was in good company for its recession, since the rest of the country was also experiencing the oil shocks. However, Seattle was hit perhaps harder than most cities due to its over-reliance on Boeing as an employer; at nearly 12%, its unemployment was the worst any major American city had seen since the Great Depression. As with most periods of downturn, there was not a huge amount of private investment and construction during the 1970s in Seattle. Despite the crushing unemployment, however, there was no massive outflow of people; those who left were “never more than 15% of those laid off.”
Because of this, Seattle did not wind up like Detroit, a ghost of its former self, unable to resume growth and prosperity. In a capitalist state, where there is a surplus of relatively educated unemployed workers along with easy access to a port, jobs will come. Seattle industry did slightly better than the national average during the rest of the 1970s, but nonetheless the boom decades of the 1950s and 1960s were brought to a decisive end.
The Pike Place Market, arguably Seattle’s most important tourist attraction and a very nice market in its own right, was created in its modern form during the aftermath of the Boeing crash. The market had been founded in 1907 with good success, but like most public markets in America had suffered a decline as corporations took over food distribution. The wartime internment of Seattle’s Japanese during WWII hit the market particularly hard, since 80% of its “wet stall” vendors had been Japanese. The city council wanted to build a “Pike Place Plaza” by demolishing the mostly derelict market and replacing it with “a new hotel, a 32-story apartment building, four 28-story office buildings, a hockey arena, and a 4,000-car parking garage.” A citizens’ campaign to save the market instead put an initiative on the ballot, which voters approved in 1971, creating a historical district and clearing the way for the market’s restoration.
A similar story occurred with Pioneer Square. An old, old neighborhood, it had fallen into derelict status after the war. However, with a reenergized downtown, businesses started to look for buildings that could be acquired cheaply. When two offices moved into renovated buildings, suddenly there was a market for facilities to service them, leading to a “flood of other restaurants, galleries, boutiques.” Seattle was definitely recovering from the blow dealt by the Boeing recession, refilling in areas that had threatened to become slums.
Microsoft and the World-Class City: 1985-2002
Bill Gates and Paul Allen, the founders of Microsoft, attended Lakeside, a private middle and high school on the northern border of Seattle. This turned out to have rather fortunate consequences for the entire Seattle area. Microsoft’s first product, BASIC, came out in 1976. The company was incorporated in New Mexico the same year. By 1978 sales exceeded one million dollars a year. In 1979, Microsoft moved its offices back to Bellevue from Albuquerque; apparently, it was easier to entice quality programmers to the Seattle area than to the deserts of New Mexico. By 1985, sales were over $140 million; by 1990, $1.18 billion. By 1995 it was the world's most profitable corporation. Microsoft has grown from a two-man operation run by Bill Gates and Paul Allen to about 11,000 employees in 1992 and 48,030 in 2001. And Microsoft employees are not just any employees; they tend to be millionaires on a quite disproportionate basis.
Microsoft has also spawned a whole host of related software companies in the Seattle area: RealNetworks, Itron, Attachmate Corp, Infospace, and many smaller firms. Quite unlike the Boeing boom, which tied Seattle’s fortunes to one company, Microsoft has served as a catalyst for the creation of a whole realm of industry. It has also taken a much more active hand in public works in the area, donating software to many schools (including the University of Washington). Seattle has also been experiencing quite good growth in the biotech and coffee sectors.
Paul Allen, whose fortune was made through Microsoft though he has long since ceased to be an active participant in it, has been a major force in Seattle politics, for better or for worse. He backed an initiative to build The Commons, a huge park winding through the city and over the freeway, and even put up some of his own money, but the measure failed to pass. He did get a football stadium built by the same technique. He also built the Experience Music Project, a Jimi Hendrix-inspired rock 'n' roll museum, right outside the grounds of the Seattle Center.
The other piece of urban design that stands out is the Washington State Convention and Trade Center, completed in 1988, which is built over the freeway and connects First Hill and Capitol Hill to downtown. Not only has the convention center helped fuel further downtown growth, but it has finally reconnected both sides of the freeway by hiding the highway from view. Along with the Microsoft boom, the downtown has been doing very well; prices for office space have climbed from mediocre in the seventies to “Number four or five on the national hit parade [of real estate prices], and climbing.”
The Seattle of today is really not so different from the Seattle of the 1960s. It is still filled with single family households, still mostly white with as many Asians as blacks, still liberal, still with about half a million people, still almost entirely without a centralized method of planning. The suburbs have grown, but they are also in essentially the same state as before, if a little more independent. Seattle’s economy is more vibrant now, with Microsoft, and richer, but the largest employer is still Boeing. The Commons was defeated, just as Jim Ellis was in the sixties. There’s still terrible traffic on the freeways. It’s still a beautiful place to live. We’re even facing the same downturn faced at the end of the 60s, albeit reduced, as many software and biotech companies crash and burn in the recent slowdown.
There is hope that, in the future, some sort of regional planning may actually proceed. Sound Transit has money for a light-rail system; a bond issue was finally passed after three tries in referendum for a monorail system to link the parts of the city together. Seattle will survive, and most probably even prosper, without any sort of central planning at all – it has for the past 150 years. But as more and more land is swallowed by the sprawl of the suburbs, and more and more rivers and aquifers are tapped for the water needed for all those lawns, and the solution to traffic is that we build more and more roads, the Seattle of tomorrow will be a city with a lot more gray and a lot less green.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 3.
- Diamond, Jared. Guns, Germs, and Steel (W. W. Norton, New York 1997).
- CityofSeattle.net, "Quick facts about the city of Seattle". http://www.cityofseattle.net/leg/clerk/kwikfact.htm#population
- Speidel, William C. Sons of the Profits (Nettle Creek Publishing Company, Seattle, Washington 1967) 31.
- ibid. 33.
- Speidel, William C. Sons of the Profits (Nettle Creek Publishing Company, Seattle, Washington 1967) 48.
- ibid. 60.
- ibid. 63.
- Speidel, William C. Sons of the Profits (Nettle Creek Publishing Company, Seattle, Washington 1967) 89.
- Figures from Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 3.
- These numbers would appear to be in error; they match the population numbers exactly. - JM
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 54.
- ibid. 56.
- ibid. 58.
- ibid. 59.
- ibid. 62.
- ibid. 82.
- ibid. 83.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 95.
- Bogue, Virgil. Plan of Seattle, 2 vols.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 104.
- The United States Census of Manufacturing, several years.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 141.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 180.
- The Boeing Company. Boeing: History -- Beginnings -- Growing pains. http://www.boeing.com/companyoffices/history/boeing/growing.html
- The Boeing Company. Boeing: History -- Post-war Developments. http://www.boeing.com/companyoffices/history/boeing/growing.html
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 188.
- US housing census data, 1950 and 1960.
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 196.
- ibid. 198.
- ibid. 200.
- Jones, Nard. Seattle (Doubleday & Co., Garden City, New York 1972) 325.
- City of Seattle. Monorail History. http://www.seattlemonorail.com/history.html
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 208.
- ibid. 211.
- The Boeing Company. Boeing: History -- New Markets. http://www.boeing.com/companyoffices/history/boeing/markets.html
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 232.
- ibid. 233.
- Pike Place Market PDA. Pike Place Market -- Learn about the Market -- History. http://www.pikeplacemarket.org/learn/history/
- Sale, Roger. Seattle: Past to Present (University of Washington Press, Seattle and London 1976) 239.
- The Microsoft Corporation. Key Events in Microsoft History. http://www.microsoft.com/msft/download/keyevents.doc
- Allmon, Michael. Seattle real estate is climbing the charts (Seattle Daily Journal of Commerce 1998). http://www.djc.com/special/cmarket98/10036712.htm
This work is licensed under the terms of the GNU Free Documentation License.
Source: http://en.wikisource.org/wiki/Seattle:_Booms_and_Busts
- An individual's ability to cope with stressful situations may indicate their vulnerability to alcoholism.
- Both alcohol and stress are known to have an impact on the β-endorphin system.
- New findings show that dysfunction in the activity of the pituitary β-endorphin system predates the development of alcoholism among individuals with a family history of alcoholism; dysfunction develops following alcohol dependence among individuals without a family history of alcoholism.
A number of sociocultural studies indicate that stress may increase the risk of alcoholism. In other words, an individual's ability to cope with stressful situations may indicate his or her vulnerability to alcoholism. A study in the November issue of Alcoholism: Clinical & Experimental Research has found that individuals with a family history of alcoholism exhibit a dysfunction in their stress response prior to the development of alcohol dependence, while individuals without a family history of alcoholism exhibit a dysfunction in their stress response following the development of alcohol dependence.
"It is not well understood how stress increases alcohol consumption, or what the relationship is between stress and alcohol," said Christina Gianoulakis, a professor in the department of psychiatry at McGill University and corresponding author for the study. "One of the questions we wanted to ask is whether alcohol induces a number of biological responses that help the individual cope with a stressful situation. If that is the case, then the alteration of the activity of biological systems by both alcohol and stress may help us to understand the relationship between stress and alcohol. One such biological system is that of brain and pituitary β-endorphin."
Both Gianoulakis and Maurice Dongier, professor emeritus in the department of psychiatry at McGill University, agree that the effect of stress on β-endorphin is clear and well defined. "Stress increases the release of β-endorphin by both the pituitary gland and the brain," said Dongier. "But the effect of alcohol is not as clear. It is, in fact, somewhat contradictory."
Gianoulakis said that a specific response may depend on the dose of alcohol as well as the species involved. "For example, in experimental animals such as rats, all doses of alcohol increase the release of β-endorphin," she said. "However, in humans low doses of alcohol have either no or a small effect on β-endorphin release, whereas high doses of alcohol are needed to induce a significant increase in β-endorphin release."
For this study, four groups of individuals participated: social and heavy drinkers with a family history of alcoholism (considered "high risk") and without a family history of alcoholism (considered "low risk"). Each participant was given either a placebo or alcohol (0.50g ethanol/kg) drink; researchers then measured their responses to both drinks as well as a stress test performed 30 minutes following ingestion. The stress test had two components: arithmetic computations, and a competition for monetary reward. Plasma β-endorphin levels were also measured prior to and for 3.5 hours after the stress test.
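For a sense of scale, the 0.50 g/kg dose can be translated into grams of ethanol and approximate standard drinks. The sketch below does this for an assumed 70 kg participant using the common 14 g definition of a US standard drink; both of those figures are illustrative assumptions rather than values reported by the study, but the result is consistent with the "about two standard drinks" described later.

    # Rough scale of the alcohol dose used in the study (assumptions are illustrative only).
    DOSE_G_PER_KG = 0.50            # dose stated in the study description above
    STANDARD_DRINK_G = 14.0         # assumed ethanol content of one US standard drink

    def dose_for(body_weight_kg):
        grams = DOSE_G_PER_KG * body_weight_kg
        return grams, grams / STANDARD_DRINK_G

    grams, drinks = dose_for(70.0)  # assumed 70 kg participant
    print(f"{grams:.0f} g of ethanol, roughly {drinks:.1f} standard drinks")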
Results indicate that there are differences in both the basal plasma β-endorphin levels as well as the response of the pituitary β-endorphin to stress as a function of an individual's family history of alcohol problems.
"There are two major findings in this study," said Gianoulakis. "In participants with a family history of alcoholism, the lower activity of the pituitary β-endorphin system indicated by the low basal plasma β-endorphin levels and the lower β-endorphin response to stress predate the development of alcoholism, and alcohol dependence does not induce a further decrease in the activity of the pituitary β-endorphin system. However, in subjects without a family history of alcoholism, alcohol dependence induces a decrease of the activity of pituitary β-endorphin, as indicated by the lower basal plasma β-endorphin levels and the lower β-endorphin response to stress." In other words, said Gianoulakis, in high-risk individuals, dysfunction in the activity of the pituitary β-endorphin system predates the development of alcoholism; while in low-risk individuals, dysfunction develops following alcohol dependence.
"The second important finding is that when participants of all four groups ingested a small amount of alcohol, the equivalent of about two standard drinks, the stress task performed 30 minutes after the drink did not increase the release of β-endorphin," said Gianoulakis. "Thus, prior alcohol consumption blocked or decreased the β-endorphin response to stress regardless of family history of alcoholism and presence of alcohol dependence."
Gianoulakis added it was important to note that stress dysfunction can both act as a mediator of alcohol dependence and occur as a consequence of alcohol dependence. "The major objective of a biological response to stress is to help that individual cope with a stressful situation," she said. "A low response to stress may compromise an individual's ability to cope, so that he or she feels the need to search for alternative ways to cope with stress, one of which could be drinking, eventually leading to alcohol dependence. Conversely, we also found that alcohol dependence can induce a decrease of the β-endorphin response to stress in individuals without a family history of alcoholism, eventually compromising their responses to stress and his or her ability to cope with stressful situations. They may cope with subsequent stress by increasing alcohol consumption, which not only prevents recovery of the stress response but may also induce a further dysfunction of the stress response."
Both Gianoulakis and Dongier said that one group in particular – individuals with a family history of alcoholism – needs to be aware of their potentially greater risk of developing alcohol problems.
"Individuals with a family history of alcoholism exhibit a dysfunction of the stress response prior to the development of alcohol dependence," said Gianoulakis. "These individuals may have a greater vulnerability to stressful situations, and should try to avoid drinking alcohol in the face of stress, as well as develop alternative behavioural skills for coping with stress."
Source: Eurekalert & others. Published on PsychCentral.com (http://psychcentral.com/news/archives/2005-11/ace-asa110705.html). Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009.
Owl Moon, written by Jane Yolen and illustrated by John Schoenherr. Reviewed by Meredith F. (age 7) and Katie L. (age 8)
We liked how Jane Yolen used descriptive language in her story. I (Katie) also liked how Mr. Schoenherr's pictures seemed so alive. I also liked the similes in the story. One of the very descriptive sentences I liked was, "A train whistle blew like a sad, sad song." I (Meredith) liked how she described how to go owling. I liked when she used the sound effects to show the father calling the owl. I liked when she said, "her mouth was furry." It reminded us of when we are alone with our dads. It reminded Meredith of when she got a new bike with her dad. It reminded Katie of when she made a wooden car with her dad.
This book is a thumbs up book because it has beautiful descriptive language. We recommend this book to adults who are teaching children about descriptive language and children who love to hear it. This book also won a Caldecott medal in 1988.
Source: http://spaghettibookclub.org/review.php?review_id=901
From Superhero Wiki Encyclopedia
A penciller (or penciler) is the comics artist who produces the initial pencil drawings for a story. Pencilling is the first step in rendering the story in visual form and may require several rounds of feedback with the writer. These artists are concerned with layout (positions and vantage points on scenes) to showcase steps in the plot.
Because a penciller does not usually create finished art, the extent to which the pencilled pages resemble the final, inked version varies depending on the artist.
Most pencillers develop a preference for the work of certain inkers and vice versa. Some penciller/inker teams have enjoyed long and celebrated collaborations when their styles mesh particularly well. In less successful cases, an inker's style may not complement that of the penciller, or the inker's own style may be so prominent that in effect it buries the work of the penciller.
In earlier generations it was more common for artists to use a loose pencilling approach, in which the penciller does not take much care to reduce the vagaries of the pencil art, leaving it to the inker to interpret the penciller's intent. Today many pencillers prefer to create very meticulously detailed pages, where every nuance that they expect to see in the inked art is indicated in pencil. This is known as tight pencilling. Jim Lee is an artist who exemplifies this approach.
A comic book penciller usually works closely with the comic book's editor, who commissions a script from the writer and sends it to the penciller.
Comic book scripts can take a variety of forms. Some writers, such as Alan Moore, produce complete, elaborate, and lengthy outlines of each page. Others send the artist only a plot outline consisting of no more than a short overview of key scenes with little or no dialogue. Stan Lee, the founder of Marvel Comics, was known to prefer this latter form, and thus it came to be known as the Marvel Method.
Sometimes a writer or another artist (such as an art director) will include basic layouts, called breakdowns, to assist the penciller in scene composition. If no breakdowns are included, then it falls to the penciller to determine the layout of each page, including the number of panels, their shapes and their positions. Even when these visual details are indicated by a script, a penciller may feel when drawing the scene that there is a different way of composing the scene, and may disregard the script, usually following consultation with the editor and/or writer.
Tools and materials
A penciller works in pencil. Beyond this basic description, however, different artists choose to use a wide variety of different tools. While many artists use traditional wood pencils, others prefer mechanical pencils or drafting leads. Pencillers may use any lead hardness they wish, although many artists use a harder lead (like a 2H) to make light lines for initial sketches, then turn to a slightly softer lead for finishing phases of the drawing. Still other artists do their initial layouts using a light blue colored pencil because that color tends to disappear during photocopying.
Most comic book pages are drawn oversized on large sheets of paper, usually Bristol board. The customary size of comic book pages in the mainstream American comics industry is 11 by 17 inches. The inker usually works directly over the penciller's pencil marks, though occasionally pages are inked on translucent paper, such as drafting vellum, preserving the original pencils. The artwork is later photographically reduced in size during the printing process.
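As a rough illustration of that photographic reduction, the sketch below computes the scale factor from an 11 x 17 inch art board down to an assumed printed trim size of roughly 6.625 x 10.25 inches; the trim size is an assumption for illustration, since it varies by publisher and era.

    # Illustrative only: estimate how much an oversized art board is reduced for print.
    ORIGINAL_W, ORIGINAL_H = 11.0, 17.0    # inches, the board size mentioned above
    PRINTED_W, PRINTED_H = 6.625, 10.25    # inches, an assumed printed trim size

    # Use the smaller ratio so the whole drawing fits on the printed page.
    scale = min(PRINTED_W / ORIGINAL_W, PRINTED_H / ORIGINAL_H)
    print(f"Artwork is reduced to roughly {scale:.0%} of its drawn size")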
Source: http://superherouniverse.com/wiki/Penciler/index.html
Places to Avoid Planting Trees and Shrubs
Planting the right trees and shrubs in the right place is not just an aesthetic strategy. It promotes safety and prevents damage to the plants, nearby buildings and utilities, and relations with the people who live next door.
Anticipate the consequences of poorly placed plants — dangerous limbs hanging over your roof or growing into electrical wires, roots clogging your sewer pipe or leach field, inaccessible utility service boxes, unhappy neighbors, and unsafe driving conditions.
Pruning efforts to correct the problems after the trees and shrubs mature can damage the plants and leave them looking unnatural and more prone to pests and diseases. Consider the following situations before you plant:
Overhead power lines and utilities: The best way to keep your overhead wires clear of tree limbs is to consider the mature height and spread of trees before you plant. The International Society of Arboriculture recommends planting trees that grow no taller than 20 feet directly beneath utility wires. Taller trees should be planted so that their mature canopy grows no closer than 15 feet from the wires.
Buried wires and gas lines: Frequently, utility companies bury electric, telephone, and cable television wires underground, especially in new developments. Don’t assume that the wires are buried deeper than your planned planting hole — sometimes, they’re buried just below the surface. Although pipes should be buried at least 3 feet below ground, gas companies prefer a tree-free corridor of 15 to 20 feet on either side of pipes to allow for safety and maintenance. Gas leaks within a plant’s root zone can also damage or kill it.
To avoid disrupting underground utilities, many states have laws that require you to contact utility companies that may have wires or pipes on or close to your property before you dig.
Service boxes and wellheads: You may want to disguise your wellhead and the unattractive metal box that the utility company planted in your front yard, but someone will need access to them someday. Plan your shrub plantings so that the mature shrubs won’t touch the box or wellhead. Better yet, allow enough space for someone to actually work on the utilities located in the box without having to prune back your shrubs.
Buildings: A strong wind can send branches crashing through your roof. Overhanging limbs also drop leaves that clog your gutters and sticky sap that can stain siding. Keep shrubs at least several feet from your house and plant trees that grow to 60 feet or more at least 35 feet away.
Streets, sidewalks, and septic lines: Some trees, such as poplar and willow, grow large roots close to or on the ground’s surface where they heave paving and everything else out of their path. Shallow-rooted trees also compete with lawn grasses and other plants, and make for bumpy mowing. Plant roots usually grow two to three times farther from the tree trunk than the aboveground branches do, so leave plenty of room between the planting hole and your driveway, sidewalk, or septic field for outward expansion.
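A rough way to apply those rules of thumb is sketched below; the example tree's mature height and spread are assumed numbers, and the multiplier simply encodes the two-to-three-times guideline quoted above, so treat the output as illustrative rather than as horticultural advice.

    # Rough planting-distance helper based on the rules of thumb in this article.
    def min_distance_from_paving(mature_spread_ft, root_factor=2.5):
        """Roots often reach 2-3x as far as the branches, so keep paving outside that circle."""
        branch_radius = mature_spread_ft / 2.0
        return branch_radius * root_factor

    def ok_under_power_lines(mature_height_ft):
        """ISA guideline quoted above: only trees no taller than 20 ft directly beneath wires."""
        return mature_height_ft <= 20

    # Example: a tree expected to reach 40 ft tall with a 30 ft spread (assumed numbers).
    print(min_distance_from_paving(30))   # 37.5 ft from the edge of a sidewalk or driveway
    print(ok_under_power_lines(40))       # False - keep it away from overhead wires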
Property boundaries and public rights of way: Your state and municipal governments own the land on either side of all public roads. Many communities and highway departments prohibit planting in the public right-of-way. Contact your local government office for guidelines, or call the State Highway Department if your property borders a state or federal highway.
Homeowners commonly plant privacy hedges along their property boundary. If you plan to plant a hedge or row of shrubs or trees between you and the neighbors, avoid future disputes by hiring a professional surveyor to find the actual property lines. When you plant the shrubs, allow enough space so that mature shrubs won’t encroach on the neighboring property. You’ll also have room to maintain them from your own yard.
Merging traffic: Shrubs and hedges near intersections, including the end of your driveway, must be kept low enough, or planted far enough from the road, to allow drivers to see oncoming motorists, bicyclists, and pedestrians.
Source: http://www.dummies.com/how-to/content/places-to-avoid-planting-trees-and-shrubs.html
SVY1110 Introduction to Global Positioning System
Semester 2, 2012 (External, Toowoomba)
Faculty or Section: Faculty of Engineering & Surveying
School or Department: Surveying & Spatial Science
Version produced: 24 May 2013
Examiner: Albert Kon-Fook Chong
Moderator: Peter Gibbings
Throughout the centuries, people have sought a simple way of determining where they are on Earth, and where they are heading. Positioning and navigation have always been one of the most basic problems facing civilisation. Today GPS has provided us with the ability to know where we are and where we are heading. GPS provides this worldwide navigation service by using a constellation of satellites orbiting the Earth. It is essential that surveyors, GIS specialists, and other casual users be familiar with the fundamentals of GPS and that they have a sound understanding of its uses, and the accuracy achievable by different GPS observation and reduction techniques.
The use of the Global Positioning System (GPS), for accurately determining positions on earth, has grown exponentially since the late 1980s and early 1990s. Today GPS is firmly entrenched in the general operations of professional surveying and GIS organisations. This course presents fundamental information on structure, characteristics and use of GPS and other Global Navigation Satellite Systems (GNSS). Background information is provided and the basic principles of using the GNSS systems are introduced. The course has a bias towards the code observable and the use of GPS for asset mapping, but several sections dealing with higher accuracy measurement techniques make this course relevant to a wide range of students. Consequently, the information will be relevant to those seeking fundamental knowledge in areas of general GPS surveying, agriculture, machine guidance, mapping and general data collection.
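As a concrete taste of the coordinate work referred to above, the short sketch below converts geodetic coordinates (latitude, longitude and ellipsoidal height) on the WGS84 ellipsoid into Earth-centred, Earth-fixed (ECEF) X, Y, Z values. The WGS84 constants are the standard published values; the code and the sample point near Toowoomba are illustrative only and are not part of the course materials.

    # Illustrative sketch: geodetic (lat, lon, height) on WGS84 -> ECEF X, Y, Z in metres.
    import math

    WGS84_A = 6378137.0                       # semi-major axis (m)
    WGS84_F = 1.0 / 298.257223563             # flattening
    WGS84_E2 = WGS84_F * (2.0 - WGS84_F)      # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, height_m):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)   # prime vertical radius
        x = (n + height_m) * math.cos(lat) * math.cos(lon)
        y = (n + height_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - WGS84_E2) + height_m) * math.sin(lat)
        return x, y, z

    # Sample point roughly at Toowoomba (coordinates assumed for illustration).
    print(geodetic_to_ecef(-27.56, 151.95, 690.0))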
The course objectives define the student learning outcomes for a course. On completion of this course, students should be able to:
- discuss the features and applications of GPS and its importance in society today;
- define coordinates systems likely to be encountered by GPS users and calculate and discuss GPS coordinates;
- describe global satellite navigation systems, satellite orbital characteristics, and satellite signal structure;
- define the fundamental characteristics of GPS and outline its development;
- discuss the principles of GPS observations, make observations using a GPS receiver, and calculate and analyse findings;
- explain GPS observations techniques, and calculate and evaluate levels of accuracy associated with GPS observations;
- demonstrate an understanding of error sources in GPS observations, and explain the uses and critical factors of Differential GPS techniques;
- identify and discuss project planning features when using GPS, and discuss the key steps in planning a GPS data collection project for asset mapping;
- explain GPS data collection and processing procedures, including Differential GPS, and evaluate collected and processed data;
- describe the use of GPS for asset mapping, and other common uses.
| 8. | Collection and Processing | 10.00 |
| 9. | Asset Mapping and other Applications | 10.00 |
Text and materials required to be purchased or accessed
ALL textbooks and materials available to be purchased can be sourced from USQ's Online Bookshop (unless otherwise stated). (https://bookshop.usq.edu.au/bookweb/subject.cgi?year=2012&sem=02&subject1=SVY1110)
Please contact us for alternative purchase options from USQ Bookshop. (https://bookshop.usq.edu.au/contact/)
- There are no texts or materials required for this course.
Student workload requirements
| Description | Marks out of | Wtg (%) | Due Date | Notes |
| ASSIGNMENT 1 | 300 | 30 | 10 Sep 2012 | |
| PART A OF 2 HOUR CLOSED EXAM | 300 | 30 | End S2 | (see note 1) |
| PART B OF 2 HOUR CLOSED EXAM | 400 | 40 | End S2 | |
- The 2 hour examination is in two parts. Part A requires an Examination Answer Sheet. Part B requires an Answer Booklet. Student Administration will advise students of the dates of their examinations during the semester.
Important assessment information
There are no attendance requirements for this course. However, it is the students' responsibility to study all material provided to them or required to be accessed by them to maximise their chance of meeting the objectives of the course and to be informed of course-related activities and administration.
Requirements for students to complete each assessment item satisfactorily:
To complete each of the assessment items satisfactorily, students must obtain at least 50% of the marks available (or at least a grade C-) for each assessment item.
Penalties for late submission of required work:
If students submit assignments after the due date without (prior) approval of the examiner, then a penalty of 5% of the total marks gained by the student for the assignment may apply for each working day late, up to ten working days, at which time a mark of zero may be recorded. No assignments will be accepted after model answers have been posted.
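Read as a flat 5% of the marks gained per working day late, the rule works out as in the sketch below; the interpretation and the example numbers are illustrative only, and the examiner's reading of the policy is what counts.

    # Illustrative reading of the late-penalty rule above (flat 5% of marks gained per working day).
    def late_adjusted_mark(marks_gained, working_days_late):
        if working_days_late > 10:
            return 0.0                               # after ten working days, a zero may be recorded
        penalty = 0.05 * marks_gained * working_days_late
        return max(marks_gained - penalty, 0.0)

    print(late_adjusted_mark(240, 3))                # e.g. 240/300 submitted 3 days late -> 204.0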
Requirements for student to be awarded a passing grade in the course:
To be assured of receiving a passing grade in a course a student must obtain at least 50% of the total weighted marks for the course.
Method used to combine assessment results to attain final grade:
The final grades for students will be assigned on the basis of the weighted aggregate of the marks (or grades) obtained for each of the summative assessment items in the course.
In a Closed Examination, candidates are allowed to bring only writing and drawing instruments into the examination.
Examination period when Deferred/Supplementary examinations will be held:
Any Deferred or Supplementary examinations for this course will be held during the examination period at the end of the semester of the next offering of this course.
University Student Policies:
Students should read the USQ policies: Definitions, Assessment and Student Academic Misconduct to avoid actions which might contravene University policies and practices. These policies can be found at http://policy.usq.edu.au/portal/custom/search/category/usq_document_policy_type/Student.1.html.
The due date for an assignment is the date by which a student must despatch the assignment to USQ. The onus is on the student to provide proof of the dispatch date, if requested by the Examiner.
Students must retain a copy of each item submitted for assessment. This must be despatched to USQ within 24 hours if required by the Examiner.
In accordance with University Policy, the Examiner may grant an extension of the due date of an assignment in extenuating circumstances.
If electronic submission of assessments is specified for the course, students will be notified of this in the course Introductory Book and on the USQ Study Desk. All required electronic submission must be made through the Assignment Drop Box located on the USQ Study Desk for the course, unless directed otherwise by the examiner of the course. The due date for an electronically submitted assessment is the date by which a student must electronically submit the assignment. The assignment files must be submitted by 11.55pm on the due date using USQ time (as displayed on the clock on the course home page; that is, Australian Eastern Standard Time).
If the method of assessment submission is by written, typed or printed paper-based media students should (i) submit to the Faculty Office for students enrolled in the course in the on-campus mode, or (ii) mail to the USQ for students enrolled in the course in the external mode. The due date for the assessment is the date by which a student must (i) submit the assessment for students enrolled in the on-campus mode, or (ii) mail the assessment for students enrolled in the external mode.
The Faculty will NOT normally accept submission of assessments by facsimile or email.
Students who do not have regular access to postal services for the submission of paper-based assessments, or regular access to Internet services for electronic submission, or are otherwise disadvantaged by these regulations may be given special consideration. They should contact the examiner of the course to negotiate such special arrangements prior to the submission date.
Students who have undertaken all of the required assessments in a course but who have failed to meet some of the specified objectives of a course within the normally prescribed time, may be awarded one of the temporary grades: IM (Incomplete - Make up), IS (Incomplete - Supplementary Examination) or ISM (Incomplete - Supplementary Examination and Make up). A temporary grade will only be awarded when, in the opinion of the Examiner, a student will be able to achieve the remaining objectives of the course after a period of non-directed personal study.
Students who, for medical, family/personal, or employment-related reasons, are unable to complete an assignment or to sit for an examination at the scheduled time, may apply to defer an assessment in a course. Such a request must be accompanied by appropriate supporting documentation. The following temporary grade may be awarded: IDM (Incomplete Deferred Make up).
Harvard (AGPS) is the referencing system required in this course. Students should use Harvard (AGPS) style in their assignments to format details of the information sources they have cited in their work. The Harvard (AGPS) style to be used is defined by the USQ Library's referencing guide.
Students will require access to e-mail and internet access to UConnect for this course.
Source: http://www.usq.edu.au/course/specification/2012/SVY1110-S2-2012-EXT-TWMBA.html
by Kenny Cabe, ATC - Athletic Trainer
St. Francis Sports Medicine
Many times we make the analogy between athletes and performance sports cars. This definitely works with sports nutrition. You wouldn't put low octane gas in your Porsche, so why would you put low energy fuel in your high performance athlete? Here are some simple guidelines to fuel your performance.
Foods can be divided into three main categories:
Carbs are a great source of energy. They are broken down into simple sugars to be used for fuel. If not used immediately, they are stored as glycogen. Glycogen is used for most anaerobic exercise (short, intense bouts) such as sprinting and weightlifting. When glycogen stores are full, carbs are stored as fat. Foods that contain carbohydrates include breads, pastas, beans, potatoes, oatmeal, rice, cereals, and fruits.
Fat provides the highest concentration of energy (1 gram of fat = 9 calories of energy). Fat is difficult to access quickly; it is broken down and released into the muscle slowly. That makes fat a much more suitable fuel for endurance exercise such as biking, distance running, and triathlons. Fats can be found in most cooking oils, fish, meat and dairy products.
Protein is broken down into amino acids. Amino acids aid in the repair of fatigued muscles, speed recovery, and build muscle. If there are inadequate carb stores, then the body draws on its protein stores, which may inhibit building and maintaining muscle. Examples of proteins are meats, fish, eggs, vegetables and nuts.
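To put rough numbers on that fuel mix, the sketch below totals the calories in a meal from its macronutrient grams. The 9 calories per gram of fat comes from the paragraph above; the 4 calories per gram for carbohydrate and protein are the standard values and are an addition here, as are the example gram counts.

    # Quick energy estimate from macronutrient grams (example numbers are illustrative).
    KCAL_PER_GRAM = {"carbs": 4, "protein": 4, "fat": 9}

    def total_calories(carbs_g, protein_g, fat_g):
        return (carbs_g * KCAL_PER_GRAM["carbs"]
                + protein_g * KCAL_PER_GRAM["protein"]
                + fat_g * KCAL_PER_GRAM["fat"])

    print(total_calories(carbs_g=90, protein_g=25, fat_g=15))   # 595 calories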
While you are fueling your engine, don't forget the fluids. Adequate hydration is vital to calorie absorption and to keep your engine from overheating, even in cooler weather.
Source: http://archive.constantcontact.com/fs066/1102122238297/archive/1102334847777.html
Many childhood burn injuries are reported each year, with a good number occurring in the home. You can take the following simple steps to reduce your child's risk of getting burned:
- Make sure your child's sleepwear is flame-resistant.
- Turn pot handles to the center or rear of the stove when cooking and use the back burners whenever possible.
- Test the temperature of food heated in a microwave before giving it to a child. Microwaves tend to heat unevenly and some portions can be very hot.
- Remember that kitchen appliances and cookware remain hot enough to burn for quite a while after you are done using them.
- Do not drink hot liquids when holding a baby. The liquid could spill and burn the baby.
- Avoid using a tablecloth when children are learning to walk. A child could try to use it to pull herself up and knock a heavy object or something containing hot liquid onto herself.
- Use a baby bath thermometer to test the temperature of your child's bath water.
- Lower the hot-water heater setting to 120°F (49°C) or the low-medium setting.
- Keep cigarette lighters and matches away from children. Even a child as young as two can figure out how to use them.
- Do not leave lit candles unattended. They are easy for children (or pets) to knock over.
- Install smoke detectors on every floor of your home. Check battery-operated detectors every six months to make sure they are still working properly. Replace the batteries annually.
- Consider having a fire extinguisher in the house. But only use it for small fires. In the event of a large fire, everyone should leave the house right away.
- Create a fire escape plan and practice it with your children. Teach them to go outside if a fire occurs in the house.
- Always supervise children around fires, stoves, heaters, or anything that could cause burn injury.
- Cover unused electrical outlets with plastic plug covers.
- Keep electrical cords from irons, coffee pots, and other appliances out of the reach of children.
Source: http://www.abrazohealth.com/education/treatments_proceedures.aspx?chunkiid=14373
FORT WAYNE, Ind.—Ever since the first Star Wars movie, kids of all ages have been fascinated with lasers. If you’re between the ages of 8 and 14, you can learn more about lasers at the Saturday, March 9, edition of Lunch with an IPFW Scientist at Science Central.
That’s when Mark Masters, professor and chair of the Department of Physics at Indiana University–Purdue University Fort Wayne, will make his presentation, “Laser: the Light Fantastic!” In his presentation, Masters says attendees will learn about light, how lasers work, and what lasers are used for. More importantly, attendees will put together a working laser spectrometer and look at laser induced fluorescence, which is the emission of light after a laser light hits some materials.
Lunch with an IPFW Scientist is held at Science Central, 1950 N. Clinton, from 11 a.m. to 12:30 p.m. The program is open to the public at a cost of $16 per person; $10 for Science Central members. Lunch is included.
The Lunch with an IPFW Scientist Series is designed for families with children age 8 to 14; it plants in both the young and most seasoned participant a budding interest in science. After each presentation, which includes a hands-on activity, participants enjoy lunch with the presenter. Advance reservations are required.
For more information on the series, visit the Science Central website or call Kathy Larsen, Science Central special programs manager, at 260-424-2400, ext. 427. For reservations, call ext. 451.
Source: http://new.ipfw.edu/news/detail.html?id=e9054413-bdde-43fc-9e54-ed68a811b5c1&catInode=90804&catName=General
SP.272 / ES.SP272
This class is divided into a series of sections or "modules", each of which concentrates on a particular large technology-related topic in a cultural context. The class will start with a four-week module on Samurai Swords and Blacksmithing, followed by smaller units on Chinese Cooking, the Invention of Clocks, and Andean Weaving, and end with a four-week module on Automobiles and Engines. In addition, there will be a series of hands-on projects that tie theory and practice together. The class discussions range across anthropology, history, and individual development, emphasizing recurring themes, such as the interaction between technology and culture and the relation between "skill" knowledge and "craft" knowledge.
Culture Tech evolved from a more extensive, two-semester course which formed the centerpiece of the Integrated Studies Program at MIT. For 13 years, ISP was an alternative first-year program combining humanities, physics, learning-by-doing, and weekly luncheons. Culture Tech represents the core principles of ISP distilled into a 6-unit seminar. Although many collections of topics have been used over the years, the modules presented here are a representative sequence.
Source: http://ocw.mit.edu/courses/special-programs/sp-272-culture-tech-spring-2003/
Official US data recently released show that the number of US citizens living in poverty rose to a record 46 million last year. Yet the world is encouraged to believe that the US model of ‘democracy’ and ‘economic growth’ is the one that should be followed to eliminate poverty. Surely there is a contradiction here?
The BBC reports the release of these data as follows: “The number of Americans living in poverty rose to a record 46.2 million last year, official data has shown. This is the highest figure since the US Census Bureau started collecting the data in 1959. In percentage terms, the poverty rate rose to 15.1%, up from 14.3% in 2009. The US definition of poverty is an annual income of $22,314 (£14,129) or less for a family of four and $11,139 for a single person. The number of Americans living below the poverty line has now risen for four years in a row, while the poverty rate is the biggest since 1993. Poverty among black and Hispanic people was much higher than for the overall US population last year, the figures also showed. The Census Bureau data said 25.8% of black people were living in poverty and 25.3% of Hispanic people. Its latest report also showed that the average annual US household income fell 2.3% in 2010 to $49,445. Meanwhile, the number of Americans without health insurance remained about 50 million. The data comes as the US unemployment rate remains above 9%”.
Is it not time that global organisations, aid agencies, and governments across the world stopped pretending that economic growth leads to a reduction of poverty? Capitalism fundamentally depends on the maintenance of inequalities: between rich and poor countries, between rich and poor people. The increase in US poverty revealed in these data reinforces such arguments. The US ‘system’ enables Bill Gates and Warren Buffet to acquire huge wealth, while large numbers of their compatriots are consigned to poverty.
Freedom carries responsibilities. The focus of US capitalism on the freedom of the individual at the expense of the wider public good is surely not a model that the world should be encouraged to follow. As the BBC report notes, 50 million people in the US do not have health insurance. While the rich can have the benefit of the latest medical research, such care is beyond the means of the poor.
These figures should be seen as a wake up call to economists and politicians across the world. Unfettered capitalism, fueled by a self-reinforcing cycle of individual greed, can never lead to a reduction in poverty. Only when governments act explicitly to support the most marginalised in their societies can we begin to redress the balance.
Source: http://unwin.wordpress.com/tag/usa/
A Strike: The Hardest Way To Learn Physics
Editor's Note: This article was written for APS News by Javier Cruz Mena, science editor of the Mexican newspaper Reforma, and a member of UNAM's Engineering faculty.
Some 100,000 striking students of Mexico's National University and supporters march May 21 in defense of free public education. AP/Jose Luis Magana; image from: http://www.internationalist.org/mexunamleaflet0699.html
When Physics 101 is finally in session at the School of Science of UNAM, Mexico's National University, some unexpected analogies will be available to help understand a few bizarre concepts of modern physics.
Take the case of Schrödinger's cat. If told that a system may, at any given time, exist in two mutually exclusive states, these students will not be quite as puzzled by the notion as they would have otherwise been, had they not lived through the perplexing nine months of a student strike triggered, back in April of 1999, by the Administration's attempt to raise tuition from US$0.02 to about US$140 a year.
The amount might seem low, but opposers argued that it was improper to charge even that much when the general income in Mexico has dropped steadily for two decades, as has the Government's contribution to higher education. To complicate matters further, the wording of the country's Constitution ("All education provided by the State shall be free of charge") lends itself to controversy as to whether public colleges should be included.
During those 9 months, the University led the kind of uncertain day-to-day existence typical of split-personality conditions, much like quantum cats, indeed. This being a student strike, all teaching stopped as schools were closed from day one. But research continued to get done, somehow, throughout UNAM's main campus in Mexico City. It was not business as usual, though. Long walks in the open had to be endured on those days when the strike's steering committee decided, rather haphazardly, to ban automobile access to campus facilities.
While research institutes were allowed to keep their doors open for most of the strike, teaching centers, such as the Schools of Science-home to Physics, Mathematics and Biology undergraduate studies, Chemistry, Medicine and Engineering-were not. Consequently, all experimental work there came to a halt.
But theoretical research wasn't spared either. "Getting new work done was much more difficult, because I had no access to the things I am used to: books, notes and article references," said Rodolfo Martínez, full-time Professor at the School of Science, who works on high energy physics. "Three papers which are being refereed right now would have already been published had it not been for the strike."
Life was relatively easier at the Institute of Physics, a research center with close academic ties to the School of Science, although working days were shortened for security reasons, affecting such things as all-night runs at the institute's particle accelerators.
Nevertheless, the months of irregular life and high tension did leave a negative mark. According to Manuel Torres, Secretary of Academic Affairs at the institute, they used to have close to 100 graduate students and 150 undergrads. By the end of the strike, he estimates those numbers to have been reduced by 20% and 40% respectively.
"Some 30 research projects were slowed down," said Torres. Then there was the matter of personal and institutional relations. At least two meetings already scheduled had to be held elsewhere, and several visits by foreign scientists were cancelled.
All in all, though, Torres finds reasons to feel rather fortunate "thanks to the positive attitude of our faculty." Research on campus, limited indeed, showed signs of life during the strike. Thus, somehow, the University did look very much like Schrödinger's cat: both dead and alive all at once.
At least until just before dawn on February 6th, when a recently created military police unit showed up on campus, to the strikers' surprise, and took nearly one thousand prisoners, mostly students, with and without orders of arrest. For all practical purposes, that was the end of the full-scale student strike.
One might argue that the police action was tantamount to the human measurement of the quantum puma (the university's feline mascot), ending the indeterminacy of its state. Classes are being resumed, most strikers, but not all, have been released, and the puma seems to have been alive after all.
Or was it? The core of the strike's steering committee is still in jail, accused of "social dangerousness," an obscure offense just recently added to the Criminal Code, and held on US$5,000 to US$10,000 bonds, but there is considerable support for their release. The longer their imprisonment, the stronger the student protests seem to be getting. Already the School of Science is under threat of being closed again.
"The strike has proven highly destructive of all academic activity," said Martínez, "regardless of which facilities were closed. Whether the University will recover is not at all clear."
©1995 - 2013, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.
Associate Editor: Jennifer Ouellette
Source: http://www.aps.org/publications/apsnews/200004/strike.cfm
Intel working on a new system to boost Stephen Hawking’s typing speed by 10x
Share This article
Stephen Hawking has done wonders for every scientific field. Not only has his own research in physics and cosmology been useful for other scientists, but he has inspired countless people to learn more about the scientific method and the fabric of reality. As we are all well aware, Hawking is paralyzed due to a degenerative disease called amyotrophic lateral sclerosis (ALS). He uses small muscle twitches in his face to select words on a custom computer system so he can communicate. Sadly, his condition has progressed to the point where he can only manage roughly one word per minute. After meeting with Hawking himself, Intel’s CTO Justin Rattner is spearheading a project to improve Hawking’s computer system, and allow for an increase in words per minute.
Hawking can use other muscles in his face, so Intel is using his cheek twitch, mouth movements, and eyebrow movements to allow more nuanced control of the computer. In combination with an improved text prediction engine, and possibly use of facial recognition (think a high-resolution Kinect), the research team is set on getting Hawking back up to his previous five words per minute. If all goes well, the system might even boost that number upwards of ten.
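The article does not describe the internals of Intel's prediction engine; as a rough sketch of how word prediction cuts the number of selections a user must make, here is a minimal bigram-based next-word suggester in Python (the corpus and function names are invented for illustration, not Intel's system):

from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which word tends to follow which in a training corpus.
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    # Offer the k most likely next words, so one pick replaces many letter selections.
    return [word for word, _ in model[prev_word.lower()].most_common(k)]

corpus = "the universe is expanding and the universe is not static"
model = train_bigrams(corpus)
print(suggest(model, "universe"))  # ['is']
print(suggest(model, "the"))       # ['universe']

A real system would use longer context, a much larger corpus and the user's own writing history, but the principle of ranking likely continuations so the user confirms rather than spells is the same.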
Keep in mind that this research isn’t just for Hawking. The technology developed here can be used in a broader context of smart gadgetry and assistive tech. Elderly and disabled people will undoubtedly benefit heavily from the software and hardware being developed for a person with such severe physical limitations. By adding more sensors like cameras, accelerometers, and microphones to the system while connecting that data to online services like chat programs and social networks, people who once were extremely isolated from society can maintain close personal connections.
Facial recognition is getting substantially better. Not only are companies like Google using it to interact with your tablets and smartphones, but the government is using it to find people. Increasingly, these sensors are being used for entertainment in video games to personalize the experience. The field of biometrics and assistive tech is already large, and it’s only increasing in complexity and capabilities.
The medical field has a lot to gain from behavioral biometrics as well. Using computers and sensors to sense changes in gait, metabolism, weight, and heart rate will significantly improve doctors’ ability to diagnose illnesses quickly and accurately. Instead of waiting for symptoms to increase to the point where a patient would notice them, small changes can be picked up extremely early, and treated accordingly. Genetic markers for increased risk for diseases like Parkinson’s disease can be tested for, and those patients could be put on a 24/7 symptom watch. It’s only a matter of time before personal systems using specialized sensors start saving countless lives. This type of technology not only improves lives once disaster strikes, but helps avoid disaster in the first place.
|
<urn:uuid:33332b08-9508-4efa-a402-3ad12461c27d>
|
CC-MAIN-2013-20
|
http://www.extremetech.com/extreme/146269-intel-working-on-a-new-system-to-boost-stephen-hawkings-typing-speed-by-10x
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.952025
| 601
| 2.609375
| 3
|
When did FM radio begin?
Edwin Howard Armstrong broadcast the first radio transmission employing frequency modulation (FM) in 1935. However, experiments with FM had been conducted years before. For more information on frequency and amplitude modulation, check out our encyclopedia.
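As a point of reference (standard textbook definitions, not taken from the source above), the two schemes differ in which property of the carrier the message signal m(t) varies:

s_{\mathrm{AM}}(t) = A_c\,[1 + k_a\,m(t)]\cos(2\pi f_c t)

s_{\mathrm{FM}}(t) = A_c \cos\!\left(2\pi f_c t + 2\pi k_f \int_0^{t} m(\tau)\,d\tau\right)

Here f_c is the carrier frequency and k_a, k_f are the amplitude and frequency sensitivities. In AM the carrier's envelope follows the message; in FM its instantaneous frequency does, which is what gives FM its resistance to amplitude noise and static.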
|
<urn:uuid:369a0d02-f1f4-4a41-a29a-e730ed954110>
|
CC-MAIN-2013-20
|
http://www.infoplease.com/askeds/fm-radio.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.916794
| 62
| 3.765625
| 4
|
The journal of design and manufactures
Miscellaneous, pp. 184-190 ff.
Miscellaneous. WOLVERHAMPTON SCHOOL OF DESIGN. - The manufacturers of Wolverhampton seem intent upon impressing the people of their neighbourhood with a sense of the value of a good School of Design to the great manufacturing district around them. At the county meeting held on the 15th of December, a committee was appointed to take charge of the matter in hand, and the first public act does credit to the gentlemen composing it. They felt the necessity of doing something to teach those who were ignorant of the subject, and to dispel the apathy which always, at first, stands so much in the way of the establishment of any new public institution of an educational character. They determined to have a public lecture upon the subject, and it was delivered on the 5th of January by Mr. George Wallis, the master of the Birmingham School of Design. The lecture was entitled "Schools of Design, in relation to Art, Manufactures, and General Education," and was illustrated by a series of drawings illustrative of the historic styles of ornament: and, moreover, it was very numerously attended. Mr. Wallis commenced with a reference to the Great Exhibition, and dwelt upon the necessity of the establishment of means for the art-education of the industrious classes. He then gave an exposition of the elementary principles of drawing and design, and maintained the importance of teaching drawing in all primary schools, and that its practice in certain elementary forms should be prior to any attempt to teach a child to write. He referred to paucity of men of great talent in ornamental as compared with other branches of art, as an evidence that while the imitative art of drawing was largely cultivated, the inventive faculty or design had been seriously neglected; and he laid great stress upon adaptation as the first principle of design. The lecturer then described the relations of the art-workman, a class of which we have sadly too few in this country, but which it was a primary duty of Schools of Design to furnish us, as well as the draughtsman and designer. He observed that comparatively few of our students would learn design, in the highest acceptancy of the term; that many would become useful to themselves and their country under the head of draughtsman, but that our great aim should be to make as many as possible of art-workmen, since fifty art-workmen, or perhaps, he might say, ten times that number, would be required to do all that one thoroughly able designer could be capable of inventing for them. Art, Mr. Wallis maintained, must be made a portion of handicraft; and inasmuch as we apprenticed our youths to learn a trade to those who practised it, so Schools of Design were necessary for the tuition of these youths in those great essentials which he was endeavouring to impress upon their minds. And in conclusion, he urged all present to support the committee appointed to carry out the plans proposed in the establishment of a School of Design for Wolverhampton and South Staffordshire, inasmuch as it would be better to depend upon themselves than upon any aid the Government could give, since out of their own intelligence and proper management the true results could alone arise, the functions of a government being necessarily limited in such cases.
THE LECTURES OF MR. WORNUM TO THE STUDENTS OF THE BIRMINGHAM SCHOOL OF DESIGN. - On the evenings of the 1st and 2d of December Mr. Wornum delivered lectures "On the Analysis of Ornament," including within the above title the various characteristics and types of Egyptian, Grecian, Roman, Byzantine, Saracen, Gothic, Renaissance, Cinque-Cento, and Louis-Quatorze styles. The several varieties were illustrated by drawings, and the lectures were listened to throughout with marked attention by the best audiences we have yet seen, in so far as they were composed of a few of the leading manufacturers and the majority of the students attending the School. One feature afforded us much pleasure, viz., that the lecturer insisted upon the necessity of those attending the School cultivating their minds by a perusal of the literary treasures which are now, thanks to the efforts of enterprising publishers, placed within the reach of the humblest artisan, and may be found in the lending libraries which are or should be attached to every school; as also making themselves acquainted with all forms of natural objects, animate and inanimate, not to be slavishly copied because they are suggestive. This we have long insisted upon as an essential element; and we are convinced it will be found as part and parcel wherever success has been achieved. Towards the conclusion, the lecturer, in passing in review the various styles of art, entered something very like a protest against the Gothic: that it is liable to degenerate into a slavish copying of ancient objects,
Based on the date of publication, this material is presumed to be in the public domain.| For information on re-use see: http://digital.library.wisc.edu/1711.dl/Copyright
|
<urn:uuid:9feec25c-fd98-479b-af7b-f0967623ee17>
|
CC-MAIN-2013-20
|
http://digicoll.library.wisc.edu/cgi-bin/DLDecArts/DLDecArts-idx?type=article&did=DLDecArts.JournDesv06.i0086&id=DLDecArts.JournDesv06&isize=text&pview=hide
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.978069
| 1,103
| 2.53125
| 3
|
Communicating Uncertainties in Weather and Climate Information: A Workshop Summary
Two-way communication and feedback are essential between information providers and users.
Create understanding between the culture of decision making in forecasting and cultures of decision making in the user communities.
Understand not only the words used in the forecasts but also the meanings of those words in the user community.
Accurately understand the forecaster’s role, place, and responsibility in the decision-making process. The following actions were suggested:
Know the audience.
Coordinate across the spectrum from science to decision making to enhance appropriate responses.
Learn about the decision-making process and “thresholds” in that process as a part of the responsibility of the information provider.
Pressures in a competitive market can result in unwarranted urgent responses to many weather threats. The following factors may affect these situations:
Forecasts not fully supported by the state of the science may have an enormous impact on decision makers and may reduce the credibility of future forecasts.
Dissemination of guidelines and case studies and an active role by professional societies could be used to limit the negative effects and user confusion associated with the possible trend toward unwarranted hype and unfounded claims of accuracy of previous forecasts.
Information providers should understand and nurture the role of the media in educating the users of weather and climate information.
Heightened interest during and following weather and climate events provides opportunities to educate the public.
Clear, graphic warnings, which the public can grasp, may increase the chances for intelligent responses to threat.
If part of the goal of a scientific endeavor is to communicate the findings to the public and policy makers, then the charge and findings should be written with that audience in mind. Dissemination should not be an afterthought. Executive summaries and press releases are helpful, but lay language should not be confined exclusively to these documents.
|
<urn:uuid:5f6f99f7-872f-4192-9d51-05f8ecf68e69>
|
CC-MAIN-2013-20
|
http://www.nap.edu/openbook.php?record_id=10597&page=40
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.934611
| 423
| 3.421875
| 3
|
Violence pervades the lives of many people around the world, and touches all of us in some way. To many people, staying out of harm’s way is a matter of locking doors and windows and avoiding dangerous places. To others, escape is not possible. The threat of violence is behind those doors – well hidden from public view. And for those living in the midst of war and conflict, violence permeates every aspect of life.
This report, the first comprehensive summary of the problem on a global scale, shows not only the human toll of violence – over 1.6 million lives lost each year and countless more damaged in ways that are not always apparent – but exposes the many faces of interpersonal, collective and self-directed violence, as well as the settings in which violence occurs. It shows that where violence persists, health is seriously compromised.
The report also challenges us in many respects. It forces us to reach beyond our notions of what is acceptable and comfortable – to challenge notions that acts of violence are simply matters of family privacy, individual choice, or inevitable facets of life. Violence is a complex problem related to patterns of thought and behaviour that are shaped by a multitude of forces within our families and communities, forces that can also transcend national borders. The report urges us to work with a range of partners and to adopt an approach that is proactive, scientific and comprehensive.
We have some of the tools and knowledge to make a difference – the same tools that have successfully been used to tackle other health problems. This is evident throughout the report. And we have a sense of where to apply our knowledge. Violence is often predictable and preventable. Like other health problems, it is not distributed evenly across population groups or settings. Many of the factors that increase the risk of violence are shared across the different types of violence and are modifiable.
One theme that is echoed throughout this report is the importance of primary prevention. Even small investments here can have large and long-lasting benefits, but not without the resolve of leaders and support for prevention efforts from a broad array of partners in both the public and private spheres, and from both industrialized and developing countries.
Public health has made some remarkable achievements in recent decades, particularly with regard to reducing rates of many childhood diseases. However, saving our children from these diseases only to let them fall victim to violence or lose them later to acts of violence between intimate partners, to the savagery of war and conflict, or to self-inflicted injuries or suicide, would be a failure of public health.
While public health does not offer all of the answers to this complex problem, we are determined to play our role in the prevention of violence worldwide. This report will contribute to shaping the global response to violence and to making the world a safer and healthier place for all. I invite you to read the report carefully, and to join me and the many violence prevention experts from around the world who have contributed to it in implementing its vital call for action.
|
<urn:uuid:b28c17c4-1f62-46dd-91ab-433356e46c2e>
|
CC-MAIN-2013-20
|
http://www.peacewomen.org/portal_resources_resource.php?id=1361
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.961204
| 605
| 3.234375
| 3
|
The Islamic financial system
period of economic slowdown. There is, hence, a call for a new architecture that would help minimize the frequency and severity of such crises in the future.
Primary cause of the crisis: It is not possible to design a new architecture without first determining the primary cause of the crisis. The generally recognized most important cause of almost all crises has been excessive and imprudent lending by banks over a long period. This is clearly acknowledged by the Bank for International Settlements (BIS), which states as much in its annual report (released on 30th June 2008). This raises the question of what makes it possible for banks to resort to such an unhealthy practice which not only destabilizes the financial system but is also not in their own long-run interest. There are three factors that make this possible. One of these is inadequate market discipline in the financial system resulting from the absence of profit-and-loss sharing (PLS). The second is the mind-boggling expansion in the size of derivatives, particularly credit default swaps (CDSs), and the third is the 'too big to fail' concept, which tends to give an assurance to big banks that the central bank will definitely come to their rescue and not allow them to fail. The false sense of immunity from losses that all these factors together provide has introduced a fault line in the financial system. Banks have not, therefore, undertaken a careful evaluation of the loan applications. This has led to an unhealthy expansion in the overall volume of credit, to excessive leverage, and to an unsustainable rise in asset prices, living beyond means, and speculative investment. Unwinding later on gives rise to a steep decline in asset prices, and to financial fragility and debt crises, particularly if there is over-indulgence in short sales. Jean-Claude Trichet, president of the European Central Bank, has rightly pointed out that 'a bubble is more likely to develop when investors can leverage their positions by investing borrowed funds'.
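To make the leverage point concrete (the numbers below are illustrative, not taken from the text): if assets A are financed with equity E, leverage is L = A/E, and a proportional fall in asset prices of just 1/L is enough to wipe out the equity:

L = \frac{A}{E}, \qquad \text{equity is exhausted when } \frac{\Delta A}{A} = \frac{E}{A} = \frac{1}{L}

For example, $10 of capital supporting $100 of assets gives L = 10, so a 10% decline in asset prices eliminates the capital entirely, whereas the same decline would cost an unleveraged investor only 10% of capital. This is why borrowed-money positions make bubbles, and their unwinding, so much more violent.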
|
<urn:uuid:61d09ef5-28e7-4f76-bb47-ef781bec5a52>
|
CC-MAIN-2013-20
|
http://www.scribd.com/doc/32174515/The-Islamic-Financial-System-as-a-Viable-Alternative-and-Solution-to-Financial-Crises-Presented-by-Caligula68
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.950386
| 419
| 2.578125
| 3
|
Jar is defined as to shock, shake, vibrate or quarrel. (verb)
An example of jar is to jump out from behind a door and surprise someone.
The definition of a jar is a harsh sound, a jolt or a quarrel, or a container made of stone, glass, etc. (noun)
See jar in Webster's New World College Dictionary
Origin: ult. echoic
Origin: ME jarre < Fr jarre < OProv or Sp jarra < Ar jarrah, earthen water container
See jar in American Heritage Dictionary 4
Origin: Middle English jarre, a liquid measure, from Old French (from Provençal jarra) and from Medieval Latin jarra, both from Arabic jarra, earthen jar, from jarra, to draw, pull; see grr in Semitic roots.
verb: jarred, jar·ring, jars; intransitive verb
Origin: Perhaps of imitative origin.
|
<urn:uuid:030d6887-1eda-4a0b-9c20-6814d0b6b7e2>
|
CC-MAIN-2013-20
|
http://www.yourdictionary.com/jar
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.811407
| 219
| 3.125
| 3
|
A major weapon in the fight against foodborne illness has been added to the nation's food safety arsenal. In early December, the Food and Drug Administration (FDA) approved the use of irradiation on beef, pork and lamb.
Pork previously was approved for low-dose irradiation for the elimination of trichinae. (See: "Pork Industry Joins Push For Irradiation," Oct. 15, 1997, National Hog Farmer.)
First step in implementation is for the U.S. Department of Agriculture (USDA) to develop a rule governing the irradiation process, expected before the end of January. A comment period and issuance of final rules will follow. USDA officials estimate the whole process may be completed by mid-1998.
Irradiated Meat In Stores While there is a limited number of irradiation facilities to handle product, there is a good chance of seeing some irradiated ground beef in stores as early as this summer, shortly after the rule becomes effective, remarks Dennis Olson, director of Iowa State University's (ISU) Utilization Center for Agricultural Products.
That's because there is a growing need for ground beef to be irradiated since the government declared E. coli O157:H7 an adulterant, says Olson. That occurred after several people died from eating improperly cooked fast food hamburgers. Since then, there has been a massive product recall and allegations of tainted hamburger product being exported.
Beyond ground beef, before significant amounts of meat products including pork can be irradiated, it is going to take a major investment to build an industry infrastructure, and that could take a couple of years, says Olson.
Petition Process The process all started in 1994 with a petition filed by Isomedix Inc. calling for a mid-dose level of irradiation of red meat products. The firm is about ready to open an irradiation facility in Libertyville, IL. It stands as one of only a handful of available facilities ready to irradiate meat. Olson's Linear Accelerator Facility is the nation's only food irradiator dedicated to research and producing test market products.
Olson projects it will take 30 irradiators to handle demand for red meat products. There are about 60 irradiators in the country now, but most of them are used to irradiate medical supplies and devices; a growing number of plants are also cleansing cosmetic products of bacteria.
Olson projects ground beef patty manufacturers will be the first to build irradiation facilities. A number of facilities may also be built near cold storage meat plants. And we may see product moving to locations to be irradiated, then moved into distribution, he says. Of course, that will add to the cost.
Cost Of Irradiation Olson estimates 1-5 cents/lb. to irradiate meat products, more if transportation costs are involved. At retail, cost of irradiated meats could command a price premium of several cents a pound. "There are a few retail stores that have sold irradiated poultry, and generally it has been in the range of 10 cents a pound more than non-irradiated poultry products," he says.
In retail trials by ISU conducted in Manhattan, KS, irradiated chicken breasts held their own. When priced lower, and at the same price as regular chicken, the irradiated chicken showed a 65% market share, regular chicken a 47% share. Most interesting, says Olson, is how irradiated chicken maintained an 18% market share when priced about 60 cents/lb. higher.
Surveys indicate the number of consumers concerned about irradiated foods has declined in the past 10 years. Marketing of irradiated foods, although limited in the U.S., has been successful, according to Christine Bruhn, Center for Consumer Research, University of California-Davis. And numerous studies show that half or more of consumers express interest in purchasing irradiated meat and poultry products, she says.
Recognized As Safe Food irradiation is widely recognized by numerous scientific bodies in the U.S. and throughout the world as a safe, food treatment technology, according to Beth Lautner, DVM, vice president of Science and Technology, National Pork Producers Council (NPPC). FDA's approval is based on a thorough, scientific review of a substantial number of studies conducted worldwide on the effects of irradiation on a wide variety of meat products, according to Lautner.
Approval of the irradiation petition will allow for a pasteurization effect and extend the shelf life of pork. Approval is for 4.5 kGy (kilograys) of irradiation in fresh meat products, up to 7.0 kGy in frozen products, says Olson.
It's not a panacea. "It is a tool to complement, not replace, responsible practices by farmers, processors and consumers in the handling of meat products," explains Lautner.
Irradiation: First Step? In irradiation, gamma rays from a cobalt source, or accelerated electron beams from machine sources, "charge" the molecules of an object, altering their structure enough that bacterial pathogens cannot multiply and are destroyed.
It's similar to exposure to sunlight or being X-rayed for medical reasons, explains Donald W. Thayer, a research scientist with USDA's Agricultural Research Service, involved in testing irradiation for 16 years.
The new approved level for meat products will easily kill any trichinae and other contaminants, assures Olson.
But it has little effect on the quality of the food itself because there is no cellular activity, says Thayer.
There has been some concern raised regarding irradiation affecting some very sensitive vitamins like B1 in pork.
"But it has been estimated that if all the pork in the United States were to be irradiated, Americans would lose only 3.2% of the vitamin B1 in their diets," says Thayer.
To avoid recontamination, the best time to irradiate meat products is after they are packaged and sealed, says Olson.
Even so, there are still some spoilage organisms left after this pasteurization level of irradiation, meaning meat products still must be refrigerated, he says.
It is possible to go to a high enough dose of irradiation to completely sterilize meat products. "That is done for some meats that are approved for the space program," says Olson.
So in effect, the FDA approval represents a kind of first step in the use of irradiation to protect meat products. He predicts higher doses will be approved in the future to provide even greater protection.
One day, meat products may be irradiated to the point of sterilization, predicts Olson. Then, combined with other processes to enhance the safety and quality of the product, meat could be shelf-stable for 3-4 years.
A video and print materials are available from Iowa State University (ISU) to help consumers better understand the relationship between food irradiation and food safety.
The Irradiation of Meat educational packet (EDC-22) contains a one-hour video and a binder with background materials, an educator's guide, overhead transparencies and an extensive bibliography for $35. The video-only portion (EDC-84) sells for $25.
For more information or to order, contact Extension Distribution Center, Iowa State University, 119 Kooser Drive, Ames, IA 50011-3171, or call 515/294-5247.
|
<urn:uuid:c1a99fc7-9708-4ab6-bdf2-638921c17abf>
|
CC-MAIN-2013-20
|
http://nationalhogfarmer.com/print/mag/farming_irradiation_gains_fda
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955162
| 1,532
| 3.03125
| 3
|
Preimplantation Genetic Diagnosis
The term PGD refers to preimplantation genetic diagnosis. This is in effect checking the embryo for genetic diseases before replacing it into the uterus.
The most common reason for this is that one or both partners are known to carry a genetic trait usually associated with a severe genetic disease. A typical example is a couple who has given birth to a child with such a severe genetic disease. Some such diseases are uniformly fatal, while others are associated with severe disability. In another situation, the couple may know ahead of time, through a screening process, that they are carriers of a genetic trait. The most common severe genetic diseases in the US include sickle cell anemia, cystic fibrosis, Tay Sachs disease, and Huntington's disease.
How is PGD performed?
This process begins with a standard IVF cycle. For full details of this please go to the IVF section of the web site. Briefly, the ovaries are stimulated with medication. The eggs are harvested by ultrasound. Each egg is injected with a single sperm (ICSI). This is done to prevent the embryo from being covered with sperm DNA which can contaminate the embryo biopsy. The fertilized eggs (embryos) are then incubated for 3 days. Some embryos will naturally stop dividing. Others will be healthy and continue to divide. Healthy embryos which are at the 6 - 8 cell stage can then be biopsied. The biopsy technique involves removing carefully one cell and either fixing it to a slide or releasing its DNA for further analysis. The image below is of such a biopsy in progress.
Typically the biopsy is done by the IVF program and the genetic material is sent off to a genetic lab which is frequently in another city. The lab will then try to get results within the next 48 hours. If the results are available in that time frame, we have an opportunity to do a fresh embryo transfer usually 5 or 6 days after egg retrieval. If the results cannot be obtained within this time frame, the embryos can be frozen until the results are known. Then a frozen embryo transfer can be performed.
Is PGD expensive?
PGD is highly complex. It requires two teams of lab staff, a physician, and a genetics expert. It is surprising, given its complexity, that it only adds $4,000.00 to $5,000.00 to the cost of the typical IVF cycle.
Is PGD 100% effective?
It is not. It is a new technique and cannot test for all genetic defects at once. It can typically test for one at a time. It is too early to say it is 100% effective for testing for that one gene. Most recent studies show that it is more than 90% effective for testing for one genetic defect.
How soon can I do PGD?
It typically takes 6 months to develop specific probes for the individual gene. Usually blood has to be initially collected from the parents and tested. The probes are then developed.
What is the pregnancy rate?
The pregnancy rate will vary from 50% for young patients to less than 20% for patients in their forties.
Are there always normal embryos for transfer?
Not always. In most cases some of the embryos will be normal and available for transfer. In some cases all of the embryos can be abnormal and therefore not suitable for transfer.
The following is a highly detailed article about PGD. It is not meant to scare you off! It is presented for patients who would like to know more, and for health care professionals. It is reproduced with permission from Freedom Drug.
Preimplantation Genetic Diagnosis
Gina Paoletti-Falcone, RN, BSN
Freedom Drug Priority Healthcare
The term preimplantation genetic diagnosis, PGD, is actually somewhat self explanatory. It implies that there will be a genetic diagnosis of something before implantation. In this case, that something would be an embryo or the egg that could contribute to the formation of an embryo prior to embryo transfer in an in vitro fertilization, IVF, cycle. PGD is a laboratory technique that combines the use of IVF, often with intracytoplasmic sperm injection ( ICSI), and micromanipulation of eggs or embryos by skilled embryologists to biopsy a cell which subsequently undergoes genetic analysis by one of several techniques. PGD is therefore the earliest prenatal testing available to those trying to conceive who may be at greater risk, for a variety of reasons, of not conceiving at all, conceiving and losing a pregnancy or conceiving a child who will be affected by a number of diseases that have their basis in genetic abnormalities. The results allow decisions to be made regarding which embryo(s) would be suitable for transfer to the uterus following IVF to increase the likelihood of the pregnancy and birth of a healthy child.
Edwards and Gardner performed the first successful embryo biopsy on rabbits to sex blastocysts in 1968. Advances in molecular biology and assisted reproductive technologies led to clinical research throughout the 1980s. In 1990 both Handyside and Verlinsky reported on their techniques for PGD. Handyside biopsied embryos at the cleavage stage for sexing by Y specific DNA amplification in X-linked disorders while Verlinsky tested polar bodies for autosomal recessive disease. The First International Symposium on Preimplantation Genetics was held in Chicago that same year. Today PGD is a clinical option in many countries throughout the world with an estimate of over 1000 healthy children born as a result of this technology that combines assisted reproductive technology, embryology and genetics. PGD has enhanced the specialty of prenatal diagnosis by allowing couples at risk for having a child with a genetic disease to make choices prior to pregnancy rather than being faced with the agonizing decision of terminating the pregnancy of an affected child.
PGD can be used to screen eggs, sperm and embryos for chromosome abnormalities and embryos for single gene disorders, sex and human leukocyte antigen (HLA) matching. It is helpful to review some basic information before discussing each of these applications. Human cells should each contain 46 chromosomes. These chromosomes are string like structures that are found in the nucleus, or cell center. 23 chromosomes come from the egg and the other 23 from the sperm that unite to form the embryo. Chromosomes 1 through 22, largest to smallest, are the same for males and females. The 23rd chromosome determines sex. A female has 2 X chromosomes, inheriting one from her mother and one from her father. A male has 1 X chromosome from his mother and 1 Y chromosome from his father. Chromosomes are made of genes which act as chemical messages that tell cells how to grow and function in the various processes that take place in the human body. There are more than 30,000 different genes and each cell contains a pair of each, one from the mother and one from the father. Genes are made of DNA arranged in a particular sequence that holds the "code" for that particular gene and its function.
There are four types of nucleotides that are the building blocks of nucleic acids. Each nucleotide consists of a 5 carbon sugar (which is deoxyribose in DNA), a phosphate group, and one of the following nitrogen bases:
- A: Adenine
- G: Guanine
- T: Thymine
- C: Cytosine
DNA consists of two strands of these nucleotides, held together at their bases by hydrogen bonds. The bonds form when the two strands run in opposing directions and twist together into a double helix. Two kinds of base pairings form along the length of the molecule: A-T and G-C. This bonding pattern permits variation on the order of the bases in any given strand. Even though all DNA molecules show the same bonding pattern, each species has unique base sequences in its DNA. This molecular constancy and variation among species is the foundation for the unity and diversity of life.
(from Biology The Unity and Diversity of Life 2001).
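As a small, generic illustration of the A-T and G-C pairing rule just described (a sketch added here, not part of the original passage), deriving the partner strand of any sequence is purely mechanical:

# Watson-Crick base pairing: A pairs with T, G pairs with C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    # Return the base-paired partner of each position, read in the same direction.
    return "".join(PAIR[base] for base in strand)

print(complement("GATTACA"))  # CTAATGT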
Disruptions in "normal" structure (code) or number of genes or chromosomes can have consequences. The goal of PGD is to detect these changes prior to embryo transfer and avoid those consequences.
PGD is usually performed on one or two cells that can be obtained in two ways: polar body biopsy of the egg or blastomere biopsy of the embryo. As an egg matures and undergoes meiotic division, it extrudes two polar bodies. The first polar body is a by product of the first meiotic division (prior to fertilization) and the second polar body is a by product of the second division (after fertilization). Fertilization is confirmed by the presence of two pronuclei, about 15-18 hours after insemination with sperm, and the presence of the second polar body in the perivitelline space, the space between the zona pellucida and the cytoplasmic membrane. The most common method for polar body biopsy is to make a slit in the zona pellucida, outer covering of the egg, using a PZD microneedle and aspirate the polar bodies. The disadvantage of polar body biopsy is that it only gives genetic information about the egg and does not allow for testing of the paternal genetic contribution to the embryo. This means that it cannot be used to detect chromosomal abnormalities that occur after fertilization, including translocations that are transmitted paternally, autosomal dominant diseases or sexing of embryos.
Blastomere biopsy is the more widely used method to obtain cells for PGD. It allows testing of both the maternal and paternal genetic contribution to the resulting embryo(s). A blastomere is simply a cell from an embryo. Research established that the 8 cell stage was most suitable for blastomere biopsy, which means performing the biopsy on day 3 after egg retrieval with embryo transfer pushed out to day 5. On day 3 the blastomeres are still totipotent (undifferentiated, with the potential to develop into any type of cell) and have not yet compacted as in the morula stage. Removing a cell or two, therefore, will not affect fetal development but simply delays cell division for a couple of hours, at which point the embryo resumes normal division. The embryo is usually incubated in a calcium- and magnesium-free medium for about 20 minutes prior to biopsy to reduce the adherence of one blastomere to another. The biopsied blastomere must have a visible nucleus present. Before removing the blastomere with the biopsy pipette, an opening is made in the outer covering, the zona pellucida. This is accomplished using either the application of acidic Tyrode's solution, a diode laser or a PZD microneedle.
Once the cell is removed, it must be prepared for one of two techniques used to analyze it. The technique used will be determined, in advance, by the reason for PGD and the test required. FISH, fluorescent in situ hybridization, can be used on both polar bodies and embryos to analyze whole chromosomes while PCR, polymerase chain reaction, is used to analyze genes on embryos. Preparation for FISH requires that the cell be spread on a slide and fixative is applied such that the cytoplasm dissolves leaving just the nuclear chromosomes. Preparation for PCR requires the cell to be placed in a special tiny PCR tube containing a buffer that allows a reaction for replication and amplification of the genetic signal. All embryos in culture dishes, slides for FISH and PCR tubes must be meticulously prepared and labeled so unequivocal matching of each embryo with it's final PGD report is assured.
FISH uses probes, small pieces of DNA, that are a match for the chromosomes that need to be analyzed. Each probe is labeled with a different color fluorescent dye which is then applied to the biopsied cell on the slide. A coverslip is applied and sealed and then the slide is placed on a slide warmer, then in a humidification incubator. Finally, under a fluorescent microscope, each chromosome color can be counted and cells/embryos that are normal (2 of each analyzed chromosome) can be distinguished from those that are not normal.
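A hedged sketch of the signal-counting logic described above (the function name and flagging rule are illustrative, not a lab protocol): for each probed chromosome, two signals is the normal disomic result, one signal suggests monosomy and three suggest trisomy.

def classify_fish_signals(signal_counts):
    # signal_counts maps a probed chromosome to the number of fluorescent signals counted.
    labels = {1: "monosomy", 2: "normal (disomy)", 3: "trisomy"}
    return {chrom: labels.get(n, "unusual count - review")
            for chrom, n in signal_counts.items()}

# Example: an embryo probed for a few commonly tested autosomes
print(classify_fish_signals({"13": 2, "16": 2, "18": 2, "21": 3, "22": 2}))

Sex chromosomes need their own rule (one X and one Y signal is a normal male), and real analysis must also allow for overlapping or split signals, which is one source of the error rate discussed later in this article.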
FISH can be used for:
- Aneuploidy screening in women of advanced maternal age
- Aneuploidy screening for male infertility
- Aneuploidy screening with repeated IVF failure
- Identification of sex in X linked diseases and for non medical reasons
- Recurrent miscarriages caused by parental translocations
HINT TO REMEMBER
"BIG FISH" used to analyze whole chromosomes
Each biopsied cell contains a tiny amount of DNA, which makes up the genes on the chromosomes. It would be very difficult to accurately read this small amount of DNA. PCR allows the amplification of specific DNA sequence(s) by using enzymes that allow it to be copied and multiplied billions of times so that it can be read. PCR consists of 3 steps that are repeated 20-40 times.
- Step 1
Denaturation of the two complementary DNA strands at high temperature. This causes the two strands to unwind and separate into two single strands, each serving as a template to build a new double strand.
- Step 2
Annealing at a lower temperature, which allows primers (short complementary pieces of DNA) to bind at either end of the DNA sequence to be amplified.
- Step 3
Extension allows a heat-resistant DNA polymerase to insert deoxynucleotide building blocks starting at each primer and working inward, thus building two new identical strands.
At the end of this cycle the number of DNA molecules has doubled and the cycle starts again. The mutation or disease being tested for requires the development of a PCR test specific for it. The test development takes time and generally involves blood samples from the couple. Once the DNA has been amplified there are a variety of laboratory techniques to screen that gene for the abnormality such as gel electrophoresis, where a mismatch results in differential migration on the gel, and automated DNA sequencing.
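Since each cycle of denaturation, annealing and extension ideally doubles the number of target molecules, the amplification is exponential. A minimal sketch (idealized 100% efficiency, which real reactions only approach):

def copies_after(cycles, starting_copies=1):
    # Ideal PCR: every cycle doubles the number of target DNA molecules.
    return starting_copies * 2 ** cycles

for n in (10, 20, 30):
    print(f"{n} cycles -> about {copies_after(n):,} copies from a single template")

Thirty cycles already yields on the order of a billion copies, which is why the DNA from a single biopsied cell becomes readable.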
PCR can be used for:
- Single gene defects in autosomal disease
- Single gene defects in male infertility
HINT TO REMEMBER
"Piece C R" used to analyze specific genes (pieces on chromosomes)
PCR for single gene defects requires the use of ICSI, intracytoplasmic sperm injection, to prevent contamination of the biopsied cell with DNA from surplus sperm that may still be embedded in the zona pellucida at the time of blastomere biopsy if conventional IVF drop insemination was used. The cumulus cells attached to the zona can cause similar problems and should be removed prior to blastomere biopsy. The goal is to ensure that pure, high-quality DNA is available for analysis that is not contaminated by another cell or piece of DNA.
Clinically PGD can benefit a variety of patients who undergo assisted reproductive technologies specifically for PGD or are undergoing assisted reproductive technologies to treat infertility with the addition of PGD to enhance their outcome. Aneuploidy, the most common chromosomal abnormality, simply means having an extra chromosome, trisomy, or a missing chromosome, monosomy. If the egg or the sperm that create the embryo has an extra or missing chromosome then that embryo will be affected in the same way. When there are extra or missing large chromosomes the likelihood of implantation decreases and the spontaneous miscarriage rate increases. When chromosomes 13, 18, 21, X or Y are involved, the pregnancy may implant and continue to develop resulting in the birth of a child with a chromosome condition that can include physical differences and intellectual retardation. Trisomy 21 or Down's Syndrome is the most common trisomy. Others include Patau Syndrome (trisomy 13), Edward Syndrome (trisomy 18), Klinefelter Syndrome (47,XXY, an extra sex chromosome) and Turner Syndrome (45,X, a missing sex chromosome). Trisomy 16, 22, 15 and 21 are commonly found in spontaneous miscarriages. The most common aneuploidies in day 3 embryos are 22, 16, 21, 15 and 17.
The chance of aneuploidy increases with increasing maternal age. Since women are born with their lifetime supply of eggs, the thought is that older eggs are more likely to make mistakes as their chromosomes divide resulting in a greater percentage of eggs that have either a missing or extra chromosome. This is likely the explanation for the dramatic decline in pregnancy rates and increase in miscarriage rates for women as they age, even with assisted reproductive technologies. Studies have shown that more than 20% of embryos from women 35-39 and 40-60% of embryos in women 40 and older are aneuploid. Screening for aneuploidy using FISH on polar bodies or blastomeres could therefore potentially increase implantation and pregnancy rates, while decreasing pregnancy loss and the number of pregnancies affected by trisomies or monosomy. Several studies have shown increased implantation rates with aneuploidy screening for 8 chromosomes. While PGD for aneuploidy significantly decreases the risk of having a child affected by a trisomy or monosomy, it is not possible at this time to test all of the chromosomes. The most common chromosomes in which monosomies or trisomies have been seen are tested for: 13, 15, 16, 17, 18, 21, 22 and X, Y. The accuracy of PGD for aneuploidy is about 90%. Misdiagnosis may occur because of mosaicism. This means that some of the blastomeres within the embryo are normal and some are abnormal. If a normal blastomere is biopsied, the result could be the transfer of an embryo that could carry an abnormality. Prenatal testing by either chorionic villus sampling or amniocentesis is currently recommended in any pregnancy after PGD to confirm the diagnosis and rule out any other possible aneuploidies not tested for.
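A back-of-the-envelope way to see what these rates mean for a single cycle (illustrative arithmetic only; it assumes each embryo is aneuploid independently with the same probability p, which is a simplification):

P(\text{at least one normal embryo}) = 1 - p^{\,n}, \qquad E[\text{normal embryos}] = n(1 - p)

With n = 8 biopsied embryos and p = 0.6 (within the range cited above for women 40 and older), roughly n(1-p) ≈ 3 embryos would be expected to test normal, and the chance that none does is 0.6^8 ≈ 1.7%; at p = 0.2 the expected number of normal embryos rises to about 6.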
PGD can also be used to detect translocations, a change in the structure of chromosomes. Individuals who have "balanced" translocations are generally unaffected as there is no extra or missing chromosomal material and the break does not generally disrupt gene function. Typically these people have no medical problems although some have reduced fertility. This is likely due to producing eggs or sperm that are "unbalanced". An "unbalanced" translocation is one in which there is extra or missing chromosomal material. An embryo with an unbalanced translocation is less likely to implant, more likely to miscarry if it does implant, or may result in the live birth of a child who will likely have physical or mental problems. Therefore individuals with translocations are at risk for pregnancy loss or having a child with severe medical handicaps that may be incompatible with life. Reciprocal translocations affect about 1 in 625 people. This type of translocation involves a break anywhere on two different chromosomes allowing pieces to be swapped between them. About 1 in 900 people have a Robertsonian translocation involving chromosomes 13, 14, 15, 21 or 22. These chromosomes have much larger bottom halves which can fuse together. The risk of having children who are normal, balanced or unbalanced, or of recurrent pregnancy loss, is influenced by the chromosome(s) involved and the size of the fragments exchanged.
Polar body biopsy can be used if the woman has a translocation, although blastomere biopsy is more commonly used. FISH analysis is used to identify normal/balanced and unbalanced genotypes. Analysis of embryos from translocation carriers has shown that:
- carriers of reciprocal translocations have a high number of unbalanced embryos
- it may be beneficial to analyze sperm from male translocation carriers before a PGD cycle to determine the percentage of unbalanced sperm and allow for estimates of the percentage of embryos that may be unbalanced and counsel accordingly
- carriers of reciprocal translocations have a higher incidence of mosaic and chaotic embryos than those with Robertsonian translocations
- infertility in translocation carriers may not only be caused by their unbalanced eggs or sperm but also because of the high incidence of aneuploidy involving other chromosomes
- lower pregnancy rates in translocation cases is primarily caused by the low number of normal embryos available for transfer after PGD
- Evisikov et al (2000) showed that an equal number of normal/balanced (32%) and unbalanced (26%) embryos biopsied made it to the blastocyst stage
PGD for translocations significantly decreases the likelihood of having a child with a translocation as it is about 90% accurate. Prenatal testing by either chorionic villus sampling or amniocentesis is recommended to account for the error rate as well as to test for other chromosomal conditions not tested for. PGD significantly reduces the chance of pregnancy loss in patients with translocations. According to Munne, patients with translocations who achieved a pregnancy after PGD had experienced miscarriage in >90% of their previous pregnancies. After PGD, fewer than 10% of pregnancies resulted in a loss. Munne also noted that female translocation patients produced an average of 9.5 mature eggs in comparison to 13 mature eggs in females without translocations. On average 65% of embryos are abnormal and in 22% of cycles there were no normal embryos available for transfer.
In the past, the first indication that many couples had that one or both of them carried a genetic mutation was the birth of a child with a serious medical condition or a history of a relative with a genetic medical condition. Individuals could be tested to see if they "carried" the gene and then counseled as to the odds of having a child with the disease. Prenatal genetic testing by either chorionic villus sampling or amniocentesis then made it possible to diagnose many of these diseases in a fetus during pregnancy. A positive diagnosis placed these couples in the unenviable position of deciding whether or not to continue with the pregnancy or terminate at a point when pregnancy was well established. IVF and micromanipulation for ICSI as well as the Human Genome Project and the development of PCR for DNA amplification have all made detection of many single gene disorders using PGD possible. Single gene disorders are those diseases that are caused by the inheritance of a single defective gene. There are two categories of single gene disorders:
- those that are recessive in which two defective copies of that gene, one from each parent who carries it, is necessary to have the disease
- those that are dominant in which only one copy of the defective gene is necessary in order to be affected
Errors in hundreds of different genes are responsible for hundreds of identified diseases. Many are rare, but some are common enough among certain subgroups of the population that members of those groups should routinely be screened to see if they are carriers, and see a genetic counselor if they are. The following lists single gene disorders that PGD has been used to screen for:
- Alpha and Beta Thalassemia
- HLA genotyping
- Cystic Fibrosis
- Sandhoff Disease
- Sickle Cell Anemia
- Epidermolysis bullosa
- Gaucher Disease
- Adenosine Deaminase deficiency
- Tay Sachs Disease
- Glycogen Storage Disease type IA
- Fanconi Anemia types A, C and G
- Adrenal hyperplasia
- Spinal Muscular Atrophy
- LCHAD
- Neurofibromatosis 1 and 2
- Li-Fraumeni (p53 gene)
- Von Hippel-Lindau
- Myotonic dystrophy
- Huntington's Disease
- Marfan syndrome
- Osteogenesis Imperfecta types I and IV
- Charcot-Marie-Tooth type IA
- APP early onset Alzheimer's
- Polycystic Kidney Disease types 1 and 2
- Multiple Epiphyseal Dysplasia
- Retinitis pigmentosa
- Familial Adenomatous Polyposis (APC gene)
X Linked Diseases
- Ornithine Carbamyl Transferase (OTC) deficiency
- X linked hydrocephalus
- Hemophilia A and B
- Duchenne Muscular Dystrophy
Both ASRM and ACOG have recommended preconception screening for some of the most common single gene disorders such as CF and Tay Sachs in the at-risk population. In order to do PGD, blood samples from the couples may be needed to confirm the particular mutation and the ability to test for it. Reports of genetic testing are also needed to identify the specific mutation. Cystic fibrosis is the most common autosomal recessive disease in Caucasians of European descent. Approximately 1 in 25 carries a defective copy of the gene. Because it is a recessive disease, two copies of the defective gene are necessary, one from each parent, to be affected. One copy of the defective gene makes a "carrier". Two carriers have a 25% chance that their child will be affected, a 50% chance that their child will be a carrier and a 25% chance that the child will not have a copy of the defective gene. There are many possible mutations in the CF gene. The most common is deltaF508. A different mutation causes congenital bilateral absence of the vas deferens, CBAVD, a cause of male infertility. Another common autosomal recessive disease is Tay Sachs. The odds of carrying the Tay Sachs mutation are increased among eastern European Ashkenazi Jews. Approximately 1 in 27 Jews in the US is a carrier. Hemoglobin diseases are the most common single gene disorders overall with sickle cell disease common in African ancestry and beta thalassemia common in Mediterranean countries/ancestry. Each of these diseases has devastating effects on the affected child and is eventually fatal.
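The 25% / 50% / 25% split quoted above follows from enumerating the four equally likely combinations of parental alleles. A small generic sketch (N for the normal allele, d for the defective one; not specific to any particular disease):

from itertools import product
from collections import Counter

def carrier_cross(mother=("N", "d"), father=("N", "d")):
    # Enumerate the four equally likely egg/sperm allele combinations.
    outcomes = Counter()
    for egg, sperm in product(mother, father):
        outcomes["".join(sorted((egg, sperm)))] += 1
    total = sum(outcomes.values())
    return {genotype: count / total for genotype, count in outcomes.items()}

# 'NN' = unaffected non-carrier, 'Nd' = carrier, 'dd' = affected
print(carrier_cross())  # {'NN': 0.25, 'Nd': 0.5, 'dd': 0.25}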
Prior to PGD, families with known histories of these diseases were faced with either not having their own children to avoid transmittal of the disease or taking a chance, undergoing amniocentesis and being faced with the possible choice of terminating an affected pregnancy or having an affected child. PGD has given these couples the option of testing embryos prior to conception which could theoretically eliminate the transmission of some of these diseases to the next generation. Additionally, because of preconception screening, families "at risk" (2 carriers of the CF mutation) will be alerted of their risk before they ever have a family history of the disease.
Huntington's disease is a late onset dominant single gene disorder. Symptoms usually present after the individual has had children and potentially passed on the single defective gene. Because it is dominant, having a parent with Huntington's disease means a 50% chance of inheriting Huntington's disease. Studies show that presymptomatic genetic testing is not something the majority of those at risk choose, yet given the opportunity they would choose to prevent the transmission of that dominant gene to their children. Some of these couples undergo IVF and PGD in a "nondisclosure" cycle meaning that they are given no information about the number of eggs or embryos obtained or the results of PGD in their embryos. They are given no information that would allow them to infer that they have the defective Huntington's gene but would only have an embryo transfer of disease free embryos which could eliminate the disease from the next generation of their family. Despite the relative simplicity of this train of thought, it does raise ethical questions that are difficult to answer.
PGD can also be used to screen embryos as an HLA match for a sibling with a life-threatening disorder. This may be the last resort for families with a child affected by thalassemia, Fanconi's anemia, leukemia and other inherited or sporadic diseases requiring a hematopoietic stem cell transplant. Matched sibling donors are the best candidates, but if none exists, IVF with PGD can provide both screening to prevent the transmission of the disease to another child (if it is an inherited disease) and the HLA-matched sibling to save the life of the existing child using cord blood obtained at birth.
It is apparent then that the following patients are most likely to benefit from PGD and the information it provides:
- Couples with a family or personal history of an inheritable genetic disease
- Carriers of single gene disorders
- Women over 35
- Couples with a prior history of repeated pregnancy losses or pregnancies with chromosome abnormalities
- Carriers of chromosome translocations or abnormalities
- Patients with repeated IVF failure
- Severe male factor
Once the appropriate PGD testing has been done, results are communicated so that decisions about embryo transfer can be made. Embryos will be classified as normal, abnormal or undiagnosed. Because of all the intricate steps involved in both the biopsy and the actual FISH or PCR technologies there can be technical difficulties that result in a "non diagnosis". Reasons for this can include:
- No nucleus in the cell biopsied, therefore no chromosomes
- A slide fixation error such that cells are lost
- Unknown detection failure
- Failure to amplify the gene due to technical problems at the IVF lab or PGD lab, or an embryo with degraded DNA
- Contamination with foreign DNA
Other limitations and challenges to consider are as follows:
- There may be few or no normal embryos available for transfer.
- There are generally no embryos available for cryopreservation, requiring another fresh IVF cycle.
- Cryopreserved biopsied embryos appear to have a lower implantation rate than non-biopsied cryopreserved embryos.
- There is no guarantee of pregnancy, even in otherwise fertile couples, with the transfer of normal, good quality embryos.
- Embryos can only be diagnosed as "normal" for the defect(s) tested.
- There is a very low risk (~0.1%) of damage to the embryo as a result of the biopsy.
- Analysis of a single cell has limitations and an error rate (5-10%) that allows for a small percentage of misdiagnosis. Therefore, if a pregnancy results, prenatal testing in the form of chorionic villus sampling or amniocentesis is still required.
Patients who come to an infertility practice for PGD are very often different from infertility patients. They generally are not infertile and may already have children. They may have a child who is affected by a condition they are trying to prevent in another child. They may or may not have a true understanding of what IVF and PGD entail. They may have no understanding of the time frame involved in a PGD cycle and the many steps involved. They may have no information on the cost or coverage of the PGD cycle. They are generally referred by someone who may or may not have started the educational process of how and why PGD may be beneficial to them.
Infertility patients may also require PGD for reasons identified as part of their infertility workup (both partners identified as carriers of a CF gene mutation) or treatment (multiple failed IVF cycles). Depending on the reason for PGD, the first consult for these patients may be with the reproductive endocrinologist or with the genetics counselor. Additionally, they will need to meet with the financial and nursing staff, and their cycle will also require the involvement of the embryology lab at the practice and a PGD lab.
Every patient needs genetics counseling before their PGD cycle. The genetics counselor can review the genetic basis for the particular clinical situation the patient presents. Discussion may include an overview of the diagnosis, transmission of disorders, likelihood of transmission and ways to test for it. Family and personal medical histories may be discussed and previous genetic testing reviewed. Meeting with the counselor is an integral part of the "informed consent process" for patients undergoing PGD. Genetics counselors are the experts in discussing genetics with patients.
The physician meets with the patients to discuss their clinical situation and the application of IVF with PGD. Risks and benefits of IVF, polar body/blastomere biopsy, FISH or PCR testing as well as the possibilities of no embryos to transfer, pregnancy rates and follow up testing all need to be discussed. Consents for all of the above procedures need to be signed. Very often the PGD testing will be done at a laboratory that is a separate entity from the infertility practice with a separate set of consents to be signed. The physician will need to discuss the most effective method for biopsy and testing with the embryology and PGD lab and clearly document what will be tested, where and how.
Very often the PGD lab is not part of the infertility practice and may even be in another state. The relationship between the practice and the PGD lab needs to be clearly spelled out with defined roles in each entity and a communication plan for the various steps in the process. Financial issues need to be clearly documented so that all parties involved understand the costs and who is responsible for payment and to whom. Some PGD labs provide embryologists who come to the center to perform the actual polar body or blastomere biopsy while other infertility practices have their own embryologists do the biopsy, prepare the cells and ship to the PGD lab for analysis. Patients may have very little interaction with the lab that will do their genetic testing.
Most PGD labs have the final say as to when a patient is clear to start their cycle based on receipt of consents, pretesting and preparation of probes etc., completion of genetic counseling and financial arrangements. Depending on the reason for PGD it may take 8-12 weeks for all of the testing and preparation to be completed. The PGD lab generally needs to be notified of:
Start of stimulation
Anticipated biopsy date
HCG and egg retrieval dates
Number of eggs retrieved
Number of embryos to be biopsied
All information regarding shipment of the specimens, generally by Fed Ex or another predetermined courier
There needs to be a defined plan for communication within the PGD team at the infertility practice. The embryology lab needs to be involved in plans for upcoming PGD cycles including cycle starts and coordination with the PGD lab for egg retrieval and biopsy dates as well as information regarding eggs and embryos, transport of biopsied cells and communication of results and embryo transfer. There needs to be flexibility within the embryology staff as expected egg retrieval dates may change based on response to stimulation. Embryology needs to know how and who to get in touch with at the PGD lab at any time.
Patients need to meet with the financial department at the infertility center to discuss the cost of the procedures they will undergo. Some patients may have coverage for some of the pretesting involved and some for the IVF cycle. Very few patients will have insurance coverage for the actual PGD process, which can cost somewhere between $2,000 and $5,000.
Patients who are planning IVF and PGD can certainly benefit from a consultation with a psychological counselor. They may have issues that need to be discussed in light of their diagnosis and previous experiences. Counselors can help to reinforce the commitment that patients make when planning a PGD cycle in terms of time, money and emotions. Counselors should be available throughout the cycle to help patients cope with the emotional issues treatment can raise.
Nurses play an integral role in the very precise and detail-oriented coordination of PGD cycles. Perhaps the most important word for everyone involved in these cycles to remember is communication. This refers both to the verbal communication that is essential between all the parties involved and to written communication in the form of documentation of all that has been discussed, agreed to and planned. Nurses are pivotal figures in that they generally have the most contact with the patients and are the point person that patients, physicians, embryologists and the PGD lab all look to for assurance that all the appropriate steps have been followed and documented to allow the cycle to proceed successfully. Some might assume that a PGD cycle is simply an IVF cycle with a few additional laboratory procedures in between egg retrieval and embryo transfer. That is a very simplistic and unrealistic assumption for many reasons.
The nursing consult orients patients to the process of IVF and PGD. Very often patients do not expect that they will need the same basic workup (day 3 hormones, infectious disease testing, uterine evaluation, semen analysis) as infertility patients because they don't consider themselves to be infertility patients. They may need additional bloodwork or records of previous genetic testing done in order for the PGD lab to develop testing specific for their clinical situation. Much of the infertility nurses' role is patient education. Despite the fact that these patients have generally met with various other members of the "PGD team" and been counseled and consented, it is very often the nurse who answers the questions that remain unasked or unanswered. The nurse fills in all the details of the journey from point A to point B in the process of IVF and PGD. Medications are discussed and ordered, the stimulation process and protocol are outlined, monitoring is arranged and the expected time table is covered. It is essential that the nurse has a reasonable understanding of polar body biopsy, blastomere biopsy, FISH and PCR so that they can be explained in terms that patients can understand. Nurses need open communication with the physician regarding the clinical plan for each patient. Some practices may designate specific nurses to handle PGD patients in the same way that there are usually specific donor egg nurses.
Most patients are anxious to get started and may be overwhelmed and disappointed when they realize all that needs to be done before they can go ahead with the cycle. The nurse reassures and coordinates the various steps. The nurse is, in some respects, the gatekeeper who ensures that all the i's are dotted and t's crossed so that the patient fulfills all the obligations necessary to get the go-ahead from the PGD lab to start their cycle. As the gatekeeper, the nurse is very often the key communicator between the physician, embryology lab, PGD lab and the patients.
It takes expertise, cooperation, organization, communication and documentation on everyone's part to make a successful PGD program. It takes empathy, compassion and patience to care for the people who can benefit from these technologies. Defined roles, team meetings and ongoing evaluation of results can help to keep everyone on the same page.
A PGD program can raise issues that may require ethical consideration and discussion. Professor Robert Edwards eloquently summarizes some of these moral issues that PGD forces us to consider:
"A constant worry is the oft repeated charge that these techniques introduce eugenics to human populations rather than helping to avoid inherited diseases in fetuses. Great care is essential to avoid any impression that averting genetic disease in embryos casts any reflection of the value and equality of the handicapped in a modern society. And a final challenge to the democracy of science is that the rich will benefit most from these new advances because health authorities in many countries still crassly decline to fund IVF and PGD despite their overwhelming advantages to so many couples. All these issues have stemmed from the belief that the social advantages of trying to avert genetic disease in children far outweigh the cost of their technologies. There is no doubt that preimplantation genetic diagnosis and other means of averting or alleviating serious inherited disease are bound to offer ever widening opportunities while demanding the closest of ethical attention."
"An Atlas of Preimplantation Genetic Diagnosis"
Verlinsky and Kuliev Parthenon Publishing 2000
The Genetics and Public Policy Center, www.dnapolicy.org, released the results of its public opinion survey on genetic testing on February 18, 2005. This is believed to be the largest public opinion survey ever conducted on the topic and was funded by The Pew Charitable Trusts. It included 21 focus groups, 62 in-depth interviews, and two surveys with a combined sample size of more than 6,000 people, as well as both in-person and online town hall meetings. The report states that:
"A majority of Americans believes it is appropriate to use reproductive genetic testing to avoid having a child with a life-threatening disease, or to test embryos to see if they will be a good match to provide cells to help a sick sibling. However, most Americans believe it would be wrong to use genetic testing to select the sex or other non-health related, genetic characteristics of a child. Focus groups and town hall meetings revealed that Americans don't fear technology per se, but rather fear that unrestrained human selfishness and vanity will drive people to use reproductive genetic testing inappropriately such as to select for non-medical but socially desirable characteristics."
According to the report, Americans "fear a world in which children are expected to be perfect, and parents are expected to do everything possible to prevent children with genetic disease from being born. For many participants, these technologies raise concerns about how society might treat individuals with disabilities in a world where the birth of disabled persons might be preventable, and where the cost of testing and treatment might lead to disparities in who can afford them."
A majority of those surveyed also "wants and expects oversight to ensure safety, accuracy and quality of reproductive genetic testing" but 70 percent of respondents are also "concerned about government regulators invading private reproductive decisions". Only 38% "support the idea of the government regulating PGD based on ethics and morality."
1. Verlinsky, Y. and Kuliev, A. "An Atlas of Preimplantation Genetic Diagnosis", Parthenon Publishing, 2000.
2. Verlinsky, Y. et al. "Over a decade of experience with preimplantation genetic diagnosis: a multicenter report", Fertility & Sterility, August 2004, Vol 82, No 2, pp. 292-294.
3. Robertson, J. "Embryo screening for tissue matching", Fertility & Sterility, August 2004, Vol 82, No 2, pp. 290-291.
4. Marik, J. "Preimplantation Genetic Diagnosis", eMedicine.com, January 14, 2005.
5. Cunningham, D. "PGD and the Embryology Lab (what the heck are they doing in there?)", PowerPoint presentation and inservice for the New England Nurses in Reproductive Medicine, February 2004.
6. Keller, M. "Preimplantation Genetic Diagnosis", PowerPoint presentation and inservice for the New England Nurses in Reproductive Medicine, February 2004.
7. Sermon, K. "Current concepts in preimplantation genetic diagnosis (PGD): a molecular biologist's view", Human Reproduction Update, Vol 8, No 1, pp. 11-20, 2002.
8. www.reprogenetics.com, accessed 1/14/05.
9. www.givf.com, accessed 1/24/05.
10. Bielorai, B. et al. "Successful umbilical cord blood transplantation for Fanconi anemia using preimplantation genetic diagnosis for HLA match donor", American Journal of Hematology, Dec 2004; 77(4):397-9. Accessed on PubMed 1/17/05.
11. Kahraman, S. et al. "Clinical aspects of preimplantation genetic diagnosis for single gene disorders combined with HLA typing", Reprod Biomed Online, 2004 Nov; 9(5):529-32. Accessed on PubMed 1/17/05.
12. Ferraretti, A.P. et al. "Prognostic role of preimplantation genetic diagnosis for aneuploidy in assisted reproductive technology outcome", Human Reproduction, 2004 March; 19(3):694-9. Accessed on PubMed 1/17/05.
13. Gianaroli, L. et al. "Preimplantation diagnosis for aneuploidies in patients undergoing in vitro fertilization with a poor prognosis: identification of the categories for which it should be proposed", Fertility & Sterility, Nov 1999, Vol 72, pp. 837-844.
14. Kahraman, S. et al. "The results of aneuploidy screening in 276 couples undergoing assisted reproductive techniques", Prenatal Diagnosis, April 2004; 24(4):307-11. Accessed on PubMed 1/17/05.
15. www.rscbayarea.com, accessed 2/4/05.
16. www.infertilitydoctor.com, accessed 2/4/05.
17. www.sbivf.com, accessed 1/17/05.
18. Biology: The Unity and Diversity of Life, Ninth Edition, 2001, Brooks/Cole Thomson Learning Publishers.
19. www.dnapolicy.org, accessed 2/18/05.
1. Preimplantation genetic diagnosis testing must always be done in conjunction with an
Answer is A
2. Polar body biopsy involves the removal of one or two polar bodies from:
A. an oocyte
B. a day 1 embryo
C. a day 3 embryo
D. a blastocyst
Answer is A
3. Polar body biopsy tests for
A. paternal genetic contribution
B. maternal genetic contribution
C. both paternal and maternal genetic contribution
D. sex of the embryo
Answer is B
4. Blastomere biopsy is usually done:
A. as soon as fertilization is confirmed.
B. after ICSI insemination with sperm.
C. on day 3 after egg retrieval when there are generally 8 cells.
D. on day 5 at the blastocyst stage.
Answer is C
5. FISH involves the use of:
A. probes which are small pieces of DNA.
B. a fluorescent microscope to count the chromosomes analyzed.
C. microscope slides and coverslips.
D. all of the above.
6. FISH can be used to test for aneuploidy of polar bodies, sperm or embryos.
Answer is A
7. Polymerase chain reaction allows for:
A. multiplication of chromosomes.
B. insertion of new genes to replace defective genes.
C. amplification of specific DNA sequences
D. removal of defective genes from embryos so they can be transferred.
Answer is C
8. PGD for aneuploidy:
A. uses fluorescent probes to identify the number of specific chromosomes being tested.
B. cannot test for every chromosome simultaneously at the present time.
C. may help to increase implantation rates in patients with repeated IVF failure.
D. all of the above.
Answer is D
9. PGD eliminates the need for either chorionic villus sampling or amniocentesis.
Answer is B
10. Each embryo that has undergone a blastomere biopsy will have a definitive
Answer is B
Antifreeze Poisoning in Dogs & Cats (Ethylene Glycol Poisoning)
During certain times of the year (such as summer and winter), dogs and cats are more exposed to antifreeze. Untreated, antifreeze poisoning can be fatal to pets. Prompt, immediate treatment is necessary in order to save a dog or cat’s life from poisoning.
Sources of antifreeze:
The primary dangerous source of antifreeze is automotive radiator coolant. This typically contains a high concentration of ethylene glycol (EG), often at 95-100%. Other sources of antifreeze include windshield deicing agents, brake fluid, motor oil, developing solutions for hobby photographers, wood stains, solvents, and paints. Here in Minnesota, a lot of people put antifreeze into their cabin’s toilet to prevent it from freezing during the winter, and we see a lot of toxicities here at Pet Poison Helpline from dogs running into cabins and drinking out of the toilet. Finally, there are rumors of small amounts of antifreeze in holiday ornaments such as imported snow globes. Recently, some were found to contain antifreeze (ethylene glycol) in the liquid. If a snow globe falls off the table and cracks open, and your dog or cat licks up the contents, there is a risk of antifreeze poisoning.
Mechanism of action:
Ethylene glycol, the primary ingredient in antifreeze, is metabolized by the body to highly poisonous metabolites which lead to severe, acute kidney failure and secondary development of calcium oxalate crystals forming in the kidneys.
Common signs of poisoning:
There are three stages seen with ethylene glycol poisoning:
- Stage 1: This occurs within 30 minutes to 12 hours, and looks similar to alcohol poisoning. Signs of walking drunk, drooling/hypersalivation, vomiting, seizures, and excessive thirst and urination are seen.
- Stage 2: This occurs 12-24 hours after a dog or cat has gotten into antifreeze, when the signs of “alcohol” poisoning appear to resolve while severe internal damage is still occurring. Signs of drunkenness seem to improve, but an elevated heart rate, increased breathing effort, and dehydration may start to develop.
- Stage 3: In cats, this stage occurs 12-24 hours after getting into antifreeze. In dogs, this stage occurs 36-72 hours after getting into antifreeze. During this stage, severe kidney failure is developing secondary to calcium oxalate crystals forming in the kidneys. Severe lethargy, coma, depression, vomiting, seizures, drooling, and inappetence may be seen.
Antidote and treatment:
There are only two antidotes for antifreeze poisoning: either ethanol or 4-MP (fomepizole). For the antidote to be effective, cats must be treated within 3 hours of ingesting antifreeze, while dogs must be treated within 8-12 hours of ingestion. Delayed treatment is often not effective, and once a dog or cat has developed kidney failure, the prognosis is poor.
As little as one teaspoon of antifreeze when ingested by a cat or a tablespoon or two for a dog (depending on their size), can be fatal. If you think your dog or cat has gotten into antifreeze, it is very important that you seek veterinary care immediately for blood testing for antifreeze poisoning (including an ethylene glycol test and venous blood gas test).
Published on February 28, 2011
Categorized under: Pet Safety Tips
concatenating strings to int... or something :)
None at none.none
Sun Jul 25 15:27:56 CEST 2004
(Simple test code below)
What I am trying to do is print:
you win $50
What I am getting is:
you win $ 50, with a space between the $ and 50.
I tried: print arf + barf
but it tells me you can't concatenate strings to numbers.
I have read the tutorial (and probably missed the answer).
Could someone please tell me how to get rid of that space?
mon = int(raw_input("get number"))
barf = 2 * mon
arf = "you win $"
print arf, barf
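A minimal sketch of ways to remove the space, assuming Python 2 (matching the raw_input and print-statement syntax in the code above); the values are repeated here only so the snippet runs on its own:

arf = "you win $"
barf = 50

# "print arf, barf" inserts a space because the comma form of print
# separates its arguments with a single space.

print arf + str(barf)       # convert the int to a str before concatenating -> you win $50
print "%s%d" % (arf, barf)  # old-style string formatting -> you win $50
print "you win $%d" % barf  # or keep the dollar sign in the format string itself

Any of these prints the amount with no space after the dollar sign.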
Steven Phillips
Information Science Division
1-1-4 Umezono, Tsukuba, 305, Japan
Graeme S. Halford
Department of Psychology
The University of Queensland
Brisbane, 4072, Australia
• Human cognitive behaviour is grouped on the basis of common structure (e.g., from above, it is not the case that one can do the first inference, but not the second).
• Classical architectures capture this grouping of behaviours by positing structure sensitive processes.
• Connectionist architectures, by specifying context-sensitive (structure insensitive) processes, distribute behaviour irrespective of structure.
• Therefore, classical (symbol) systems are a better explanation for cognitive architecture, although connectionist architectures may provide suitable implementations of classical ones.
At issue here is not whether an architecture can ultimately exhibit all the observed stimulus-response behaviours, but how these behaviours are distributed over their available resources (e.g., learning trials). For example, an architecture based on simple associations requires two association steps (e.g., 1: A→B; 2: B→A) to support a bidirectional link between events A and B. By contrast, a relation based architecture only requires one step (e.g., R(A,B)), since bi(omni)directionality is built into relational operators ( Phillips, Halford, & Wilson, 1995). The two architectures, although supporting the same functionality, distribute that functionality differently. The relevant difference is that there are states of associative based architectures for which representations of events are accessible in one direction, but not the other (e.g., after step 1, but before step 2). If one only ever observes bidirectional behaviour then such observations would be support for the relation based architecture, and not the association based architecture, although the former could be implemented by the latter1.
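A toy sketch (not from the paper) of the contrast drawn above, using plain Python containers; the function name and data layout are invented purely for illustration:

# Association-based store: each direction is a separate learning step.
assoc = {}
assoc["A"] = "B"   # step 1: A -> B. At this point B -> A is not yet retrievable.
assoc["B"] = "A"   # step 2: B -> A must be added explicitly.

# Relation-based store: one assertion R(A, B) supports retrieval in both directions.
relations = {("R", "A", "B")}

def related(x):
    # Return everything standing in relation R to x, in either role.
    out = set()
    for _, a, b in relations:
        if a == x:
            out.add(b)
        if b == x:
            out.add(a)
    return out

print(related("A"))  # {'B'}
print(related("B"))  # {'A'} -- available after only one assertion

The point is not the implementation but the distribution of behaviour: the associative store has intermediate states (after step 1, before step 2) in which retrieval works in one direction only, whereas the relational store does not.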
Clearly, then, the root of the systematicity argument over cognitive architecture rests on the degree to which human cognition is systematic. Fodor and Pylyshyn take systematicity to be self-evident. Without recourse to specific data they claim, for example, that one can make inferences of the form P → Q, P ⊢ Q, if and only if one can make inferences of the form Q → P, Q ⊢ P. Subsequently, Hadley (1994) characterized systematicity as generalization to novel syntactic position, based on a review of language learning. Researchers have demonstrated networks supporting this definition of systematicity to various degrees (Christiansen & Chater, 1994; Hadley & Hayward, 1994; Niklasson & van Gelder, 1994; Phillips, 1994). However, others question whether the empirical evidence supports this definition either way, given the difficulty of controlling subjects' background knowledge and observing what knowledge they have acquired in the course of an experiment. Furthermore, ...
Publication information: Book title: Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society. Contributors: Michael G. Shafto - Editor, Pat Langley - Editor. Publisher: Lawrence Erlbaum Associates. Place of publication: Mahwah, NJ. Publication year: 1997. Page number: 614.
Apr. 15, 2002 CHICAGO --- Some inhibitors of angiogenesis prevent new blood vessel growth by triggering a built-in "failsafe" device in vessel-forming endothelial cells that marks them for apoptosis, or programmed cell death, according to a study from The Feinberg School of Medicine at Northwestern University and Washington University at St. Louis.
By identifying the molecular mechanisms that control this failsafe device, it may be possible to design new anti-angiogenic drugs or to improve already existing drugs to prevent abnormal blood vessel growth, says Olga Volpert, assistant professor of urology at the Feinberg School and lead author the study, which appeared in the April issue of the journal Nature Medicine.
Angiogenesis, or aberrant growth of new blood vessels, enables cancerous tumors to spread through the body and also causes diabetic retinopathy and macular degeneration, the leading causes of blindness in the Western world.
Research has shown that new blood vessel growth relies on an exquisite balance of proteins that either induce or inhibit new growth of the endothelial cells that form the walls of new blood vessels. Identifying the components that influence this balance thus has major scientific relevance for understanding angiogenesis-dependent diseases and for developing therapies to prevent neovascularization.
When certain natural inhibitors are administered as drugs against angiogenesis-dependent diseases like cancer and diabetic retinopathy, they selectively destroy only newly formed vessels, not preexisting ones -- for reasons that were unclear until now.
In the study, endothelial cells activated by an inducer expressed a cell surface protein receptor called Fas, which made the cells sensitive to the inhibitors in their environment. The inhibitors, thrombospondin-1 (TSP1) or pigment epithelial-derived factor (PEDF), activated its ligand, another cell surface protein called FasL -- which fits into the Fas receptor like a key in a lock -- initiating a molecular cascade in the cell that resulted in cell death.
These results indicate that the angiogenesis-inhibiting activity of TSP1 and PEDF was dependent on the dual induction of Fas and FasL as well as on the resulting apoptosis. It has been known for some time that Fas/FasL interactions target immune cells for destruction in immune-privileged and diseased tissues when large populations of cells are to be eliminated. The results of the current study show that these interactions also affect the fate of vascular tissues where new vessels are subject for destruction by inhibitors of angiogenesis.
The researchers also showed that TSP1 and PEDF reduced the expression of the inducer-stimulated molecule that blocks cell death. This unexpected cooperation between pro- and anti-angiogenic factors may have major implications on the therapeutic use of these two inhibitors. Fas and its ligand may serve as new targets to design anti-angiogenic drugs or to improve already existing drugs.
"The data provide an unexpected explanation for the specificity of inhibitors for activated, remodeling endothelium, thus clarifying why they can be used so effectively without side effects," Volpert said. "The data also offer new means to enhance the efficacy of these inhibitors and predict synergies between various inhibitors and between inhibitors and conventional therapies."
Co-authors from the Feinberg School were Noel P. Bouck, professor emeritus of microbiology-immunology, Tetiana Zaichuk, Wei Zhou, Frank Reiher and Mohammed Amin. Bouck, Zaichuk, Zhou and Reiher are researchers at The Robert H. Lurie Comprehensive Cancer Center of Northwestern University. Washington University co-authors were Thomas A. Ferguson and Patrick Stewart, department of ophthalmology and visual sciences.
Dissociative amnesia is one of a group of conditions called dissociative disorders. Dissociative disorders are mental illnesses that involve disruptions or breakdowns of memory, consciousness, awareness, identity, and/or perception. When one or more of these functions is disrupted, symptoms can result. These symptoms can interfere with a person's general functioning, including social and work activities, and relationships.
Dissociative amnesia occurs when a person blocks out certain information, usually associated with a stressful or traumatic event, leaving him or her unable to remember important personal information. With this disorder, the degree of memory loss goes beyond normal forgetfulness and includes gaps in memory for long periods of time or of memories involving the traumatic event.
Dissociative amnesia is not the same as simple amnesia, which involves a loss of information from memory, usually as the result of disease or injury to the brain. With dissociative amnesia, the memories still exist but are deeply buried within the person's mind and cannot be recalled. However, the memories might resurface on their own or after being triggered by something in the person's surroundings.
What Causes Dissociative Amnesia?
Dissociative amnesia has been linked to overwhelming stress, which might be the result of traumatic events -- such as war, abuse, accidents, or disasters -- that the person has experienced or witnessed. There also might be a genetic link to the development of dissociative disorders, including dissociative amnesia, because people with these disorders sometimes have close relatives who have had similar conditions.
Who Develops Dissociative Amnesia?
Dissociative amnesia is more common in women than in men. The frequency of dissociative amnesia tends to increase during stressful or traumatic periods, such as during wartime or after a natural disaster.
What Are the Symptoms of Dissociative Amnesia?
The primary symptom of dissociative amnesia is the sudden inability to remember past experiences or personal information. Some people with this disorder also might appear confused and suffer from depression and/or anxiety.
How Is Dissociative Amnesia Diagnosed?
If symptoms of dissociative amnesia are present, the doctor will begin an evaluation by performing a complete medical history and physical exam. Although there are no lab tests to specifically diagnose dissociative disorders, the doctor might use various diagnostic tests, such as neuroimaging, electroencephalograms (EEGs), or blood tests, to rule out neurological or other illnesses or medication side effects as the cause of the symptoms. Certain conditions, including brain diseases, head injuries, drug and alcohol intoxication, and sleep deprivation, can lead to symptoms similar to those of dissociative disorders, including amnesia.
If no physical illness is found, the person might be referred to a psychiatrist or psychologist, health care professionals who are specially trained to diagnose and treat mental illnesses. Psychiatrists and psychologists use specially designed interview and assessment tools to evaluate a person for a dissociative disorder.
Cooperate with a school far away and measure the size of our planet! The method is the same as that used in Egypt almost 2500 years ago. The observations can be done outdoor, or indoor if the Sun is shining into a class room.
In this exercise you have to register. Then, you need to select another school to cooperate with. This school should be as far away from you (to the south or to the north) as possible.
The classes (or even whole schools if the schools choose to do this together) must then agree on a date to make the exercise. It must be done at a very specific time on that date (read more below).
Find an even area where you can see the Sun. Use the spirit level to check that the area is absolutely in level.
Put one end of the pipe on the ground and lift the other end towards the Sun. Avoid looking at the Sun! You will see a shadow on the ground cast by the pipe. Try to make the shadow as small as possible. When it is completely gone, the sunlight penetrates the pipe.
When the sunlight reaches the ground through the pipe, you must keep the pipe absolutely steady. Use the square to measure the angle between the pipe and the ground. When the Sun is exactly in the south (you or your teacher will have found the time for this in advance), read the angle and make a record of it. Be as accurate as possible!
You should practice how to set up the experiment and how to read the angle before the day you have agreed on. This is important so that you are able to read the angle accurately on exactly the right time. During the exercise the weather must be good enough that the Sun is visible on both locations.
How can we use these observations to calculate the circumference of the Earth?
When we know the distance d between two schools - situated at approximately the same longitude - and the angle C, we may calculate the circumference o of the Earth: the ratio of d to the angle C is equal to the ratio between the circumference o of the Earth and the 360 degrees of a full circle, that is, d / C = o / 360°.
It is not always that easy to find a school that is situated due north or south. We must therefore adapt the method a little by taking d to be the distance between the latitudes of the schools. d can be calculated on this webpage. Please make sure that you use the same longitude for both schools.
Then we only need angle C. But that is just what the observations from the different locations give us.
Then you will find the circumference of the Earth using this expression: o = 360° × d / C.
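A minimal sketch of the calculation in Python, assuming each school records the Sun's angle above the ground (in degrees) at local solar noon and that the schools lie on roughly the same longitude; the numbers in the example are invented, not real measurements:

def earth_circumference_km(angle_school_1, angle_school_2, distance_km):
    # angle_school_1, angle_school_2: the Sun's angle above the ground (degrees)
    # measured with the pipe at the same moment at the two schools.
    # distance_km: north-south distance d between the schools' latitudes.
    c = abs(angle_school_1 - angle_school_2)  # the angle C in the text
    return 360.0 * distance_km / c            # o = 360 * d / C

# Example with made-up readings: a 5 degree difference over 555 km
# gives roughly 40,000 km, close to the true circumference.
print(earth_circumference_km(52.0, 47.0, 555.0))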
What is the size of our planet? In order to answer that question one must know that the Earth is a globe (almost spherical). During a total eclipse of the Moon this is evident. Before and after the total phase, the edge of the shadow of the Earth is seen - it is circular!
The size of the Earth was determined already more than 2000 years ago. This is quite impressive, and it was due to a clever observation. The director of the famous library in Alexandria in Egypt, the geographer Eratosthenes (about 276-195 BC), made this historic measurement.
Eratosthenes had learned a fascinating fact about the city of Syene in southern Egypt, not far from Aswan: When the Sun was at its highest in the sky in this city on the longest day of the year (today we call it the summer solstice), the Sun did not cast any shadows. Actually, the Sun shone into deep wells. The story says that nobody dared to stare into the wells since they could be blinded by the intense light reflected by the water deep down.
In Syene the Sun therefore had to be in the zenith at this time. In Alexandria, further north in Egypt, Eratosthenes knew from his own experience that the shadows at the same time was equivalent to the Sun being one 50th of a circle away from zenith.
The distance from Syene to Alexandria had to be one 50th of the circumference of the Earth.
He could not measure the distance between the cities directly, but estimates of people travelling between the cities and the time they spent, gave a distance of about 5000 stadions. The circumference of the Earth therefore had to be 250 000 stadions, and the diameter about one third of this.
Today we don't know the exact length of one stadion, but many sources claim that 250 000 stadions correspond to 39 900 kilometers. This would be astoundingly close to the correct value, which is 40 074 kilometers!
Anyway, the method is correct and can be applied even today to estimate the size of our Earth.
Exercise 2: Measure the distance to the the Sun
- Method A: Timing Venus' entrance and departure on solar disc
- Method B: Drawing Venus' path across the solar disc
On www.astroevents.no there is more information about the Transit of Venus and other celestial events.
Symptoms of Brain Tumor
All forms of brain tumor are serious, but if someone is diagnosed with a brain tumor, it is better for the tumor to be benign rather than metastatic (cancerous), because that means it has not resulted in a cancerous growth. Unlike other symptoms of brain tumor (which include headaches, seizures, weakness, and personality changes), benign brain tumor symptoms are far less serious, considering the other possibilities. These sorts of brain tumors are composed of a smaller group of cells that do not follow the normal patterns of cell division and growth. They develop into a mass of cells that does not take on the characteristic appearance of a cancer. Patients might find out about this benign diagnosis when undergoing a CT or MRI scan. (CT stands for Computerized Tomography, often referred to as a CAT scan, an X-ray procedure, http://www.medicinenet.com/cat_scan/article.htm; MRI stands for Magnetic Resonance Imaging, a radiology procedure that uses a computer to produce images of body structures, http://www.medicinenet.com/mri_scan/article.htm.) On the other hand, this kind of tumor should not be downplayed; it should be treated with extreme care and caution.
Though a benign brain tumor doesn’t usually develop into something cancerous, it can still happen. If someone has a family history of cancer, they should be diligent about getting frequent to regular medical checkups. The same holds true if they are often exposed to radiation or chemicals, such as formaldehyde.
There are numerous symptoms of brain tumor, such as changes in any of the five senses (sight, smell, taste, hearing, touch), a change in how they feel pain, pressure, or temperature, and loss of control over coordination, balance, or bodily functions. Those individuals with symptoms of brain tumor will often have difficulty doing things that used to be simple, such as walking, talking and retaining information from someone when they are speaking.
Brain tumor symptoms in teenagers are something to be aware of; tumors and cancers are no respecters of age. In fact, brain tumors are more common in young children and older adults than in the ages in between. So, though brain tumor symptoms in teenagers may not be any different from the ones that adults experience, their occurrence does indicate something important: the symptoms are probably more aggressive than usual. (Since brain tumors are far rarer in teens, symptoms should be taken that much more seriously.)
Other symptoms of brain tumor:
- Memory loss, confusion
- Muscle weakness in the face, arm, or leg (usually on one side)
- Changes in alertness
- Changes in behavior, mood, emotions, personality
- Eye abnormalities: eyelid drooping, different sized pupils, uncontrollable movements
Individuals should not immediately jump to conclusions if they display one or several symptoms of brain tumor, whether these are brain tumor symptoms in teenagers or symptoms in adults. If possible, consult a surgeon or medical doctor who is an expert in this area. There are surgeries, treatments and procedures available for certain brain tumors. For instance, there is radiation therapy and chemotherapy, as well as some procedures that will reduce brain swelling, pressure, and seizures. Pain medication and antacids are available to help deal with some symptoms, too.
Be sure to refer to several resources when conducting research for yourself or a loved one, in addition to getting regular medical checkups.
- Japanese, and most other, nuclear plants are designed to withstand earthquakes, and in the event of major earth movement, to shut down safely.
- In 1995, the closest nuclear power plants, some 110 km north of Kobe, were unaffected by the severe Kobe-Osaka earthquake, but in 2004, 2005, 2007, 2009 and 2011 Japanese reactors shut down automatically due to ground acceleration exceeding their trip settings.
- In 1999, three nuclear reactors shut down automatically during the devastating Taiwan earthquake, and were restarted two days later.
- In March 2011 eleven operating nuclear power plants shut down automatically during the major earthquake. Three of these, at Fukushima Daiichi, subsequently suffered an INES Level 7 accident due to loss of power leading to loss of cooling and subsequent radioactive releases.
Nuclear facilities are designed so that earthquakes and other external events will not jeopardise the safety of the plant. In France for instance, nuclear plants are designed to withstand an earthquake twice as strong as the 1000-year event calculated for each site. It is estimated that, worldwide, 20% of nuclear reactors are operating in areas of significant seismic activity. The International Atomic Energy Agency (IAEA) has a Safety Guide on Seismic Risks for Nuclear Power Plants. Various systems are used in planning, including Probabilistic Seismic Hazard Assessment (PSHA), which is recommended by IAEA and widely accepted.
Because of the frequency and magnitude of earthquakes in Japan, particular attention is paid to seismic issues in the siting, design and construction of nuclear power plants. The seismic design of such plants is based on criteria far more stringent than those applying to non-nuclear facilities. Power reactors are also built on hard rock foundations (not sediments) to minimise seismic shaking.
Japanese nuclear power plants are designed to withstand specified earthquake intensities evident in ground motion. These used to be specified as S1 and S2, but now simply Ss, in Gal units. The plants are fitted with seismic detectors. If these register ground motions of a set level (formerly 90% of S1, but at Fukushima only 135 Gal), systems will be activated to automatically bring the plant to an immediate safe shutdown. The logarithmic Richter magnitude scale (or more precisely the Moment Magnitude Scale more generally used today) measures the overall energy released in an earthquake, and there is not always a good correlation between that and intensity (ground motion) in a particular place. Japan has a seismic intensity scale in shindo units 0 to 7, with weak/strong divisions at levels 5 & 6, hence ten levels. This describes the surface intensity at particular places, rather than the magnitude of the earthquake itself.
Japan’s revised Regulatory Guide for Reviewing Seismic Design of Nuclear Power Reactor Facilities in September 2006 increased the Ss figure to be equivalent to an earthquake of 6.7 on the Richter or Moment Magnitude scale directly under the reactor – a factor of 1.5 (up from magnitude 6.5). PGA or Design Basis Earthquake Ground Motion is measured in Galileo units – Gal (cm/s2) – or g, the force of gravity, one g being 980 Gal.
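A small illustrative sketch, in Python, of the unit arithmetic and trip logic described above; it is not from the source, and the 135 Gal trip setting, 600 Gal design basis and 550 Gal recorded value are simply the Fukushima figures quoted elsewhere in this article:

ONE_G_IN_GAL = 980.0  # 1 g = 980 Gal, as stated above

def gal_to_g(pga_gal):
    # Convert a peak ground acceleration from Gal (cm/s2) to g.
    return pga_gal / ONE_G_IN_GAL

def plant_trips(recorded_pga_gal, trip_setting_gal):
    # A reactor shuts down automatically once recorded PGA reaches its trip setting.
    return recorded_pga_gal >= trip_setting_gal

print(gal_to_g(550.0))            # about 0.56 g
print(plant_trips(550.0, 135.0))  # True: well above the 135 Gal trip setting
print(550.0 <= 600.0)             # True: still below the 600 Gal design basis Ss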
The former design basis earthquake ground motion or peak ground acceleration (PGA) level S1 was defined as the largest earthquake which can reasonably be expected to occur at the site of a nuclear power plant, based on the known seismicity of the area and local active faults. A power reactor could continue to operate safely during an S1 level earthquake, though in practice reactors are set to trip at lower levels. If it did shut down, a reactor would be expected to restart soon after an S1 event. The revised seismic regulations released in May 2007 increased the S1 figure to be equivalent to 6.7 on the logarithmic Richter scale – a factor of 1.5 (up from 6.5).
Larger earthquake ground motions in the region, considering the tectonic structures and other factors, must also be taken into account, although their probability is very low. The largest conceivable such ground motion was the upper limit design basis extreme earthquake ground motion (PGA) S2, generally assuming a magnitude 6.5 earthquake directly under the reactor. The plant’s safety systems would be effective during an S2 level earthquake to ensure safe shutdown without release of radioactivity, though extensive inspection would be required before restart. In particular, the reactor pressure vessel, control rods and drive system, and reactor containment should suffer no damage at all.
After the magnitude 7.2 Kobe earthquake in 1995 the safety of nuclear facilities in Japan was reviewed along with the design guidelines for their construction. The Japanese Nuclear Safety Commission (NSC) then approved new standards. Building and road construction standards were also thoroughly reviewed at this time. After recalculating the seismic design criteria required for a nuclear power plant to survive near the epicentre of a large earthquake the NSC concluded that under current guidelines such a plant could survive a quake of magnitude 7.75. The Kobe earthquake was 7.2.
PGA has long been considered an unsatisfactory indicator of damage to structures, and some seismologists are proposing to replace it with Cumulative Absolute Velocity (CAV) as a more useful measure, since it brings in displacement and duration.
Japan’s Rokkasho reprocessing plant and associated facilities are built on stable rock and are designed to withstand an earthquake of magnitude 8.25 there.
Following a magnitude 7.3 earthquake in 2000 in an area where no geological fault was known, Japan’s NSC ordered a full review of the country’s seismic guidelines (which had been adopted by the NSC in 1981 and partially revised in 2001) in the light of newly accumulated knowledge on seismology and earthquake engineering and advanced technologies of seismic design. The new Regulatory Guide for Reviewing Seismic Design of Nuclear Power Reactor Facilities was published in September 2006 and resulted in NSC and the Nuclear & Industrial Safety Agency (NISA) calling for reactor owners with NISA to undertake plant-specific reviews of seismic safety, to be completed in 2008.
The main result of this review was that the S1 – S2 system was formally replaced by NSC in September 2006 with a single Design Basis Earthquake Ground Motion (DBGM Ss), still measured in Gal. The Guide states that the main reactor facilities “shall maintain their safety functions under the seismic force caused by DBGM Ss.” They and ancillary facilities should also withstand the “seismic force loading of those caused by Elastically Dynamic Design Earthquake Ground Motion Sd (EDGM Sd)” calculated from stress analysis and being at least half the Ss figure.
In March 2008 Tepco upgraded its estimates of likely Design Basis Earthquake Ground Motion Ss for Fukushima to 600 Gal, and other operators have adopted the same figure. (The magnitude 9.0 Tohoku-Taiheiyou-Oki earthquake in March 2011 did not exceed this at Fukushima.) In October 2008 Tepco accepted 1000 Gal (1.02g) DBGM as the new Ss design basis for Kashiwazaki Kariwa, following the July 2007 earthquake there, and Chubu accepted the same for Hamaoka.
Japanese nuclear plants such as Hamaoka near Tokai are in regions where earthquakes of up to magnitude 8.5 may be expected. In fact the Tokai region has been racked by very major earthquakes about every 150 years, and it is 155 years since the last big one. Chubu’s Hamaoka reactors were designed to withstand such an anticipated Tokai earthquake and had design basis S1 of 450 Gal and S2 of 600 Gal. Units 3 & 4 were originally designed for 600 Gal, but the Ss standard established in September 2007 required 800 Gal. Since then units 3-5 have been upgraded to the new Ss standard of 1000 Gal. In August 2009 a magnitude 6.5 earthquake nearby automatically shut down Hamaoka 4 & 5, with ground motion of 426 Gal being recorded at unit 5. Some ancillary equipment was damaged and reactors 3 and 4 were restarted after checking. Restart of unit 5 was repeatedly deferred as the company analysed why such high seismic acceleration was recorded on it, coupled with some planned maintenance being undertaken during the shutdown. It restarted in January 2011.
Hamaoka units 1 & 2 had been shut down since 2001 and 2004 respectively, pending seismic upgrading – they were originally designed to withstand only 450 Gal. In December 2008 the company decided to write them off and build a new reactor to replace them. Modifying the two 1970s units to new seismic standards would have cost about US$ 3.3 billion and been uneconomic, so Chubu opted for a US$ 1.7 billion write-down instead.
Early in 2010 Japan’s METI confirmed that the seismic safety of the Monju fast reactor was adequate under new standards requiring Ss of 760 Gal PGA. Assessments were carried out in conjunction with Kansai’s Mihama plant and JAPC’s Tsuruga plant, both nearby.
South Korea’s new APR-1400 reactor is designed to withstand 300 Gal seismic acceleration. The older OPR is designed for 200 Gal but is being upgraded to at least 300 Gal so as to be offered to Turkey and Jordan.
In the USA the Diablo Canyon plant is designed for a 735 Gal peak ground acceleration and the San Onofre plant is designed for a 657 Gal peak ground acceleration. On the east coast, North Anna shut down in August 2011 during a 5.8 magnitude earthquake with epicenter 20 km away when the ground acceleration reached 255 Gal, against design basis of 176 Gal. Subsequent inspections evaluated the plant based on NRC’s Regulatory Guide: Restart Of A Nuclear Power Plant Shut Down By A Seismic Event, adopted in March 1997. It was the first US nuclear plant ever to be shut down by an earthquake.
Japan 1995 – Kobe
Newspaper coverage of the magnitude 7.2 Kobe earthquake which devastated Kobe and the surrounding region on 17 January 1995 raised concerns about the safety of nuclear power plants in the affected area. Horizontal ground acceleration was measured at 817 Gal – more intense than expected – and vertical acceleration was 332 Gal.
In fact none of the power reactors within 200 km of the earthquake epicentre sustained any damage and those running at the time continued to operate at capacity. Takahama and Ohi are located approximately 130 km from the epicentre of the earthquake, on the Pacific Ocean side of the Island of Honshu. Mihama is approximately 180 km away. The research reactors in the region, in Osaka and Kyoto, were also reported to be unaffected by the earthquake.
Taiwan 1999 – Chichi
The shallow magnitude 7.6 earthquake in central Taiwan on 21 September 1999 killed thousands of people. It caused three reactors at Chinshan and Kuosheng in the north of the island to shut down automatically. They were cleared to restart two days later. A fourth reactor there was being refuelled. The two reactors at Maanshan in the south continued operating, but reduced power later due to damage to distribution facilities. A major concern following the earthquake was how quickly power could be restored to industry.
Japan 2005 – Miyagi
On 16 August 2005 Tohoku’s three Onagawa reactors shut down automatically when a magnitude 7.2 earthquake hit northeast Honshu. They were set to trip at 200 Gal, against an S1 design basis of 250 Gal (which was reached) and S2 PGA of 350-400 Gal. No damage occurred in any major part of the plant.
Onagawa-2 restarted in January 2006 after comprehensive checks and confirming that an S2 figure of 580 Gal would be safe for that unit (equivalent to magnitude 8.2). Geotechnical analysis and safety evaluation proceeded under NISA, which approved a report from the company. Unit 3 restarted in March 2006, and the smaller unit 1 restarted in May 2007.
Japan 2007 – Niigataken Chuetsu-Oki
On 16 July 2007 the magnitude 6.8 Niigata Chuetsu-Oki earthquake occurred with epicentre only 16 km from Tepco’s Kashiwazaki Kariwa 7965 MWe nuclear power plant. Local geological factors contributed to a magnification of the seismic intensity at the plant. The plant’s seismometers measured PGA of 332 to 680 Gal, the S1 design bases for different units being 170 to 270 Gal and the S2 figure on actual bedrock was 450 Gal. The peak ground acceleration thus exceeded the S1 design values in all units – hence the need to shut down, and the S2 values in units 1, 2 and 4. Four reactors shut down automatically at the pre-set level of 120 Gal, another three were not operating at the time. All the functions of shutdown and cooling worked as designed.
While there were many incidents on site due to the earthquake, none threatened safety and the main reactor and turbine units were structurally unaffected, despite ground accelerations being up to three times the design basis. Analysis of primary cooling water confirmed that there was no damage to the fuel in reactor cores. However, the plant remained closed until full investigation was complete and safety confirmed, about mid 2008. It appears that the four older units may have been more vulnerable than units 5-7 which are located 1.5 km further away.
The Ministry of Economy Trade & Industry (METI) then set up a 20-member Chuetsu Investigation and Countermeasures Committee to investigate the specific impact of this earthquake on the power station, and in the light of this to identify what government and utilities must address to ensure nuclear plant safety. It acknowledged that the government was responsible for approving construction of the first Kashiwazaki Kariwa units in the 1970s very close to what is now perceived to be a geological fault line. NISA invited the International Atomic Energy Agency to join it, the Nuclear Safety Commission and Tepco in reviewing the situation. A report was presented to the IAEA Senior Regulators’ Meeting in September 2007, and a further IAEA visit was made early in 2008.
NISA released its assessment of the safety significance of earthquake damage in November. The worst of the damage rated zero on the International Nuclear Event Scale (INES), having no safety significance. Other damage was deemed not relevant to nuclear safety. The seven main reactor units themselves were still being checked, but appeared undamaged. In May 2008 Tepco adopted a new standard of 2280 Gal (2.33g) maximum design basis seismic motion for Kashiwazaki Kariwa units 1-4, over five times the previous S2 figure, and 1156 Gal (1.18g) for units 5-7, in the light of local geological factors. This standard will be reviewed by NISA and NSC. Meanwhile construction works will be undertaken to bring all units up to be able to withstand a quake producing PGA of 1000 Gal.
Tepco posted a loss of JPY 150 billion (US$ 1.68 billion) for FY2007 (to 31/3/08) due to the prolonged closure of the plant, followed by JPY 109 billion loss in the first half of FY2008. While no damage to the actual reactors has been found, detailed checks continue, and upgrading of earthquake resistance is required. Major civil engineering works are also required before the reactors resume operation. Overall, the FY2007 impact of the earthquake was projected to be JPY 603.5 billion ($5.62 billion), three quarters of that being increased fuel costs to replace the 8000 MWe of lost capacity. NISA approved the utility’s new seismic estimates in November 2008, and conducted final safety reviews of the units as they were upgraded. Unit 7 restarted in May, unit 6 in August 2009, unit 1 in May 2010, and unit 5 in November 2010. Units 2, 3, & 4 remain shut down.
Japan March 2011 – Tohoku-Taiheiyou-Oki, or Great East Japan Earthquake
The magnitude 9.0 Tohoku-Taiheiyou-Oki earthquake at 2.46 pm on 11 March did considerable damage, and the 14-metre tsunami it created caused even more. It appears to have been a double quake giving a severe duration of about 3 minutes, and was centred 130 km offshore of the city of Sendai in Miyagi prefecture on the eastern coast of Honshu Island. It moved Honshu 4 metres east and apparently caused the nearby coastline to subside by half a metre. Eleven reactors at four nuclear power plants in the region were operating at the time and all shut down automatically when the quake hit. Power was available to run the cooling pumps at most of the units, and they achieved cold shutdown in a few days. However, at Tepco’s Fukushima Daiichi plant, a major accident sequence commenced. The three operating reactors there were shut down by the earthquake and the emergency diesel generators started as expected, but then they shut down an hour later when submerged by the tsunami. Other systems proved inadequate and led the authorities to order, and subsequently extend, an evacuation while engineers worked to restore power. About nine hours later mobile power supply units had reached the plant and were being connected. Meanwhile units 1-3 had only battery power, insufficient to drive the cooling pumps.
The operating units which shut down were Tepco’s Fukushima Daiichi 1, 2, 3, Fukushima Daini 1, 2, 3, 4, Tohoku’s Onagawa 1, 2, 3, and Japco’s Tokai. Onagawa 1 briefly suffered a fire in the non-nuclear turbine building, but the main problem centred on Fukushima Daiichi units 1-3. First, pressure inside the containment structures increased steadily and led to this being vented to the atmosphere on an ongoing basis. Vented gases and vapour included hydrogen, produced by the exothermic interaction of the fuel’s very hot zirconium cladding with water. Later on the 12th, there was a hydrogen explosion in the building above the unit 1 reactor containment, and another one two days later in unit 3, from the venting as hydrogen mixed with air. Then on the 15th, unit 2 ruptured its pressure suppression chamber under the actual reactor, releasing significant radioactivity. Inside, water levels had dropped, exposing fuel, and this was addressed by pumping seawater into the reactor pressure vessels.
Then a separate set of problems arose as the spent fuel ponds in the upper part of the reactor structures were found to be depleted in water. In unit 4, the fuel got hot enough to form hydrogen, and another hydrogen explosion destroyed the top of the building and further damaged unit 3's superstructure. The focus since then has been on replenishing the water in the ponds of units 3 and 4 through the gaps in the roof and cladding. Unit 4 was undergoing maintenance, and all of its 548 fuel assemblies were in that pond, along with other used fuel – 1,535 assemblies in total – giving it a heat load of about 3 MW thermal, according to France's IRSN. Unit 3's pool contained 566 fuel assemblies.
Japan's Nuclear & Industrial Safety Agency initially rated the Fukushima accident as Level 5 on the INES scale – an accident with wider consequences, the same level as Three Mile Island in 1979 – but after new estimates of the radioactive releases in the first few days of the accident, NISA reclassified it as Level 7, while making it clear that the releases were about one tenth of Chernobyl's. The design basis acceleration for both Fukushima plants had been upgraded in 2008, and is now quoted at horizontal 441-489 Gal for Daiichi and 415-434 Gal for Daini. The interim recorded data for both plants shows that 550 Gal was the maximum for Daiichi, in the foundation of unit 2 (other figures 281-548 Gal), and 254 Gal was the maximum for Daini. Units 2, 3 and 5 exceeded their maximum response acceleration design basis in the E-W direction by about 20%. Recording was over 130-150 seconds. (Ground acceleration was around 2000 Gal a few kilometres north, on sediments.)
Earthquakes have previously occurred in the vicinity of a number of Japanese and other power reactors without adverse effect.
An earthquake registering 6.2 on the Richter scale occurred offshore from Fukushima in northern Japan on 13 June 2010. At the nearest coastal cities it registered 5 on the Japanese shindo scale. The nearest nuclear power plants – Fukushima I & II and Onagawa, 13 reactors in all – were unaffected. The horizontal ground acceleration reached 60 Gal at the reactor building base mats at Fukushima I.
In the two decades to 2004, no Japanese reactor had been tripped by its seismic detectors. In the cases where a plant did shut down automatically ("tripped") as a safety precaution, it was because of the earthquake's impact on the operating characteristics of the plant.
In November 1993, a magnitude 5.8 earthquake in northeast Honshu produced a ground acceleration of 121 Gal at Tohoku's Onagawa-1 power reactor (497 MWe, BWR), located 30 km from the epicentre. The design conditions for the S1 and S2 events at the site were 250 and 375 Gal respectively, and the reactor was set to trip at a measured peak ground acceleration (PGA) of 200 Gal. In fact it tripped at a lower level due to variations in the neutron flux outside the set parameters.
In May 2003 a magnitude 7.1 earthquake further from the same Onagawa plant produced ground acceleration of 225 Gal which tripped unit 3 (units 1 & 2 were not operating).
In October 2004 a magnitude 6.8 earthquake in Niigata Prefecture, 250 km north of Tokyo, had no effect on the nearby Kashiwazaki Kariwa nuclear plant, but a magnitude 5.2 quake there two weeks later caused one of the reactors – unit 7 – to trip.
In March 2005 a magnitude 7.0 earthquake in northern Kyushu did not affect the nearby Genkai and Sendai nuclear plants, nor Shimane and Ikata.
The magnitude 7.8 earthquake off the coast of Hokkaido in July 1993 had no effect on nuclear facilities. Tomari 1 and 2 reactors (550 MWe, PWRs), located 95 km from the epicentre, continued normal operation.
In December 1994, a magnitude 7.5 earthquake struck northern Japan but caused no damage to the 11 boiling water reactors or the nuclear fuel facilities in the vicinity. All operated normally.
Reactors of both western and Soviet design have been subjected to major seismic activity in North America and Europe without damage. California’s power reactors, San Onofre 2 and 3 (1,070 and 1,080 MWe, PWRs) and Diablo Canyon 1 and 2 (1,073 MWe and 1,087 MWe, PWRs) continued to operate normally during the 6.6 magnitude earthquake in January 1994. San Onofre, the closer station, was about 112 km from the epicentre.
In December 1988, a magnitude 6.9 earthquake, resulting in the deaths of at least 25,000 people, occurred in northwestern Armenia. It was felt at the two-unit Armenian nuclear power station located approximately 75 km south of the epicentre, but both Soviet-designed PWRs operated normally and no damage was reported. This was the first Russian nuclear power plant specifically adapted for seismic areas, and it started operating in 1976.
In May 2008 a magnitude 7.9 earthquake affected southwestern Sichuan province in central China. The main nuclear facilities affected were military ones, apparently without any radioactive releases. About 250 km from the epicentre the Yibin fuel fabrication plant which produces both power reactor and research reactor fuel assemblies was undamaged. China’s power reactors were all at least 900 km from the epicentre.
Large undersea earthquakes often cause tsunamis – pressure waves which travel very rapidly across oceans, become massive waves over ten metres high when they reach shallow water, and can then wash well inland. The December 2004 tsunamis following a magnitude 9 earthquake in Indonesia reached the west coast of India and affected the Kalpakkam nuclear power plant near Madras/Chennai. When very abnormal water levels were detected in the cooling water intake, the plant shut down automatically. It was restarted six days later.
Fukushima Daiichi and Daini nuclear power plants were affected by a major tsunami in March 2011. The design basis tsunami height was 5.7 m for Daiichi and 5.2 m for Daini, though the Daiichi plant was built about 10 metres above sea level and Daini 13 metres above. Tsunami heights coming ashore were more than 14 metres for both plants, and the Daiichi turbine halls were under some 5 metres of seawater until levels subsided. The maximum amplitude of this tsunami was 23 metres at point of origin, about 160 km from Fukushima. In the last century there have been eight tsunamis in the region with maximum amplitudes at origin above 10 metres (some much more), these having arisen from earthquakes of magnitude 7.7 to 8.4, on average one every 12 years. Those in 1983 and in 1993 were the most recent affecting Japan, with maximum heights at origin of 14.5 metres and 31 metres respectively, both induced by magnitude 7.7 earthquakes.
Even for a nuclear plant situated very close to sea level, the robust sealed containment structure around the reactor itself can prevent any damage to the nuclear part from a tsunami, though other parts of the plant might be damaged. At Fukushima, the turbine halls contained both the backup diesel generators and much of the electrical switchgear, which proved fatal for the Daiichi 1-3 reactors.
Sources: paper originally prepared by Nuclear Services Section, External Affairs, ANSTO; Nuclear Safety Commission, Sept 2006, Regulatory Guide for Reviewing Seismic Design of Nuclear Power Reactor Facilities.
The Three Kingdoms era refers to a historical period in China that lasted from 220 to 280 AD. It saw the end of the Han Dynasty and the division of the land by civil conflict. Out of the warring regional lords, three eventually gained enough power to proclaim themselves emperors of the land.
- The kingdom of Shu Han, led by the Liu family (Liu Bei's branch)
- The kingdom of Cao Wei, led by the Cao family
- The kingdom of Eastern Wu, led by the Sun family
Technically, none of them ruled any sort of kingdom, since their lands were actually individual empires. However, the term "kingdoms" has been popularized in several translations. In Romance of the Three Kingdoms, the concept was devised by Zhuge Liang on Liu Bei's third visit, but this is not the case in history. Historical records suggest that Zhou Yu and Lu Su devised the concept of two kingdoms.
Koei has the following IPs and franchises use this era as their main setting:
- Romance of the Three Kingdoms series including its spin-offs in the Eiketsuden series
- Dynasty Warriors series
- Kessen II
- Dynasty Tactics
- Dynasty Tactics 2
Arguably, the beginning of each country's military affairs came when the Yellow Turban Rebellion broke out in 184. The weakened Han empire was being overrun by the rebels until He Jin organized an army to oppose them. Even after the rebellion was dispelled, the Han continued to be plagued by natural disasters and ill-timed tragedies within the court. He Jin supposedly wanted to sustain the dynasty when Emperor Ling passed away, leaving a dispute between his two heirs, Liu Bian and Liu Xie. With Yuan Shao's support, he planned to remove the scheming eunuchs from the capital, but was assassinated before his plans could come to fruition.
Yuan Shao and a handful of men duly murdered the eunuchs afterwards, but their actions prompted Dong Zhuo to march on the capital. His capable military leadership drove the land into civil conflict. He manipulated the succession and placed Liu Xie on the throne as Emperor Xian, deposing Emperor Ling's immediate successor, Liu Bian. The coalition against Dong Zhuo then argued among themselves over whether to install a new emperor, the kindly warlord Liu Yu, as a replacement.
Dong Zhuo now held the court, yet lacked the political etiquette and tact to sustain the land for long. His rude manners and brutality instigated several revolts against him, led by Yuan Shao and Sun Jian. The emperor and Dong Zhuo escaped to Chang'an, though the warlord wasn't killed until his subordinates, Wang Yun and Lu Bu, mutinied against him in 192. Because of Wang Yun's actions, however, four of Dong Zhuo's former generals – Li Jue, Guo Si, Zhang Ji and Niu Fu – were able to slay Wang Yun, repel Lu Bu, and take the emperor hostage, causing further calamity at court.
Acting under the Han, Cao Cao continued his campaign against Tao Qian, who was supported by Liu Bei and Gongsun Zan. He was foiled by Lu Bu attacking his base in Yan province and was forced to retreat. Tao Qian soon passed away, and his province of Xu then came under Liu Bei's rule.
In 195, Lu Bu was defeated and fled to Liu Bei for safety.
Meanwhile, in the south, Sun Ce succeeded his father and served for a time under Yuan Shu. His quick and successful conquests earned him a grand reputation with his lord. Yuan Shu, confident in his subordinates, declared himself emperor in 197. Sun Ce did not agree with the decision, abandoned him, and negotiated with Cao Cao to destroy him. With Liu Bei and Lu Bu, the coalition surrounded Yuan Shu and forced him to flee. After this conflict, Lu Bu betrayed his benefactor, gaining control of Xu, and attempted to seal an alliance with Yuan Shu. Liu Bei fled to Cao Cao for safety and the two allied to besiege Lu Bu's base, Xiapi. Due to betrayal among his officers, Lu Bu was defeated and executed.
Rise to Power
In 200, Dong Cheng, an officer of the court and imperial in-law, received a secret edict from the emperor to assassinate Cao Cao. He collaborated with Liu Bei and others on this effort, but Cao Cao soon found out about the plot and had the conspirators executed. Only Liu Bei survived, fleeing to Yuan Shao in the north. After settling the nearby provinces and internal affairs with the court, Cao Cao turned his attention north. Yuan Shao, who came from higher nobility than Cao Cao, amassed a large army and camped along the northern bank of the Yellow River. In the same year, Sun Ce was fatally wounded and named Sun Quan as his successor.
Following months of planning, Cao Cao and Yuan Shao met in force at Guandu. Overcoming Yuan's superior numbers, Cao Cao decisively defeated him by setting fire to his supplies, and in doing so crippled the northern army. Liu Bei fled to Liu Biao of Jing province and many of Yuan Shao's forces were destroyed. In 202, Cao Cao took advantage of Yuan Shao's death and the resulting division among his sons to advance north of the Yellow River. He captured Ye in 204 and occupied the provinces of Ji, Bing, Qing and You. By the end of 207, Cao Cao had achieved undisputed dominance of the northern plains of China.
In 208, Cao Cao marched south with his army hoping to quickly unify the empire. He was able to capture a sizable fleet at Jiangling when Liu Biao's son, Liu Cong, surrendered to him. Even against the huge army, Sun Quan continued to resist. His advisor Lu Su, along with Liu Bei's Zhuge Liang, secured an alliance between their lords, and Sun Ce's close friend Zhou Yu was placed in command of Sun Quan's navy alongside a veteran officer of the Sun family, Cheng Pu. Their combined armies of 50,000 met Cao Cao's fleet and 200,000-strong force at Chibi that winter. After an initial skirmish, a fire attack on Cao Cao's fleet led to his decisive defeat, forcing him to retreat in disarray back to the north. The allied victory ensured the survival of Liu Bei and Sun Quan, and provided security for the future state of Wu and for Liu Bei's conquests.
After his return to the north, Cao Cao contented himself with absorbing the northwestern regions in 211 and consolidating his power. He progressively increased his titles and power, eventually becoming the King of Wei in 217, a title bestowed upon him by the puppet Han emperor that he controlled.
Liu Bei, having defeated the weak Jing warlords Han Xuan, Jin Xuan, Zhao Fan, and Liu Du, entered the western Yi province and in 214 displaced Liu Zhang as its ruler, leaving his commander Guan Yu in charge of Jing province. Sun Quan, who had in the intervening years been engaged in defending against Cao Cao in the southeast at Hefei, now turned his attention to Jing province and the middle Yangzi. Tensions between the allies were increasingly visible. In 219, after Liu Bei successfully seized Hanzhong from Cao Cao and while Guan Yu was engaged in the siege of Fan Castle, Sun Quan's commander-in-chief Lu Meng secretly seized Jing province, and his forces captured and slew Guan Yu.
In the first month of 220, Cao Cao died, and in the tenth month Emperor Xian abdicated his throne to Cao Pi. Many sources assume that the heir forced the emperor to forsake his reign, thus ending the Han Dynasty. Cao Pi named his state Wei and made himself emperor at Luoyang.
Liu Bei named himself Emperor of Han a year later in an attempt to restore the fallen Han dynasty. In the same year, Wei bestowed on Sun Quan the title of King of Wu. In 221, Shu Han troops declared war on Wu and met the Wu armies at the Battle of Yiling. At Yiling, Liu Bei was disastrously defeated by Sun Quan's commander Lu Xun and forced to retreat back to Shu. He died soon after and was succeeded by his son, Liu Shan. Shu and Wu resumed friendly relations at the expense of Wei. In 222, Sun Quan renounced his recognition of Cao Pi's regime and, in 229, he declared himself emperor at Wuchang. Wei presided over the north, Wu predominantly over the southeast, and Shu over the southwest. Each territory seemed naturally divided, based on evidence regarding each kingdom's man-made trade routes.
Shu and Wu focused on suppressing rebellions in the south caused by indigenous tribes who weren't part of the then borders of China. Wu forced the surrender of the Yue tribe while Shu subdued the Nanman tribe on an expedition led by Zhuge Liang. However, this was no longer a main concern once Zhuge Liang launched a northern campaign against Wei in 227. He stationed the main army in Hanzhong and planned to break through to Chang'an and Luoyang. In the next seven years, Zhuge Liang made an estimated five attempts yet ultimately failed to meet his goal due to limited food supplies. During his final campaign, he passed away on the Wuzhang Plains, yet news of his death allowed Shu's safe escape since Sima Yi was hesitant to pursue.
During Shu's northern campaigns, Wu was constantly attacked by Wei at Hefei. Thanks to its natural defenses and the Yue troops, however, Wu's defenses held solid. Its well-defended lands helped technology and the arts in the south to flourish.
Fall of Shu
Shu weakened after the defeat of Zhuge Liang at the Wuzhang Plains. Some decades later, his eventual replacement, Jiang Wei, continued the expeditions, attempting attacks on Wei several more times. Shu's supplies were rapidly strained by these attempts. Taking advantage, the de facto ruler of Wei, Sima Zhao, sent Zhong Hui and Deng Ai to invade Shu. Zhong Hui held off Jiang Wei, while Deng Ai sneaked around to besiege the capital, Chengdu, receiving the surrender of Liu Shan in 263. After the surrender, Jiang Wei and Zhong Hui rebelled, but they and Deng Ai were killed. Liu Shan lived on, even into the time of Jin, until a natural death in 271.
Fall of Wei
After Wuzhang, Sima Yi seized power and became regent, permanently entrenching the Sima family in Wei's government. After Sima Yi and then Sima Shi, his eldest son, passed away, Sima Zhao took the position of regent. During Sima Zhao's regency, Wei ended Shu. Sima Zhao passed away in 265 and his son Sima Yan took the position of regent. Also in 265, Cao Huan stepped down and Sima Yan became emperor, replacing the Wei Dynasty with the Jin Dynasty.
Fall of Wu
Sun Quan stopped attacking Wei after he was defeated at Hefei Castle, and Wu fell into a steady decline. The first problem came after the death of Sun Quan, when his sons began fighting over who would become emperor. Sun Liang became emperor and corruption spread throughout the Wu court. Years later Sun Hao became emperor, promising that Wu would become a great dynasty and that he would remove corruption throughout Wu's territory, but he proved a tyrannical ruler and Wu grew even more corrupt. Starting in 279, years after Jin's establishment, Sima Yan invaded Wu. In 280, Sun Hao surrendered himself to Sima Yan and the land was finally united under the Jin Dynasty. The era of the Three Kingdoms had ended.
Misnomer of the century: "the Greenhouse Effect"
No one disputes that, right? It's established science, yes?
One thing we can get out of the way immediately is that it doesn’t work in the same way as a greenhouse. There used to be a theory, dating back to Joseph Fourier in 1824, that visible radiation could enter through the transparent glass, but because glass is opaque to infrared, when it is re-emitted it gets trapped. Fourier proposed that gases in the atmosphere could act the same way. This theory was proved wrong for actual greenhouses in 1909 by Professor Wood of Johns Hopkins University. An experiment comparing a pane of glass to a pane of crystallised rock salt (sodium chloride), which is totally transparent to infrared, found no difference in temperature. In fact, greenhouses work by preventing convection, a mechanism that is of course impossible for freely floating CO2.
The above paragraph is taken from PaAnnoyed's superb post at Counting Cats, which helpfully clarifies the physics of the so-called Greenhouse Effect: it's worth reading the whole post, but I'll present a quick summary.
Some of you might remember that, a few weeks ago, I published a piece pointing out that the approximate mass of Earth's atmosphere is...
... about five quadrillion (5x10^15) tonnes, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface.
This is, you will not be surprised to know, because I was researching the greenhouse gas effect myself: alas, a lack of time meant that I hadn't got around to writing the post—and now I have no need to do so.
So, if the Greenhouse Effect has nothing to do with greenhouses, then why is the Earth warmer than it should be? And why is the mass of the atmosphere relevant? Simples...
What keeps the layer at 10 km so cold? –54 C is far below the –24 C we expect on energy-balance grounds, so it can’t be by radiating to space. And the fact that there is a straight line all the way down to the ground suggests that whatever the mechanism is, it’s the same one that keeps the surface at +14 C. Straight lines don’t happen by accident.
I won’t keep you in suspense any longer. The answer is pressure. Because of the weight of air, the pressure at the surface is greater than it is higher up. This means that if air moves up and down, the pressure changes, and the air expands or is compressed. And when air is compressed its temperature increases.
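As a back-of-envelope check on why the mass of the atmosphere matters here, the weight of those roughly five quadrillion tonnes of air spread over the Earth's surface should reproduce the familiar sea-level pressure of about 101 kPa. A rough Python sketch of my own, using round-number constants rather than anything from PaAnnoyed's post:

```python
import math

M_ATM = 5.1e18      # kg: ~5 quadrillion tonnes, as quoted above (rounded)
G = 9.81            # m/s^2, assumed
R_EARTH = 6.371e6   # m, assumed mean Earth radius

area = 4 * math.pi * R_EARTH ** 2        # ~5.1e14 m^2 of surface
surface_pressure = M_ATM * G / area      # weight of the air column per unit area, Pa

print(f"Estimated surface pressure: {surface_pressure / 1000:.0f} kPa")
# Prints roughly 98 kPa, close to the measured ~101 kPa sea-level value; the gap
# is just rounding in the mass figure and the neglect of topography.
```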
Air is driven to circulate up and down by convection. As it rises, it expands and its temperature drops. As it descends, it is compressed and its temperature rises. This maintains a constant temperature gradient of about 6 C/km. (It would be bigger, but evaporation of water carries heat upwards too, which somewhat counteracts the effect.)
No heat passes in to or out of the air to effect this change. It is solely an effect of the changing pressure. (If you really want to know, the compression does ‘work’ on the gas, which increases its internal energy. It doesn’t come from any flow of heat or radiation.)
This temperature gradient is called the adiabatic lapse rate, and is an absolutely standard bit of physics.
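To make the lapse-rate argument concrete, here is a small sketch that applies a constant gradient to a surface temperature of +14 C. The ~6 C/km figure is the one quoted above; the quoted –24 C at about 4 km and –54 C at 10 km imply a somewhat steeper effective gradient, so treat the exact numbers as illustrative only:

```python
SURFACE_TEMP_C = 14.0        # surface temperature quoted above
LAPSE_RATE_C_PER_KM = 6.0    # gradient quoted above; the dry adiabatic value g/c_p is ~9.8 C/km

def temperature_at(altitude_km: float) -> float:
    """Temperature in deg C at a given altitude, assuming a constant lapse rate."""
    return SURFACE_TEMP_C - LAPSE_RATE_C_PER_KM * altitude_km

for h_km in (0, 4, 10):
    print(f"{h_km:>2} km: {temperature_at(h_km):6.1f} C")
# A straight-line profile that cools with height purely because rising air expands
# and descending air is compressed -- nothing needs to be "trapped" to produce it.
```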
Is that all clear? Good. Now, let's move onto the second part...
When we look at the Earth in infrared wavelengths, we see it merrily glowing away, like a coal ember, radiating all the heat it has absorbed from the sun. But unlike the view in visible light, where we can clearly see the surface, in infrared the atmosphere is fuzzy and opaque. It is full of water vapour, and a few other trace gases, that fog our view of the surface. And so when we ask what temperature the surface of the Earth should radiate at, the surface we see isn’t solid ground, but this fuzzy layer high up in the air. And therefore, it is this surface that settles down to –24 C, to radiate exactly the right amount of heat away.
It is about 4 km up, and held at –24 C by the heat rising from below balancing radiation directly to space. Below it, compression increases the temperature. Above it, decompression lowers it. The actual mechanism and explanation for the Greenhouse Effect is in fact pressure. To be specific, it is the pressure difference between the surface and the average altitude from which heat radiates to outer space. Moreover, it is the exact same mechanism by which the upper atmosphere is cooled to –54 C, and there is no way you can explain a massive cooling by heat being in any sense “trapped”.
Heat is not trapped by absorption by CO2. That is Wrong, Wrong, Wrong! Such trapping does go on, but it has no long-term effect on the temperature because the adiabatic lapse rate has overriding control. You can even theoretically get a greenhouse effect with no greenhouse gases at all! All you need is some high altitude cloud to radiate heat to space.
So, given that the standard Greenhouse Effect model is... well... let's call it "simplified" rather than "a colossal pack of lies", why are people still banging on about CO2 trapping the heat?
Well, because CO2 does have some minor effects.
Now supposedly, according to rather more complicated calculations, doubling CO2 levels in Earth’s atmosphere will raise the average altitude of emission about 150 m, which will therefore raise the pressure difference and hence the surface temperature about 1.1 C. If we raise CO2 by only 40%, surface temperature will go up about half that. So we had half a degree last century (an amount too small to reliably measure). We’ll have half a degree next century. And that’s all the standard Greenhouse Effect can give you.
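The arithmetic behind those figures is just the emission-height shift multiplied by the lapse rate. A hedged sketch with my own assumed gradient of about 7 C/km (pick anything in the 6-10 C/km range and the answer only moves by a few tenths of a degree):

```python
ASSUMED_LAPSE_RATE_C_PER_KM = 7.0   # my assumption for illustration, not a figure from the post

def surface_warming(emission_rise_km: float) -> float:
    """Extra surface warming implied by lifting the mean emission altitude."""
    return ASSUMED_LAPSE_RATE_C_PER_KM * emission_rise_km

print(round(surface_warming(0.150), 2))   # ~1.0 C for the quoted 150 m rise; the post's figure is 1.1 C
print(round(surface_warming(0.075), 2))   # ~0.5 C for roughly half that rise (~40% more CO2)
```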
As PaAnnoyed points out, to get any more than that requires that you factor in a whole bunch of other, less well understood effects—as well as a bunch of Chaotic modelling (which are, by their very nature, not closely understood or predicted).
And no, as PaAnnoyed also explains, Venus is not an example of "runaway global warming"—anyone who tells you that "Venus is what will happen to Earth" is either ignorant or lying. Or both.
As I said, you really need to go and read the whole post, but I do think that we can put to bed the whole concept of CO2 "trapping" heat. Further, I think that we really ought to stop talking about the "Greenhouse Effect" because, having come to mean what it does, it is entirely misleading.
In the meantime, Kerry McCarthy has put an inflammatory title to a post by Next Left that—quite reasonably—points out that many Tory bloggers (and some non-Tories, such as your humble Devil) are somewhat at odds with the stated policy of the Conservative front bench on the issue of climate change.
But the simple fact is that the Tory front bench is extraordinarily short of anyone with any kind of scientific credentials whatsoever. In fact, like the LibDim and NuLabour benches, the Tories' representatives are only really experts in how to steal money off the taxpayers of Britain.
The anthropogenic climate change hoax gives our irredeemably corrupt politicos ample excuse to do precisely that—are you surprised that they have wholeheartedly embraced this massive fraud?
UPDATE: Timmy has commented on this piece and what he says about the IPCC is quite correct: his error lies in ascribing certain motives to your humble Devil.
The result of which is that this explanation of atmospheric physics is not some great “gotcha” showing that the whole climate change set of prognostications is wrong.
Indeed. As I have said over there, I was not intending this as yet another proof that anthropogenic climate change is a colossal hoax—surely I have published enough of those by now.
No, what I intended to do was merely to educate: to show people that the Greenhouse Effect has nothing to do with greenhouses, and that CO2 does not affect the Earth's temperature in the way that most people think it does.
The desire to do so was inspired by reading a number of posts in which bloggers or MSM reporters stated something like "the Greenhouse Effect is not in dispute" or "everyone agrees that CO2 is a Greenhouse Gas" or "no one denies that CO2 traps heat in the atmosphere", and then proceeded to show that they didn't understand how the Greenhouse Effect actually operates.
So, as I said, this article was not supposed to be a "gotcha"—merely educational. After all, I doubt that they teach the truth in schools anymore...
American Heritage® Dictionary of the English Language, Fourth Edition
- n. The greatest possible quantity or degree.
- n. The greatest quantity or degree reached or recorded; the upper limit of variation.
- n. The time or period during which the highest point or degree is attained.
- n. An upper limit permitted by law or other authority.
- n. Astronomy The moment when a variable star is most brilliant.
- n. Astronomy The magnitude of the star at such a moment.
- n. Mathematics The greatest value assumed by a function over a given interval.
- n. Mathematics The largest number in a set.
- adj. Having or being the greatest quantity or the highest degree that has been or can be attained: maximum temperature.
- adj. Of, relating to, or making up a maximum: a maximum number in a series.
Century Dictionary and Cyclopedia
- n. The greatest amount, quantity, or degree; the utmost extent or limit: opposed to minimum, the smallest.
- n. In mathematics, that value of a function at which it ceases to increase and begins to decrease.
- Greatest: as, the maximum velocity.
Wiktionary
- n. The highest limit.
- n. mathematics The greatest value of a set or other mathematical structure, especially the global maximum or a local maximum of a function.
- n. analysis An upper bound of a set which is also an element of that set.
- n. statistics The largest value of a batch or sample or the upper bound of a probability distribution.
- n. colloquial, snooker A 147 break; the highest possible break.
- n. colloquial, darts A score of 180 with three darts.
- n. colloquial, cricket A scoring shot for 6 runs.
- adj. To the highest degree.
GNU Webster's 1913
- n. The greatest quantity or value attainable in a given case; or, the greatest value attained by a quantity which first increases and then begins to decrease; the highest point or degree; -- opposed to minimum.
- adj. Greatest in quantity or highest in degree attainable or attained
WordNet 3.0
- adj. the greatest or most complete or best possible
- n. the greatest possible degree
- n. the largest possible quantity
- n. the point on a curve where the tangent changes from positive on the left to negative on the right
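To illustrate the mathematical senses listed above (this example is not part of the dictionary entry itself), a short Python sketch showing both the largest element of a set and a local maximum of a function, found where the slope turns from positive to negative:

```python
# Sense 1: the largest number in a set.
data = [3, 7, 2, 9, 4]
print(max(data))                      # 9

# Sense 2: a (local and here also global) maximum of a function on an interval.
def f(x: float) -> float:
    return -(x - 2.0) ** 2 + 5.0      # a downward parabola peaking at x = 2

xs = [i * 0.001 for i in range(4001)]          # sample [0, 4] finely
best_x = max(xs, key=f)                        # argmax over the samples
print(round(best_x, 3), round(f(best_x), 3))   # ~2.0, ~5.0
```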
- French from Latin maximum. (Wiktionary)
- Latin, from neuter of maximus, greatest; see meg- in Indo-European roots. (American Heritage® Dictionary of the English Language, Fourth Edition)
“This is an interactive kinda thing, as he challenges his readers, Reply with a title maximum of four words about which you'd like me to write a fast fiction of exactly 200 words, together with a single word you want me to include in the text of the tale.”
“POMEROY: Well, as far as tonight goes, we already were at what we call maximum deployment.”
“National police commissioner George Fivaz said he agreed farmers should take what he described as maximum self-defence steps, but within the ambit of the law.”
“Very little drop in maximum velocity and very accurate.”
“The maximum is a year in jail, but the new law would have permitted up to five years.”
“Micro-usb is far inferior in maximum bandwidth, electrical interference, and physical strength.”
“And even if the official bearish projections turn out to be true, the shortfall could be made up easily by subjecting investment income to Social Security taxes, and by eliminating the cap that exempts wage income above a certain maximum ($68,400 in 1998).”
“And Fed officials framed their decision as being designed to fulfill its "dual mandate" to maintain maximum employment and stable prices.”
“So locking someone up in maximum security and putting them on trial for mass murder or attempted mass murder (with the death penalty as a potential outcome) is treating them “nicely”?”
“I believe that once a few corporate executives are found guilty and sent to be rehabilitated for dozens of years in maximum security federal prisons, existing companies will be far more careful about how they conduct business.”
Pineapple is a favorite fruit of many people because it's tangy, sweet and oh so delicious. But did you know that pineapple is an exceptionally healthy food with a number of good health benefits? Many healthy food options, including fruits and vegetables, are functional foods, and pineapple is no exception. Functional foods are consumed as part of a normal diet and are known to have physiological benefits and/or to reduce the risk of chronic disease. They contain bioactive compounds, which make them healthy food options. Eating a variety of healthy foods, including lots of fresh fruits and vegetables, will provide many health benefits when they are eaten as part of a regular diet.
Let’s take a closer look at pineapple to see what makes it such a healthy food choice…
Pineapples contain an enzyme called bromelain. This enzyme provides the digestive benefits of pineapple by helping with protein breakdown. In addition, bromelain is considered to have anti-inflammatory properties and is believed to help relieve pain that is often associated with osteoarthritis. Bromelain is found in the stem of the pineapple, not the fruit itself. It is therefore possible that taking a bromelain supplement in addition to eating pineapple will provide you with enhanced health benefits.
In addition to bromelain, pineapples contain vitamin C. Vitamin C is an anti-oxidant that helps to prevent free radical damage. Additionally, Vitamin C helps to ensure that the immune system is functioning optimally.
Vitamin A and B Complex Vitamins
Pineapples are a good source of Vitamin A and beta-carotene which are both known to have anti-oxidant properties. Vitamin A is essential for healthy mucous membranes, skin, and vision. Pineapples contain manganese which is essential for the maintenance of healthy bones. Additionally, manganese helps with energy production and anti-oxidant defenses. In addition to manganese, pineapples contain thiamin (Vitamin B1) which also helps with energy production.
Eating functional foods including a variety of fresh fruits and vegetables is not only delicious; it will help you support your overall health.
The Editorial Function in United States History
By Worthington Channing Ford, President of the American Historical Association, 1916–17
Presidential address read before the American Historical Association at Philadelphia, December 27, 1917. Published in the American Historical Review 23, no. 2 (January 1918): 273–86.
Books by Worthington Ford
The long line of my abler predecessors in office has given expression to many views and convictions. There are definitions of history, the application of historical principles, the interpretation of periods or of events, and experiment in forecasting the future in terms of the past. Scholar, publicist, and public servant have expressed their beliefs, outlined their hopes, and even intimated their disappointments in historical language. After such a series of treatments the field has been so well gleaned as to leave little yet to be garnered. If therefore I say a word for an historical agency on which almost no words have been spent, my apology must cover at once the poverty of the subject and the comparatively low rank of the agency. I refer to the editor of original sources of history, the ginning or picking machine which deals with the raw material, the first stage toward the warp and woof of historical writing.
Let us start with something definite. “Was it you”, wrote an Englishman to Joseph Jefferson, the actor, “or was it your grandfather, who wrote the Declaration of Independence?” The inquirer and the question are always with us and one of the objects of writing and teaching history is to make both harmless, if not impossible. And the lowest round of the ladder of accomplishment is the editor. He assumes the existence of the anxious inquirer, he seeks to measure his wants, and he frames the answer on such a plane as to hit the average degree of ignorance. “Ignorance”, wrote Emerson in his journal, “is but an appetite which God made us to gratify.” The editor is a source of information and a measure of quantity suited to a dose. A physician selects his remedies on case practice, on a range of experience which has eliminated every factor of doubt but the personal equation of the subject. The giver of information has few rules based on experience for his guidance, and has a double personal equation to meet—that of his subject and that of his questioner. No wonder the failures are many.
The art is comparatively new, for it arose out of myth and fable and is still painfully groping towards truth. Evolutionists tell us that the development of moral concepts has been as gradual and certain as the development of physical characteristics, and some would lay down a rule of thumb to show how the ideas of truth, right, and justice have been evolved from moral nescience. What would the writer of history not give for such a standard or measure! The pleasure and the relief of being able to determine thus almost mechanically the degree of faith to be given to this or that relator; the delight of placing him in his proper stage of development and the mastery of purpose which would follow—what boons to the plodding reader who must rest his story upon what others, of another time and place, have related. The strata of dependence thus defined would mean a scientific test for reliability, something far beyond the existing method of setting relator against relator and accepting the mean as truth.
Three centuries ago, before there was a wide public to be gulled, the little circle of readers was given on the death of a great man a volume of his testament or parting advice. The contents had just enough verisimilitude to be accepted in part, and the advice was wholly interested. The practice common in its day on the Continent of Europe easily slipped into the later form of memoirs, and from the memoirs came biography. To pass upon the career of a public man immediately after his death involves no light task. The secretarial writer, of which Boswell is such a shining example, may be truthful and interesting; but if he is sincere and loyal he will not lightly relate what may tell against his employer. That appeal to prurient curiosity which finds a market in sensation, has been framed in many ways, and still attracts support. A Pepys holds up a personal mirror with the reflecting surface towards himself, and unconsciously gives material for judging others and his own times such as no serious-minded historian could give and such as no writer on Pepys’s period can neglect. The little has become the important.
The United States has not been rich in self-written history, nor is the little it possesses, of startling moment. An explanation offered by some declares the lack of real interest in American history. However rich in pictures and incidents it does not present flashes and explosions of overwhelming importance. Another explanation is that its people have been too occupied in opening territory to settlement and development to expend much energy on recording and explaining the course of events, much less the participation in the struggle where the overscrupulous were doomed to defeat. A third would say that a democracy is against good history, for it means a slow vulgarizing of the best. No such explanations will account for the absence of those willing and able to relate their own careers after their own point of view. Their names should be legion. The foreign visitor, in the rawest period of our growth, has not failed in picturesque, even lurid contrast, and has not found us inarticulate on ourselves or bashful of suggesting our merits. If the tone has been one of bluster rather than of philosophic analysis, it is genuine and not assumed, even to the wincing at the reflection returned by the not too faultless mirror.
In colonial New England publicity in the religious experiences of members or would-be members of the churches was exacted. If printed they take rank with the confessions of condemned criminals just reprieved, interesting not for their content, but for the state of mind and surroundings they show. They constitute a necessary item in the social history of the time, a crude form of the third degree, by which it was hoped a corner of the curtain of the soul, the token of immortal man, would be raised. The diaries, chiefly kept in interleaved almanacs by the ministers, were never intended for the public eye, and rarely rise above the level of a record of church ministration, with items of farm and household of a singularly bald nature. Once in a great while some one has the itch of putting all his thoughts and feelings on paper, and in seeking to imitate St. Augustine in frankness and scope, presents the most repellent features of religious ecstaticism. Sainthood and martyrdom are able to endure that form of exhibition; but the atmosphere of early New England lacks in the quality which makes martyrdom picturesque; and this self-immolation to dogma long since passed away leaves the reader cold, even in a critical frame of mind. Did the situation of soul really demand this suffering? Is it not the symptom of physical derangement so easily mistaken for a divine afflatus? Of the sincerity of the sufferer there need be no doubt; but for permanent effect the acting is a little overdone.
Whence comes this expansiveness which often mounts to the grotesque; this tendency to publicity of thought and action? It is not English, for that people avoid exhibitions of feeling lest they make themselves ridiculous. It is not French, for they have a better sense of finish and proportion. It is not Scottish, for they are too canny to waste even emotion without some definite return. The Irish have a humor that saves them from ridicule, though it does not endow them with the needed balance-wheel of wisdom. The sentiment of Germany overruns proper bounds, but is not reflected in the leading examples of American self-written biography. The American expression is peculiar, a proper accompaniment of a territory almost without limits. Virgin land at settlement, it had a strong influence on those who came to it. Its symbol is a screaming eagle, and who would blame an eagle for screaming in boundless space? Every American claims the right of free utterance. As a child he has used it, as a man he has abused it, the only restraint being a wholesome fear of the law of libel or an appeal to the medieval and murderous code of honor. Even this right of utterance is quite modern.
Censorship of the press, one stage in the development, is an historical survival, and in English-speaking countries (except Ireland) is merely of historical importance. Liberty “to know, to utter and to argue” Milton placed above all other liberties; but so long as it could be interpreted by an autocratic ruler, by virtue of an undefined general prerogative, the liberty existed only in name. Sir Thomas More in his Utopia made it punishable by death to speak against the ruling power, and by one of those strange sequences of events he was himself brought to trial for countenancing the pretensions of a nun who was charged with treasonable language. Freedom came slowly, and such was the effect of the supervision of the press that under the Restoration the newspaper press was practically reduced to the London Gazette—an official and inspired organ. In two centuries and a half such interferences have been abolished. While Great Britain has, after its fashion, never rested the freedom of the press on law but on its unwritten constitution, the United States have gloried in its recognition in their bills of right, an essential part of their constitutions. The price paid is a confusion of tongues, a multiplicity of opinion which produces indigestion, and an absence of standards which permits the glorification of the seamy and the sordid as freely as of the great and the admirable. Laudation of self and institutions is justified by accomplishment, and if it is pitched in too high a key is excusable by its honesty.
One compensation may be found in this discordant circle of self-praise, filial praise, and disciple praise. The note is unharmonious even in development. There has not long existed a studied combination singing praises of one man or one policy; at no time do we trace that blind sacrifice of opinion which marks the devoted adherent to faction, to party, to Church or to State. There has been no suggestion of general interference by the state to impose upon the people a single interpretation of policy outside of law. The opposition has been as free as the supporters of government, and the third or independent party, or the silent independent voter, tends to correct such an overwhelming drift as could be interpreted as an unrestricted mandate from the people to their representatives, or from the government to the people. Except in great crises the American conception of liberty of speech has been maintained, and in the severe crises, as Rhodes says of the War of Secession, the great principles of liberty have not been invalidated by the exercise of extraordinary powers, although the arbitrary exercise of those powers was to be condemned. Even against the government the citizen can invoke the protection of the courts.
Self-editing finds expression in autobiography, and the one great example of American autobiography is that of Franklin, written, be it remembered, late in life, and never finished. Unable to live his life over again in fact, he took the nearest to it, to make a recollection of that life as durable as possible by putting it down in writing. And he gratified his vanity in so doing, believing that vanity is “often productive of good to the possessor, and to others that are within his sphere of action; and therefore, in many cases, it would not be altogether absurd if a man were to thank God for his vanity among the other comforts of life”. The entire relation is redolent of a studied frankness that lulls the reader into a forgetfulness of much in Franklin’s career that a moralist would dwell upon. I almost fancy that Cotton Mather would have been pleased to preach the last sermon heard by the condemned Benjamin Franklin. And the circumstance would have been possible, for Franklin was born in 1706 and Mather lived until 1728. The autobiography was first published in 1817, and could occasion no serious controversy; but the papers printed with the autobiography by the grandson did arouse comment on both sides of the ocean, more for what had been omitted than for what had been included. The question of an interference by the British government is not one which need delay us in passing. That government and that people have not shown strong inclinations to edit their expressions on America and its history, least of all at the time the Franklin volumes appeared. Jefferson intimated that William Temple Franklin may have been “an accomplice in the parricide of the memory of his immortal grandfather”, but the result of the publication gave proof of the incapacity of the grandson. There is not a line of Franklin’s writings which could not have seen the light in 1817 with as little injury to his reputation as in 1917.
An earlier and the earliest printed autobiography after the War for Independence appeared in 1798. Major-General William Heath took us into his confidence in the form of a journal of events compiled after his active service was past, and published, it has been charged, before its intended time, to promote an election to office. Fully acquainted by his studies, as he believed, “with the theory of war in all its branches and duties, from the private soldier to the Commander in Chief”, he wrote sometimes as a private and sometimes as generalissimo. He was the preacher of preparedness from 1770, and like most such preachers was lacking in action. A trusted lieutenant, he attained rank without distinction, and grew corpulent in inaction and performance. “Our General”, as he pleases to call himself, a term reported to have been applied to him by Bernard in one of his prophetic moments, printed his book, which was greeted by smiles on all sides. It was impossible to misinterpret such a delightful piece of vanity. Its historical value shrinks before its personal quality.
Gradually an interest in personal history was awakened. In biography Marshall’s Life of Washington was easily first to challenge attention. It was based upon original document; it appeared at a time when the power of the Federalists had been shattered, and their shrewdest opponent was in full possession of the executive. Did Marshall intend to raise a monument to Washington or to the Federalist Party? It was good history, good politics, and good biography for the time, yet the neglect into which it has fallen is due more to the writer than to what he used of the subject. Fourteen years later, in 1818, Wirt’s Life of Patrick Henry, necessarily largely based on tradition, carried into biography the oratorical flowers of Independence Day, and succeeded so far as to make its transplanted garden a desert place in comparison to a later and saner cultivation. It is something to have manufactured a good book, yet an example that is to be avoided—otherwise the sense of relation would be weakened. Virginia still held the field for a period. In 1825 the life and correspondence of Richard Henry Lee and in 1829 that of Arthur Lee were given out by a grandson of the former. They were defensive, colored by deliberate but mistaken purpose. Both compilations showed how good material could be wasted in an effort to prepare a brief in a cause of secondary importance.
The first compilation of Jefferson’s letters, by his grandson Thomas Jefferson Randolph, appeared in 1830. Monroe and Madison, the closest intimates to Jefferson after his presidency, were still living, not to mention some of the opposition whose feelings might be touched. They knew some years in advance that this work was in preparation, yet neither attempted to interfere or to control what should be inserted. Randolph possessed the courage of his necessities, for on the last pages of the last volume he printed the Anas, that body of comment which is so characteristic of the Jefferson epos. Yet he did not let stand the criticism of Washington or the word which made John Marshall the mountebank of the X. Y. Z. mission, and he omitted more than half of the record as of lesser importance. Jefferson’s opinions invited dissension, and the publication of the volumes led to an exchange of epithets that enlivened, even if it did not much enlighten, the history and practice of politics. Having gone as far as he did, Randolph need have omitted no part of the record. Those who disliked Jefferson were convinced of the soundness of their dislike; those who practised politics as a profession busily engaged themselves in constructing that Jeffersonian myth which still persists and, judiciously used, has exerted a constant effect in hypnotizing the wavering voter.
These lights of the War for Independence used language unrestrained by a fear of publication. They lived in the day of a newspaper which seems singularly harmless for attack. The party scribblers of low character might dip their pens in venom; the very excess of their invective discounted and the small circulation deadened its force. When Callender turned upon Jefferson, his benefactor, he was obliged to set up a sheet of his own, and the few copies in existence are eloquent on his poverty and incapacity. In the respectable press the discussion of men and measures rarely rose above mediocrity, and mere personalities could not explain policies. Hamilton, one of the best controversialists of his time, might have repeated his letter to John Adams six times over, with six different objects, and had either the Diary or letters of John Adams seen the light in his day, the pot of discord would have remained at boiling point. Both men in their own time experienced the effect of an untoward publication of confidential communications, and the experience embittered their later years. Hamilton’s papers drifted for years looking for a biographer, and when at last in 1840 they were used by a son, his brothers openly expressed their disapprobation and regret on the event.
In this early period of personal relations the editor had no place. The member of the family sufficed. However marked a curiosity over a public character might exist, it did not extend to his writings. An early experiment (1810) of printing Hamilton’s financial papers failed. With the current questions interest ceased, and newspaper discussion rarely dipped into past American history. Precedents and comparisons were drawn from Greece and Rome, not from colonial Britain. In the small number of instances where elaborate defense was deemed proper, it was the leading actor who performed the task—as in Monroe’s defense of his French mission and in Edmund Randolph’s Vindication. A pamphlet would cover the emergency; and it was prepared by an interested party. Yet in the first years the editor appears in a modest but efficient form, dealing with original sources and with some comprehension of the function he was to fulfil.
The earliest example is Ebenezer Hazard and his Historical Collections printed by the author—a euphemism then as now for printed at a loss—in 1792. Wait’s State Papers (1815) were a forerunner of Force’s Archives. As to the publication in 1819 of the Acts and Proceedings of the Convention of 1787 by John Quincy Adams, then secretary of state, as related in his Memoirs, he enlists the heartfelt sympathy of everyone who has dealt with original material as arranged by ambitious but badly equipped adventurers in history, or by pious hands directed by filial apprehension. These early essays in printing sources were guided by the proper spirit. Without undue reverence for the written word, they followed the text without modification in language or in intention. Why should this attitude have undergone a change which for half a century persisted in mutilating the text and giving excuse for every vagary of statement?
’Tis woman that seduces all mankind;
By her we first were taught the wheedling arts.
And it was a Massachusetts woman who pointed out the way. Secretly Eliza Susan Quincy compiled a memoir of her grandfather Josiah Quincy, the patriot, and when she had completed the task, she induced her father, Josiah Quincy, to put his name on the title-page and thus assume responsibility for the dark deed. How she doctored the text, altering, omitting, and mutilating as seemed to her proper and best, has only recently become known. I will not say that she violated all the commandments of good editing, but she was remarkably successful in sinning against the great majority. This volume appeared in 1825, and the first volume of Sparks’s Washington followed nine years later, so perfect an imitation of all the faults embodied in the Quincy publication, that collusion might be assumed, without the excuse of family reticence.
I wish to be just to Mr. Sparks. Admit that he designed and carried into execution large undertakings, and a series of ten volumes is a large undertaking even now; admit his singleness of purpose and consistency of operation; is it harsh to say that his judgment is condemned by the necessity for going again over the ground he covered, not because of new material discovered or available since his day, but because of an unreliable text? The writings of Washington, Franklin, and Gouverneur Morris and the Diplomatic Correspondence which he edited—all have since been republished, and with patience, not from a few samples but from the many, may be discovered the manner in which Sparks misused his opportunity. His good fortune in being a pioneer in this form of compilation, and his industry as an editor, have placed his volumes on the shelves of every self-respecting library, public and private; yet his repute as an authority has been steadily falling.
Deliberate falsification can hardly be charged to these early practitioners in editing. They felt the presence of some who had participated in the events they were to describe. Why print anything unpleasant, or unkind, or partizan, or personal? Why expose the foibles of men looming big as historical characters? These contemporaries, wearied by perpetual party strife, were beyond a capacity to reply; they asked only to be permitted to close their lives in peace. Others were actually in office, honored by the free choice of the electors or by the trust of those who held their office by election. Why raise disputes of the past, much and probably ignorantly discussed at the time, now the ashes of controversy? The supposed necessity of party supplied the newspapers with abuse of individuals, and the pamphlets of the day could match the newspapers in directness and scurrility of language. History and biography should rise to a higher level, and in style attain to some merit. If it bordered on the ultra-patriotic, that was an excusable weakness, for the men of the War of Independence then looked large, larger even than the principles for which they fought.
The influence of official relations must be held responsible for some serious blunders. When Congress assisted to publish Hamilton’s works in 1850, it was the son who edited the material; the Jefferson, three years later, was entrusted to the librarian of the Department of State, and he took remarkable liberties with the text—inexcusable, unless we accept the theory that political exigency rather than historical truth guided the undertaking. The dominance of the South made expedient suppression of some features, for the South had become sensitive to the growing antagonism to slavery and the increase in material power at the North. Even the foreign relations of the United States remained in good part unknown; the executive could give out what it pleased and withhold information on the plea of prejudice to public interests. The Department of State harbors an unmeasured mass of historical material, and has used only what has seemed good to more or less well-informed officials in the past when weighing it in the scale of occasion. Diplomacy, even the open diplomacy of the United States, has had its high victims, and both secretaries of state and agents stand as sacrifices offered to smooth over blunders or to quiet public clamor. What a field for judicious editing!
It may thus be said that the editor has been coming into his own, not rising in importance, but better recognized as a useful albeit somewhat erratic adjunct to the writing of history. The quality of product has improved, and the shadows of family or political doubt are less frequently encountered. Public archives have been made accessible, a generous freedom of use accorded by private owners of papers; and pride of ancestry has contributed its share to the ever increasing quantity of product. If only certain possessors of material could appreciate how far they are like the ostrich, and what damage their aloofness is working on their pet admirations! Imagine trying to prove anything against public morals on John Jay! Yet he has been fastened in a niche of the 1833 model, when reserve darkened reputations. I could name a number of such distorted models, still cramped under a silence that almost confesses guilt. Where papers have been destroyed in the hope that criticism would be ended, the ghosts of old controversies arise and the worst or opposition phases of character are remembered. Descendants who have nestled in self-confidence and wrapped themselves in forgetfulness are pained and shocked to have the old gossip and tradition of their ancestors served up highly spiced in modern journalese. They have only themselves to blame.
For nearly a century after the Declaration of Independence both biography and editing of original materials had not attained success. They lisped, fearful of speaking aloud, and they avoided crucial matters of controversy. Was it this example which led to a series of political autobiographies in the last two generations? From Benjamin F. Butler to George F. Hoar and beyond—the mere writing of the names suggests startling comparisons of product. Was it a suspicion that they could not entrust their reputations to editors or to biographers which tempted them into a difficult adventure? Was it a desire to anticipate the opinion of contemporaries, and while yet living to taste the sweets of servile flattery? They chatter of many things, but are reticent on those most important to the historian. As appeals to a simple faith, and as childlike murmurings of unrelated facts they awaken wonder without gratifying a reasonable curiosity. To compile such works and then to destroy the original records, as if the last word had been said, is a crime against history, and an unavailing plea in abatement against further consideration. Yet most of those self-constituted apologists have been lawyers, and some of them good lawyers.
To approach such modern instances with due reverence is difficult. Conditions have altered, the standard of greatness has changed, and the demands as well as the responsibilities of biographer and editor are other than were accepted unquestioned a half-century ago. History is better written, and the subject is attracting the best; but autobiography lags behind, good-naturedly accepted for its defects rather than for its virtues. The charm of literary autobiography persists, but the unreliability of political autobiography has come to be a byword. To describe action directly and intention truthfully after the event appears to demand opposite qualities. Magna pars fui—the accent is on the magna, and the relator exaggerates his own importance while twisting his facts and misstating his motives.
Is it not a form of conceit, and a vulgar form at that, to suppose that the story of a life can be only self-written? Is man so little influenced by circumstances and so greatly moulded by his own will that he can consciously assume to be master of his own fortunes? The self-made man is subject to attacks of assurance which awaken in him an anxiety to tell others how he accomplished it—it referring to any achievement from making a large fortune to writing a popular song. Success is the worst judge of itself, and some other tribunal should take cognizance and, if possible, commit such budding sprouts to safe quarters where they may interchange their confidences without making an undue exhibition of themselves. The thing is possible, for did not an Italian saint not only overcome the Devil but make him confess all his sins?
The human machine is self-advertising, for its wants are imperative and its acts come for judgment before an immediate tribunal—public opinion. Is not, then, the desire to write autobiography a confession that some explanation of conduct is to say the least expedient? The atmosphere of publicity in which a public character of to-day moves gives to surrounding objects and relations a certain distortion. The distortion becomes natural to him, and he wonders why others do not accept him as unquestioningly as formerly, why they adopt a critical attitude with a tendency to open opposition. If he is pushed out from a public career, and gains time for reflection and self-examination, the injustice and unreason of his former constituency appear large and to him are based upon misconception. So he enters upon his defense, and tells the old story in the old way, with distorted vision and with vanished glamour. It requires a greatness of character to stand the test, and there are few great characters. The majority babble, retail half-truths and vamp the worn and patched shreds until they have encased themselves in nothing but their own too transparent self-consciousness, still not undisturbed by doubts. Seeking to invest themselves with a cloudlike splendor and halo as the reward for upright conduct, they retire into the smoke-shield of their own creation, to emerge streaked with smudge. As a mode of defense autobiography is a failure; it too often confirms the old saying, that a man who is his own lawyer has a fool for a client. The ghastly skull of St. Charles Borromeo looked out from its gorgeous trappings and surroundings, always a reminder of what he had been—a mortal. As ghastly figures stare from the written pages of autobiography, reminders that the mortal or weak parts dominated the whole, and left a record that is unchangeable.
To the biographer, not too closely related to his subject, and to the editor, belongs the task of telling the truth—not the simple or the whole truth, but as much as the records will afford. The writer of biography has the wider field, the better opportunity, for he may wander far and invoke the dramatic and the picturesque, even infusing into the relation a color of his own. His story may read like a romance, it may be a fairy tale, or it may be a verbal cenotaph wherein nothing of its subject may be found; it soon is weighed, judged, and ticketed for remembrance or oblivion.
An editor is restricted to the written record; the memories of oldest inhabitants and the tradition of generations have no attraction for him. His purpose is to give all that may be of service to our host of anxious inquirers and the ever-increasing number of writers of history, and to give it unvarnished, as the documents contain it. This is not to say that he will be unsympathetic. I defy anyone to live among the records of the past without absorbing some spirit kindred to that which actuated the men of that time. He sees through their eyes, and re-enacts their deeds, with a wider vision and a knowledge of consequences not vouchsafed to them. Whatever reserve is imposed arises out of a sense of decency; all else may safely be left to the judgment of history. It is good to humanize Washington, to have the means of tracing the tortuous policy of Jefferson, to measure the ability and ambitions of Hamilton, to comprehend the rash but honest conduct of the Adamses, and to wonder at the little greatness of Monroe. We owe these to modern editors, and in no instance did they inflict injury upon good repute, nor did they greatly modify the great lines of historical writing. They supplied treasuries of fact from which incidents and characters may be written or newly written. To furnish the material in its full and unaltered shape—that is the achievement of the change which has come to editorial methods in a generation.
True perspective requires time and space, and neither historian nor editor can use material of the day in the hope of attaining finality. Yet both are in possession of a trained quality of which few journalists, few civil and military officials can boast. A knowledge of what has gone before, of past events, a habit of analyzing character, of combining facts and weighing evidence, constitute an added sense in seeking some solid foundation in the welter of to-day. They have tested the politicians’ position. They know that from the very beginning of its history the country has been in a chronic state of crisis, requiring the election of this or that man to office, demanding sacrifices which constitute the stock claim of the politician to reward; that the years are strewn with such sacrifices, and that the number of pretended and willing saviors of the country would fill several Valhallas. They know that family, censors, and state are unavailing against time, and that no cause has been without its evil features which cannot be suppressed and ought not to be forgotten. They know that no human agency can belie the character for which the man himself is responsible. The inevitableness of history lies before them in too many examples to be neglected. The editor deals with individuals, the historian with generals. The cultivation of a balanced and non-partizan spirit and utterance, no small accomplishment, brings its reward in confidence and clarity of vision.
What is the application of this excursion? For three years the country has been under a stress which has tested its people and its government. In the mass of interested discussion and propaganda, licit and illicit, it has been difficult not to take a position and express the faith that is in us. Even before actual participation in the war necessary information was wanting. Of partial statements the number was and is in excess, but it may be doubted if the fullest exposure of motives and performance will much change general opinion. The extremist is beyond change, and among these extremists on both sides are some historians. Their honesty of conviction is not to be questioned, but their violence of expression is to be regretted. Exaggeration in language is not confined to the newspaper. The time is not yet come for a final weighing of evidence, for we are living, as in the England of the Restoration, under a “Royal Gazette”. Cables and mails are under a censorship which tends to become more rigid; discussion of governmental policy and execution is under a threatened interference by officials, who are wanting in experience and are fallible and extremely sensitive to currents of public opinion; and American opinion is subject to excitements, fitful and destructive of reputations. But unless a man sells his soul he can be heard and answered, or left to the certainties of time. It is all very well to speak of the sober second thought of the people; the first thought may not be sober and may inflict great injury, and in war times the first thought is explosive. How long has it been since our writers of text-books on history consented to modify their denunciation of Great Britain? How many years have allowed the war with Mexico to pose as a shocking example of greed and broken faith? The word rebel as applied to the South is a survival; the bitterness has slowly turned into sweetness, and the glory of honorable conflict is shared between the two sections. Much of what parades as history to-day will fortunately sink into the forgetfulness of the future, to be exhumed at times as curious examples of misdirected energy and ill-exercised thought. What remains, clarified of its partizanship, may serve for real history. It will be two generations before the full publication of documents can begin, and then will be applied the tests of fair judgment—the real editing. In the meanwhile we should cultivate as far as possible, the editorial attitude, keeping our minds open, restraining our criticism lest it lead to injustice and persecution, avoiding personalities, and exercising the same patience and restraint under wrongs and violations of good faith as have placed our country with an unsoiled record at the front of a world movement.
Worthington Chauncey Ford (February 16, 1858–March 7, 1941) was chief of the Bureau of Statistics of the U.S. Department of State, 1885–89, and of the Department of the Treasury, 1893–98. From 1902 to 1908 he served as chief of the manuscripts division at the Library of Congress. He published The Writings of George Washington (14 vols., 1889–93).
Text scanning: JSTOR
Text proofing and correction: Liz Townsend 7/14/00; 11/26/11
Text encoding and annotation: Kimberly Foote and Robert Townsend 7/23/00
© 2000, American Historical Association
http://www.historians.org/info/aha_history/wcford.htm
Chickenpox has reared its head in Madison County, with one case being confirmed in Ennis.
Privacy issues prevent health officials from disclosing whether the case is of a Madison County resident, said Theresa Stack, Madison County Public Health administrator. However, in a press release issued this week, Stack said both Jefferson and Madison County officials are “working to reduce the risk of further infection.”
Chickenpox is caused by the varicella virus, which is the same virus that causes shingles in adults. Symptoms of chickenpox include fever, runny nose, irritability and the well-known itchy rash consisting of small red spots that blister.
Most children are vaccinated for chickenpox, Stack said. And all kids should be.
“We shouldn’t have any cases of chicken pox,” she said. “The reason why we do is people don’t immunize.”
Decades ago, chickenpox was considered a rite of passage, but the virus has evolved over the years and become more serious.
“What my parents got and what we got is not the same strain of chickenpox we’re seeing now,” Stack said.
Additionally, getting chickenpox will expose children to a higher risk of getting shingles when they’re adults over the age of 50. Shingles can be a very painful illness that has lingering effects, including nerve damage.
“If we can get these viruses out of the kids’ bodies the better they’re going to be when they’re older,” she said.
Plus, many adults have never had chickenpox and getting the illness when you’re older can lead to complications, Stack said.
For more information on the chickenpox or to report an incident, call the Madison County Health Department at 843-4295.
Chickenpox in Madison County
From the Madison County Public Health Department
The Madison County Public Health Department has confirmed one case of chickenpox (Varicella). Chickenpox is a highly contagious but preventable disease. Unfortunately, the contagious period for chickenpox begins 1 to 2 days before the appearance of the rash (blisters) and lasts until the blisters have scabbed over; therefore it is highly likely that infected individuals may not be aware that they have contracted the disease.
Both Jefferson and Madison County Public Health Departments are working together to reduce the risk of further infection. Watch for symptoms that include fever, runny nose, irritability, and a rash consisting of small red spots, which blister over 3-4 days and then scab. The rash is more prevalent on the trunk and body than on the limbs but may appear on the inside of the mouth, ears, and over the scalp. Because of the mouth blisters, chickenpox may also include coughing.
Chickenpox is generally not a serious disease but it is highly contagious. Person to person transmission occurs primarily by direct contact with patients who have it. Chickenpox is transmitted through exposure to infected fluids from the nose, throat, or the skin rash of someone with the chickenpox. This can occur by sharing breathing space (chickenpox is transmitted via the air), by directly touching the infected fluids (droplets), or less frequently from contact with contaminated items.
The incubation period for chickenpox (time for the symptoms to surface after contracting the disease) is 14 to 16 days but can range from 10 to 21 days. The itching from the skin rash can be controlled by cool baths, dabbing the spots with calamine lotion, and avoiding spicy, acidic or hard crunchy foods that may irritate mouth sores. Recovery time is usually 5 to 10 days, or when the rash has scabbed over. Complications of severe cases may include secondary bacterial infections, dehydration, pneumonia, central nervous system problems and even death.
Prevention is the best insurance! The chickenpox vaccine is very effective, with eight to nine of every 10 people vaccinated becoming completely protected. Children should receive their first vaccination for chickenpox between 12 and 15 months of age. A booster shot is also required at least 28 days after the first. Children with only one vaccination are not fully protected. In addition, any individual (of any age) who has had chickenpox is at risk of contracting shingles later in life. Please contact the Madison County Public Health Department for more information or to ensure you and your family are fully immunized and protected from preventable diseases (843-4295; adCoPHD@3Rivers.net).
As a reminder, hand hygiene and respiratory etiquette are always recommended to aid in the prevention of many contagious diseases including the flu and common cold.
http://www.madisoniannews.com/chickenpox-case-found-in-madison-county/
It's your first chance in months to get out of Dodge. It'll be great to pack up some cold ones, a couple of fishing rods and some old clothes and escape to the country for a day or two. But as you leave the mercury-vapor-illuminated metroplex, you realize what you forgot--carrots, lots and lots of carrots, because you can barely see the road in front of you in the starlight of the countryside. Relax, it's not your failing eyesight. It's a burned-out headlamp.
For generations, American cars, and any car sold in the United States, had the same kind of headlight--a sealed-beam, either in a single or a quad arrangement. This fragile blown-glass envelope was filled with an inert gas and worked pretty well until it burned out. It had only modest performance, but the Department of Transportation mandated its use.
Most modern cars use what's called a composite headlamp--a plastic reflector bonded to a plastic or glass lens and fitted with a bulb. The bulb is of a quartz-halogen design. The "glass" bulb is actually made of silica quartz, which is highly resistant to heat. The filament is engineered to run at a much higher temperature, producing more light and heat. The silica envelope is filled with a mixture of halogen gases (iodine or bromine) to scavenge evaporated tungsten filament from the inside of the quartz, keeping each bulb's brightness constant until it fails.
Replacing a broken headlamp assembly is straightforward. Most of the fasteners and mounting hardware will have to be transferred to the new housing.
http://www.popularmechanics.com/cars/how-to/maintenance/1272456
Asked in 1964 about the most significant thing she had learned about Americans while photographing those fleeing the Dust Bowl in the 1930s, Dorothea Lange answered: "I many times encountered courage, real courage. Undeniable courage." She saw it often, she said, "in unexpected places." She attempted to capture it as well, of course, in her stark black-and-white images of somber migrant farm workers, strong-jawed mothers, fly-dotted toddlers, and gaunt sharecroppers. By showing the stoicism of her subjects, Lange restored dignity to the dispossessed during the Great Depression.
As Linda Gordon points out in her excellent new biography, Dorothea Lange: A Life Beyond Limits, the photographs Lange took of the "handsome homeless" symbolized the way the architects of the New Deal analyzed the Depression, so that widespread poverty was no longer blamed on poor people but on financial mismanagement: "The economy, not the people, needed moral reform." Lange's subjects were poor, but also disciplined, hardworking, and upright. And quite beautiful.
These images, taken as Lange explored rural California and the Midwest in her dusty Ford station wagon on behalf of the New Deal's Farm Security Administration, serve as a striking reminder of how subversive it can be simply to view people with respect. Lange chose attractive subjects, Gordon writes, "but she also found the attractiveness in everyone," through courtesy, not flattery. And, when her subjects were uneducated, exhausted, hungry farm workers, "her respect for them became a political statement."
After The San Francisco News published photographs of starving pea pickers, existing on stolen frozen vegetables because a cold spell had destroyed their crop (the iconic "Migrant Mother" was one of them), there was a deluge of public donations. Shortly afterward, the Federal Emergency Relief Administration provided funding for two emergency migrant-worker camps in California. No wonder FDR's critics slammed these photos as sentimental propaganda.
The contrast to today is stark. Last year the number of Americans living in poverty peaked at 13.2 percent, the highest in 11 years. The greatest drop in income has been among lower- and middle-income earners. But poor people appear in the mainstream media only when they are obese, sick, or sad: powerless and to be pitied. Stories center on their lack of jobs, homes, and health insurance, or how some now live in motels or storage units.
Throughout the recession, we have remained largely obsessed with rich people; whether lauding or castigating them, our gaze has been primarily focused on the excesses and excuses of Wall Street. The well-off have not just received most of our attention, but also most of our aid, which means that those responsible for the crisis have been the least affected. Charities have also suffered. A Pew survey found that over the past two years attitudes have hardened toward the poor. In 2007, asked if the government should do more to help the needy, 54 percent said yes. This dropped to 48 percent in March this year.
A year ago there was much talk of how this recession might cause us to redefine—or remember—what it means to be American, recast our values, and to "put aside childish things," as President Obama said. But there is little evidence this has happened. The voices calling for a more civic-minded, prudent, and decent culture have grown quiet as our eyes strain looking for green shoots and fat cats.
Obama's chief of staff, Rahm Emanuel, said, "You never want a serious crisis to go to waste," but the president has yet to succeed at creating a broader narrative about America and the need for reform. He promised to protect the weak, and this remains his challenge. Lange's wage was paid by Franklin Roosevelt's New Deal—she prodded the public in return, and evoked their sympathy by humanizing the poor. Both politician and photographer attempted to build a public culture based on respect, not shame. By doing this, they reminded America what being American meant.
This is why it is so sobering, in the worst downturn since the Depression, to think of the woman who limped through rural America 70 years ago with a leg gnarled by childhood polio, her hair stuck under a spotted scarf, and snapped the impoverished and displaced until she found their beauty. Her greatest lesson, perhaps, was about dignity. A portrait, she said in 1965, is a "lesson in how one human being should approach one another."
Courage, real courage. You hope to see it sometimes, in unexpected places.
http://www.thedailybeast.com/newsweek/2009/11/04/seeing-dignity-in-poverty.html
Textbook - Key Concepts in Geomorphology
We are creating a new, different, and up-to-date textbook for Geomorphology. Bierman and Montgomery, Key Concepts in Geomorphology, is being extensively vetted by the community, includes all-new, pedagogically focused artwork, and is linked to freely accessible collections of community-authored electronic resources and photographs. Community vetting, resource creation, and impact assessment are supported by the National Science Foundation.
- There will be 14 chapters in Key Concepts in Geomorphology. The final chapter list resulted from the work of 10 geomorphologists meeting at NSF in April 2008 and input from 60 geomorphologists at the Cutting Edge, Teaching Geomorphology workshop in July 2008.
- Vignettes are electronic supplements - short, illustrated descriptions and place-based examples that allow instructors to customize their class' approach to learning. The first 27 were created by participants at the Cutting Edge, Teaching Geomorphology workshop in July 2008.
- Find an image for teaching Geomorphology from the collection of images we have assembled with NSF support. You can contribute your favorite Geomorphology images to this new archive.
- Submitting a Vignette is a great way to contribute to this community resource and to share what you know from your own geomorphic research. Vignettes are peer-reviewed and hosted by Carleton College's Science Education Resource Center.
- The National Science Foundation's Course, Curriculum, and Laboratory Improvement program is funding community involvement in this project. You can download our proposal.
- Join us for a workshop during which you can contribute electronic resources to Key Concepts in Geomorphology. See the schedule of workshops at GSA, EGU, AGU, and AAG. More to come.
- W. H. Freeman will be publishing the new textbook in full color. They have dedicated significant resources to the art and photography program.
http://www.uvm.edu/~geomorph/textbook/
On the left can be seen normal development in a domestic chicken, and on the right, the harmful effects caused by a pleiotropic gene mutation. Close inspection shows that a mutation in a single gene can damage several organs at the same time. Even if we were to admit that mutations did have a positive effect, the pleiotropic effect would eliminate this advantage by damaging several different organs at once.
One of the proofs that mutations inflict only harm on living things lies in the way information is organized in the genetic code. In developed animals, almost all the known genes contain more than one piece of information about that organism. For example, a single gene may control both height and eye color.
The effects of genes on development are often surprisingly diverse. In the house mouse, nearly every coat-colour gene has some effect on body size. Out of seventeen X-ray-induced eye colour mutations in the fruit fly Drosophila melanogaster, fourteen affected the shape of the sex organs of the female, a characteristic that one would have thought was quite unrelated to eye colour. Almost every gene that has been studied in higher organisms has been found to affect more than one organ system, a multiple effect which is known as pleiotropy. As Mayr argues in Population, Species and Evolution: "It is doubtful whether any genes that are not pleiotropic exist in higher organisms."186
Due to this characteristic in living things' genes, any defect occurring in any gene in the DNA as a result of a chance mutation will affect more than one organ. Thus the mutation will have more than one destructive effect. Even if one of these effects is hypothesized to be beneficial, as the result of an extremely rare coincidence, the other effects' inevitable damage will cancel out any advantage. (See Mutation: An Imaginary Mechanism.)
Therefore, it is impossible for living things to have undergone evolution, because no mechanism exists that can cause them to evolve.
186. Ibid., p. 149.
http://harunyahya.com/en/Evolution-Dictionary/16606/Pleiotropic-Effect-The
Roman Catholic Faith Examined!
The sign of the cross in its present form did not exist before the 9th century. In about AD 200, the most anyone did was trace the cross with a finger on the forehead.
There is absolutely no use of the "sign of the cross" in Apostolic times or the Bible.
We Speak truth in LOVE
Tell us if we have misrepresented the Catholic Faith
The Catholic Encyclopedia, under "Sign of the cross", says:
"We have positive evidence in the early Fathers that such a practice was familiar to Christians in the second century. "In all our travels and movements", says Tertullian [200AD] (De cor. Mil., iii), "in all our coming in and going out, in putting of our shoes, at the bath, at the table, in lighting our candles, in lying down, in sitting down, whatever employment occupieth us, we mark our foreheads with the sign of the cross"."
"On the whole it seems probable that the ultimate prevalence of the larger cross is due to an instruction of Leo IV in the middle of the ninth century."
"Most commonly and properly the words "sign of the cross" are used of the large cross traced from forehead to breast and from shoulder to shoulder, such as Catholics are taught to make upon themselves when they begin their prayers, and such also as the priest makes at the foot of the altar when he commences Mass with the words: "In nomine Patris et Filii et Spiritus Sancti". (At the beginning of Mass the celebrant makes the sign of the cross by placing his left hand extended under his breast; then raising his right to his forehead, which he touches with the extremities of his fingers, he says: In nomine Patris; then, touching his breast with the same hand, he says: et Filii; touching his left and right shoulders, he says; et Spiritus Sancti; and as he joins his hands again adds: Amen.) The same sign recurs frequently during Mass, e.g. at the words "Adjutorium nostrum in nomine Domini", at the "Indulgentiam" after the Confiteor, etc., as also in the Divine Office, for example at the invocation "Deus in adjutorium nostrum intende", at the beginning of the "Magnificat", the "Benedictus", the "Nunc Dimittis", and on many other occasions." ... "On the whole it seems probable that the ultimate prevalence of the larger cross is due to an instruction of Leo IV in the middle of the ninth century. "Sign the chalice and the host", he wrote, "with a right cross and not with circles or with a varying of the fingers, but with two fingers stretched out and the thumb hidden within them, by which the Trinity is symbolized. Take heed to make this sign rightly, for otherwise you can bless nothing" (see Georgi, "Liturg. Rom. Pont.", III, 37)." (Sign of the cross, The Catholic Encyclopedia, Volume XIII, Copyright © 1912 by Robert Appleton Company)
http://www.bible.ca/cath-sign-of-cross-history.htm
Basics of Transparent Blitting, Part 1
by Michael J. Norton
We’re back in session once more to discuss the fundamentals of Game Boy Advance SP-style graphics programming. Previously we discussed using offscreen buffers in the article, Basics of Offscreen Buffering. In this discussion we’re going to focus on copying sprites to the offscreen buffer. We’ll learn how to use a transparency pixel, and what it is used for in blitting (rendering) sprites.
Lesson 1: Wading Through the Hexadecimal Swamp
Even on the smoothest of sailing trips one can encounter a little turbulence every now and then. Well, today is no different. We’ve been sailing smoothly through Elementary Graphics topics without having to discuss the complex math involved. Today we have an exception. I need to cover some basic math in order to discuss computer pixels and color.
Computers use a special number system that will seem a little foreign to you at first. With time and experience, you’ll get used to it. When we learn to count, we’re accustomed to starting with 1 and counting to 10. This is called the base-10 decimal system. When a computer counts, it counts in groups of 16. This is called the base-16 hexadecimal system. To make things a bit more interesting, a computer uses letters once it counts past 9.
Let’s take a look, side by side, at a base-10 system and a base-16 system, starting from 0 and counting to 15.
Base 10:  0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15
Base 16:  0  1  2  3  4  5  6  7  8  9  A   B   C   D   E   F
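The same chart can be produced by letting the interpreter do the conversion. The snippet below is not from the article; it is a minimal sketch in Tcl (the scripting language whose image commands appear later in this article) that you could paste into a tclsh prompt. The %X specifier of format prints a value in base 16:

    # count from 0 to 15, printing each value in base 10 and base 16
    for {set i 0} {$i <= 15} {incr i} {
        puts "base 10: [format %2d $i]   base 16: [format %X $i]"
    }

Running it prints 0 through 9 unchanged and then A through F for 10 through 15.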
Why does a computer use base-16 numbering anyway? It all has to do with the computer’s hardware. A single base-16 digit can represent 4 bits. What is a bit? Now you’re making me work for my money, aren’t you?
A bit is the true language of a computer. A bit can have one of two values, 0 and 1. A bit is simply a placeholder for a 0 or a 1. Four bits are four placeholders for four zeros and ones. For example, 0000 is a four-bit representation of the number 0. The number 1 represented in 4-bit notation is 0001. A single-bit computer is pretty much useless. Early computers were 8-bit computers. This meant two groups of 4 bits were used together to represent a single 8-bit number.
To represent the number zero using 8-bit notation we would write, 0000 0000. To write the number 1 using 8-bit notation we would write, 0000 0001. The smallest number we can represent with 4 bits is 0 and the largest number is 15. Four bits, in the good ol’ days of computing, was called a nibble. Two nibbles, 4 bits + 4 bits = 8 bits, is called a byte. Let’s take a look at how bits are used to represent numbers.
Binary:       0000  0001  0010  0011  0100  0101  0110  0111  1000  1001  1010  1011  1100  1101  1110  1111
Decimal:      0     1     2     3     4     5     6     7     8     9     10    11    12    13    14    15
Hexadecimal:  0     1     2     3     4     5     6     7     8     9     A     B     C     D     E     F

The table shows side by side the binary numeric value with its decimal and hexadecimal counterpart. Before you get mad and jump out of a window, let me make this point. You don’t need to clearly understand hexadecimal just yet. Just know that it exists and WHY it exists. Here is WHY a programmer uses hexadecimal numbers in the first place. Look at the chart above. One hexadecimal digit can represent 4 bits. Take, for example, the 4-bit value 1111. I know from a lookup chart that this value is F hexadecimal. This is easier for us to read, too. After some experience, you’ll get the feel for hexadecimal numbers.
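As a further illustration (again a sketch, not code from the article), the Tcl interpreter can pull the two nibbles back out of a byte with a shift and a mask. The value 0xC6 here is just an arbitrary example byte:

    set byte 0xC6                                  ;# 1100 0110 in binary, 198 in decimal
    puts [format %X [expr {($byte >> 4) & 0xF}]]   ;# high nibble: prints C
    puts [format %X [expr {$byte & 0xF}]]          ;# low nibble:  prints 6
    puts [expr {$byte}]                            ;# the whole byte in decimal: prints 198

Each hexadecimal digit lines up with exactly one 4-bit nibble, which is why programmers reach for base 16 in the first place.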
This concludes today’s quick math lesson. Let’s go play with some video game graphics.
Lesson 2: The Transparency Pixel
We’re going to pick up where we left off in the article, Elementary Computer Graphics: Basics of Offscreen Buffering. Our final programming example from the last article is shown in Figure 1.
Figure 1. Sprite copied with no transparency pixel.
We copied our sprite directly to the offscreen buffer. The sprite has a funky fuchsia background that covers the cool game background. It looks hokey, right? Well, we can fix this. What we need to do is draw all the pixels of the leaping monster sprite, except for the funky fuchsia pixel. This pixel is our transparency pixel, which means we won’t draw it.
Identifying the Transparency Pixel
The first task is to figure out the pixel values of the funky fuchsia color. Building on the code from the blitting sprites discussion in Elementary Computer Graphics: Basics of Offscreen Buffering, we know where the leaping monster sprite sits in the Tk sprite image buffer: the rectangle 265 737 328 800. The Tk image library lets us retrieve a pixel from that region using the image photo tools. It looks like this:
    # retrieve the transparency pixel value
    set transparency_pixel [$sprites get 265 737]
    % puts $transparency_pixel
    198 0 107
The image photo procedure returns a list of three numeric values as shown in Figure 2. These are RGB values, for Red, Green, and Blue, the primary color pigments. The three values represent how bright each value should be for red, green, and blue to create a specific pixel color. This is just like mixing watercolors in art class. Only now we're using a computer to mix the paint for us. We need a red brightness of 198, a green brightness of 0, and a blue brightness of 107. These three Red-Green-Blue (RGB) values define our funky fuchsia transparency pixel. We have completed our first task. We have the RGB values for the transparency pixel.
Figure 2. Fuchsia transparency pixel information.
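The article pauses here, having identified the transparency value, so the following is only a forward-looking sketch rather than code from the article. It shows one way the value could be used: a hypothetical blit_transparent procedure (the procedure name, the $offscreen image, and the destination coordinates are all illustrative assumptions) that copies the sprite rectangle from one Tk photo image to another, packing each RGB triple into a #rrggbb color string and skipping any pixel that matches the fuchsia value 198 0 107:

    # Copy the rectangle (x1,y1)-(x2,y2) from photo image $src to photo image $dest,
    # placing its top-left corner at (destX,destY). Any pixel whose {r g b} list
    # exactly matches $transparency is left undrawn, so the background shows through.
    proc blit_transparent {src dest x1 y1 x2 y2 destX destY transparency} {
        for {set y $y1} {$y < $y2} {incr y} {
            for {set x $x1} {$x < $x2} {incr x} {
                set pixel [$src get $x $y]             ;# a list such as: 198 0 107
                if {$pixel ne $transparency} {
                    foreach {r g b} $pixel break       ;# unpack the RGB triple
                    set color [format "#%02x%02x%02x" $r $g $b]
                    $dest put [list [list $color]] \
                        -to [expr {$destX + $x - $x1}] [expr {$destY + $y - $y1}]
                }
            }
        }
    }

    # Hypothetical usage with the sprite rectangle and pixel value found above:
    # blit_transparent $sprites $offscreen 265 737 328 800 0 0 {198 0 107}

Copying pixel by pixel with get and put is slow and is shown only to make the idea concrete; a real implementation would more likely lean on the photo image's built-in transparency handling or copy larger blocks at once.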
http://www.oreillynet.com/pub/a/mac/2004/08/17/blitting.html
Students have life-changing experiences volunteering in Kenya
On Manale Patel’s first trip volunteering in Kenya, she noticed a man approaching their clinic with complete sorrow and devastation on his face.
The group of College of Charleston students and local medical personnel were administering HIV tests. This man had come to confirm what he already thought to be true: that he was HIV positive.
But after Patel administered the test, the results came back negative. He did not have HIV.
She told the man and comforted him saying “Sawa sawa” meaning “it’s all right” in Swahili.
Patel did not understand his response but knew he was relieved when he fell to his knees and raised his hands praising God.
Patel and others in the College of Charleston student organization Project Harambee travel to rural villages in Kenya for 10 days during their winter break. Some students opt to do an extended, three-week stay in Kenya.
The students said that the outreach has made them better people.
During the school year, Project Harambee has fundraisers to purchase medical supplies to administer once there.
They also get donations from local businesses.
Many of the supplies are purchased in Kenya because it’s cheaper and helps their economy. This past December, they spent about $6,500 on medical supplies. They also bring clothes and other forms of aid to orphanages.
In Kenya, they live with the family of their professor, Fulbright Scholar Mutindi Ndunda, a native of Kenya.
Ndunda said she first took students to do outreach in Kenya in 2002. Then in 2007, she started again with Project Harambee.
“I want my students to have an enduring understanding of the world we live in,” she said.
The students said that on a typical day in Kenya they’ll wake up at the crack of dawn (usually because the roosters start crowing) to have breakfast.
Then they travel to their outreach destination, which could take up to two hours. Once there, they set up seven to eight tents for their temporary clinics to administer medical supplies such as antibiotics, anti-Malaria pills, HIV tests and counseling.
The students are assisted by volunteer medical personnel in each tent. On other days, they spend time with children at orphanages and donate things such as clothes and bedsheets. But each day is different.
“Time doesn’t exist there. Things just happen when they happen,” said Nthenya Ndunda, the professor’s 26-year-old daughter.
Sarah Potts went to Kenya in 2010 and 2011. She said staying with Ndunda’s family makes them feel like part of the community.
“When you leave, it feels like you’ve left your family,” Potts said.
“They’re more open. Western cultures are more closed. You’re family as soon as you walk in the door,” said senior Kelcey Davis, who went to Kenya in 2011 and 2012.
Patel is now a senior who has gone to Kenya three times. She has taken out loans to fund her trips, but said her personal growth and experiences there are priceless.
“It’s worth it. It’s something that made me who I am and I would pay thousands of dollars for that,” Patel said. She plans on pursuing a master’s in public health with a focus on global health after graduating.
“She has changed so much. Her confidence and performance increased. Her life is now focused on health-related studies. Her GPA increased and she’s lost weight,” Ndunda said of Patel.
Other students have noticed changes in themselves, too.
Junior Brooke Byers went to Kenya in 2011 and is expecting a baby girl in April.
She said going to Kenya taught her the importance of family, community and selflessness, which she said will make her a better mother.
“It was nice just seeing how open and loving they were. ... I would love for her to experience Kenya one day,” Byers said of her daughter.
“They (Kenyans) are just happy to have life. They have a genuine joy for it. Going there you realize that life is much more than the career you want. That’s not what will bring you joy,” said Nthenya Ndunda.
But Patel noticed that there are cultural similarities when she saw an avocado tree in Nthenya Ndunda’s 94-year-old grandmother’s backyard. The grandmother maintains her youthful look by putting an avocado mixture on her face, Patel was told.
Beyond the classroom
Many of the students, such as Byers, said they chose to join the organization because they wanted more out of a study-abroad experience.
“I wanted to do service. That’s more valuable to me than classes; being involved and giving back. I don’t know if I would have gotten that from a traditional study-abroad program,” Byers said.
“Academics is more than being in a classroom with formal curriculum. It changes people’s lives. ... This project and others like it help them (students) find their purpose,” Ndunda said.
Not all students in Project Harambee go to Kenya every year. Senior Swati Patel said she worked on the administrative side for two years before going for the first time in December.
“I had to build up the courage to travel. I was going out of my comfort zone because I didn’t know what to expect,” Swati Patel said.
Manale Patel, who is not related to Swati, said she enjoys being able to see a direct impact on the people they help, which sets them apart from other organizations that just raise money.
She said that on one trip she asked a man when he took his last HIV test. “The last time you were here,” the man replied.
“There’s no better feeling than knowing you have a purpose,” Patel said.
For more information or to donate to Project Harambee, contact firstname.lastname@example.org. Reach Jade McDuffie at 937-5560 or email@example.com.
http://www.postandcourier.com/article/20130227/PC1606/130229276/1162/students-have-life-changing-experiences-volunteering-in-kenya
More Than Just Less Problems
For those of us who’ve had difficult childhoods, we’re not just looking to reduce our problems.
We want – we deserve – much more than that. We want well-being, and a good life.
That means having the freedom to make our own decisions. It means being physically active and healthy. It means being good at things that are important to us, and being effective in the world.
Choosing well-being for ourselves can help us to achieve the goals that are most meaningful to us, and to live up to our highest values, or at least move closer to those goals and values. Well-being means having fun. And it means feeling truly happy and fulfilled in our lives.
Well-Being? Sometimes Just Being Is Tough
Of course we can’t feel totally happy and fulfilled all the time. That’s especially true if you’ve had a rough childhood. But we all struggle with getting what we need versus doing what we have to do. Things can get out of balance.
You might be in great physical health and have a good job, but feel like you’re going through the motions. Doing what you have to – not what you need or want to. Or maybe you have lots of freedom but aren’t doing anything you find meaningful.
And of course life is going to throw challenges at you, setbacks that make your path seem steeper – at least at first.
What we can do is build well-being into our daily routines, so we have the stability to experience life’s challenges as manageable bumps in the road.
That way, we can keep moving towards well-being and a truly good life for ourselves.
Well-Being and Its Building Blocks
So what does well-being look like? What is it built on? And how can you bring it into your life?
Let’s talk about what everyone needs, and about the potential that’s inside all of us.
There are things that we all need in our lives, as humans in search of well-being. Unwanted or abusive sexual experiences in childhood can make these things even tougher to achieve as adults.
We’ve divided them into several parts, each about a fundamental human potential or need that, when fulfilled, promotes well-being.
- Freedom – From control or manipulation, by other people or by parts of oneself (e.g., thoughts or ‘voices’ in your head that judge, criticize or yell at you).
- Physical Health – Healthy sleep, exercise, nutrition; freedom from illness and disease, or, if present, they have the least possible negative effects.
- Comfortable in Your Body – Feeling safe with the experience of being in your body, whatever happens to be going on with it (which is not the same as being physically healthy).
- Relatedness, Belonging, and Community – Including caring for others
- Competence and Effectiveness – Being good at things that are important to you. Based on your unique abilities and potentials, being effective in the world (e.g., relationships, work) and with yourself (e.g., managing your emotions).
- Playfulness and Humor
- Moral and Ethical Thoughts and Behavior – It’s not just about being good, but reducing conflicts and bringing greater freedom, effectiveness and enjoyment to yourself and your relationships.
- Part of Something Bigger – Including spirituality or religion, connection with nature, serving others.
It’s All Interconnected
The more you’ve achieved one kind of well-being, the greater your ability to achieve others.
Freedom is often most important, especially for people who have been abused or exploited. To have such experiences is to have your freedom trampled. This can lead to feeling unworthy of making important decisions for yourself, or even not knowing what you really want in life.
Also, no matter what aspect of your well-being you are trying to enhance, if you’re not freely choosing to do so, but doing it because someone else is telling you to, or because you feel like you should or have to, any positive changes you make aren’t likely to feel right inside – or to last.
Some aspects of well-being go hand-in-hand and you can’t achieve one without another. For example, you can’t have playfulness and humor in your life if you’re disconnected from other people.
But it’s also possible to go overboard striving for one aspect of well-being, and to do so in ways that end up making an overall healthy life out of reach.
For example, a person may be enslaved to an exercise addiction. Or someone may have lots of freedom, but not use his time or abilities in ways that are effective or beneficial. Or someone may strive to be moral in a rigid way that shuts out playfulness and causes disconnection from others.
And again, given the unpredictability of life and the inevitable bumps in the road, there will always be times when things get thrown out of whack. But understanding the building blocks of well-being, and how they’re interconnected, and then putting that understanding into action, will bring a lot more well-being into any life, no matter what surprises life dishes out.
http://1in6.org/men/get-information/well-being/
Diarrhea is common in patients with the human immunodeficiency virus (HIV). Infections with the organisms Isospora or Cyclospora frequently cause diarrhea in those patients. The antibiotic trimethoprim–sulfamethoxazole is effective treatment for these infections, usually given at a dose of one tablet four times per day for 10 days. It is unknown whether lower doses or shorter courses of this antibiotic would be effective. Since these infections often come back if the treatment is stopped, patients are usually continued on the antibiotic (secondary prophylaxis). Unfortunately, some patients develop side effects related to the trimethoprim–sulfamethoxazole, and others may be allergic to it. Another antibiotic, ciprofloxacin, has been suggested as an option for treating and preventing infections with Isospora and Cyclospora species.
http://annals.org/article.aspx?articleid=713504
Mill Valley, California
City of Mill Valley (a city in Marin County, California)
- Mayor: Andrew Berman
- Senate: Mark Leno (D)
- Assembly: Marc Levine (D)
- U.S. Congress: Jared Huffman (D)
- County Board: District 3
- Area, total: 4.847 sq mi (12.555 km2)
- Area, land: 4.763 sq mi (12.336 km2)
- Area, water: 0.084 sq mi (0.219 km2), 1.74%
- Elevation: 79 ft (24 m)
- Density: 2,900/sq mi (1,100/km2)
- Time zone: PST (UTC-8); summer (DST): PDT (UTC-7)
- GNIS feature ID: 1659128
Mill Valley is located on the western and northern shores of Richardson Bay. Beyond the flat coastal area and marshlands, it occupies narrow wooded canyons, mostly of second-growth redwoods, on the southern slopes of Mount Tamalpais. The Mill Valley 94941 ZIP code also includes the following adjacent unincorporated communities: Almonte, Alto, Homestead Valley, Strawberry and Tamalpais Valley. The Muir Woods National Monument is also located just outside the city limits.
Coast Miwok
The first people known to inhabit Marin County, the Coast Miwok, arrived approximately 6,000 years ago. The territory of the Coast Miwok included all of Marin County, north to Bodega Bay, and southern Sonoma County. More than 600 village sites have been identified, including 14 sites in the Mill Valley area. Nearby archaeological discoveries include the rock carvings and grinding sites on Ring Mountain. The pre-Missionization population of the Coast Miwok is estimated at between 1,500 (Alfred L. Kroeber's estimate for the year 1770 A.D.) and 2,000 (Sherburne F. Cook's estimate for the same year), though it may have been as high as 5,000. Cook speculated that by 1848 their population had decreased to 300, and down to 60 by 1880. As of 2011 there are over 1,000 registered members of the Federated Indians of Graton Rancheria, which includes both the Coast Miwok and the Southern Pomo, all of whom can trace their ancestry back to 14 surviving original tribal ancestors. The Lucretia Hanson Little History Room in the Mill Valley Public Library holds oral histories recorded from Coast Miwok descendants.
In Mill Valley, on Locust Avenue between Sycamore and Walnut Avenues, there is now a metal plaque set in the sidewalk in the area believed to be the birthplace of Chief Marin in 1781; the plaque was dedicated on 8 May 2009. The village site was first identified by Nels Nelson in 1907, and his excavation revealed tools, burials and food debris just beyond the driveway of 44 Locust Ave. At that time, the mound was 20 feet (6.1 m) high. Another famous Mill Valley site was in the Manzanita area underneath the Fireside Inn (previously known as the Manzanita Roadhouse, Manzanita Hotel, Emil Plasberg's Top Rail, and Top Rail Tavern, most of which were notorious Prohibition-era gin joints and brothels) located near the intersection of U.S. Route 101 and California State Route 1. Built in 1916, the "blind pig" roadhouse was outside the dry limits of the city itself. Shell mounds have been discovered in areas by streams and along Richardson Bay, including in the Strawberry and Almonte neighborhoods.
Beginning with the foundation of Mission San Francisco de Asís, commonly known as Mission Dolores, in 1776, the Coast Miwok of southern Marin began to slowly enter the mission, first those from Sausalito, followed by those from the areas we now know as Mill Valley, Belvedere, Tiburon and Bolinas. They called themselves the "Huimen" people. At the mission they were taught the Catholic religion, lost their freedom, and three quarters died as a result of exposure to European diseases. Because of the high death rate at Mission Dolores, a new mission, Mission San Rafael, was built in 1817. Over 200 surviving Coast Miwok were taken there from Mission Dolores and Mission San Jose, including the 17 survivors of the Huimen Coast Miwok of the Richardson Bay area.
Early settlers
By 1834 the Mission era had ended and California was under the control of the Mexican government. They took Miwok ancestral lands, divided them, and gave them to Mexican soldiers or relatives who had connections with the Mexican governor. The huge tracts of land, called ranchos by the Mexican settlers, or Californios, soon covered the area. The Miwoks who had not died or fled were often employed under a state of indentured servitude to the California land grant owners. In 1834, the governor of Alta California, José Figueroa, awarded John T. Reed the first land grant in Marin, Rancho Corte Madera del Presidio. Just west of that, Rancho Saucelito was transferred to William A. Richardson in 1838 after being originally awarded to Nicolas Galindo in 1835. In 1836, Reed married Hilaria Sanchez, the daughter of a commandante of the San Francisco Presidio. William Richardson also married a well-connected woman; both he and Reed were originally from Europe. Richardson's name was later applied to Richardson Bay, an arm of the San Francisco Bay that brushes up against the eastern edge of Mill Valley. The latter rancho contained everything south and west of the Corte Madera and Larkspur areas, with the Pacific Ocean, San Francisco Bay, and Richardson Bay as the other three borders. The former encompassed what is now southern Corte Madera, the Tiburon Peninsula, and Strawberry Point.
Reed built the first sawmill in the county on Cascade Creek (now Old Mill Park) in the mid-1830s on Richardson's rancho and settled near what is now Locke Lane and LaGoma Avenue. The mill cut wood for the San Francisco Presidio. He also raised cattle and horses and had a brickyard and stone quarry. Reed did brisk business in hunting, skins, tallow, and other products until his death in 1843 at 38 years of age. Richardson sold butter, milk and beef to San Francisco during the Gold Rush. Shortly thereafter, he made several poor investments and wound up massively in debt to many creditors. On top of losing his Mendocino County rancho, he was forced to deed 640 acres (2.6 km2) of Rancho Saucelito to his wife, Maria Antonia Martinez, daughter of the commandant of the Presidio, in order to protect her. The rest of the rancho, including the part of what is now Mill Valley that did not already belong to Reed's heirs, was given to his administrator Samuel Reading Throckmorton. At his death in 1856 at 61 years old, Richardson was almost entirely destitute.
Throckmorton came to San Francisco in 1850 as an agent for an eastern mining business before working for Richardson. As payment of a debt, Throckmorton acquired a large portion of Rancho Saucelito in 1853-4 and built his own rancho "The Homestead" on what is now Linden Lane and Montford Avenue. The descendants of ranch superintendent Jacob Gardner continue to be active in Marin. Some of the rest of his land was leased out for dairy farming to Portuguese settlers. A majority of the immigrants came from the Azores Islands. Those who were unsuccessful at gold mining came north to the Marin Headlands and later brought their families. In Mill Valley, Ranch "B" is one of the few remaining dairy farm buildings and is located near the parking lot at the Tennessee Valley trailhead. Throckmorton also suffered devastating financial problems before his death in 1887. His surname would later be applied to one of the major thoroughfares in Mill Valley.
Richardson and Reed had never formalized the boundary lines separating their ranchos. Richardson's heirs successfully sued Reed's heirs in 1860 claiming the mill was built on their property. The border was officially marked as running along the Arroyo Corte Madera del Presidio along present day Miller Avenue. Everything to the east of the creek was Reed property, and everything to the west was Richardson land. It was Richardson's territory that would soon become part of Mill Valley when Throckmorton's daughter Suzanna was forced to relinquish several thousand acres to the San Francisco Savings & Union Bank to satisfy a debt of $100,000 against the estate in 1889.
In 1873, San Francisco physician Dr. John Cushing discovered 320 "lost" acres between the Reed and Richardson boundaries, between present day Corte Madera Avenue, across the creek, and into West Blithedale Canyon. Using the Homestead Act he petitioned the government and managed to acquire the land. Before his death in 1879 he had built a sanitarium in the peaceful canyon. From Sausalito, the North Pacific Coast Railroad had laid down tracks to a station near present day Highway 101 at Strawberry. Seeing the financial advantages of a railroad, his descendants then turned the hospital into the Blithedale Hotel after the land title was finally granted in 1884. The sanitarium was enlarged, cottages were built up along the property, and horse-drawn carriages were purchased to pick up guests at the Alto station. Within a few years, several other summer resort hotels had cropped up in the canyon, including the Abbey, the Eastland, and the Redwood Lodge. Fishing, hunting, hiking, swimming, horseback riding, and other activities increased in popularity as people came to the area as vacationers or moved in and commuted to San Francisco for work. Meanwhile, Reed's mill had deforested much of the surrounding redwoods, meaning most of the redwoods growing today are second or third growth.
The King family (King Street) also owned property near the Cushing land. One of its buildings was a small adobe house which, according to oral histories available at the Lucretia Hanson Little History Room in the Mill Valley Public Library, is believed to have predated the King farm. The Blithedale Hotel used it as a milk house. The adobe structure is still standing and connected to a house on West Blithedale Avenue; it is the oldest structure in Mill Valley.
The San Francisco Savings & Union Bank organized the Tamalpais Land & Water Company in 1889 as an agency for disposing of the Richardson land gained from the Throckmorton debt. The Board of Directors was President Joseph Eastland, Secretary Louis L. Janes (Janes Street), Thomas Magee (Magee Avenue), Albert Miller (Miller Avenue), and Lovell White (Lovell Avenue). Eastland, who had been president of the North Pacific Coast Railroad in 1877 and retained an interest, pushed to extend the railroad into the area in 1889. Though Reed, Richardson, and the Cushings were crucial to bringing people to the Mill Valley area, it was Eastland who really propelled the area and set the foundation for the city today. He had founded power companies all around the San Francisco Bay area, was on the board of several banks, and had control of several commercial companies. The Tamalpais Land & Water Co. hired Michael M. O'Shaughnessy, already a noted engineer (later on he would become chief engineer for the Hetch Hetchy Reservoir and O'Shaughnessy Dam and planned many San Francisco streets) to lay out roads, pedestrian paths, and step-systems for what the developers hoped would become a new city. He also built the Cascade Dam & Reservoir for water supply, and set aside land plots for churches, schools, and parks.
On 31 May 1890, nearly 3,000 people attended The Tamalpais Land & Water Co. land auction near the now-crumbling sawmill. More than 200 acres (0.81 km2) were sold that day in the areas of present day Throckmorton, Cascade, Lovell, Summit, and Miller Avenues, extending to the west side of Corte Madera Avenue. By 1892, there were two schools in the area and a few churches. The auction also brought architects, builders, and craftsmen into Mill Valley. Harvey A. Klyce was one of the most prominent of the architects and designed many private homes and public buildings in the area, including the Masonic Lodge in 1904. Before his death in 1894, Eastland built a large summer home, "Burlwood", on Throckmorton Avenue in 1892; it still stands, though much of the original land has been parceled off. Burlwood was the first home in the town to have electricity, and when telephones were installed only he and Mrs. Cushing, the owner of the Blithedale Hotel, had service. After the land auctions the area was known as both "Eastland" and "Mill Valley".
Janes, by then the resident director of Tamalpais Land & Water Co. (and eventually the city's first town clerk), and Sidney B. Cushing, president of the San Rafael Gas & Electric Co., set out to bring a railroad up Mt. Tamalpais. The Mt. Tamalpais Scenic Railway opened in 1896 (with Cushing as president) and ran from the town center (present day Lytton Square) all the way to the summit. In 1907, the railroad added a branch line into "Redwood Canyon", and in 1908, the canyon became Muir Woods, a national monument. The railroad built the Muir Inn (with a fine restaurant) and overnight cabins for visitors. The Mt. Tamalpais & Muir Woods Scenic Railway, "The Crookedest Railroad in the World", and its unique Gravity Cars brought thousands of tourists to the Tavern of Tamalpais on the mountain summit (built in 1896, rebuilt after the 1923 fire, and razed in 1950 by the California State Parks), the West Point Inn (built in 1904 by the scenic railway, operated commercially until 1943, closed briefly, and run by volunteers to the present day), and the Muir Woods Inn (burned in 1913, rebuilt in 1914, destroyed in 1930). The tracks were removed in 1930, after the 1929 fire, the drop in tourism due to the Great Depression, and the increase in automobile traffic with the construction of the Panoramic Highway and other roads had caused a drastic drop in ridership. Built in an era when people walked most everywhere, train service was a dependable, regular and relatively cheap form of travel. Rails connected Mill Valley with neighboring cities and carried commuters to San Francisco.
Incorporation through WWII
By 1900, the population was nearing 900 and the locals pushed out the Tamalpais Land & Water Co. in favor of incorporation. Organizations and clubs cropped up, including the Outdoor Art Club (1902), the Masonic Lodge (1903), which celebrated its centennial in 2003, and the Dipsea Race (1905), which marked its 100th running in 2010. The second big population boom came after the 1906 Great Earthquake. While much of San Francisco and Marin County was devastated, many fled to Mill Valley and most never left. In that year alone the population grew to over 1,000 permanent residents. Creeks were bridged over or dammed, more roads laid down and oiled, and cement sidewalks poured. Tamalpais High School opened in 1908, the first city hall was erected in 1908, and Andrew Carnegie's library in 1910. The Post Office opened under the name "Eastland"; however, after many objections, the name was changed to "Mill Valley" in 1904. The very first Mountain Play was performed at the Mountain Theater on Mt. Tam in 1913. By the 1920s, most roads were paved over, mail delivery was in full swing, and the population had grown to more than 2,500. Mill Valley Italian settlers made wine during Prohibition, while some local bar owners made bootleg whiskey under the dense foliage around the local creeks. January 1922 saw the first of several years of snow in Marin County, coating Mt. Tam white. Two years later the Sulphur Springs, a natural hot spring where locals could revive their lagging spirits, was covered over and turned into the playground of the Old Mill Elementary School.
1929 was a year of great change for Mill Valley. The Great Fire raged for several days in early July and nearly destroyed the fledgling city. It ravaged much of Mt. Tam (including the Tavern and 117 homes) and the city itself was spared only by a change in wind direction. In October of that year, the Mt. Tamalpais and Muir Woods Scenic Railway ran for the last time. The fire caused great devastation to tourism and tourist destinations, but the railroads were also crushed by the automobile. Panoramic Highway, which runs between Mill Valley and Stinson Beach, was built in 1929-1930. The stock market crash of 1929 and the ensuing Great Depression crippled what little railroad tourism there was, to the point where the tracks were eventually taken up in 1931.
During the Great Depression, many famous local landmarks were constructed with the help of the Works Progress Administration and the Civilian Conservation Corps, including the Mead Theater at Tam High (named after school board Trustee Ernest Mead) and the Mountain Theater rock seating. The Golden Gate Bridge was built in 1934-1937; its completion ended railroad commuting between Marin and the city and helped increase the Marin population. With the demise of the railroads came the introduction of local bus service, and Greyhound moved into the former train depot in Lytton Square in October 1940. In Sausalito, Marinship brought over 75,000 people to Marin, many of whom moved to Mill Valley permanently. At the height of the war, nearly 400 locals were fighting, including many volunteer firemen and government officials. By 1950, 1 in 10 Mill Valleyans were living in a "Goheen Home"; George C. Goheen built these so-called "defense homes" for defense workers throughout the 1940s and 1950s in the Alto neighborhood.
1950s to present
With a population just over 7,000 by 1950, Mill Valley was still relatively rural. Men commuted to San Francisco on the Greyhound bus when the streets were not flooding in heavy rain, and there still were not any traffic lights. The military built the Mill Valley Air Force Station to protect the area during the Korean War. In 1956, a group of Beat poets and writers lived briefly in the Perry house, most notably Jack Kerouac and San Francisco Renaissance Beat poet Gary Snyder. The house and its land are now owned by the Marin County Open Space District. By the beginning of the 1960s, however, the population had swelled. The Mill Valley Fall Arts Festival became a permanent annual event and the old Carnegie library was replaced with an award-winning library at 375 Throckmorton Ave. Designed by architect Donn Emmons, the new library was formally dedicated on September 18, 1966. The 1970s saw a change in attitude and population. Mill Valley became an area associated with great wealth, with many people making their millions in San Francisco and moving north. New schools and neighborhoods cropped up, though the city maintained its defense of redwoods and protected open space.
Cascade Dam, built in 1893, was closed in 1972 and drained four years later in an attempt to curb the "hordes" of young people using the reservoir for nude sunbathing and swimming. Youth subculture would come under attack again in 1974 when the City Council banned live music, first at the Sweetwater and later at the Old Mill Tavern, both now defunct. In 1977, the Lucretia Hanson Little History Room in the library opened and became the base of operations for the Mill Valley Historical Society. Marin County was hit with one of the worst droughts on record beginning in 1976 and peaking in 1977, brought on by a combination of several seasons of low rainfall and a refusal to import water from the Russian River, instead relying solely on rain water from Mt. Tam and the West Marin watersheds to fill the then-six reservoirs. By June 1977, the County managed to pipe in water from the Sacramento River Delta, staving off disaster. The rainfall during the winter of 1977-78 was one of the heaviest on record. The Mill Valley Film Festival, now part of the California Film Institute, began in 1978 at the Sequoia Theatre.
The 1980s and 1990s saw the decline of small businesses in Mill Valley. Local establishments like Lockwood's Pharmacy closed in 1981 after running almost continuously for 86 years. Old Mill Tavern, O'Leary's, and the Unknown Museum shut their doors, as did Red Cart Market and Tamalpais Hardware. In their places came boutiques, upscale clothing stores, coffee shops, art galleries, and gourmet grocery stores. Downtown Plaza and Lytton Square were remodeled to fit the new attitude. The population in the city alone swelled to over 13,000 and many of the old, narrow, winding streets grew clogged with traffic congestion. The Public Library expanded with a new Children's Room, a downstairs Fiction Room, and Internet computers. It also joined MARINet, a consortium of all the public libraries in Marin, to allow patrons greater access to information. MARINet now has an online catalogue of all the materials, both physical and electronic, in the Marin public libraries; patrons can order materials and pick them up or drop them off at any of the participating libraries. The Old Mill also got a face lift; it was rebuilt to the same specifications as the original in 1991. The 1990s also saw another influx of affluence. Many new homeowners gutted homes built in the 19th and early 20th centuries, or tore them down altogether.
The dawn of the new millennium brought reflection on the past, as the city celebrated 100 years of incorporation. Soon after, Mill Valley got its brand new Community Center at 180 Camino Alto, adjacent to Mill Valley Middle School. On January 31, 2008, Mill Valley's sewage treatment plant spilled 2.45 million gallons of sewage into the San Francisco Bay. This marked the second such spill in Mill Valley within a week (the previous one spilled 2.7 million gallons), and the most recent of several that occurred in Marin County in early 2008. Mill Valley's treatment plant attributed the spills to "human error". The spills caused distress in Mill Valley's administrative government, which remains outspoken about "dedicating itself to the protection of air quality, waste reduction, water and energy conservation, and the protection of wildlife and habitat" in Mill Valley.
According to the United States Census Bureau, the city has a total area of 4.8 square miles (12 km2), of which 4.7 square miles (12 km2) is land and 0.1 square miles (0.26 km2), or 1.74%, is water.
The Mill Valley 94941 area lies between Mt. Tamalpais on the west, the city of Tiburon on the east, the city of Corte Madera on the north, and the Golden Gate National Recreation Area (GGNRA) on the south. Two streams flow from the slopes of Mt. Tamalpais through Mill Valley to the bay: the Arroyo Corte Madera del Presidio and Cascade Creek. Mill Valley is surrounded by hundreds of acres (hectares) of state, federal, and county park lands. In addition, there are many municipally maintained open-space reserves, parks, and coastal habitats which, taken together, ensconce Mill Valley in a natural wilderness. This close and constant proximity to nature has left generations of Mill Valley residents with a strong sense of conservancy toward much of this natural environment. This attitude, along with the many natural public spaces preserved within (see below) and around its borders, forms one of the main cultural cornerstones that has always defined Mill Valley.
Mill Valley has a number of scenic and natural features which provide significant habitat for fish, marine mammals, and other biota. Several points of public access allow visitors to experience these aquatic preserves.
Mill Valley and the Homestead Valley Land Trust maintain many minimally disturbed wildland areas and preserves which are open to the public from sunrise to dusk every day. Several nature trails allow access, as well as providing gateway access to neighboring state and federal park lands and to the Mt. Tamalpais Watershed wildland on the broad eastern face of Mt. Tamalpais that overlooks Mill Valley. These are undeveloped natural areas and contain many species of wild animals, including some large predators such as the coyote, the bobcat, and the cougar. As in all wildland areas, observe daytime access hours, keep dogs on leashes, and keep younger children from wandering about unattended. Visitors may also want to familiarize themselves with how to live and recreate among cougars, coyotes, and bobcats before visiting these wildland areas.
- Cascade Falls Park—A natural forested park that spans an area between the western stretches of Cascade Drive and Lovell Ave.
- Blithedale Summit Open Space Preserve—located up West Blithedale Ave.
- Tennessee Valley—located in Tamalpais Valley, off Shoreline Highway
- Alto Bowl Open Space Preserve—located 1.2 miles (1.9 km) up Camino Alto
- Camino Alto Open Space Preserve—located 1/2 mile (800 m) up Camino Alto, up Overhill Rd.
Mill Valley has a mild Mediterranean climate, which results in relatively wet winters and very dry summers. Winter lows rarely drop below freezing and summer highs rarely exceed 90 °F (32 °C), with 90% of the annual rain falling in November through March. Wind speeds average lower than national averages in winter months and higher in summer, and often become quite gusty in the canyon regions of town. California coastal fog often affects Mill Valley, making relative humidity highly variable. The wetter winter months tend to make for a more consistent daily relative humidity around 70-90% (slightly higher than US averages). During the summer months, the morning fog often keeps morning humidity in a typical 70-80% range, but by afternoon, after the fog burns off, the humidity regularly plummets to around 30%, as one would expect in this seasonally dry climate. Sunlight, as with many northwest coastal cities, is low, with usually around 130 clear days a year. Marin County is the fourth least sunny place in the United States, after Cleveland, Seattle and Unalaska, Alaska.
Climate data for Mill Valley, California (monthly table of record highs and lows, average highs and lows, and rainfall omitted).
Mill Valley is also affected by microclimate conditions in the several box canyons with steep north-facing slopes and dense forests which span the southern and western city limits, which, along with the coastal fog, all conspire to make many of the dense forested regions of Mill Valley noticeably cooler and moister, on average, than other regions of town. This microclimate is what makes for the favorable ecology required by the Coastal Redwood forests which still cover much of the town and surrounding area, and have played such a pivotal role throughout the history of Mill Valley (see "History" above).
The 2010 United States Census reported that Mill Valley had a population of 13,903. The population density was 2,868.2 people per square mile (1,107.4/km²). The racial makeup of Mill Valley was 12,341 (88.8%) White, 118 (0.8%) African American, 23 (0.2%) Native American, 755 (5.4%) Asian, 14 (0.1%) Pacific Islander, 152 (1.1%) from other races, and 500 (3.6%) from two or more races. Hispanic or Latino of any race were 622 persons (4.5%).
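The density and water-share figures quoted in this article follow from simple arithmetic on the census count and the areas listed at the top of the article. The short Python sketch below is only a consistency check using numbers already stated here; it introduces no new data, and the small gap between the two density figures reflects whether the total area or the land-only area is used as the divisor.

```python
# Consistency check of figures quoted in this article.
# All inputs are the 2010 Census population and the areas listed above.

population_2010 = 13903        # 2010 Census population
total_area_sq_mi = 4.847       # total area
land_area_sq_mi = 4.763        # land area
water_area_sq_mi = 0.084       # water area

# The quoted density of about 2,868 people per square mile matches population
# divided by the total area; dividing by land area alone gives a higher figure.
print(f"Density over total area: {population_2010 / total_area_sq_mi:,.1f} per sq mi")  # ~2,868.4
print(f"Density over land area:  {population_2010 / land_area_sq_mi:,.1f} per sq mi")   # ~2,919.0
print(f"Water share of total:    {water_area_sq_mi / total_area_sq_mi:.2%}")            # ~1.73%
```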
The Census reported that 99.5% of the population lived in households and 0.5% were institutionalized.
There were 6,084 households, out of which 1,887 (31.0%) had children under the age of 18 living in them, 2,984 (49.0%) were opposite-sex married couples living together, 465 (7.6%) had a female householder with no husband present, 178 (2.9%) had a male householder with no wife present. There were 306 (5.0%) unmarried opposite-sex partnerships, and 55 (0.9%) same-sex married couples or partnerships. 2,016 households (33.1%) were made up of individuals and 888 (14.6%) had someone living alone who was 65 years of age or older. The average household size was 2.27. There were 3,627 families (59.6% of all households); the average family size was 2.94.
The population was spread out with 3,291 people (23.7%) under the age of 18, 459 people (3.3%) aged 18 to 24, 2,816 people (20.3%) aged 25 to 44, 4,714 people (33.9%) aged 45 to 64, and 2,623 people (18.9%) who were 65 years of age or older. The median age was 46.6 years. For every 100 females there were 85.3 males. For every 100 females age 18 and over, there were 80.8 males.
There were 6,534 housing units at an average density of 1,348.0 per square mile (520.4/km²), of which 3,974 (65.3%) were owner-occupied, and 2,110 (34.7%) were occupied by renters. The homeowner vacancy rate was 1.2%; the rental vacancy rate was 4.5%. 9,861 people (70.9% of the population) lived in owner-occupied housing units and 3,966 people (28.5%) lived in rental housing units.
At the 2000 census, there were 13,600 people, 6,147 households and 3,417 families residing in the city, not including those living in unincorporated territories. The population density was 2,883.1 inhabitants per square mile (1,112.5/km²). There were 6,286 housing units at an average density of 1,332.6 per square mile (514.2/km²). The racial makeup of the city in 2010 was 85.8% non-Hispanic White, 0.8% non-Hispanic African American, 0.1% Native American, 5.3% Asian, 0.1% Pacific Islander, 0.3% from other races, and 3.1% from two or more races. Hispanic or Latino of any race were 4.5% of the population.
There were 6,147 households of which 27.1% had children under the age of 18 living with them, 45.2% were married couples living together, 7.6% had a female householder with no husband present, and 44.4% were non-families. 34.1% of all households were made up of individuals and 12.4% had someone living alone who was 65 years of age or older. The average household size was 2.20 and the average family size was 2.85.
21.2% of the population was under the age of 18, 2.9% from 18 to 24, 28.1% from 25 to 44, 32.5% from 45 to 64, and 15.4% who were 65 years of age or older. The median age was 44 years. For every 100 females there were 86.5 males. For every 100 females age 18 and over, there were 82.5 males.
The median household income was $90,794, and the median family income was $119,669. Males had a median income of $94,800 versus $52,088 for females. The per capita income for the city was $64,179. About 2.7% of families and 4.5% of the population were below the poverty line, including 3.6% of those under age 18 and 5.7% of those age 65 or over. The median single-family home price in the city was $1,500,000 in January 2005.
The combination of Mill Valley's idyllic location nestled beneath Mount Tamalpais coupled with its ease of access to nearby San Francisco has made it a popular home for many high-income commuters. Over the last 20 years, following a trend that is endemic throughout the Bay Area, home prices have climbed in Mill Valley (the median price for a single-family home is in excess of $1.5 million as of 2005), which has had the effect of pushing out some earlier residents who can no longer afford to live in the area. This trend has also transformed Mill Valley's commercial activity, with nationally recognized music store Village Music having closed, then replaced in 2008 by more commercial establishments.
In July 2005, CNN/Money and Money magazine ranked Mill Valley tenth on its list of the 100 Best Places to Live in the United States. In 2007, MSN and Forbes magazine ranked Mill Valley seventy-third on its "Most expensive zip codes in America" list.
While Mill Valley has retained elements of its earlier artistic culture through galleries, festivals, and performances, its stock of affordable housing has diminished, forcing some residents to leave the area. This trend has also affected some of the city's well-known cultural centers like Village Music and the Sweetwater Saloon. As of April 2007, only one affordable housing project was underway: an initiative to raze and rebuild an abandoned motel called the Fireside.
Political and religious leanings
Both suburban conservative and West Coast liberal elements have shaped the sociocultural and religious life of Mill Valley and the rest of Marin County. The city has the Mount Carmel Roman Catholic Church and, reflecting its many Greek and Griko (Greek-Italian) immigrants, a Greek Catholic church, as well as the Southern Baptist Golden Gate Baptist Theological Seminary. It also has one of seven Seventh Day Baptist churches in California (the Mill Valley Seventh Day Baptist Church), one of only two in the San Francisco Bay area. In the early 2010s, registered Democrats outnumbered local Republicans by 5 to 1, a common trait in cities around heavily liberal-progressive San Francisco, but Mill Valley also shows a Libertarian political trend.
Neighborhoods and unincorporated CDPs
Strawberry is an unincorporated Census-designated Place to the east of the City of Mill Valley. Other CDPs with Mill Valley mailing addresses include Tamalpais-Homestead Valley and Muir Beach. Smaller unincorporated areas include Alto and Almonte.
Neighborhoods in Mill Valley:
Almonte, "Alto" Sutton Manor, Blithedale Canyon, Boyle Park, Cascade Canyon, Country Club, Downtown, East Blithedale Corridor, Edgewood Cypress, Enchanted Knolls, Eucalyptus Knolls, Homestead Valley, Kite Hill, Land of Peter Pan, Marin Terrace, Marin View, Middle Ridge, Mill Valley Heights, Mill Valley Meadows, Miller Avenue, Molino Edgewood, Muir Woods, Old Mill, Panoramic Highway, Scott Highlands, Scott Valley, Sequoia Valley, Shelter Bay, Shelter Ridge, Strawberry, Sycamore, Sycamore Park, Tam Junction, Tamalpais Valley, Tamalpais Park, Tennessee Valley, Vernal Heights, and Warner Canyon.
City recreational parks
Mill Valley maintains many recreational parks which often contain playgrounds and other designated areas specifically designed for playing various sports. Dogs are required to be on leashes in all but one of these parks, which is specifically designated a dog park to allow the option of off-leash exercise.
Mill Valley has a costly but popular "steps, lanes, and paths program" that provides improved pedestrian access between many of the winding and twisting residential roads that cover the hillsides. Blue stencils on the roadway mark certain paths as potential emergency escape routes from the fire-prone hills. A picture book, "Steps, Lanes and Paths of Mill Valley", shows the paths, although not entirely accurately. In 2009, resident Matt Connelly threatened litigation, alleging that some of the proposed paths represent a seizure of private property, even though some antique maps suggest that certain potential easements could justify future steps, lanes, or paths.
For those who prefer to enjoy nature from the comfort of a chair, the city's public library is nestled in a serene and scenic location at the edge of Old Mill Park where visitors may relax indoors near the wood-burning fireplace and view the redwood forest through the library's multi-storied windows, or from the outside deck which overlooks the park and Cascade Creek.
Nature trails
- Tenderfoot Trail (1.5 miles) -- Lower trail head is on Cascade Drive between Cascade Falls park and the lower trail head of the Zigzag trail. The upper trail head is at Edgewood Ave., near Mountain Home Inn. This upper trail head provides access to the Edgewood trail, and also provides gateway access to the upper region of Muir Woods, Tamalpais State Park near the Alice Eastwood Campsite access road, and the main southern access point Mt. Tamalpais Watershed (near the Throckmorton Ridge Fire Station).
- Zigzag Trail (1/2 mile, steep climb) -- This is a very steep trail which has an upper trail head near the Throckmorton Ridge Fire Station and the Mountain Home Inn with gateway access to the upper region of Muir Woods, Tamalpais State Park near the Alice Eastwood Campsite access road, and the main southern access point Mt. Tamalpais Watershed (near the Throckmorton Ridge Fire Station). The lower trail head is near the western end of Cascade Drive, west of Cascade Falls Park and the lower Tenderfoot Trail head.
- Cypress Trail (1 mile) -- runs between the end of Cypress Ave. and the middle of the Tenderfoot Trail. Cypress Avenue leads to Edgewood Blvd. Going down Edgewood leads to the top of the Dipsea trail stairs and the Cowboy Rock Trail head, and going uphill on Edgewood leads to the Edgewood Trail.
- Edgewood Trail (1/2 mile)(aka Pipeline trail) -- runs between the two parts of Edgewood Ave. and provides access to the upper Tenderfoot trail head or, if one follows Edgewood Ave. out to the Mountain Home Inn, leads to a gateway access to the upper region of Muir Woods, Tamalpais State Park near the Alice Eastwood Campsite access road, and the main southern access point Mt. Tamalpais Watershed (near the Throckmorton Ridge Fire Station)
- Cowboy Rock Trail (1/4 mile) -- part of the Homestead Valley Land Trust, the upper trail head is at Edgewood and Sequoia Valley Road intersection, across the street from where the Dipsea trail stairs from downtown end. This path leads to the Homestead Trail and to the path/stairs down to Stolte Grove and the western tip of Homestead Valley.
- Pixie Trail (1/2 mile) -- part of the Homestead Valley Land Trust, this trail has several trail heads. The upper trail head is where Marion Ave. (upper portion), Ridgewood Ave., and Edgewood Ave. intersect. The Pixie Trail also has a mid-access point, where the trail becomes paved and developed and the street runs downhill to Stolte Grove. The trail continues on and connects to any of three other trail heads. The first head is at the five-way intersection of Molino Ave., Edgewood Ave., Cape Ct., and Mirabel Ave. The second head leads to the end of Seymour Lane, a short road off of Edgewood Ave.; crossing Edgewood, the path continues down a set of stairs to Ethel Ave. and the Una Way staircase down to Miller Ave. The third and final head ends at Janes Street, down the way from Molino Avenue Park.
- Homestead Trail (1 mile) -- part of the Homestead Valley Land Trust, this longer winding trail traverses the western slope of Homestead Valley itself. It is not well delineated or maintained in parts. It has several other trail heads that lead up into Tamalpais State Park near the "four-corners" intersection, as well as down into the valley via (lower portion) Ridgeview Ave. and Ferndale Ave.
- Dipsea Trail (7.1 miles) The most famous hike in Marin County is the Dipsea Trail, a challenging route beginning with three long, steep stairways leading up from Old Mill Park and ending at Stinson Beach 7.1 miles (11.4 km) later. The annual Dipsea Race is in June, although the trail can be run or hiked any time. The West Marin Stagecoach is a bus that runs from Stinson Beach back to Mill Valley, stopping approximately one mile from downtown. The Dipsea Trail is not well marked, so first timers should consider carrying a guidebook.
- Muir Woods to Bootjack Trail (6.3 miles) This trail is a loop that takes around 3.5 hours and is popular among tourists, largely for the first hour among the redwood trees. The Bootjack trail is accessible from here, transitioning to meadows with bridges and streams. Bootjack itself is 2.2 miles (3.5 km) long, a moderate uphill, and great for the average hiker.
Public schools
Public schools are managed by the Mill Valley School District. There are five elementary schools and one middle school, Mill Valley Middle School, a four-time winner of the California Distinguished School Award. The public high school, Tamalpais High School, is part of the Tamalpais Union High School District, whose five campuses serve central and southern Marin County. Marin Horizon School is an independent school serving students in grades PK-8. Founded in 1977, the school enrolls approximately 285 students.
Mill Valley Public Library
The municipal library overlooks Old Mill Park and provides many picturesque reading locations, as well as free computer and Internet access. Recently the library has begun offering Museum Passes that give 94941 residents free entry to Bay Area museums. As part of the City of Mill Valley's decision to "go Green", the library has a Sustainability Collection of books and DVDs with information about how to become more environmentally friendly.
The Mill Valley library first digitized its vast holdings under the long and innovative stewardship of the late Thelma Weber Percy, a town celebrity of great learning who was determined to see the Mill Valley Public Library come into the computer age, and maintain a healthy population of library cats. In both arenas she is remembered and highly regarded. Her son, Kevin Percy, an historian and inventor of board games, still resides in Mill Valley, just minutes away from the library his mother all but brought into the 20th and 21st centuries, beneath its towering redwoods.
The Mill Valley Public Library is also home to the Lucretia Hanson Little History Room. It has thousands of books, photographs, newspapers, pamphlets, artifacts, and oral histories on the history of California, Marin County, and Mill Valley. It is staffed almost entirely by volunteers. As of 2009, the History Room is in the midst of a digitization project wherein all documents are being scanned and digitized. Eventually the History Room will have all of their documents and artifacts available for public perusal on an online database. It also has a Twitter account, @MVHistoryRoom where updates, historical information, and new acquisitions are posted.
Notable people
- Eve Arden, actress
- Mariel Hemingway, actress
- Russ Hodges, baseball announcer
- John Leslie, porn star
- Bridgit Mendler, actress, singer, and songwriter
- Maury Sterling, actor
Annual events
Mill Valley is the home of several annual events, many of which attract national and international followings:
- Dipsea Race
- The Mountain Play
- Mill Valley Film Festival
- Mill Valley Fall Arts Festival
- Mill Valley Shakespeare in Old Mill Park Amphitheater
Arts and crafts in Mill Valley
Mill Valley is known for being a village with a strong artistic heritage. A visitor to downtown Mill Valley will discover many art galleries, open-air coffee shops, and other hallmarks of a thriving artistic community. In addition, the town has sponsored the Mill Valley Fall Arts Festival for over fifty years and also the Mill Valley Film Festival, which is part of the California Film Institute, for over thirty years. In addition, Mill Valley's Chamber of Commerce has sponsored the annual Gourmet Food and Wine Tasting in Lytton Square for many years.
Theater arts also have a huge following in Mill Valley. In addition to supporting the local 142 Throckmorton Theatre, which hosts theater of all levels, Mill Valley is also home for the Marin Theatre Company, and the Mountain Play Association which hosts annual musical productions in the Sidney B. Cushing Amphitheater located in Mill Valley's neighboring Mount Tamalpais State Park. For several years the Curtain Theatre Group has also been performing annual free Shakespeare plays among the redwoods on the Old Mill Park Amphitheatre behind the Mill Valley Library.
Music, novels, television and movies
Mill Valley has also been home to many musicians, authors, actors, and TV personalities. The actress and comedian Eve Arden was born there in 1908. Jerry Garcia, who recorded music in a Mill Valley recording studio, also once called Mill Valley home. John Lennon and Yoko Ono summered in a Mill Valley home on Lovell Ave. near the library in the early 1970s, and Lennon left some of his own graffiti, "The Maya the Merrier", on a wall of the residence. Other rock stars such as Michael Bloomfield, Huey Lewis, Bob Weir, Lee Michaels, Sammy Hagar, Bonnie Raitt, Pete Sears, Clarence Clemons, John and Mario Cipollina, and Janis Joplin have also called this small town home. Grammy Award-winning jazz singer Jon Hendricks moved to Mill Valley in 1966 and still owns his home in Homestead Valley. The composer John Anthony Lennon was raised in Mill Valley. Authors such as Wright Morris and Jack London have also lived here, as does Joyce Maynard. Writer Ki Longfellow lived on Hillside Avenue. Actors Peter Coyote, Dana Carvey, Jill Eikenberry, Kathleen Quinlan, and Michael Tucker have lived in Mill Valley, and it was the birthplace of actors Eve Arden, Mariel Hemingway, and Jonah Hill. Celebrity chef Tyler Florence, Pixar Animation Studios' Andrew Stanton, and former women's basketball star Jennifer Azzi also call Mill Valley home. Former naval aviator Dieter Dengler built a home on Mount Tamalpais near the Mountain Home Inn and lived there until his death in 2001; parts of the biographical documentary about him, Little Dieter Needs to Fly, were filmed there. Author John Gray, who writes the Men Are from Mars, Women Are from Venus books, is a longtime Mill Valley resident. Preventive medicine physician John Travis founded the first wellness center in the US at 42 Miller Avenue in 1975.
In fiction, character B.J. Hunnicutt from the TV show M*A*S*H called Mill Valley home, and fictional character Charley Furuseth in Jack London's 1904 novel The Sea-Wolf, apparently had a summer cottage here. In the Star Trek universe, it is home to the 602 Club. It is also the setting for resident author Jack Finney's 1954 novel The Body Snatchers, although the 1956 film, Invasion of the Body Snatchers, and subsequent movie versions of the book have been set elsewhere. Fictional character Doris Martin from the TV show The Doris Day Show called Mill Valley home as well. In the syndicated version of Too Close for Comfort, Henry and Muriel Rush got their jobs at the Marin Bugler newspaper in Mill Valley.
Writer Jack Kerouac and beat poet Gary Snyder shared a Mill Valley cabin in 1955-56 around 370 Montford Ave. in Homestead Valley. The cabin's coincidental location in Marin County and its position adjacent to a meadow where horses grazed, combined with Snyder's expertise in Asian languages and cultures, led to Snyder naming the cabin "Marin-An", which is Japanese for "Horse Grove Hermitage". It was during this stay in Mill Valley that Kerouac's budding interest in Zen Buddhism was greatly expanded by Snyder's expertise in the subject. Kerouac's 1958 novel, The Dharma Bums, was consequently composed while living here and contains many semi-fictionalized accounts of the lives of Kerouac and Snyder at Marin-An. Part of Kerouac's 1951 novel "On the Road" takes place in a "Mill City", which is a fictionalized reference to Mill Valley.
American writer Cyra McFadden, while living in Mill Valley in the 1970s, wrote a column for the Pacific Sun newspaper entitled, "The Serial", which satirized the trendy lifestyles of the affluent residents of Marin County. In 1977, she turned her column ideas into a novel called The Serial: A Year in the Life of Marin County which focused on the fictional exploits of a Mill Valley couple, Kate and Harvey Holroyd, who never quite fit into the Marin 'Scene.' The highly successful book was later made into a 1980 comedy called Serial, starring Tuesday Weld and Martin Mull.
The song "Mill Valley", recorded in 1970 and released on the album Miss Abrams and the Strawberry Point 4th Grade Class, reached #90 on the Billboard Hot 100. While the school is in the Mill Valley School District, it is not within the city limits.
Richard Laymon, the American horror author, set his novel The Lake primarily in Mill Valley. Other Laymon novels are also either set in or mention Mill Valley.
The Tamalpais High School Marching Band appeared in the 1969 Woody Allen film Take The Money and Run. In the 1973 George Lucas film American Graffiti, the 'sock hop' dance scenes were filmed in the high school's boys gymnasium.
In March 2009, most of the scenes for the pilot of NBC's Parenthood were filmed at 22 Cascade Dr. in Mill Valley.
Points of interest
- Muir Woods
- Mount Tamalpais
- Edgewood Botanic Garden
- Richardson Bay
- Golden Gate Baptist Theological Seminary
- Sweetwater Saloon
- Mill Valley School District
- Tamalpais High School
- Old Mill School
- Mill Valley Air Force Station
References
- "City Council". Retrieved 2013-05-06.
- "California's 2nd Congressional District - Representatives & District Map". Civic Impulse, LLC. Retrieved March 8, 2013.
- U.S. Census
- U.S. Geological Survey Geographic Names Information System: Mill Valley, California
- C. Michael Hogan. 2008. Ring Mountain, The Megalithic Portal, ed. A. Burnham
- Kroeber, Alfred L. 1925. Handbook of the Indians of California. Washington, D.C: Bureau of American Ethnology Bulletin No. 78. (Chapter 30, The Miwok); available at Yosemite Online Library
- Cook, Sherburne. 1976. The Conflict Between the California Indian and White Civilization. Berkeley and Los Angeles, CA: University of California Press. ISBN 0-520-03143-1.
- Goerke, Betty. 2007. Chief Marin, Leader, Rebel, and Legend: A History of Marin County's Namesake and his People. Berkeley, CA: Heyday Books. ISBN 978-1-59714-053-9.
- Durham, David L. (1998). California's Geographic Names: A Gazetteer of Historic and Modern Names of the State. Quill Driver Books. p. 664. ISBN 978-1-884995-14-9.
- Mill Valley Historical Society Spring 2000 Review
- See "Average climate in Mill Valley, California" graphs.
- All data are derived from the United States Census Bureau reports from the 2010 United States Census, and are accessible on-line here. The data on unmarried partnerships and same-sex married couples are from the Census report DEC_10_SF1_PCT15. All other housing and population data are from Census report DEC_10_DP_DPDP1. Both reports are viewable online or downloadable in a zip file containing a comma-delimited data file. The area data, from which densities are calculated, are available on-line here. Percentage totals may not add to 100% due to rounding. The Census Bureau defines families as a household containing one or more people related to the householder by birth, opposite-sex marriage, or adoption. People living in group quarters are tabulated by the Census Bureau as neither owners nor renters. For further details, see the text files accompanying the data files containing the Census reports mentioned above.
- "American FactFinder". United States Census Bureau. Retrieved 2008-01-31.
- "MONEY Magazine: Best places to live 2005". CNN. Retrieved 2010-05-12.
- Sandberg, Robert Skip (2010). Steps, Lanes and Paths of Mill Valley. Mill Valley, CA USA: Self. p. 120. ISBN 978-0-9830494-0-1.
- California School Recognition Program distinguished school honorees, accessed January 26, 2008
- "Mr. Piano Power". Sounds (Spotlight Publications). 28 August 1971. p. 3.
- "Mill Valley" Chart History, Billboard.com.
- Mill Valley Public Library
- Lucretia Hanson Little History Room
- City of Mill Valley
- Mill Valley Historical Society
- Mill Valley Masonic Lodge
- Mill Valley Fall Arts Festival
- California Film Institute (CFI)
- Mill Valley Film Festival
- Marin Theatre Company
- Mountain Play Association
- Curtain Theatre Shakespeare in the Park
- Mill Valley Chamber of Commerce
- The Dipsea Race
- Tamalpais Community Services District (T.C.S.D.)
- Tamalpais Valley Community Center (T.C.C.)
- Homestead Valley Community Association (H.V.C.A.)
|
<urn:uuid:c2970c2c-fa92-45d4-ba80-b46949bbf3aa>
|
CC-MAIN-2013-20
|
http://en.wikipedia.org/wiki/Mill_Valley,_California
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943306
| 11,639
| 2.703125
| 3
|
Learn about the relationship between author James Ross and Mark Twain.
Some say that if you go back far enough in time everyone could very well be related to each other. Please don’t tell that to my “Aunt Marie.” She is now a retired schoolteacher in her late eighties. I don’t know if she would have enough time to research all of those connections.
Our family historian has been my dear “Aunt Marie.” She has spent the better part of her life researching county records, state documents, gravestones, periodicals, and anything else that is part of public knowledge. She has spent virtually her entire adult life composing the family tree. What she turned up in our gene pool was surprising to all of us who now live several generations away from our ancestors.
So as not to sound boring, I’ll simply cut to the chase. As the story goes, a Colonel William Casey was born in Frederick County, Virginia in 1756. He migrated to Kentucky and had many fights with the Indians over the years as the property was being settled. In adulthood he was appointed a county judge and served in local politics. Rumor has it that he was a mountain of a man, very kind, and the father of four daughters. This is the start of what I’ll call a mighty oak with several enduring branches.
His third daughter was named Polly. His fourth daughter was named Margaret, but nicknamed Peggy. The two branches of the tree that those two formed are what this article is about. They traveled through Illinois, Missouri, Kentucky, Tennessee, and Iowa.
William Casey died in 1816 after serving in local politics in Adair County, Kentucky. He never lived to see the great grandson who had been placed on a limb of the family tree by his youngest daughter, Peggy. Born in Florida, Missouri in 1835, William Casey’s great grandson was christened Samuel L. Clemens…none other than Mark Twain.
Samuel Clemens was born roughly thirty-five miles inland from Hannibal, Missouri, which was where he was raised during his younger years. Being from the Midwest, it is quite believable that Casey’s youngest daughter Peggy and her siblings traveled up and down the states bordering the Mississippi River. Mark Twain made that river legendary in several of his tales.
As an added sidelight, “Aunt Polly” was a recognizable character in Tom Sawyer. In all likelihood that was a name Twain had heard his mother say often as he was growing up. To me, the person identified as “Aunt Polly” would be my great-great-great-great-grandmother.
At any rate, the rest is history as far as Mark Twain goes. He is a legend in American folklore as an author, philanthropist, statesman, humorist, and traveler.
I doubt that William Casey would even care that his great-great-great-great-great-grandson wrote a novel after he turned fifty. That was the limb of the tree that his third daughter Polly helped to form. And I doubt if it matters that his far-removed relative grew up in modern-day St. Louis…only a driver and an eight-iron away from the Mississippi River.
But don’t tell that to my “Aunt Marie.” When she turned over all of the family tree information to me she said, “You know, Jim, you’ve done something that I’ve always dreamed about doing but never found the time.”
Naively, I asked, “What’s that?”
She said, “You wrote a book. I wouldn’t even know where to start.”
Something tells me that maybe she should start with William Casey. He’s the mighty oak in this tale and she’s on one of those limbs too.
James Ross has published a series of books that use the wonderful city of St. Louis as a backdrop. Lifetime Loser (2007), Finish Line (2008) and Tuey's Course (2009) all present a colorful cast of characters that come together on the Prairie Winds Golf Course, situated high atop the Mississippi River bluffs on the east side of St. Louis. The author uses his personal knowledge of St. Louis to fully incorporate the city into the plots of his novels. Residents of and visitors to the Gateway City will appreciate the author's fine storytelling and how he highlights his home city. All three novels from James Ross can be found at Xlibris.com or through his personal web site at http://www.authorjamesross.com/.
|
<urn:uuid:7474247e-c1e7-4ad5-bdf6-afaaf6345a54>
|
CC-MAIN-2013-20
|
http://www.authorsden.com/visit/viewarticle.asp?AuthorID=100582&id=46013
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.983809
| 964
| 2.59375
| 3
|
There was a time when dentists had limited options for repairing teeth that were damaged and decayed. Amalgam, gold and other metals served as the most common materials for such repairs. Today, advancements in ceramics give dentists and patients a natural-looking repair options.
Ceramics, namely porcelain, were first used in 1774 to create a complete denture. Porcelain compositions that made metal-ceramic restorations possible were introduced in 1962.
Since 1986, new processes have made it possible to use ceramics in veneers, fillings, dental implants, crowns and even orthodontic brackets.
Crowns and Veneers
The ceramics used in crowns and veneers contain varying amounts of crystallized leucite, which affects the thermal expansion and strength of the crown or veneer. Natural teeth are exposed to extreme hot and cold and are able to withstand those extremes without cracking; ceramics must do the same.
Ceramic crowns typically are coated with porcelain, which allows dentists to match the translucency and color of the crown to the patient’s natural teeth.
As mentioned earlier, cavities used to be repaired with silver, mercury or tin amalgams. The ceramic fillings of today are quickly becoming the preferred option of dentists and patients alike, due to their natural appearance.
Dental implants are an alternative to bridges when a tooth is missing. Ceramic dental implants don’t change the integrity of surrounding teeth and they have the appearance of a natural tooth.
Ceramic dental implants also can prevent more natural teeth from having to be altered, as opposed to dental bridges. A bridge to replace a missing tooth requires grinding down two or three teeth, whereas a dental implant replaces only the missing tooth.
Dental implants are secured to a biocompatible metal post such as titanium, which is anchored into the jaw bone.
Orthodontic brackets represent the most recent use of ceramics in dentistry. Their use in the orthodontic field has been driven solely by the desire of orthodontists and their patients to have an alternative to traditional “train track” braces. Ceramic brackets provide a more aesthetically pleasing appearance than traditional silver braces.
|
<urn:uuid:43ba47aa-866a-4571-8b3e-deed3b1d041e>
|
CC-MAIN-2013-20
|
http://www.infodento.com/ceramics/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.942892
| 481
| 2.828125
| 3
|
Organic hazelnut production involves the use of sophisticated agricultural practices to achieve better cultivation of the crop. The hazelnut is oval and covered by a hard shell, measuring roughly 25 mm in length and 15 mm in diameter. Hazelnuts are used mainly in the production of confectionery and are common ingredients in sweets and other foods in many Western countries. They help meet essential dietary requirements, supplying nutrients such as vitamin E and dietary fiber. The crop is cultivated in orchards in several European countries. There are also certain considerations to address before actual cultivation begins. The basic requirements for hazelnut cultivation are given below:
Soil plays a major role in the growth of the plant. It should permit root expansion to a depth of about 75 centimeters or more, depending on the characteristics of the plant, and it should retain a sufficient level of moisture; this moisture content can be a deciding factor in the development of nuts. The soil should also offer an abundant supply of organic nutrients so the plant can grow at a faster pace.
Hazelnut plants generally prefer temperate climates. They grow vigorously and yield large quantities of nuts when grown in a moist atmosphere. The plant's overall morphology also permits farmers to grow it in hot regions, but cultivation there requires a sufficient level of watering in order to yield a good number of nuts. The presence of moisture in the air encourages extensive development of the nuts.
Farmers all over the world generally prefer organic fertilizers, since they protect the nature of the soil for a prolonged period of time. Nitrogen fertilizers are used to maintain a steady supply of nitrogenous compounds for plant growth. A soil pH of about 6 should be maintained for better growth of the nuts. The size of the nuts can also be improved by adding nutrients such as phosphorus.
Article Summary: Organic hazel cultivation has demanding requirements to be fulfilled by the cultivators in order to have better growth. Size and quality of seed depends on the several influencing factors such as climatic conditions, fertilizers used and plantation procedure adopted. Planned cultivation can amplify the returns to cultivator in appreciable manner. Recent hazelnut production practices are described in this forum.
This forum supplies vital information regarding organic hazelnut production. Tips in production of hazelnut are also discussed here. One can feel free to look for more information on this topic at http://www.agricultureguide.org/ Recent agricultural practices employed in hazelnut production are also discussed quite briefly.
|
<urn:uuid:0c1ec4c8-d9a1-4b70-a3a1-4e58e9f334a0>
|
CC-MAIN-2013-20
|
http://www.agricultureguide.org/organic-hazelnut-production-revisited/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924702
| 524
| 3.546875
| 4
|
Women who wait to have children later in life have a lower risk of developing endometrial cancer, a new study shows.
The study, "Age at Last Birth in Relation to Risk of Endometrial Cancer," found that Women who last give birth at age 40 or older have a 44 percent decreased risk of endometrial cancer when compared to women who have their last birth under the age of 25.
Veronica "Wendy" Setiawan, Ph.D., assistant professor of preventive medicine at the Keck School and lead author of the study, said in a statement, "While childbearing at an older age previously has been associated with a lower risk of endometrial cancer, the size of this study definitively shows that late age at last birth is a significant protective factor after taking into account other factors known to influence the disease — body weight, number of kids and oral contraceptive use."
According to MSNBC, Setiawan and other researchers reviewed data from 17 studies involving 8,671 women with endometrial cancer and 16,562 women without the disease.
The researchers could not come to a definitive conclusion as to why later pregnancy cut cancer risk, though Setiawan theorized it may be that hormone levels during pregnancy are beneficial in preventing cancer at older ages.
More from GlobalPost: Moderate drinking during pregnancy deemed 'safe'
|
<urn:uuid:df27f1d3-1b27-4ff9-bbc0-92138e8996ee>
|
CC-MAIN-2013-20
|
http://www.globalpost.com/dispatch/news/health/120726/pregnancy-after-30-lowers-cancer-risk
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.958257
| 302
| 2.6875
| 3
|
Who’s Most Ticklish?
Grade Level: 2nd to 4th; Type: Social Science
This project determines what category of person is most likely to be ticklish.
- Do males or females tend to be more ticklish?
- What age(s) of persons tend to be most ticklish?
Why can’t we tickle ourselves? Why do people laugh when tickled, even when they don’t like it? Some scientists, approaching it from an evolutionary standpoint, believe that tickling encourages social bonding. Others believe that it is a primitive form of self-defense practice for young children.
- A long feather
- Test subjects of different ages and genders
- Paper and pencil for recording and analyzing data
- Record the gender and age of test subject.
- Using the feather, tickle the test subject in various commonly-ticklish spots (ear, neck, back of knee, etc.).
- Rate the subject’s response to each tickle on a scale of one to five with one being no response and five being extreme ticklishness.
- Repeat for all subjects.
- Analyze results: On average, do males or females tend to be more ticklish? Do younger or older people tend to be more ticklish? Do certain categories of people tend to be ticklish in a particular spot on their body (e.g., you might find that, in general, boys younger than 7 are ticklish on their knees but not on their ears)? Consider explanations based on scientists' hypotheses about the evolutionary roots of ticklishness. (A short tallying sketch follows this list.)
- Extension: Ask test subjects whether they find tickles pleasant or unpleasant. Analyze subjects’ answers according to gender, age, and overall degree of ticklishness.
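If an older student or helper wants to summarize the recorded ratings, the short sketch below shows one way to average ticklishness scores by gender and by age group. It is only an illustration: the sample records, column layout and age cut-off are assumptions, not part of the project instructions.

```python
# Minimal sketch: average ticklishness ratings (scale 1-5) by gender and age group.
# The sample records and the age cut-off below are assumptions for illustration.
from collections import defaultdict

records = [
    # (gender, age in years, rating averaged over the tickled spots)
    ("M", 6, 4.2),
    ("F", 7, 3.8),
    ("M", 35, 2.1),
    ("F", 40, 2.9),
]

def age_group(age):
    return "child (under 13)" if age < 13 else "teen/adult"

groups = defaultdict(list)
for gender, age, rating in records:
    groups[("gender", gender)].append(rating)
    groups[("age group", age_group(age))].append(rating)

for key, ratings in sorted(groups.items()):
    print(key, "average ticklishness:", round(sum(ratings) / len(ratings), 2))
```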
Terms/Concepts: ticklish, gender, age
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety.
|
<urn:uuid:abc8894c-c0e6-411b-9dc9-edea892face0>
|
CC-MAIN-2013-20
|
http://www.education.com/science-fair/article/who-most-ticklish/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.902819
| 454
| 3.59375
| 4
|
FEW materials are more important for a view of American humor than those provided by the comic almanacs during the period from 1830, when they began to appear, to 1860, when they had grown less local and flavorsome. These fascinating small handbooks yield many brief stories and bits of character drawing not to be found elsewhere; more than any single source they prove the wide diffusion of a native comic lore. To list adequately those used for this study would be to compile a small book, if the intricacies of imprints were to be unraveled and descriptive notes added. In general it may be said that the rich collection of comic almanacs in the Library of the American Antiquarian Society has been examined, including numbers of The American, The Old American, The People's, Finn's, The Rip Snorter, the many almanacs put forth by the tireless and sprightly Elton, such as his Whims Whams and his Tragical and Piratical Almanac. The comic grist that poured forth from New York in the '40's and '50's under many titles is well represented in this collection, and has been considered, as have the highly important Crockett almanacs published in Nashville and other places, even in Boston. These too bore many titles, sometimes carrying the name of Crockett's mythical companion, Ben Hardin, or suggesting a large number of other characters, as in Sprees and Scrapes in the West; Life and Manners in the Backwoods and Exploits and Adventures on the Prairies (1841), which contains brief tales of many kinds.
Serious almanacs have been scanned over a period which begins some years before the Revolution and includes the long sequence opening with the first number of The Old Farmer's in 1793. Humor was often contained within the pages of these staid pamphlets; they foreshadow comic effects to be found in more complete and striking forms in later years. They have proved invaluable in suggesting popular preoccupations even when these were not strictly comic. The connotations of The Old Farmer's have been discussed with a wealth of learning by Professor George L. Kittredge in his Old Farmer and His Almanac (1924).
In most of the joke-books before 1840 only the faintest traces of a native humor can be discovered. The preface to The Chaplet of Comus (1811) declares that "the reader will find in this collection more specimens of American humor than in any other publication. The palm of wit has been unjustifiably withheld from our countrymen by foreigners, and even some of our own writers have intimated that no good thing of a humorous kind can come out of New England." But the title hardly suggested American humor; and the promise was not fulfilled in the text. The Aurora Borealis, or Flashes of Wit (1831) contains a slight tale about a Yankee peddler and a few other localized stories; but for the most part this, like other joke-books of these years, reveals brief tales or episodes that are unmistakably English, with a sprinkling of others that go back to Aesop. The early Joe Miller joke-books were often taken over bodily from the English issues. But in 1833 one of the comic almanacs pictured a tombstone bearing the legend, "Here lies Joe Miller"; and though the name survived, these famous little books--some of which Lincoln saw--contained thereafter an increasing bulk of humor that can be distinguished as American. They are now rare; a few of them have been seen for this study, and occasional others like the Nonpareil.
A more direct and important source has been The Spirit of the Times (New York) from 1831 to 1861. Its files have proved a compendium of native tales, notes, comic theatrical items, and lively allusions to current attitudes. Scarcely an aspect of American humor is unrepresented there. This sporting and theatrical journal, edited by a Yankee, William T. Porter, is particularly rich in the humor of the Mississippi Valley and the frontier.
William Jerdan's Yankee Humor and Uncle Sam's Fun (London, 1853) has yielded Yankee and Southwestern humor as seen in England, with glimpses of English attitudes toward comic representations of the American character. Other English reactions have been found in the files of The Spirit of the Times, the New York Mirror, in clippings from London papers in the Harvard Theatre Collection, and in notices incorporated in early biographies of American comedians.
An important contemporary view of the early Yankee is offered in Royall Tyler's A Yankee in London (1811) . Papers by Albert Matthews on Brother Jonathan (Publications of the Colonial Society of Massachusetts, 1902), and on Uncle Sam (Proceedings of the American Antiquarian Society, 1908) have contributed to the study of the early Yankee, as has Oscar G. T. Sonneck's Report on the "Star Spangled Banner," "Hail Columbia," "America," and "Yankee Doodle" (1903). "Corn Cobs Twist Your Hair," a version of "Yankee Doodle," appears in sheet music (1826) and was apparently first sung on the stage by Yankee Hill. Such periodicals as The Yankee (1828-29) and Yankee Notions (1852-60) have added stories or bits of discussion about the Yankee character. John Neal's The Down Easters (1833) and other early literary portrayals of the Yankee have been considered.
Plays embodying the Yankee character and Yankee humor have been surveyed from The Contrast onward, including the popular pieces of Woodward, Logan, Kettell, Jones, Bayle Bernard, and Stone. G. H. Hill's Scenes from the Life of an Actor (1853) has been substantially drawn upon for Yankee portraiture of the lecture platform and the stage, as have Northall's Life and Recollections of Yankee Hill (1850) and Falconbridge's life of Dan Marble. Outlines of the figure of Sam Patch appear in the latter biography, with descriptions of the Sam Patch plays. Other brief allusions to Sam Patch have been found in the Downing papers, in The American Joe Miller (1840), and in contemporary notes on the Yankee character. Perley I. Reed's Realistic Presentation of American Characters in Native American Plays Prior to 1870 (1924) has been a helpful guide for the less accessible Yankee plays.
Ample studies of the Yankee oracles appear in J. R. Tandy's Crackerbox Philosophers (1925), in M. A. Wyman's Two American Pioneers: Seba Smith and Elizabeth Oakes Smith--which contains an invaluable bibliography disentangling the authentic Downing papers from those of the many imitators--and in V. L. O. Chittick's Thomas Chandler Haliburton (1924), which is particularly rich in its handling of Sam Slick and his times.
The darker legends of New England have survived only in fragments. Hawthorne's tales and his notebooks have been a source for these, as have Whittier's Legends of New England (1831) and his Supernaturalism in New England (1847).
Since the trail of the Yankee led into the backwoods, studies of his character have often included references to the backwoodsman; and at times the two seemed inextricably mixed. The title of Falconbridge's life of Dan Marble, a Yankee actor, may stand as indicative of this mergence: The Gamecock of the Wilderness, or the Life and Times of Dan Marble (1850). In addition to such mingled sources, backwoods or frontier character and humor have been derived from Flint's Recollections of the Last Ten Years (1826), Hall's Legends of the West (1832), his Harpe's Head: A Legend of Kentucky (1833), Tales of the Border (1835), and The Wilderness and the Warpath (1846), from Hoffman's A Winter in the West (1835) and his Wild Scenes in Forest and Prairie (1839), from Drake's Discourse on the History, Character, and Prospects of the West (1832); from Mary R. Mitford's Stories of An American Life (London, 1830), which contains material not easily found in other forms; from Irving's A Tour of the Prairies (1835), and from The Life of John James Audubon by Lucy Audubon (1869), The Life and Adventures of John James Audubon, the Naturalist, by Robert Buchanan (1869), and Audubon's Ornithological Biography (1831-39). Herrick's Audubon the Naturalist (1917) has been useful. Rusk's admirable Literature of the Middle West Frontier (1925) and Venable's Beginnings of Literary Culture in the Ohio Valley (1891) have supplied clews to material on the backwoodsman.
Outlines of the Mike Fink legends have been drawn from Field's Drama in Pokerville (1847), Thorpe's Hive of the Bee-Hunter (1854), from western almanacs, and from The Spirit of the Times. Franklin J. Meine's provisional bibliography of Mike Fink material has been an invaluable guide. Only a few fragments of boatmen's songs have survived. The Boathorn by William O. Butler may be found in The Western Review (Lexington, 1821).
The larger portion of the tales about Crockett in this study has been drawn from the western almanacs; in addition, the familiar Narrative of the Life of David Crockett of the State of Tennessee (1834) and the Sketches and Eccentricities of Colonel David Crockett of West Tennessee (1833) have been used, as well as An Account of Colonel Crockett's Tour of the North and Down East (1835). Since the plays based on the character of Crockett--and indeed the entire group of early backwoods plays--have disappeared, their general substance has been derived from notices in contemporary theatrical journals, biographies of actors, and travels. Such a purely fictional work as Carruthers' A Kentuckian in New York (1834) has furthered the effect of localized character and of acute interaction between American types.
For the homelier stories of the old Southwest, Watterson's Oddities of Southern Life and Character (1882) provides important critical notes. A large collection of tales about corncrackers and rapscallions of this region will be found in Franklin J. Meine's Tall Tales of the Southwest, 1830-60 (1930), which contains an excellent brief bibliography. Longstreet's Georgia Scenes (1835), Baldwin's Flush Times of Alabama and Mississippi (1853), Field's Drama in Pokerville (1847), Harris's Sut Lovingood (1867), Thompson's Chronicles of Pineville (1845) and Major Jones's Sketches of Travel (1847), Hooper's Adventures of Simon Suggs (1845), and The Big Bear of Arkansas (1845), A Quarter Race in Kentucky (1846), edited by William T. Porter, have comprised the principal materials from which conclusions have been drawn as to the less inflated tall tales of the Southwest.
The literature on early minstrelsy is extremely slight. An important work is still to be done in discovering and describing those extant minstrel songs which bear unmistakable traces of Negro origin. For this study a considerable body of sheet music bearing early imprints has been scanned, in the American Antiquarian Society and the Widener Library; songs by Rice, Emmett, Foster, and some less-known writers have been thoroughly considered. Emmett's walkarounds--"Dixie" was a walkaround--are particularly significant as suggesting Negro origins. In addition, minstrel songs in pocket song-books of the '40's and '50's, usually printed without music, have supplied interesting variations; the imprints have proved the wide diffusion of such songs. Minstrel plays or sketches, which often indicated the accompanying songs, in the Widener Library and the Chicago University Library, have been used, including such early pieces as O, Hush, or the Virginny Cupids, The Mummy, and Bone Squash by T. D. Rice. For comparisons between minstrel songs and the spirituals, The Slave Songs of the United States, compiled by W. F. Allen, C. P. Ware, and Lucy McKim Garrison (1867, 1930), has been considered, with other recent compilations of spirituals. Krehbiel's Afro-American Folk-Songs (1914) has been invaluable for its discussion of the character of Negro music and the origins of the spirituals.
Photographs of minstrel players over a long period, in the Harvard Theatre Collection, have provided evidence that early minstrelsy attempted a close impersonation of the Negro, most often of the plantation Negro; the early photographs show a marked contrast with those of later years with their highly stylized figures. Notes in The Spirit of the Times and in contemporary theatrical memoirs describe the characterizations of Jim Crow Rice and his successes throughout the country and in London. Galbraith's Daniel Decatur Emmett (1904) has been useful as offering Emmett's own version of his sources, and as indicating the influences which led him to use Negro melodies, choruses, and animal fables. LeRoy Rice's Monarchs of Minstrelsy (1911) contains biographical sketches suggesting regional alliances of many early minstrels, with notes on their impersonations. Cable's Creole Slave Songs in The Century, April, 1886, has been used for this study.
Theatrical histories, memoirs, and accounts of travel by strolling players have supplied a considerable bulk of material; these writings all but match the almanacs in importance as revealing popular humor, popular preoccupations, and evidences of the national character. Actors were concerned first of all with idiosyncrasies, since these added to their art; they seldom seemed to possess strong prejudices; and they often had a gift for concentrated mimicry and description. The writings of John Bernard, Dunlap, Rees, Wemyss, Northall, Cowell, Vandenhoff, Sol Smith, Ludlow, Tyrone Power, Leman, Hackett, Wallack, and Jefferson have yielded materials on the Yankee, the backwoodsman, the Negro, the minstrel, as well as on theatrical history. Other similar sources include the anonymous The Actor, or A Peep Behind the Curtain (1846), Alger's Life of Edwin Forrest (1877), Pyper's Romance of an Old Playhouse (1928)--on the Mormon theater--and materials on the California theater of the gold rush, collected mainly from newspaper sources, for the author's Troupers of the Gold Coast. Josiah Quincy's Figures of the Past (1924) contains interesting references to Mormon theatricals at Nauvoo.
Contemporary pamphlets, tracts, sermons, biographies, memoirs, considered for another study, have been drawn upon for an interpretation of the strollers of the cults and revivals. For the passages on burlesque oratory and on the American language Thornton's An American Glossary (1912), Mencken's American Language (revised edition, 1923), and Krapp's The English Language in America (1925) have been used, as well as miscellaneous contemporary writings. Sandburg's American Songbag (1927) has proved admirable not only for its rich collection of surviving popular songs but for the notes on regional backgrounds or connections. Esther Shephard's Paul Bunyan (1924) and other scattered stories have provided the outlines of the Bunyan cycle. John Henry: Tracking Down a Negro Legend by Guy B. Johnson contains an excellent summary.
First editions and prefaces, miscellaneous writings, journals, and letters have yielded materials on the literary figures considered in this study. For the most part these general sources are indicated in the text. Hervey Allen's Israfel (1927) has established facts in Poe's early life suggesting immediate influences of his time. Lewis Mumford in The Golden Day (1926) has pointed out that terror and cruelty dominated Poe's mind, as they dominated many phases of pioneer expression. Apart from its thesis, Joseph Wood Krutch's Edgar Allan Poe: A Study in Genius contains an abundance of suggestion as to the play of inner fantasy in Poe's tales. Franklin J. Meine has discovered Poe's review of Longstreet's Georgia Scenes in The Southern Literary Messenger, March, 1836, thus proving a point of contact between Poe and current Southwestern humor. The Pilgrimage of Henry James by Van Wyck Brooks (1925) has proved highly stimulating even though the present conclusion that the international scene is a natural and even traditional American subject is at variance with that of Mr. Brooks. Perhaps no one can read Bergson's Laughter without being influenced by its definitions; some of these have entered into the present interpretation. Meredith, Max Eastman, Freud, and other writers on humor have also been considered; but an effort has been made to describe American humor and the American character without attachment to abstract theory.
|
<urn:uuid:f6439e42-54f2-4a42-8a2c-a7e3df4e788a>
|
CC-MAIN-2013-20
|
http://xroads.virginia.edu/~HYPER/Rourke/biblio.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.944779
| 3,604
| 2.84375
| 3
|
By Neepa Sevak
Are you worried about how your child manipulates the mood in your house? Normal parenting is not easy, but raising a child who feeds on expressing rage can be even more challenging. Even the best-behaved children can be moody, argumentative and challenging at times. However, if your child or teen has an enduring pattern of tantrums, anger, and unreasonably defiant, hostile or disruptive behaviors that affect their family, social and academic life, he or she may have a medical condition called Oppositional Defiant Disorder, or ODD. Children with Oppositional Defiant Disorder exhibit such challenging behaviors that they can get in the way of learning, school adjustment, and social relationships.
Causes of Oppositional Defiant Disorder
- The child’s inherited personality
- Parents' criticism of their child's behavior
- Lack of supervision, abuse or lack of discipline, all of which increase the risk of ODD
- A biochemical or neurological factor.
- Having a parent with alcohol or substance abuse disorder
- Unsatisfactory rapport with parents
- Insecurity from parents' divorce, several moves, or changing schools frequently
- Having a parent with ADHD, oppositional defiant disorder or behavior problems.
- Financial problems in the family
- Exposure to violence
- Substance abuse in the child
Symptoms of Oppositional Defiant Disorder
- Hostile behavior
- Temper tantrums
- Very emotional
- Rebellious and disregards rules set by adults
- Infuriates people deliberately
- Violent, harsh and disrespectful
- Always blaming others.
- Is pessimistic, angry, malicious and revengeful.
- Stubborn, rigid and demanding
- Impairment in social and academic functioning
Homeopathic Approach to Oppositional Defiant Disorder
Homeopathy offers a convincing solution for Oppositional Defiant Disorder. Homeopathic treatment of ODD is constitutional, taking a more holistic look at the individual. Every disease is considered a mind-body process in which each individual's personality traits are as important as their physical symptoms, so the treatment takes into account diet, lifestyle, personality, surroundings and emotional factors. Natural remedies are used to treat the symptoms effectively, helping the child to heal and to reach a state of balance and health. Homeopathic remedies are safe, natural, inexpensive and highly effective. They can bring about a positive change in children suffering from these distressing states of mind, emotions and behavior. Homeopathy can help patients become less hostile, more accommodating, easier to live with, more capable of dealing with stressful situations, better able to control anger, and less oppositional or destructive. After constitutional homeopathic treatment, patients become more realistic, much less reactive and more open to explanations. The traits of distractibility, impulsivity, and hyperactivity become much more controllable. Physical struggles with parents, siblings, schoolmates and friends, as well as lying, stealing and hurting others, become significantly less frequent, as homeopathic remedies help develop a better sense of right and wrong. Children treated successfully with homeopathy can truly be called rage-free kids. Their parents, family members, teachers, and friends can finally go back to living a more normal life.
Self Care Measures
- Give effective timeouts.
- Avoid power struggle with your child.
- Remain peaceful when your child argues with you.
- Commend your child’s constructive characteristics and good behaviors.
- Allow your child some amount of control with suitable choices.
- Limit consequences to those that can be repeatedly reinforced.
- Schedule more family activities together.
- Model the behavior you want your child to have.
- Attain support from your spouse and your child’s teachers.
Hence, to cope successfully with your child's defiant behavior and temper tantrums, and to teach them how to curb their temper and live peacefully at home, at school, and in the world, you should consider homeopathy.
|
<urn:uuid:4f1786ec-9e88-4c88-adf0-b279ab85d97d>
|
CC-MAIN-2013-20
|
http://amcofh.org/blog/homeopathic-treatment-oppositional-defiant-disorder
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924255
| 807
| 2.828125
| 3
|
Quiz: Part 1: Chapter 1 | Part 1: Chapter 2 to Part 2: Chapter 15
Name: _____________________________    Period: ___________________________
This quiz consists of 5 multiple choice and 5 short answer questions.
Multiple Choice Questions
Directions: Circle the correct answer.
1. The Tsar is the ruler of which country?
b) The Soviet Union.
2. With whom do the girls go to St. Petersburg on Sundays?
a) Their brother's bodyguards.
b) Their own bodyguards.
c) Their aunt.
d) Their parents.
3. What is one of the nationalities that is NOT part of the empire over which the Tsar rules?
4. In what month and year is Nicholas II crowned Tsar?
a) January 1896.
b) May 1895.
c) January 1895.
d) May 1896.
5. Nicholas' wedding takes place:
a) In Moscow, during Alexander's funeral.
This section contains 302 words (approx. 2 pages at 300 words per page).
|
<urn:uuid:88ee1f84-b439-4eb1-9b35-090b6382242e>
|
CC-MAIN-2013-20
|
http://www.bookrags.com/lessonplan/nicholas-and-alexandra/quiz3.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.795573
| 230
| 2.65625
| 3
|
So What Happens During a Hearing Test Anyway?
When you arrive for your exam, you will be greeted by the front office staff and asked to fill out several forms, including forms that record your personal information and medical history and verify your insurance. You will also receive a copy of a Notice of Privacy as mandated by law.
As your exam begins, your Hearing Care Professional will review your personal information with you and will ask you some questions that are designed to discover the specific types of environments in which you may be experiencing some difficulty in hearing.
Next, the Hearing Care Professional may look into your ears by using an otoscope. This instrument is used to see the ear canal and the ear drum and whether or not there is ear wax obstructing the canal. Sometimes the Hearing Care Professional will have a video otoscope so you can see inside your ear as well!
The first test that is conducted is the pure tone hearing test. This is conducted in a quiet environment, sometimes in a soundproof booth. The Hearing Care Professional will place headphones that are connected to an audiometer over your ears. The audiometer transmits a series of tones at a variety of volumes into your ears to determine the exact point or "threshold" at which you can hear various frequencies of sounds. When you hear a sound, you will be asked to say "yes" or raise your hand.
The next test is speech testing. The Hearing Care Professional will ask you to listen to a series of one and two syllable words at different volumes and then ask you to repeat them. This will determine the level at which you can detect and understand speech. Another test that may be conducted is a speech in noise test. This test will determine how well you hear sentences in a noisy environment.
The results of your tests will be recorded on a form called an audiogram, which the Hearing Care Professional will review with you. The audiogram reflects your hearing loss in frequencies and decibels. You will be shown the type, pattern and degree of hearing loss, as well as the percentage of normal conversational speech that you are still able to hear. Your Hearing Care Professional will then relate these results to your concerns about your hearing. The next step is to consider treatment solutions.
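As a rough illustration of how audiogram thresholds can be summarized, the sketch below averages pure-tone thresholds at a few speech frequencies and maps the result onto commonly used hearing-loss categories. The frequencies and cut-offs shown are typical textbook conventions, not values taken from this clinic's protocol, and a real interpretation is always made by the Hearing Care Professional.

```python
# Illustrative sketch: summarize an audiogram with a pure-tone average (PTA).
# Frequencies and category cut-offs are common conventions assumed for illustration.

def pure_tone_average(thresholds_db, freqs=(500, 1000, 2000, 4000)):
    """Average the hearing thresholds (in dB HL) at the selected frequencies."""
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

def classify(pta):
    """Map a PTA value onto a broad degree-of-loss category."""
    if pta <= 25:
        return "within normal limits"
    if pta <= 40:
        return "mild hearing loss"
    if pta <= 55:
        return "moderate hearing loss"
    if pta <= 70:
        return "moderately severe hearing loss"
    if pta <= 90:
        return "severe hearing loss"
    return "profound hearing loss"

right_ear = {500: 20, 1000: 25, 2000: 40, 4000: 55}  # example thresholds in dB HL
pta = pure_tone_average(right_ear)
print(f"PTA = {pta:.1f} dB HL -> {classify(pta)}")
```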
Call today to schedule an appointment for your hearing test!
|
<urn:uuid:c91c8c33-fb08-4e97-a345-748947045a33>
|
CC-MAIN-2013-20
|
http://www.sahearingcenters.com/hearing-test/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.953107
| 468
| 2.734375
| 3
|
Latin name: Gavia arctica
Size: Length approx 68cm, wingspan approx 125cm
Distribution: Found in Scotland all year, and around England in winter
Months seen: All year round
Habitat: Found on Scottish lochs in summer. In winter seen around coast of England and Scotland
Special features: Also known as the Black-throated Loon. The summer plumage (see photo above) cannot easily be confused. It has a grey head which continues down the back of the neck. The throat (as the name suggests) is black with vivid black and white stripes on either side.
The winter plumage is less distinct. The throat and underside of the neck become white, but traces of the stripes remain on the shoulders. The underside of the body is white and the upper surfaces are dark grey with alternating dark grey and slightly paler grey stripes along the wings. Juvenile birds have a more scalloped pattern on the wings. There is also a triangular white patch of feathers on each flank near the tail end.
|
<urn:uuid:cc028067-434c-44cf-b3fe-c8d5f9e02eed>
|
CC-MAIN-2013-20
|
http://www.uksafari.com/blackthroateddivers.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.932324
| 237
| 3.171875
| 3
|
Current malaria control strategies recommend (i) early case detection using rapid diagnostic tests (RDT) and treatment with artemisinin combination therapy (ACT), (ii) pre-referral rectal artesunate, (iii) intermittent preventive treatment and (iv) impregnated bed nets. However, these individual malaria control interventions provide only partial protection in most epidemiological situations. Therefore, there is a need to investigate the potential benefits of integrating several malaria interventions to reduce malaria prevalence and morbidity.
A randomized controlled trial was carried out to assess the impact of combining seasonal intermittent preventive treatment in children (IPTc) with home-based management of malaria (HMM) by community health workers (CHWs) in Senegal. Eight CHWs in eight villages covered by the Bonconto health post, in the south-eastern part of Senegal, were trained to diagnose malaria using RDT, provide prompt treatment with artemether-lumefantrine for uncomplicated malaria and give pre-referral rectal artesunate for complicated malaria in children under 10 years. Four CHWs were randomized to also administer monthly IPTc, as a single dose of sulphadoxine-pyrimethamine (SP) plus three doses of amodiaquine (AQ), during the malaria transmission season in October and November 2010. The primary end point was the incidence of a first episode of malaria over 8 weeks of follow-up. Secondary end points included the prevalence of malaria parasitaemia and the prevalence of anaemia at the end of the transmission season. The primary analysis was by intention to treat. The study protocol was approved by the Senegalese National Ethical Committee (approval 0027/MSP/DS/CNRS, 18/03/2010).
A total of 1,000 children were enrolled. The incidence of malaria episodes was 7.1/100 child months at risk [95% CI (3.7-13.7)] in communities with IPTc + HMM compared to 35.6/100 child months at risk [95% CI (26.7-47.4)] in communities with only HMM (aOR = 0.20; 95% CI 0.09-0.41; p = 0.04). At the end of the transmission season, malaria parasitaemia prevalence was lower in communities with IPTc + HMM (2.05% versus 4.6% p = 0.03). Adjusted for age groups, sex, Plasmodium falciparum carriage and prevalence of malnutrition, IPTc + HMM showed a significant protective effect against anaemia (aOR = 0.59; 95% CI 0.42-0.82; p = 0.02).
Combining IPTc and HMM can provide significant additional benefit in preventing clinical episodes of malaria as well as anaemia among children in Senegal.
Keywords: Malaria; Intermittent preventive treatment; Home-based management; Anaemia
Malaria remains a major public health problem in tropical regions. According to the World Health Organization (WHO), in 2009 there were an estimated 169-294 million cases and 628,000-968,000 deaths worldwide. Over 89% of these deaths occur in Africa, most of them outside health facilities [2,3]. In view of this situation, there is a need to strengthen malaria case management and malaria prevention at the community level to reduce the burden of disease. Recently, the WHO advocated scaling up malaria control interventions in order to accelerate malaria elimination. Several malaria control strategies have been developed recently, including (i) early case detection using rapid diagnostic tests (RDT) and prompt treatment with effective anti-malarial drugs, such as artemisinin combination therapy (ACT), for uncomplicated malaria cases, (ii) pre-referral rectal artesunate for severe malaria cases, (iii) intermittent preventive treatment, and (iv) long-lasting insecticide-treated nets (LLIN).
Effective case management is a fundamental element of malaria control. To improve treatment practices at community level, the strategy of home-based management of malaria (HMM) has been developed [5,6]. HMM is now considered an important strategy for reducing severe morbidity and mortality from malaria in resource-poor countries [7,8].
In Senegal, the National Malaria Control Programme (NMCP) has initiated the scaling up of ACT use at community level, in the context of the HMM strategy implemented by community health workers (CHWs), in order to strengthen malaria control efforts. This strategy includes the use of RDT for malaria confirmation and ACT for the treatment of uncomplicated malaria.
Intermittent preventive treatment (IPT) is a new approach aimed at reducing malaria morbidity among children and other high-risk individuals. IPT involves the administration of anti-malarial drugs at defined time intervals, regardless of whether the individuals are known to be infected with malaria, to prevent morbidity and mortality from the infection. IPT was initially recommended for pregnant women, involving the administration of at least two doses of sulphadoxine-pyrimethamine (SP) during antenatal visits after the first trimester of pregnancy. More recently the strategy was extended to infants (IPTi), with the administration of three doses of an anti-malarial drug during expanded programme of immunization (EPI) visits. In children under 5 years of age, several studies have shown IPT to be effective in reducing malaria burden [11,12]. Intermittent preventive treatment of malaria in children less than 5 years of age (IPTc) involves the administration of two to three doses of an anti-malarial drug during the high malaria transmission season.
Cissé et al., in a randomized double-blind controlled trial conducted in Niakhar, Senegal, showed that administering SP plus artesunate three times during the transmission season can reduce malaria incidence among children under 5 years by 86%. The protective efficacy of IPTc in Mali was estimated at 67.5% with two doses of SP at 8-week intervals during the high malaria transmission season. Another study conducted in the rural area of Niakhar (Senegal) demonstrated that the optimal regimen for IPTc in children is the combination of SP and amodiaquine. To ensure a maximum protective effect, IPTc should preferably combine two long half-life drugs.
In most African countries, anti-malarial interventions are being promoted on an individual basis and many communities still do not have access to those services. In 2008, WHO estimated that, on average, malaria cases were confirmed in only 22% of cases in most African regions, while less than 15% of patients under 5 years of age suffering from malaria attacks benefitted from treatment with ACT. It thus appears that the use of effective malaria control strategies and their integration into national health systems and services continue to be a challenge in Africa. In addition, individual anti-malarial interventions provide only partial protection in most epidemiological situations [16,17]. Therefore, there is a need to investigate the potential benefits of integrating several malaria interventions in reducing malaria prevalence and morbidity. This study aimed to assess the impact of combining seasonal intermittent preventive treatment in children (IPTc) with home-based management of malaria (HMM) by community health workers (CHWs) in Senegal.
Study area and population
The study was carried out at the Bonconto health post, located at the Velingara health district in the south-eastern part of Senegal, 500 km from the capital city of Dakar. The health post is headed by a nurse and has eight functional health huts staffed with community health workers, serving a total population of 10,016 inhabitants. In this area malaria transmission is seasonal, occurring during the rainy season (July to November) with a peak transmission in October and November. Plasmodium falciparum is the predominant parasite species and transmission is mainly due to Anopheles gambiae s.l. (Konate Lassana, personal communication). In this area, the National Malaria Control Programme initiated the universal coverage of LLIN strategy in 2010.
The study was designed as a cluster randomized trial. Eight CHWs in the eight villages around the Bonconto health post were trained to diagnose malaria using RDT and provide prompt treatment with artemether-lumefantrine to children less than 10 years of age. Four of them were randomized to also administer monthly IPTc, with a single dose of SP plus three doses of amodiaquine (AQ), in October and November 2010. The randomization unit was the CHW in order to avoid contamination; each CHW covers one village. The CHWs were randomized using a random number generator in Excel software.
The primary end point was the incidence of a first malaria attack over 8 weeks of follow-up, detected through an active surveillance system. A malaria attack was defined as the presence of fever (temperature > 37.5°C) with a positive RDT. Secondary end points were the prevalence of malaria and the prevalence of anaemia at the end of the transmission season in the two groups.
The two main interventions in this study were HMM and IPTc for children aged from one to 10 years. During the study period, RDT were deployed at the level of health huts. The RDT used in this study was based on the detection of the Histidine Rich Protein II (Malaria Antigen P.f SD®), and was provided by the NMCP.
For uncomplicated malaria cases, treatment was given by CHWs using artemether-lumefantrine according to age group; children presenting with severe malaria received a single dose (10 mg/kg) of pre-referral artesunate suppositories prior to their transfer to the Bonconto health post. In the four villages with combined HMM and IPTc, all doses of AQ and SP were administered by CHWs under direct observation. IPTc drug delivery was organized at the health huts. On days scheduled for IPTc administration, parents were asked to bring their child to the health hut for IPTc delivery. If a child was not seen at the time of administration, the CHW was advised to visit that child at home and give the treatment. To facilitate SP and AQ administration, treatment doses were tabulated on a document distributed to each CHW to serve as a job aid.
Each tablet of AQ contains 153 mg of amodiaquine base, while each SP tablet contains 500 mg sulphadoxine and 25 mg pyrimethamine. Treatment doses were given according to age group. Children under 2 years of age received half a tablet of SP; a whole tablet of SP was given to children aged two to 6 years, while children aged seven to 10 years received one and a half tablets of SP. For AQ, half a tablet was given daily to children under 2 years, and one tablet and one and a half tablets were given daily for 3 days to children aged 2-7 years and 8-10 years, respectively. This drug regimen has been shown to be the optimal regimen to minimize overdosing as well as underdosing of AQ.
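For illustration only, the sketch below encodes the age-band regimen described above as a simple lookup; the function names are invented for the example, the SP and AQ bands are reproduced as reported (note they differ slightly: 2-6/7-10 years for SP versus 2-7/8-10 years for AQ), and nothing here is clinical guidance.

```python
# Illustrative sketch of the IPTc age-band dosing reported above (not clinical guidance).
# SP (500/25 mg): single dose; AQ (153 mg base): stated number of tablets daily for 3 days.

def sp_tablets(age_years):
    if age_years < 2:
        return 0.5
    if age_years <= 6:      # reported SP band: two to 6 years
        return 1.0
    return 1.5              # reported SP band: seven to 10 years

def aq_tablets_per_day(age_years):
    if age_years < 2:
        return 0.5
    if age_years <= 7:      # reported AQ band: 2-7 years
        return 1.0
    return 1.5              # reported AQ band: 8-10 years

age = 4
print(f"Age {age}: SP {sp_tablets(age)} tablet(s) once, "
      f"AQ {aq_tablets_per_day(age)} tablet(s) daily for 3 days")
```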
Artemether-lumefantrine (Novartis Ltd) was provided by the Senegalese NMCP, rectal artesunate was obtained from Mepha Ltd, while AQ and SP were provided by Kina Pharm Ltd.
Prior to the start of the study, meetings were held in the villages to explain the study purpose and answer the population's questions. Consent was obtained from community leaders as well as from parents or children's guardians. A census of all children aged one to 10 years living in each randomized village was done. A baseline assessment was done prior to the intervention (beginning of October). At baseline, all registered children were examined by a study physician, and their mothers were interviewed to assess the use of bed nets and the presence of any chronic illness which might interfere with the outcome of the trial. Thick and thin blood films were prepared and haemoglobin concentration measured using HemoCue Hb 201®. Children with acute malaria during the baseline study (temperature > 37.5°C and positive RDT) were treated with artemether-lumefantrine, and those with anaemia (Hb < 11 g/dl) received oral iron supplementation for 1 month.
Malaria cases detection
An active surveillance system was organized in the eight villages from the date of the first IPTc administration to the end of the transmission season in December. Children were visited at home by a CHW once a week for 8 weeks. At each visit the child's axillary temperature was measured. If the child had fever (temperature > 37.5°C), or a history of fever within the previous 24 h, an RDT was performed by the CHW. Children with acute malaria (temperature > 37.5°C and positive RDT) received a three-day treatment with artemether-lumefantrine. Follow-up was done by the CHWs up to day seven after treatment to monitor the patient's clinical condition. If the child had not recovered by day 3, CHWs were advised to refer the child to the health post. Mothers were encouraged to take their child to the CHWs if the child developed fever on non-scheduled visit days.
Malaria parasitaemia and anaemia prevalence evaluation
A cross-sectional survey was carried out at the end of the malaria transmission season in a subsample of study participants, randomly selected from the list of children less than 10 years living in the eight villages. For each randomly selected child, thick and thin smear test were done, haemoglobin (Hb) concentration measured and anthropometric data collected.
Sample size calculation
With four clusters in each intervention arm and 125 children less than 10 years of age sampled in each cluster, and assuming a baseline incidence of malaria attacks of 35 per 100 person-months at risk, the study had 80% power to detect a 20% reduction in malaria incidence in the HMM + IPTc group at the 5% significance level, with a between-cluster coefficient of variation of 0.3. For the cross-sectional survey, the total number of children to examine was calculated at 800, based on a malaria parasitaemia prevalence of 20% in the study area (Senegal MIS 2009), a 95% confidence level with 5% precision, 90% power, and an assumed withdrawal rate of 20%.
Blood samples were collected by finger prick. The first drop was used for thick and thin smears for the diagnosis of malaria. Thick and thin smears were stained with Giemsa and read by a laboratory technician. Malaria parasitaemia was defined as any asexual parasitaemia detected on a thick or thin blood smear. Parasite density was determined by counting the number of asexual parasites per 200 white blood cells and calculated per μL using the following formula: number of parasites counted × 8,000/200, assuming a white blood cell count of 8,000 cells per μL. Absence of malaria parasites in 200 high-power ocular fields of the thick film was considered negative.
The second drop of finger prick blood was drawn into a microcuvette for Hb determination (g/dl) using HemoCue machine (HemoCue® Hb 201). Moderate and severe anaemia were defined as Hb concentration below 11 g/dl and 8 g/dl, respectively.
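Putting the two laboratory definitions above into code form, a minimal sketch might look like the following. The ×8,000/200 conversion and the 11 g/dl and 8 g/dl cut-offs are exactly as stated in the text; the function names and example values are invented for illustration.

```python
# Minimal sketch of the laboratory-derived measures described above.

def parasite_density_per_ul(parasites_counted, wbc_counted=200, assumed_wbc_per_ul=8000):
    """Asexual parasites per uL = parasites counted x 8,000 / 200 (assumed WBC count)."""
    return parasites_counted * assumed_wbc_per_ul / wbc_counted

def anaemia_status(hb_g_dl):
    """Classify anaemia from haemoglobin concentration in g/dl."""
    if hb_g_dl < 8:
        return "severe anaemia"
    if hb_g_dl < 11:
        return "moderate anaemia"
    return "no anaemia"

print(parasite_density_per_ul(57))   # 57 parasites per 200 WBC -> 2280.0 parasites/uL
print(anaemia_status(9.6))           # -> moderate anaemia
```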
Data analysis and data management
Data were entered in Excel™ software and analysed using STATA 11™ software. For descriptive data, percentage was used to assess the frequency of each outcome. For quantitative data, mean and standard deviation were used to describe normally distributed variables, median and range for other data. Characteristics of all children included in the study were tabulated by study arm.
For the primary end point of incidence of malaria attacks over 8 weeks of follow-up, analysis was by intention to treat, including all children who attended the baseline survey. For the secondary end points of cross-sectional prevalence of malaria and anaemia at the end of the transmission season, analysis was per protocol, including all children seen at the cross-sectional survey at the end of the transmission season.
To assess the impact of combining HMM + IPTc, analysis was conducted at the individual level with adjustment for clustering using robust standard errors. Time at risk was calculated from the date of the first IPTc administration to the date of the cross-sectional survey. Children were not considered at risk for 28 days after treatment for a malaria attack, and thus were censored from the analysis for 4 weeks, although no child presented more than one malaria episode. Time to the first malaria episode was compared between the two study arms using the Kaplan Meier method with a log rank test stratified by cluster. The incidence rate ratio (IRR) comparing HMM + IPTc with HMM alone was determined after adjustment for age group and gender using a Cox regression model with robust standard errors to account for clustering. The protective efficacy of HMM + IPTc against malaria incidence was calculated as (1-IRR) × 100. P values below 5% were considered significant (two sided). The prevalence of malaria parasitaemia and anaemia at the end of the transmission season was measured in the two groups and compared using logistic regression analysis with robust standard errors to take the cluster design into account.
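As a back-of-the-envelope illustration of the protective-efficacy definition used here, the sketch below computes crude incidence rates, their ratio, and PE = (1 - IRR) × 100 from event counts and person-time. The counts and person-time are invented placeholders chosen only to reproduce, roughly, the crude rates reported in the Results; the published estimates additionally adjust for age group and gender and use cluster-robust standard errors in a Cox model.

```python
# Illustrative sketch: crude incidence rate ratio and protective efficacy,
# PE = (1 - IRR) x 100, as defined in the text. The event counts and person-time
# below are placeholders (not the trial's raw data) chosen to give roughly the
# crude rates reported in the Results.

def incidence_rate(events, child_months):
    return events / child_months * 100           # per 100 child-months at risk

rate_hmm = incidence_rate(events=142, child_months=399)       # ~35.6 per 100 child-months
rate_combined = incidence_rate(events=29, child_months=402)   # ~7.2 per 100 child-months

irr = rate_combined / rate_hmm
protective_efficacy = (1 - irr) * 100

print(f"HMM alone:  {rate_hmm:.1f} per 100 child-months at risk")
print(f"HMM + IPTc: {rate_combined:.1f} per 100 child-months at risk")
print(f"Crude IRR = {irr:.2f}, protective efficacy = {protective_efficacy:.0f}%")
```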
Prior to the study, community sensitization was undertaken and community consent was obtained from community leaders (religious guide, village head). Informed consent was obtained from parents or children's guardians on the days of the surveys. The study protocol was approved by the Senegalese National Ethical Committee (Conseil National de Recherche en Santé), approval No. 027/MSP/DS/CNRS, 18/03/2010.
One thousand and twenty children aged from 1 to 10 years were registered in the eight villages with functional health huts, covered by the Bonconto health post; 1,000 children (500 in the HMM group and 500 in the HMM + IPTc group) who met the entry criteria were enrolled (Figure 1).
Figure 1. Trial profile: 1Two children in the HMM group were not seen by the CHWs during the second week of home visit. 2One child in the HMM + IPTc group was not seen during the first week of follow up and two during the second week.
At baseline the two groups were similar in terms of demographic characteristics (age, gender and P. falciparum carriage). The prevalence of moderate and severe anaemia was similar in the two groups, as was the prevalence of undernutrition (stunting, underweight); 95.8% of study subjects in the HMM + IPTc group and 95.4% in the HMM group slept under an LLIN (Table 1).
Table 1. Baseline characteristics of children in the two groups
Impact of the interventions on malaria incidence
Overall, the cumulative incidence of malaria episodes was significantly lower in the HMM + IPTc group. Thus, the Kaplan Meier survival estimates of time to first malaria episode showed a significant difference between the two groups (p = 0.001, log rank test) (Figure 2).
Figure 2. Kaplan-Meier plot comparing time to first episode of malaria attack defined as fever (> 37.5°C) and positive RDT, between the two groups.
The incidence of clinical malaria attacks during the study period was 35.6 per 100 child-months at risk in the HMM group, while that for children in the HMM + IPTc group was only 7.2 per 100 child-months at risk (p = 0.04). After controlling for age group and gender, the combination of IPTc + HMM significantly reduced the number of malaria episodes in children: adjusted incidence rate ratio 0.21 (95% CI [0.10-0.42]); p = 0.04. Thus, the protective efficacy of IPTc + HMM against the incidence of malaria attacks (all cases) was 79% (95% CI [58%-90%]) (Table 2). During the intervention period, 5/510 children in the HMM group (0.98%) presented with severe malaria, while no severe malaria cases were noted in the HMM + IPTc group.
Table 2. Impact of IPTc combined to HMM on malaria incidence in the two groups
Impact of the interventions on malaria parasitaemia at the end of the transmission season
At the end of the malaria transmission season, 28 children (3.3%) were found to carry P. falciparum. A proportion of 4.6% (95% CI [2.5-6.6]) of children in the HMM group had asexual P. falciparum (any density) compared with 2.1% (95% CI [0.7-3.3]) in the HMM + IPTc group. The proportion of children with P. falciparum parasitaemia at any density was significantly lower in the HMM + IPTc group (OR = 0.43 (95% CI [0.19-0.95]); p = 0.03); thus IPTc + HMM had a protective efficacy against P. falciparum parasitaemia (at any density) of 57% (95% CI [5%-81%]).
Children with parasitaemia at a density > 1,000 parasites/μL at the end of the transmission season, represented 3.39% and 1.37% in the HMM and HMM + IPTc groups, respectively (OR = 0.40 (95% CI [0.16-0.96]); p = 0.05) resulting in a protective efficacy at 60% (95% CI [04%-84%]) (Table 3).
Table 3. Impact of interventions on malaria parasitaemia at the end of the transmission season
Impact of the interventions on anaemia prevalence at the end of the transmission season
Mean Hb concentration among children less than 10 years of age at the end of the malaria transmission season was 10.4 ± 1.98 g/dl in the HMM + IPTc group and 10.2 ± 1.8 g/dl in the HMM group (p = 0.07). The proportion of anaemic children (Hb < 11 g/dl) at the end of the transmission season was 54.11% in the HMM + IPTc group and 60.3% in the HMM group (p = 0.06). In a logistic regression analysis with robust standard errors, HMM + IPTc showed a significant protective effect against anaemia (adjusted odds ratio (aOR) 0.59 (95% CI [0.42-0.82]); p = 0.025). The protective efficacy of HMM + IPTc in reducing anaemia among children under 10 years was estimated in this study at 41% (95% CI [18%-58%]).
Anaemia was also significantly associated with P. falciparum carriage at the end of the transmission season, (aOR = 2.57; 95% CI [1.1-6.70]; p = 0.026), stunting (aOR = 2.97; 95% CI [2.08-4.23]; p = 0.001), age range from two to 5 years (aOR = 0.14; 95% CI [0.08-0.25]; p = 0.001) and age above 5 years (aOR = 0.04; 95% CI [0.02-0.07]; p = 0.001) (Table 4).
Table 4. Impact of interventions on anaemia prevalence at the end of the transmission season
Malaria remains a major public health problem in Africa, despite the decline in malaria incidence reported by most African countries in recent years. Early case detection and prompt effective treatment with ACT are essential tools for malaria control. Intermittent preventive treatment (IPT) is a new approach aimed at reducing malaria morbidity and mortality. IPT is recommended by the WHO for pregnant women and infants. In children, the strategy is still debated and several studies are in progress [7-9].
This study assessed the potential benefit of combining home-based management of malaria with IPTc in an area with high coverage of ITNs. The trial, conducted in a rural area in Senegal where malaria is highly seasonal, showed that the combination of IPTc and HMM can provide substantial benefit in reducing malaria. Indeed, malaria incidence was lower in villages where HMM was combined with IPTc compared to villages with the HMM strategy only. P. falciparum carriage at the end of the transmission season was significantly lower in communities assigned to IPTc + HMM. No severe malaria cases were noted in the HMM + IPTc arm, while five severe malaria cases were registered in the HMM arm; thus the combination of IPTc and HMM can provide substantial benefit in reducing the occurrence of severe malaria. The combined interventions also provided an additional benefit in reducing the occurrence of anaemia in children less than 10 years of age.
These results are consistent with data from other trials. Tagbor et al., in a randomized controlled trial conducted in children under 5 years in Ghana, demonstrated that combining IPTc with HMM can significantly reduce the incidence of presumptive malaria fevers. Another trial in the Gambia showed a reduction in the incidence of malaria in children under 5 years of age when HMM was combined with IPT.
The expansion of malaria control measures at community level has been recommended by the WHO in order to accelerate malaria elimination. Malaria elimination will require a combination of interventions, and this study showed that community health workers can play an important role in scaling up anti-malarial interventions and even contribute to the malaria elimination process.
The study showed that combining HMM with IPTc in an area with high ITN coverage (95%) provides additional benefit in reducing the malaria burden. The high ITN coverage in the study area means that study participants had access to two or three interventions (HMM and ITNs, or HMM, IPTc and ITNs). Thus, a third arm with ITN use alone would be appropriate to better understand the effect of several anti-malarial interventions on the malaria burden. In other trials, conducted in Burkina Faso and Mali, IPTc showed a high level of protective efficacy against symptomatic malaria and severe malaria, as well as moderate and severe anaemia, in children less than 5 years of age sleeping under ITNs. It thus appears that IPTc would provide a valuable contribution to reducing malaria by itself, or integrated with other intervention strategies, in areas with highly seasonal malaria.
The combination of IPTc and HMM was effective in reducing the burden of malaria and anaemia in children less than 10 years of age. Although combined malaria control strategies at community level are likely to reduce the malaria burden drastically, there is, however, limited information on how the resultant drug pressure (IPTc drugs, ACT) may affect existing drug resistance. Consequently, it is important to monitor drug resistance while scaling up anti-malarial interventions at community level.
Although HMM + IPTc showed a significant protective effect against anaemia, the prevalence of anaemia at the end of the transmission season was still high. Anaemia was closely associated with P. falciparum carriage and stunting. It is thus important to implement community-based interventions to reduce anaemia among children in rural areas, to complement interventions against malaria and malaria-related anaemia. These interventions could include, among other things, strengthening and improving children's nutritional status and investigating other possible causes of anaemia.
Combining IPTc and HMM can provide significant additional benefit in preventing clinical episodes of malaria as well as anaemia among children in Senegal. IPTc would provide a valuable contribution in reducing malaria, by itself or integrated with other intervention strategies, in areas with highly seasonal malaria.
The authors declare that they have no competing interests.
RCT, CTN, PM, ICB, OG conceived and designed the study. RT, CB and KS trained CHWs, supervised the fieldwork and the data collection. RT analysed the data. RT, CTN, PM, ICB, OG, JLN, BF, MN, JDN wrote the manuscript. All authors read and approved the final manuscript.
This study was supported by the Malaria Capacity Development Consortium (MCDC). We acknowledge the heads of villages, the families and the staff of the Bonconto health post for their diligent help during this study. We also thank the children for their participation and cooperation, and Matt Cairns (LSHTM) for statistical support during data analysis.
World Malaria Report 2010. ISBN 978 92 4 156410 6 (NLM classification: WC 765).
Adjuik M, Smith T, Clark S, Todd J, Garrib A, Kinfu Y, Kahn K, Mola M, Ashraf A, Masanja H, Adazu K, Sacarlal J, Alam N, Marra A, Gbangou A, Mwageni E, Binka F: Cause-specific mortality rates in sub-Saharan Africa and Bangladesh.
Schellenberg D, Menendez C, Kahigwa E, Aponte J, Vidal J, Tanner M, Mshinda H, Alonso P: Intermittent treatment for malaria and anaemia control at time of routine vaccination in Tanzanian infants: a randomized, placebo-controlled trial.
Cissé B, Sokhna C, Boulanger D, Milet J, Bâ el H, Richardson K, Hallett R, Sutherland C, Simondon K, Simondon F, Alexander N, Gaye O, Targett G, Lines J, Greenwood B, Trape JF: Seasonal intermittent preventive treatment with artesunate and sulfadoxine-pyrimethamine for prevention of malaria in Senegalese children: a randomized, placebo-controlled, double-blind trial.
Dicko A, Sagara I, Sissoko MS, Guindo O, Diallo AI, Kone M, Toure OB, Sacko M, Doumbo OK: Impact of intermittent preventive treatment with sulphadoxine-pyrimethamine targeting the transmission season on the incidence of clinical malaria in children in Mali.
Kweku M, Liu D, Adjuik M, Binka F, Seidu M, Greenwood B, Chandramohan D: Seasonal intermittent preventive treatment for the prevention of anaemia and malaria in Ghanaian children: a randomized, placebo controlled trial.
Sokhna C, Cissé B, Bâ el H, Milligan P, Hallett R, Sutherland C, Gaye O, Boulanger D, Simondon K, Simondon F, Targett G, Lines J, Greenwood B, Trape JF: A trial of the efficacy, safety and impact on drug resistance of four drug regimens for seasonal intermittent preventive treatment for malaria in Senegalese children.
Elmardi KA, Malik EM, Abdelgadir T, Ali SH, Elsyed AH, Mudather MA, Elhassan AH, Adam I: Feasibility and acceptability of home-based management of malaria strategy adapted to Sudan's conditions using artemisinin-based combination therapy and rapid diagnostic test.
Cairns M, Cisse B, Sokhna C, Cames C, Simondon K, Ba e H, Trape J-F, Gaye O, Greenwood B, Milligan P: Amodiaquine dosage and tolerability for intermittent preventive treatment to prevent malaria in children.
Tagbor H, Cairns M, Nakwa E, Browne E, Sarkodie B, Counihan H, Meek S, Chandramohan D: The clinical impact of combining intermittent preventive treatment with home management of malaria in children aged below 5 years: cluster randomised trial.
Dicko A, Diallo AI, Tembine I, Dicko Y, Dara N, Sidibe Y, Santara G, Diawara H, Conaré T, Djimde A, Chandramohan D, Cousens S, Milligan PJ, Diallo DA, Doumbo OK, Greenwood B: Intermittent preventive treatment of malaria provides substantial protection against malaria in children already protected by an insecticide-treated bednet in Mali: a randomised, double-blind, placebo-controlled trial.
Konaté AT, Yaro JB, Ouédraogo AZ, Diarra A, Gansané A, Soulama I, Kangoyé DT, Kaboré Y, Ouédraogo E, Ouédraogo A, Tiono AB, Ouédraogo IN, Chandramohan D, Cousens S, Milligan PJ, Sirima SB, Greenwood B, Diallo DA: Intermittent preventive treatment of malaria provides substantial protection against malaria in children already protected by an insecticide-treated bednet in Burkina Faso: a randomised, double-blind, placebo-controlled trial.
There once were two brothers who were in dispute over their land. After arguing for some time, one of them suggested that they ask the land who owned it. That night, one of the brothers took his young son and went out to the disputed land. There he dug a hole and placed his son in the hole. Before covering the hole he instructed his son to answer back that he was the owner of the land when the question was asked. The next day the two brothers went to ask the land who owned it. When the first brother asked, "do I own you?", the land was silent. When the second brother asked if he owned it, the land spoke back, "yes, you are the owner." The first brother was astounded and agreed that the other brother must be the real owner. Later that day, the new owner went to recover his son. When he got there, he called for his son. All he heard was a whistle. When he began to dig up his son, all he found was a marmot hole. The more he dug, the longer the tunnel was. He never recovered his son and all he heard were marmot whistles.
The vision of the North American Water Program (NAWP) is to establish the scientific basis and the observation, modeling and decision-making approaches needed to manage North American water security and sustainability through the uncertainties of climate, population and environmental change. By addressing three NAWP challenges – adaptation, benchmarking and informing decisions – NAWP will provide solutions for North America's freshwater sustainability challenges.
The development of this effort started in 2010 with the recognized need for a coordinated effort to advance North American hydroclimate science and solutions. This vision culminated in the April 2011 "Terrestrial Regional North American Hydroclimate Experiment" (TRACE) workshop in Silver Spring, Maryland. Over 75 participants provided valuable insights, consensus and mandates for embarking on this ambitious effort. Following the TRACE workshop, a small team reworked the vision and renamed it the North American Water Program (NAWP). A white paper detailing the scope of NAWP is available.
From Valerie Jarrett, senior advisor to the president:
Recently, I watched the movie BULLY with my mom. We were both deeply moved by the film and the stories it tells of students, families, and communities impacted by bullying.
Earlier today, we screened BULLY at the White House. We were joined by bullying prevention advocates from a range of communities – LGBT, AAPI, faith, disability, and others – as well as educational partners and key Obama Administration staff who work on these issues every day, including Secretary of Education Arne Duncan. Before the film, a panel of nationally recognized experts on bullying prevention spoke from their perspectives about challenges and opportunities, and after the film, we heard from Lee Hirsch, the director and filmmaker, and several of the students and families who were directly impacted by bullying and intolerance and whose stories were featured in the film.
This film is a powerful call to action: We must do everything we can to work toward the day when no young person or family suffers the pain, agony, and loss caused by bullying in our schools and communities.
In the last few years, President Obama and his Administration have taken significant steps towards this goal.
In March of 2010, we held the first-ever White House Conference on Bullying Prevention, attended by both the President and First Lady. The conference brought together students, teachers, advocates, the private sector, and policymakers, to discuss ways to make our schools safer. President Obama explained it this way: “If there’s one goal of this conference, it’s to dispel the myth that bullying is just a harmless rite of passage or an inevitable part of growing up. It’s not.”
The President recorded a video for the It Gets Better Project, and so did the Vice President, Cabinet Secretaries, and members of the White House Staff.
The Department of Education has issued guidance to schools, colleges, and universities, making it clear that existing civil rights laws apply to bullying. Schools have not just a moral responsibility, but a legal responsibility, to protect our young people from harassment. They have also worked with states to help them in their own anti-bullying efforts, and recently released a report that documents key components of anti-bullying laws across all 50 states. And the Department of Education has issued guidance to Governors and state school officials, in order to help them incorporate the best practices for protecting students.
We recently re-launched StopBullying.gov, a website that contains detailed descriptions of the work we’re doing on bullying, along with resources for young people, parents, and educators.
We’ve partnered with businesses, foundations, non-profits, and universities that are coming up with new, creative ways to make our schools safe.
And recently, the Departments of Education and Justice reached a landmark settlement in the Anoka-Hennepin School District after an extensive investigation into bullying and harassment against students who are or are perceived to be LGBT.
These Administrative actions have been critically important – and effective – and we will continue to work across the entire Federal government to address and prevent bullying.
We also hope that Congress will take action to ensure that all students are safe and healthy and can learn in environments free from discrimination, bullying, and harassment by passing the Student Non-Discrimination Act (SNDA) and the Safe Schools Improvement Act (SSIA). These pieces of legislation are critically important to addressing bullying in our schools and safeguarding our most vulnerable students. The Student Non-Discrimination Act, sponsored by Senator Al Franken of Minnesota, and Representative Jared Polis of Colorado, would prohibit discrimination in public schools against any student on the basis of actual or perceived sexual orientation and gender identity. And the Safe Schools Improvement Act, sponsored by Senator Bob Casey of Pennsylvania and Representative Linda Sanchez of California, would require school districts to adopt codes of conduct specifically prohibiting bullying and harassment, including on the basis of race, color, national origin, sex, disability, sexual orientation, gender identity, and religion. I would also like to thank Illinois Representative Danny Davis for his advocacy on this issue. All of our students have the same right to go to school in an environment free of discrimination and harassment, and that’s why the President supports these two important pieces of legislation and wants to work with Congress as they move forward in the process.
Every day, we are striving to do our part to make progress. And I believe that day by day, step by step, we will change not just our laws and policies, but behavior, so that every young person is able to thrive in our schools and communities, without worrying about being bullied.
Here are a few news releases with reaction from national LGBT groups and allies:
Student Non-Discrimination Act and Safe Schools Improvement Act Needed to Address Anti-LGBT Discrimination and Bullying in Schools
WASHINGTON – The Human Rights Campaign, the nation’s largest lesbian, gay, bisexual and transgender (LGBT) civil rights organization, today applauded President Obama for announcing his support of the Student Non-Discrimination Act (SNDA) and the Safe Schools Improvement Act (SSIA).
“The President’s endorsement of the SNDA and SSIA recognizes the importance of providing LGBT students with the same civil rights protections as other students,” said HRC President Joe Solmonese. “No student should feel scared when walking into their school and these bills would address the discrimination and bullying that our youth have endured for far too long.”
SNDA is sponsored by Sen. Al Franken (D-MN) in the Senate and Rep. Jared Polis (D-CO) in the House of Representatives. SNDA would prohibit public elementary and secondary schools from discriminating against any student on the basis of actual or perceived sexual orientation or gender identity. SSIA is sponsored by Sens. Robert Casey (D-PA) and Mark Kirk (R-IL) in the Senate and by Rep. Linda Sanchez (D-CA) in the House. The bill would amend the Elementary and Secondary Education Act to require schools and districts receiving federal funds to adopt codes of conduct specifically prohibiting bullying and harassment, including on the basis of sexual orientation and gender identity. This is the first time the President has expressed his support for either piece of legislation.
Discrimination and bullying against students based on sexual orientation and gender identity contributes to high dropout rates, absenteeism, adverse health consequences and academic underachievement. When left unchecked, such discrimination can lead to, and has led to, dangerous situations for young people. Federal statutory and/or constitutional protections expressly address discrimination on the basis of race, color, national origin, religion, sex and disability, but do not expressly address sexual orientation or gender identity. As a result, students and parents have limited legal recourse to redress for discrimination on the basis of sexual orientation or gender identity.
Despite recent inaccurate criticisms of the bill by Heather Wilson, a Republican running for U.S. Senate in New Mexico, the SNDA does not inhibit constitutionally guaranteed freedoms of speech and expression for individuals and student groups. Language in SNDA recognizes that nothing in the Act alters the legal standards and rights available to individuals or religious and other student groups under the First Amendment and the Equal Access Act. SNDA prohibits discrimination, including severe, persistent or pervasive harassment; it does not prevent an individual or organization from expressing disagreement with an individual’s sexual orientation or gender identity.
The Human Rights Campaign is America's largest civil rights organization working to achieve lesbian, gay, bisexual and transgender equality. By inspiring and engaging all Americans, HRC strives to end discrimination against LGBT citizens and realize a nation that achieves fundamental fairness and equality for all.
"Gay, lesbian, bisexual and transgender students have long been at a significant disadvantage without specific protection under federal law."
(New York, April 20, 2012) - Today, on GLSEN's National Day of Silence, the White House announced its support of the Student Non-Discrimination Act (SNDA), and Lambda Legal released the following statement by Hayley Gorenberg, Deputy Legal Director of Lambda Legal:
"We applaud the Obama administration for endorsing this critical piece of legislation. We thank Sen. Al Franken, Rep. Jared Polis, Rep. Barney Frank and Rep. Tammy Baldwin and over 50 other current sponsors for their leadership on this bill and we urge Congress to pass it.
"At Lambda Legal, we've encountered extraordinary cases of violence and discrimination against LGBT young people in schools - and sometimes against the allies who try to support them. The Student Non-Discrimination Act takes a big step toward a safer and healthier environment in every public school.
"Gay, lesbian, bisexual and transgender students have long been at a significant disadvantage without specific protection under federal law. All students have a right to a safe learning environment, and this law will leave no doubt as to public schools' responsibility to provide it."
Washington, D.C. - In response to President Obama's endorsement of the Student Non-Discrimination Act (SNDA) and the Safe Schools Improvement Act (SSIA), NCTE Executive Director Mara Keisling said:
"These two safe schools bills are just tremendously important to trans youth and President Obama's endorsement is another example of his broad commitment to trans people and trans issues. We are thankful to Senators Al Franken and Bob Casey and Representatives Jared Polis and Linda Sanchez for their leadership on these issues. According to the National Transgender Discrimination Survey, trans and gender nonconforming young people face startling amounts of harassment, assault and sexual violence at school, with more extreme rates of harassment and violence among trans youth of color. Trans kids are hurting and we have a way to stop that. Congress must act quickly to protect our transgender young people."
WASHINGTON – At an event at the White House today, the Obama administration endorsed a crucial bill that would protect LGBT youth from harassment and bullying in schools. The Student Non-Discrimination Act (SNDA) would protect students from discrimination, including harassment “based on actual or perceived sexual orientation or gender identity” in public elementary and secondary schools.
The bill, introduced by Sen. Al Franken (D-Minn.) in the Senate and Rep. Jared Polis (D-Colo.) in the House, would help to end entrenched biases toward LGBT students in our public education system by filling gaps in our federal civil rights laws.
“Having the White House stand behind the Student Non-Discrimination Act is key to getting this necessary legislation passed into law,” said Ian Thompson, ACLU legislative representative. “Our public schools should be a safe harbor for our youth, not a place of exclusion and ridicule. By passing the Student Non-Discrimination Act, Congress can have a profound and very real impact in improving the lives of LGBT students. It’s time to make passage of this bill a priority.”
NEW YORK - April 20, 2012 - Today, on GLSEN's 17th annual Day of Silence, the White House released the following statement of support for the Safe Schools Improvement Act and the Student Non-Discrimination Act:
“The President and his Administration have taken many steps to address the issue of bullying. He is proud to support the Student Non-Discrimination Act, introduced by Senator Franken and Congressman Polis, and the Safe Schools Improvement Act, introduced by Senator Casey and Congresswoman Linda Sanchez. These bills will help ensure that all students are safe and healthy and can learn in environments free from discrimination, bullying and harassment.”
The following statements are from GLSEN Executive Director Dr. Eliza Byard and GLSEN National Board member Sirdeaner Walker:
"Today's announcement is a vital show of support to students everywhere of all identities, backgrounds and beliefs who face bullying and harassment in school," said Byard. "By speaking out on GLSEN's Day of Silence in support of these two critical bills, the President has given greater hope to students who often feel that they have nowhere to turn. It is deeply moving to know that lesbian, gay, bisexual and transgender students who face the multiple threats of harassment, violence and discrimination have the President as an ally in their efforts to win all of the protections that they deserve."
“Today is a day that I have hoped for since I began my work as an anti-bullying advocate after losing my son Carl," said Walker. "I believe that President Obama’s explicit endorsement of the Safe Schools Improvement Act will make a tremendous difference in moving this issue forward. Having met with the President three times, I knew his support for SSIA and the Student Non-Discrimination Act was genuine. But stating that publicly on GLSEN's Day of Silence pushes it to a whole new level. While nothing can bring Carl back, I know that these bills can make a real difference to end the bullying and harassment that is faced by too many other sons and daughters today.”
WASHINGTON, April 20 — President Obama today announced his support for the Safe Schools Improvement Act and Student Non-Discrimination Act, federal legislation aimed at combating anti-lesbian, gay, bisexual and transgender (LGBT) bullying and discrimination in our nation’s schools. The National Gay and Lesbian Task Force is working in coalition toward passage of both these critical bills.
Statement by Rea Carey, Executive Director
National Gay and Lesbian Task Force
“We thank President Obama for endorsing the Safe Schools Improvement Act and Student Non-Discrimination Act. The epidemic of bullying and discrimination in our nation’s schools is a tragedy and an outrage. No student should fear getting beaten up, harassed and tormented while simply trying to get an education. We have a responsibility to ensure all young people are protected from this pervasive bullying, discrimination and abuse. Parents, educators, policymakers — all of us — need to stand against this unacceptable behavior. The president did that today. We urge him to now help get these life-saving bills through Congress.”
Village Tribes of the Desert Land
By Edward S. Curtis
Illustrations from Photographs by the Author
In former articles in Scribner's Magazine I have pictured the Apache and their linguistic kin, the Navajo; the Indians of the Northern Plains; the community-dwelling Indians in stone houses, the descendants of the cliff-dwellers. In this I will describe the tribes of Southwestern Arizona. They are in appearance, mythology and religion, as well as in life and manners, quite different. In the region spoken of we find the Yuma, Mohave, Havasupai, Walapai and Maricopa of the Yuman linguistic stock; the Pima, Papago and Kwahatika of the Piman stock. The combined population of these groups is approximated at twenty thousand, the Pima and Papago being the two largest tribes. With the exception of the Walapai and one branch of the Papago they are sedentary tribes, living in fixed villages. Their home structure is of poles and brush with an outer earth covering, naturally lacking the stability of the stone homes to the North, and for this reason a study of their prehistoric life is more difficult.
Compared to the Northern Plains Indians, who reverence a brave heart next to their worship of the Great Mystery, these tribes lack bravery and the war spirit. The Pima and Papago did, however, prove rather brave in defensive warfare with marauding Apaches, who crept down on them from their mountain homes. The Yuma and Mohave were too indolent to have a brave heart, but rather preferred a life of idleness. These dwellers in the valley of the Colorado are physically a magnificent group of people. Previous to the introduction of the white man's diseases, there was probably nothing comparable to them as physical types in the United States. Their lazy life, low altitude, with its excessively hot climate, seemed to develop their physique, but the same conditions which made giants in stature seemed to require no mental activity or development. Their mythology is apparently an incipient one, and, compared to that of the Pueblos, is so crude that it would seem to be of a people uncountable ages closer to the beginning of man. This probably is not the case, however, but merely indicates a lack of mental activity. The ease with which they gained food undoubtedly tended to retard their mental growth. The valley had its great annual overflow. Following this, the vegetation, wild and planted, sprang up and grew as those unacquainted with such localities cannot realize. The river had countless quantities of fish to be caught in rude traps with little effort. Rats, rabbits and small game were abundant and close at hand. The swift-footed deer of the hills they did not molest, as that would require effort.
The greatest display of natural intelligence seen among them is their tribal custom of cremation. This, according to their legends, was taught them by their creator, Matevilye, and they follow his instructions with true Indian tenacity. One of my Yuma interpreters of the past season came to me with considerable anxiety: "Missioner wants me to come to church. I go to church, when I die they put me in the ground to rot. I no like that. Burned up the way Matevilye say, just ashes. That all right. What you think?" I must confess, as one rather favoring his way, to not being able to see why the old chap should be punished for going to church by being buried against his will. It is likely that an ethnologist would make a poor missionary. However, the missionary has something of an argument for his desire to change their way of disposing of the body, as it is certain that we see in the Indian cremation much that is not a part of the modern method, as it is here that he pays the greatest attention to the teachings of the Yuman creeds as taught them by the Creator. At their approach of death, relatives, friends and neighbors gather about waiting for the soul to take its flight to the sand-hills of the after-world: a land of plenty, where when one melon is picked another comes; "no one sees it come; you just pick it, another one there." The body is prepared at once after death and carried out in blankets to the place of cremation which has in the meanwhile been prepared. A shallow hole has been dug in the earth and the fuel piled high on this. The body is placed on the top of the fuel, after which fire is applied at the four cardinal points. During the time of the burning the multitude stand about and wail in the most melancholy fashion, continuing until fuel and body are but a mass of embers and ashes. These are raked into a pit and covered with earth. And so are the last rites to the departed among the tribes of the Yuman stock.
The Havasupai, who have their home in Cataract Canon, a branch of the Grand Canon of the Colorado, have the most unique home-land of any tribe of our Indians. It is but a tiny garden spot in this vast chaotic wilderness. To reach the home of the Havasupai one must first make the trip across the plateau to the rim of the canon, no small part, and then he who would enter must take his choice of two trails; either one will try the courage of all but the experienced traveller of canon trails. Once there, you feel well repaid for your effort. If the season is midsummer, while every rock wall and crag is reflecting the sun's heat, the upper regions are a veritable Hades, and this garden spot, with its cool shadows and bubbling streams, an oasis in it. On my first journey here the upper world at the canon's rim was wrapped in a blinding snow-storm which chilled one's very life. As our pack animals picked their way down the trail and entered this canon home the peach trees were in bloom, birds were singing, all in the joy of life and spring. I could but think, "This is paradise." Theirs is a world of but a few hundred acres walled in first by sheer red stone walls 400 feet high, and beyond those perpendicular walls are broken, crumbling piles of rock of many colors, reaching on and on, but ever up, until one comes out on the pinon-clad plateau.
These canon dwellers have always been a small group. Disease and change in manner of life have dwindled their ranks until there is now but a few more than a hundred, and it is a safe prophecy that ten years from now there will be no more than half of that. They are an agricultural people, and have been from prehistoric times. They insist God gave them the seeds of the corn and vegetables, and the peach-tree, but admit that man brought the fig. Their water-supply is from a beautiful spring which has its source in the upper end of their home spot. Here it springs from the canon's floor, a beautiful, transparent stream, flowing along with willow-bordered banks, then, as a cataract, leaps high over a sheer cliff and forms in dark pools below. The water of the stream is used for irrigation; they throw out ditches and guide them close along the outer margin of the field. They call this "making water run uphill," and claim it was taught them by Lee, who, after the Mountain Meadow massacre, took refuge with these people for a time.
In the spring they gather and cook great quantities of the mescal, cooking it in large pits like other tribes of the region. These pits can be seen all through Grand Canon and its many branches. Many of them undoubtedly have been used from pre-Columbian times.
In the winter season the Havasupai went out on the plains above in their annual hunt for deer, at once changing from vegetarians to meat-eating people, fairly gorging themselves on the flesh of the deer. All this is changed. The deer are scarce, and permission to hunt now rarely and grudgingly given by their Father in Washington.
The Maricopa long ago seceded from the parent Yuman group in the valley of the Colorado, and slowly worked their way up the Gila valley until they reached the land of the Pima. Here they became neighbors of the Pima and affiliated with them, particularly in war of defence against the wandering tribes. When or why the secession from the original group occurred is buried in the confusion of Indian traditions. However, that the wandering began but a few generations ago is quite certain, as the type still retains traces of the Yuman. In life and manners they have become as Pima, but in language, religion and mythology they show but little change by reason of the tribal separation and contact with an alien culture; this seems to be a good argument that it is here we should anticipate the least change in studying any primitive people.
Looking on the map, the smallness, as indicated, of the Pima Reservation would lead one to presume that they were a small tribe. Far from it! They are a large and strong tribe, mentally one of the keenest in our land. The Pima claim to have lived always in the Gila valley, their lands stretching along some sixty miles of its length. They farm by irrigation and likely had canals larger and longer than other tribes. The very large prehistoric canals which formed a part of the development, with the building and occupancy of the Casa Grande and other like large prehistoric ruins, are in the country of the Pima. In their legends they account for these ruins and ditches and claim them as the work of Pima. There is, however, little to encourage this claim. The ruins of the region show structures of massive walls, many rooms and several stories in height, while the Pima home structure, when first observed, was, as it is now, a single-room affair, round in shape, built of poles, covered with earth. Their traditions of the former occupancy of these many-roomed communal structures is probably but an attempt to fit their tradition to the fact of the old ruins.
One of the most picturesque features of the Pima home country is the giant cactus, Sahuaro. This strikingly grotesque plant is of much importance in their life. Great quantities of its fruit are gathered; they use it fresh, dried, make it into a thick, heavy jelly; and, lastly, but by no means least, is the making of it into wine. They, like the Maricopa and Papago, are expert basket-makers and potters. Their large ollas are the universal water container, while ollas of small size and more graceful lines are used as head-jars in carrying water from the supply to the home. The principal kitchen-utensils are pottery-ware of their own making. One can, without too far stretching of fact, say that the Pima are well advanced in the ways of civilization, much of which is due to one man--the sort of man that is born, not made. It is Dr. Cook, the Pima missionary. No doubt the advance has seemed to him heart-breakingly slow, and there have been many days when he could but wonder, "What is the worth of it all?" Still, his thirty years of patient, faithful work have brought a real uplifting of the tribe, a showing that few men can make for their life effort.
The Papago, close kin to the Pima, can well be divided into two groups, the sedentary and the wandering. The community-living, home-loving group can scarcely be driven from their homes, while those of the gypsy-like bands cannot be kept in any one place. Good authorities, like Dr. Cook, insist they wander from mere necessity of gaining a livelihood. These wandering Papago, numbering several thousand, are scattered about the whole of South-western Arizona from the juncture of the Gila and Salt Rivers to and into Old Mexico. Some villages of a few families will be dwelling far down in the mountains to the border of Old Mexico, and probably have small herds of scrubby cattle, and, as well as the cattle, patches of wheat which are grown from freshet water, flowing down out of the hills. Their wheat harvest is very early in the spring, and when it is closed they trek off to the north to take part in the Pima harvest. The harvest season is always hot, so the wily Pima prefers to hire the wandering Papago to do the work while he dozes in the shade. The Papago's pay is a portion of the wheat, which he loads onto his ponies and takes back to his winter's home in the mountains of the south. To add to this store the Papago takes advantage of the proverbial Indian hospitality and desire to make gifts, and gives a Papago dance for the entertainment of the Pima. The Pima, to show their appreciation, give them much grain.
The sedentary Papago's home is in the valley of the Santa Cruz, about the mission of San Xavier, one of the finest ever built in North America, and without doubt the finest still standing in the United States. The wonderful old church is on a slightly raised plain, overlooking the valley of the Santa Cruz, and about it are grouped the homes of the Papago. Their home structure has, in a great part, changed from the old-time round houses, and has become the rectangular one, half Papago, half Mexican. Below their village lie the well-kept farms, small in size, yet ample, as the Indian's wants are few.
A few days' journey to the south of these people are the Kwahatika villages, so little known that the name scarcely appears in print. Broadly, they are like their linguistic kindred of the Piman group, from which they probably separated within the last few hundred years. In religion and mythology they are still the same. They are, in fact, like a degenerate outcast from the family; in appearance the same, but not of the good blood. They can truly be termed "desert Indians." Their five villages are scattered about in the desert, and he who did not know the way of the land could well wonder how even the hardiest of human beings could contrive to live here. The secret of their existence is that they were past masters in dry farming before Colorado was named. Each village is located where it receives the natural drain of a vast area. The Kwahatika will prepare his small farm, and if there is not a natural rise of ground about it, will enclose it in an earth embankment. When the severe winter rains come on the freshet water flows down the valley and they catch this and guide it out on the prepared land, using the collected water of tens of thousands of acres to thoroughly soak their five or ten. With a rain or two of this sort they are certain of a fair crop. The low foot-hills of the region abound in the giant cactus, furnishing fruit in endless quantities, the only limit of the supply being their ability to gather it in the few weeks of the harvest. Six months after the harvest season one can see huts still containing wagonloads of the earthen jars filled with the thick jelly, each jar carefully sealed with clay. The mesquite pod, which forms such a large part of the natural food of this region, does not abound in the land of the Kwahatika, but the mesquite forests are not so far away but what they can journey to them in harvest time.
Culture and the Arts constitutes one of five strands of the University of Leeds Legacies of War hub. It aims to organise a number of events and activities that draw attention to the cultural and artistic legacies of the First World War and reach out to the people of Leeds, Yorkshire and beyond with a programme that is academically informed, accessible and entertaining.
It also provides a supportive environment for academic symposia and conferences as well as original research and practice-led explorations of matters relating to the First World War, its historical period and legacies. The strand encourages the engagement of researchers, writers, artists, media practitioners, groups and schools with University archives such as the Liddle Collection and the holdings of the Stanley and Audrey Burton Gallery, and makes use of venues in Leeds on and off campus. The Culture and the Arts project team will work with and across the other Legacies of War strands, i.e. War and Medicine, Science and Technology, War and Resistance, Yorkshire and the Great War.
The objectives of the Culture and the Arts strand are
- to show how the First World War relates to the present
- to commission, exhibit, screen, perform and/or discuss new work
- to find new ways of looking at canonical cultural productions of the War
- to excavate neglected materials and engage with lesser known artefacts and/or intangible heritage
- to consider the everyday life of men, women and families who had no stakes in the world of ‘Culture’ or who belonged to social groups not normally associated with the experience and legacies of World War One (the latter might include, for example, colonial troops and colonised civilians, children, travellers and other cultural and ethnic minorities)
- to seek ways of engaging audiences from different constituencies
- to record events and activities and create an open access digital resource for the future.
The strand will operate on a local level with involvement of staff and students in the Faculty of Performance, Visual Arts and Communication and various communities and cultural institutions. To acknowledge the fact that the First World War was an international conflict with global impact, but not the only event and development of significance in the early 20th century, the strand will also attempt to initiate and facilitate face-to-face and/or digital collaborations with stakeholders, archives and/or universities in the twin cities of Leeds:
- Brno, Czech Republic (Czechoslovakian nationhood and independence was forged during the First World War)
- Dortmund, Germany (this large industrial town in the Ruhr region was targeted by the Allies as a site of war production)
- Durban, South Africa (South Africans fought with the Allies in Europe, the Middle East and in the German colonies in West and East Africa)
- Hangzhou, China (in the Far East, Japan and Germany had special interests in Chinese territory, while many Chinese labourers risked their lives in war-torn Flanders)
- Lille, France (occupied by Germany until liberated by the British in 1918)
- Louisville, Kentucky/USA (home of the largest WW1 army training camp built in 1917)
A Legacies of War Proposal template is available to download and complete, to submit ideas to the Culture and the Arts team.
This strand will be led by Dr Claudia Sternberg.
For many, the Pakistan Tehreek-e-Insaf (PTI) rally in Lahore indicated a nationalist upsurge — the sudden pride in being a Pakistani who was part of the process of an upbeat political activity. The sense of elation was natural, given the fact that crisis rather than the lack of it has become the rule rather than the exception. An average Pakistani seems to be on a never-ending roller coaster ride. Nations that get sucked into such a whirlwind often lose their sense of making appropriate choices. In fact, the appropriate choice becomes the one which provides instant, though short-termed, relief from an immediate crisis.
Under the circumstances, the tendency is to deconstruct existing structures, often at the pace of destruction, and replace them with something which is often militantly nationalistic, self-righteous and generally dictatorial in character. Hence, extreme sociopolitical crises result in extreme solutions that may not bring long-term relief but are akin to a shot of morphine that gives an immediate high.
One of the best examples of what results from the collapse of a sociopolitical system is the rise of the Third Reich in Germany during the 1930s. Burdened by global recession and a humiliating military defeat, the bulk of middle-class Germany found refuge in Adolf Hitler's ideology. The Fuhrer promised to get rid of the Treaty of Versailles and of unemployment. The silver lining was that once in power, the Nazis would change everything that had been spoiled by the ruling elite of those days. The Weimar government was ferociously accused of capitulating to the enemy. The moral fabric of German society had thinned to a degree that there was little possibility of questioning Hitler's logic.
Thus, the rise of the Nazis was phenomenal. From getting 12 seats in 1928, the Nazi party gained popularity, winning 107 seats in 1930 and 230 in 1932. The sociopolitical and cultural discourse also began to change. There was greater emphasis on German traditions and values, which the Nazis promised to reinforce. This became extremely popular with the youth and women. The latter played an important role in enhancing the political power of the Nazis, just like we saw in the case of Maulana Fazlullah in Swat.
The ascendency of the Nazis to power was not a reflection of some inherent unreasonableness of the German people but an indicator of the utter collapse of German society. Eager to survive and frustrated by the callousness of a political structure that didn’t deliver or dialogue, middle-class Germany opted for a dictatorial philosophy that had the potential of providing immediate relief. The German society at that time had completely lost the sense and ability to transform, hence temporary transition was the only option. The choice itself indicated the depravity of the then existing political system for which the best option was Hitler. Every act of political misdemeanour such as making concessions to the forces of evil and compromising on larger public good comes to haunt a state and its society. The Nazi party, which was a natural beneficiary of the flawed system, made gains through the excellent use of technology and modern tools of communication. Part of the problem of a weakening political structure is that the stakeholders are unable to reinvent themselves.
The crumbling power of the Weimar Republic forced various powerful interest groups to search for a more potent player with the capacity to generate a more gripping ideology, which the Nazis presented in the form of fascism or an extreme form of nationalism. Not that foreign players did not have a hand in Germany's military and economic devastation, but fascism held European powers entirely responsible for the chaos. On one level, the society had become very politicised and, on the other, extremely apolitical because the formula for changing conditions was absolute force and not dialogue and negotiations.
Pragmatism is indeed a double-edged sword. Political survival is necessary but not at the cost of ideals and values. Hitler was a choice made by a society that had forgotten the art to negotiate dialogue and stand up for some principles. In the mid-1930s, when everyone in Germany thought they were transiting to a safe option, they were actually burning all their boats. Transition does not happen without transformation!
Published in The Express Tribune, November 6th, 2011.
The drinker might not experience any functional problems in the short term, says graduate student Megan Anderson of Rutgers University, but in the long term the story may be different. New research published in Neuroscience shows that moderate to binge drinking reduces the production of new nerve cells in the hippocampus by nearly 40%. This means that the brain becomes less capable of creating new brain cells and thus finds it more difficult to learn new things.
The researchers, from Rutgers University in the US and the University of Jyväskylä in Finland, tested the effects of alcohol consumption in rats. They found that, at a blood alcohol level of only 0.08%, the hippocampus' ability to create new cells was already affected. On the other hand, the rats' motor and associative learning skills remained the same, at least in the short term.
“If this area of your brain was affected every day over many months and years, eventually you might not be able to learn how to get somewhere new or to learn something new about your life,” says Anderson. “It’s something that you might not even be aware is occurring.”
Anderson, M., Nokia, M., Govindaraju, K., & Shors, T. (2012). Moderate drinking? Alcohol consumption significantly decreases neurogenesis in the adult hippocampus Neuroscience, 224, 202-209 DOI: 10.1016/j.neuroscience.2012.08.018
A literary magazine is a periodical devoted to literature in a broad sense. Literary magazines usually publish short stories, poetry and essays along with literary criticism, book reviews, biographical profiles of authors, interviews and letters. Literary magazines are often called literary journals, or little magazines, which is not meant as a pejorative but instead as a contrast with larger, commercial magazines.
History of literary magazines
Literary magazines began to appear in the early part of the 19th century, mirroring an overall rise in the number of books, magazines and scholarly journals being published at that time. In Great Britain, critics Francis Jeffrey, Henry Brougham and Sydney Smith founded the Edinburgh Review in 1802. Other British reviews of this period included the Westminster Review (1824), The Spectator (1828) and Athenaeum (1828). In the United States, early journals included the Philadelphia Literary Magazine (1803–08), the Monthly Anthology (1803–11), which became the North American Review, the Yale Review (founded in 1819), The Knickerbocker (1833–65), Dial (1840–44) and the New Orleans-based De Bow's Review (1846–80). Several prominent literary magazines were published in Charleston, South Carolina, including The Southern Review (1828–32) and Russell's Magazine (1857–60).
The North American Review is the oldest American literary magazine, but its publication was suspended during World War II whereas the Yale Review's was not, making the Yale journal the oldest literary magazine in continuous publication. By the end of the 19th century, literary magazines had become an important feature of intellectual life in many parts of the world.
Among the literary magazines that began in the early part of the 20th century is Poetry magazine founded in 1912, which published T. S. Eliot's first poem, "The Love Song of J. Alfred Prufrock." Other important early-20th century literary magazines include The Times Literary Supplement (1902), Southwest Review (1915), Virginia Quarterly Review (1925), Southern Review (1935) and New Letters (1935). The Sewanee Review, although founded in 1892, achieved prominence largely thanks to Allen Tate, who became editor in 1944.
Two of the most influential — and radically different — journals of the latter half of the 20th century were The Kenyon Review (KR) and the Partisan Review. The Kenyon Review, founded by John Crowe Ransom, espoused the so-called New Criticism. Its platform was avowedly unpolitical. Although Ransom came from the South and published authors from that region, KR also published many New York-based and international authors. The Partisan Review was first associated with the American Communist Party and the John Reed Club; however, it soon broke ranks with the party. Nevertheless, politics remained central to its character, while it also published significant literature and criticism.
The middle-20th century saw a boom in the number of literary magazines, which corresponded with the rise of the small press. Among the important journals which began in this period were Nimbus: A Magazine of Literature, the Arts, and New Ideas, which began publication in 1951 in England, the Paris Review, which was founded in 1953, The Massachusetts Review and Poetry Northwest, which were founded in 1959, X Magazine, which ran from 1959–62, and the Denver Quarterly, which began in 1965. The 1970s saw another surge in the number of literary magazines, with a number of distinguished journals getting their start during this decade, including Columbia: A Journal of Literature and Art, Ploughshares, The Iowa Review, Granta, Agni, The Missouri Review, and New England Review. Other highly regarded print magazines of recent years include The Threepenny Review, The Georgia Review, Ascent, Shenandoah, The Greensboro Review, ZYZZYVA, Glimmer Train, Tin House, the Canadian magazine Brick, the Australian magazine HEAT, and Zoetrope: All-Story. Some short fiction writers, such as Steve Almond and Stephen Dixon have built national reputations in the United States primarily through publication in literary magazines.
The Committee of Small Magazine Editors and Publishers (COSMEP) was founded by Hugh Fox in the mid-1970s. It was an attempt to organize the energy of the small presses. Len Fulton, editor and founder of Dustbook Publishing, assembled and published the first real list of these small magazines and their editors in the mid-1970s. This made it possible for poets to pick and choose the publications most amenable to their work and the vitality of these independent publishers was recognized by the larger community, including the National Endowment for the Arts, which created a committee to distribute support money for this burgeoning group of publishers called the Coordinating Council of Literary Magazines (CCLM). This organisation evolved into the Council of Literary Magazines and Presses (CLMP).
Many prestigious awards exist for works published in literary magazines including the Pushcart Prize and the O. Henry Awards. Literary magazines also provide many of the pieces in The Best American Short Stories and The Best American Essays annual volumes.
Online literary magazines
Around 1996, online literary magazines began to appear. At first, some writers and readers dismissed online literary magazines as not equal in quality or prestige to their print counterparts, while others said that these were not properly magazines and were instead ezines. Since then, though, many writers and readers have accepted online literary magazines as another step in the evolution of the independent literary journal. Among the better known online literary magazines are Twisted Vine, Evergreen Review, World Literature Today, New World Writing, The Applicant, Lantern Journal, Drunken Boat, Blackbird, Painted Bride Quarterly, 3:AM Magazine, Muumuu House, elimae, Juked, 20x20 magazine, The Barcelona Review, Eclectica Magazine, ĕm, Failbetter, Guernica Magazine, Identity Theory, Literary Mama, McSweeney's Internet Tendency, Monkeybicycle, Narrative Magazine, Sensitive Skin Magazine, Spike Magazine, StorySouth, The Washington Pastime, Word Riot and Parabaas (in Bengali). There are also smaller, high-quality markets such as Literarily, Unlikely Stories, Pank, Fleeting, La Petite Zine, Fringe and Cha, and literally thousands of other online literary publications, so it is difficult to judge the quality and overall impact of this relatively new publishing medium.
Anomos introduces a layer of security and anonymity currently absent in peer to peer file sharing protocols. Through the study of cryptography and anonymous networks such as TOR, a system is being designed which allows any individual to safely distribute files to a large audience without fear of legal or social repercussions. This technology is an important part of modern free society, and a tool which may be used around the world to bring about positive social change. With Anomos, one can distribute the file anonymously to thousands of people at once. Because Anomos is based on BitTorrent, each download makes the network faster, more robust, and harder to eliminate.
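The anonymity layer described above borrows the layered-encryption idea from onion routing: the sender wraps a payload in one encryption layer per relay, and each relay can peel off exactly one layer, so no single relay learns both the origin and the destination. The sketch below is a minimal illustration of that principle only, written in Python with the `cryptography` package; it is not the Anomos wire protocol, and the relay keys and payload are hypothetical.

```python
# Minimal illustration of layered ("onion") encryption, the principle behind
# relay-based anonymity layers such as the one Anomos adds on top of BitTorrent.
# This is NOT the Anomos protocol; keys and payload are invented for the example.
from cryptography.fernet import Fernet

# Each relay on the path has its own symmetric key (a real system would
# negotiate these with public-key cryptography).
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes, keys) -> bytes:
    """Sender side: encrypt for the last relay first, for the first relay last."""
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def unwrap_one(layered: bytes, key: bytes) -> bytes:
    """Relay side: each relay peels exactly one layer and forwards the rest."""
    return Fernet(key).decrypt(layered)

message = b"chunk of a shared file"
onion = wrap(message, relay_keys)

# The message passes through the relays in order; only after the last layer
# is removed does the plaintext chunk reappear.
for key in relay_keys:
    onion = unwrap_one(onion, key)

assert onion == message
```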
This technology can benefit thousands of people all around the world: those who live in religiously oppressive places, those to whom the mere accusation of apostasy or sexual deviance could be life-threatening, mash-up artists concerned about copyright infringement, and anyone fearful that their actions on the Internet may lead to unjust punishment. First and foremost, Anomos has been designed as a tool for free speech.
- DIFR-TSPM -- DIFR Tag-Scan-Privacy-Match
Increasingly, products for sale in shops are being tagged by RFID tags. These tags contain a unique product or item number, which can be read out wirelessly over a short distance by an RFID reader. Their function in shops and supermarkets is similar to the ubiquitous paper barcode, except that RFID tags can also be read out if the tag is not in plain sight of the reader. This means these tags can also be read out surreptitiously when walking around the store, or afterwards when the items are in your shopping bag and you are walking on the street. This also holds true for payment cards and travel passes (e.g. the OV chipcard in the Netherlands) that people carry with them. This has raised concerns about the impact of RFID technology on privacy in our society.
The goal of the project is to develop a demonstrator of a different way to inform consumers about the RFID tags on the items they buy or the tags that surround them in their environment. The main idea is to use a mobile phone to display information about RFID tags in the vicinity.
This demonstrator will be used to show how such a concept:
- empowers users in deciding for themselves how their privacy is affected and how to respond to that information, and
Until recent developments of domain name authentication, Internet mail has not had access to scalable mechanisms for validating an identity associated with a message. Any identifier could be used fraudulently.
The Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) are relatively new technologies that create a foundational change by validating domain identifiers. However they are only the first step. DMARC takes additional steps in allowing domain owners to publish statements about their email use of their identifiers and DMARC facilitates much easier operational reporting from mail recipients to domain owners.
Thus this project will improve the use of DNSSEC in the email security space. Two major upcoming applications will drive this (a DMARC lookup sketch follows the list):
- DMARC which relies on the DNS for advertising policy information.
- Domain-based reputation system that relies on DKIM, which in turn relies on secure DNS use to advertise keys and policies.
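As a rough sketch of what "advertising policy information in the DNS" looks like in practice, the snippet below fetches and parses a _dmarc TXT record. It assumes the third-party dnspython package; the domain name is a placeholder, and real DMARC processing involves considerably more than this.

```python
import dns.resolver  # third-party dnspython package

def fetch_dmarc_policy(domain: str) -> dict:
    """Look up _dmarc.<domain> TXT and return its tag/value pairs."""
    answer = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    for rdata in answer:
        record = b"".join(rdata.strings).decode("ascii")
        if record.lower().startswith("v=dmarc1"):
            # e.g. "v=DMARC1; p=reject; rua=mailto:agg@example.org"
            return dict(tag.strip().split("=", 1)
                        for tag in record.split(";") if "=" in tag)
    raise LookupError(f"no DMARC record published for {domain}")

print(fetch_dmarc_policy("example.org"))  # e.g. {'v': 'DMARC1', 'p': 'none', ...}
```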
- e-Passports -- Authenticating users over the Internet using e-Passports
Over the past two years, electronic passports (e-passports) have been introduced in most countries of the world. An e-passport embeds a chip with cardholder details. While the introduction has raised privacy concerns, caused by the contactless nature of the communication and the sensitive nature of the biometric data it contains, it also presents a unique opportunity: it provides every citizen of the world with a strong authentication token within a global Public Key Infrastructure (PKI).
The technical standards which describe how to verify the authenticity of electronic passports are open and publicly available from the International Civil Aviation Organization (ICAO). Although likely not intended as such by ICAO, e-passports are ideal for authenticating users of Web services. The current proposal intends to build such an Identity 2.0 solution with open source software.
We propose to create a trustworthy identity solution that allows a user to use their e-passport for authentication at regular websites or web services (e.g. for e-government-like services). Such a solution may contain a browser plug-in that integrates the software developed in JMRTD with an open source identity selector (perhaps compatible with InfoCard). Additionally, the solution may require the establishment of a central server that acts as an identity provider (perhaps compatible with OpenID). A question that will need to be answered is to what degree end-users and service providers need to trust our identity provider (in the case of end-users: trust with respect to dealing with privacy-sensitive data).
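One small, concrete piece of the open ICAO machinery mentioned above is the derivation of the Basic Access Control keys from the machine-readable zone. The sketch below follows the general shape of that derivation but is hedged: the MRZ values are made-up samples and the 3DES parity-bit adjustment of the derived keys is omitted, so it is an outline rather than a conformant implementation.

```python
import hashlib

def bac_keys(doc_number_cd: str, birth_date_cd: str, expiry_date_cd: str):
    """Each argument is the MRZ field with its check digit already appended."""
    mrz_information = (doc_number_cd + birth_date_cd + expiry_date_cd).encode("ascii")
    k_seed = hashlib.sha1(mrz_information).digest()[:16]

    def derive(counter: int) -> bytes:
        # counter 1 -> encryption key, counter 2 -> MAC key
        return hashlib.sha1(k_seed + counter.to_bytes(4, "big")).digest()[:16]

    return derive(1), derive(2)

# Made-up sample values, not real passport data.
k_enc, k_mac = bac_keys("AB12345674", "8001018", "3101012")
```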
GNUnet is GNU's framework for secure peer-to-peer networking. The framework is designed to support a range of applications. The primary application at this point is anonymous and censorship-resistant file-sharing.
The main thrust of the proposed research is the design, implementation, deployment and evaluation of a secure, fully decentralized P2P routing protocol. Centralization increases operational costs, creating prominent targets for attacks and single points of failure as well as raising privacy concerns. The resulting network must be open, allowing new peers to join at any time. Adversaries are assumed to participate in the network, and the protocols must gracefully degrade in the presence of adversaries. Graceful degradation means that adversaries may only reduce the efficiency of network operations, and that this reduction in efficiency should be at most proportional to the resources available to the adversary.
Our quest for practical protocols also implies that the design must handle real-world constraints. In particular, we want to handle connectivity issues that arise on the Internet (for example, due to firewalls). We use the term restricted-route networks to describe networks with restrictions limiting direct communications between participants. The proposed protocol also addresses the possibility of peers leaving the overlay network abruptly, joining and leaving the network frequently, and the fact that the amount of resources available to peers can differ by a few orders of magnitude.
Our goal is to come up with adaptive protocols which adjust resource allocation based on automatically obtained network performance metrics that characterize the behavior of faulty or malicious nodes. Specifically, if an alternative path without faulty nodes exists, it must be possible for the routing algorithm to eventually discover it. The routing protocol must also be able to address disproportional consumption of resources. In particular, an adversary should not be able to issue a request that consumes more than a small constant factor of resources above the amount consumed by the normal operation of benign nodes. As a result, the proposed new protocol is able to prevent peers from launching asymmetric attacks, which leverage weaknesses in the system and magnify the damage caused.
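A minimal sketch of the kind of adaptive peer scoring described above: peers accumulate a reliability metric from observed behaviour, and next-hop selection prefers peers that have not behaved like faulty or malicious nodes, so an alternative path around a bad node is eventually discovered. The data structures and weighting are illustrative assumptions, not GNUnet's actual algorithm.

```python
from collections import defaultdict

class PeerScore:
    def __init__(self):
        self.successes = 0
        self.failures = 0

    def record(self, ok: bool):
        if ok:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def reliability(self) -> float:
        total = self.successes + self.failures
        return self.successes / total if total else 0.5  # unknown peers start neutral

class Router:
    def __init__(self):
        self.scores = defaultdict(PeerScore)

    def choose_next_hop(self, candidates):
        # A persistently failing (or malicious) peer keeps losing score, so the
        # routing decision drifts towards alternative paths when they exist.
        return max(candidates, key=lambda peer: self.scores[peer].reliability)

router = Router()
router.scores["peer-A"].record(True)
router.scores["peer-B"].record(False)
print(router.choose_next_hop(["peer-A", "peer-B", "peer-C"]))  # -> peer-A
```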
NLnet's contribution is used to pay a graduate student's salary for a full year (the university will waive tuition) to work on the implementation and evaluation of an improved routing algorithm for GNUnet. The routing algorithm will be implemented as a GNUnet service which means that many (existing and future) applications using the GNUnet framework will be able to take advantage of it. The specific proposed work is about a new routing algorithm that will support scalable and secure routing in a restricted-route topology.
GoogleSharing is a special kind of anonymizing proxy service, designed for a very specific threat. It ultimately aims to provide a level of anonymity that will prevent Google from tracking your searches, movements, and what websites you visit. GoogleSharing is not a full proxy service designed to anonymize all your traffic, but rather something designed exclusively for your communication with Google. The system is totally transparent, with no special "alternative" websites to visit. Your normal work flow should be exactly the same. A sketch of the header rewriting appears after the comparison lists below.
GoogleSharing is different from general anonymizing proxies:
- Most will mask your IP address, but not the identifying information in your HTTP headers. Google will still know who you are based on your Cookies, User Agent, etc.
- If the proxy does attempt to anonymize HTTP headers, it will do so by completely stripping cookies from your request. Google does not like this and will tag you as a spam bot (how convenient for them), which will force you to type in a CAPTCHA every time you issue a Google search, and will prevent you from issuing Maps requests at all.
- These types of proxies can be slow. It's not necessary to proxy all of your internet traffic if you're just trying to protect yourself from Google. Since GoogleSharing only proxies Google traffic, our bandwidth needs are much lower and thus our performance is much greater.
GoogleSharing is different from Google replacements:
- GoogleSharing does not require that users change their workflow by visiting different websites.
- GoogleSharing supports all Google services which don't require a login, so it does more than just anonymize search. As Google continues to expand its grasp of the internet, GoogleSharing will automatically expand with it, automatically anonymizing whatever new services emerge in a fully transparent way.
- GoogleSharing has the potential to be fully distributed. As we make the move towards distributing requests across multiple configured servers, this is a definite step in the direction of P2P.
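The header rewriting mentioned above can be pictured with the hedged sketch below: before a Google-bound request leaves the proxy, identifying headers are swapped for values from a shared pool of identities, so valid cookies are still sent, just not yours. The pool contents and rotation policy are illustrative assumptions, not the actual GoogleSharing implementation.

```python
import itertools

# A rotating pool of non-personal identities shared by all proxy users (illustrative).
SHARED_IDENTITIES = itertools.cycle([
    {"User-Agent": "Mozilla/5.0 (shared identity A)", "Cookie": "PREF=shared-a"},
    {"User-Agent": "Mozilla/5.0 (shared identity B)", "Cookie": "PREF=shared-b"},
])

def anonymize_request_headers(headers: dict) -> dict:
    """Strip personal identifiers and substitute a shared identity."""
    scrubbed = {k: v for k, v in headers.items()
                if k.lower() not in ("cookie", "user-agent", "referer")}
    scrubbed.update(next(SHARED_IDENTITIES))
    return scrubbed

print(anonymize_request_headers({"Cookie": "PREF=mine", "Accept": "text/html"}))
```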
- GSM-Sec -- GSM Security Project
The popular GSM cell phone standard uses outdated security and provides much less protection than its increasing use in security applications suggests. This project aims to correct the disconnection between technical facts and security perception by creating a GSM tool that allows users to record and analyze GSM data.
This project complements several other current open research projects into GSM technology. These projects (including OpenBTS, OpenBSC, and OsmocomBB) create open re-implementations of network equipment and handsets to make the technology more accessible and open. It builds on these insights and shows the security limits of the technology. The feedback loop, however, goes both ways: the record-and-decode tool, for example, will allow the OpenBTS base station to operate on multiple frequencies, thereby supporting more concurrent phone calls. The target audiences of the tools are security and radio researchers.
By Security Research Labs.
- HTTPS-Obs -- HTTPS Observatory
The project collects an Internet-wide dataset of all publicly visible TLS CA certificates in order to
- search for CA-certified Man In The Middle (MITM) attacks against HTTPS privacy and
- measure the extent to which browsers really need to trust 60-200 CAs completely.
Extended datasets measuring from multiple source networks (via Tor) and using SNI will also be collected. In collaboration with volunteers from security consulting firm iSEC Partners, EFF intends to write a program that accesses every Web server on the public IPv4 Internet running HTTPS on port 443. We will create a complete dataset of the certificates each server offers to visitors. Then we will analyze the data, comparing (a per-host collection sketch follows the list):
- Who is the Certificate Authority?
- For which domains is the certificate valid?
- Where is the machine issuing the certificate located?
- Who operates that network?
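A hedged sketch of the per-host collection step: connect to port 443, accept whatever certificate the server presents without trusting it, and record the fields listed above. It assumes the third-party cryptography package for parsing, and the hostname is a placeholder.

```python
import socket
import ssl
from cryptography import x509  # third-party "cryptography" package

def collect_certificate(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we want to see *any* cert, even a MITM's
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    return {
        "host": host,
        "issuer": cert.issuer.rfc4514_string(),    # who is the Certificate Authority?
        "subject": cert.subject.rfc4514_string(),  # for which domains is it valid?
        "not_after": cert.not_valid_after.isoformat(),
    }

print(collect_certificate("www.example.org"))
```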
With these data it will be possible to answer the following questions:
- How many CA services are used by publicly accessible sites? Which ones are rarely used?
- Can one find evidence of specific MITM attacks in the form of publicly visible attack servers (that victims in the wild would have been redirected to via DNS or other mechanisms) or in the form of network-layer attacks detected against our own survey machines? Concrete evidence would be useful for motivating browser developers to adopt more secure trust models.
- How many domains intentionally use more than one apparently legitimate, apparently valid certificate at the same time? (This impacts on the design of enhancements to the TLS trust model)
- How many sites in the wild show different valid certificates to users who come from different parts of the Internet?
- How many CAs are used primarily or exclusively in particular countries or DNS domains?
- Jitsi-FMJ -- Replacing JMF with FMJ
Jitsi became a focus project of NLnet as it offers a free, open and secure alternative to Skype and similar communication tools. Today it offers chat and audio/video calls over SIP and XMPP, and Jitsi is the only tool which does this in a secure way (using ZRTP) on all three major operating systems.
At the heart of Jitsi's media service lies Sun's Java Media Framework (JMF), which was not released under a FLOSS license. Free Media for Java (FMJ), founded by Ken Larson, is meant to be a free and open alternative to JMF.
The goal of this subproject is to continue the work on the FMJ project and take it to a stage where it can be used within Jitsi as a viable alternative to JMF. This would hugely benefit the community:
- It would essentially provide Java developers with an active, free media library.
- More importantly however, it will be an essential step toward porting Jitsi to other environments such as Android or porting it as a web application.
Ksplice is a new technology for protecting the security and reliability of machines on the network. Currently, all computer systems need to be rebooted regularly to apply OS updates, in order to be secure against potential attacks over the network. Ksplice makes it possible for system administrators and end-users to perform OS updates effortlessly, without a reboot. This project will make an open source Linux distribution be the first operating system in the world that does not require regular reboots for security updates.
This technology also has the potential to significantly hinder network attackers by reducing the window of vulnerability during which computer systems are running software with known problems.
Thus, Ksplice solves the underlying weakness in the system so that no malicious activity, no matter how it has been disguised, will be able to achieve its objective of compromising the system.
- Ksplice2 -- Ksplice for mainline Linux
With previous support from NLnet, Ksplice made Ubuntu, a free software Linux distribution, the first operating system in the world that does not require regular reboots for security updates. Ksplice Ltd has started providing rebootless OS updates to more than 10,000 users of Ubuntu - a significant step, but larger-scale deployment is needed in order for the technology to become truly mainstream.
The goals of this project are:
- to freely provide rebootless OS updates to 100,000+ users running the major community Linux distributions, and
- to get the Ksplice kernel software merged into the mainstream Linux kernel.
The NLnet support is used for the development required to get the Ksplice tool merged into the mainstream Linux kernel and for the development work on the Uptrack application required to freely bring rebootless updates to Fedora, the second most popular desktop Linux distribution behind Ubuntu. These initiatives are critical to the path of taking this open innovation to mainstream adoption. Specifically, getting Ksplice merged into the mainstream Linux kernel is the best way to ensure that Ksplice has the full support of the diverse Linux kernel community. This support will improve Ksplice's technical quality and encourage more people to trust and use Ksplice.
Bringing Ksplice beyond Ubuntu is necessary since so many Linux users use distributions other than Ubuntu. One of Linux’s strengths is the variety of choices that it provides, so it makes sense to provide Ksplice for many community Linux distributions rather than just one community Linux distribution. Fedora is the next step in this direction.
- Lantern -- DNSSEC in Lantern
The goal of Lantern - a censorship circumvention and monitoring-prevention tool - is to build an easy-to-use, secure, and indestructible tool to keep the internet open and unfettered for anyone in the world.
Lantern uses a P2P infrastructure, particularly the LittleShoot P2P stack, along with the LittleProxy HTTP proxy and the Smack XMPP client library. All of these utilize DNS in a number of areas. In environments where, for example, the government has access to and control over all network traffic in and out of the country, the authenticity of DNS records is of paramount importance.
This project aims to integrate DNSSEC into every DNS lookup in Lantern, including all DNS lookups in the LittleProxy, Smack, and LittleShoot sub-modules.
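One hedged way to picture "DNSSEC on every lookup" at the client side: set the DNSSEC-OK bit, ask a validating resolver, and refuse any answer that does not come back with the AD (authenticated data) flag. The sketch assumes dnspython and a validating resolver at the placeholder address; full in-process signature validation is more involved than this.

```python
import dns.flags
import dns.message
import dns.query  # all from the third-party dnspython package

VALIDATING_RESOLVER = "9.9.9.9"  # placeholder: any DNSSEC-validating resolver

def authenticated_lookup(name: str, rdtype: str = "A"):
    query = dns.message.make_query(name, rdtype, want_dnssec=True)
    response = dns.query.udp(query, VALIDATING_RESOLVER, timeout=3.0)
    if not (response.flags & dns.flags.AD):
        raise RuntimeError(f"{name}: answer was not DNSSEC-authenticated")
    return [r.to_text() for rrset in response.answer for r in rrset]

print(authenticated_lookup("example.com"))
```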
- Mailman-SSLS -- Mailman Secure List Server
Currently, there is no re-encrypting mailing list manager with support for both PGP and S/MIME. Mailman is the most popular Open Source mailing list manager. The Secure List Server project "mailman-pgp-smime" aims to include OpenPGP and S/MIME support in Mailman, the GNU Mailing List Manager.
Adding re-encryption will enable groups of people to cooperate and communicate securely via email: mail can be distributed encrypted to a group of people, while the burden of managing individual keys is dealt with by the list software, not the sender. Furthermore, authentication is possible: the list server software takes care of checking this. This way, strong security for groups of people becomes available to a wide audience.
This project will publish a patch for the official Mailman distribution. This patch handles both RFC 2633 (S/MIME) and RFC 2440 (OpenPGP) email messages.
A post will be distributed only if the PGP (or S/MIME) signature on the post is from one of the list members. For sending encrypted email, a list member encrypts with the public key of the list. The mailing list server will decrypt the posting and re-encrypt it with the public keys of all list members, as sketched below.
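The decrypt-verify-re-encrypt flow can be sketched as follows, assuming the python-gnupg wrapper and a keyring that already holds the list's key pair and all subscriber public keys. The keyring path and key identifiers are placeholders; the actual mailman-pgp-smime patch hooks into Mailman's delivery pipeline rather than working as a standalone function.

```python
import gnupg  # third-party python-gnupg wrapper around GnuPG

gpg = gnupg.GPG(gnupghome="/var/lib/mailman/gnupg")  # hypothetical keyring location

def redistribute(raw_post, list_key_fpr, passphrase, member_fprs):
    decrypted = gpg.decrypt(raw_post, passphrase=passphrase)
    if not decrypted.ok:
        raise ValueError("could not decrypt the post with the list key")
    # Only accept posts whose signature comes from a known subscriber key.
    if decrypted.fingerprint not in member_fprs:
        raise PermissionError("signature is not from a list member")
    # Re-encrypt the plaintext to every subscriber, signed by the list key.
    encrypted = gpg.encrypt(str(decrypted), member_fprs,
                            sign=list_key_fpr, passphrase=passphrase)
    if not encrypted.ok:
        raise ValueError("re-encryption failed")
    return str(encrypted)
```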
In order to achieve this, each list has a public and private key. (The private keys can optionally be protected by passphrases.) Furthermore, new list settings are defined:
- gpg_postings_allowed: is it allowed to send to this list postings which are encrypted with the GPG list key?
- gpg_msg_distribution: are subscribers allowed (or even forced) to upload their GPG public key in order to receive all messages encrypted?
- gpg_post_sign: should posts be GPG signed with an acknowledged subscriber key before being distributed?
- gpg_msg_sign: should the server sign encrypted messages?
Similar settings are defined for S/MIME. Finally, each subscriber can upload her PGP and S/MIME public keys using the Mailman web interface.
- NoScript-Andr -- Android Native NoScript
NoScript is a popular GPL add-on for Firefox and other Mozilla Gecko-based browsers which increases the web client security in several innovative and ground-breaking ways.
NoScript was extensively supported by NLnet, currently has almost 3 million active users, and has pretty much no competitors. That is because it goes far beyond simple script blocking, having established itself as the "ultimate" security enhancement for the web browser, even though it is available on Mozilla Gecko-based browsers only.
Unfortunately, no NoScript equivalent is available on mobile platforms yet; providing one is the intended final result of this project.
- NoScript-Mob -- NoScript Mobile
NoScript is a popular GPL add-on for Firefox and other Mozilla Gecko-based browsers, which considerably increases the web client security in several innovative and ground-breaking ways. Numerous useful features make NoScript the most advanced browser security tool, used and respected by most web security experts and serving as an example and an inspiration for safety enhancements which are slowly finding their way in mainstream web browser technologies.
The way people use the web is steadily moving towards mobility: we've got smart phones rivaling in power and usability with desktop PCs, and open source mobile OSes, like the Debian-derivative Maemo by Nokia or, even more prominently, Google's Android, which open exciting scenarios but also pose significant challenges.
The challenge NoScript wants to accept and win is bringing the safest web browsing experience on the mobile platforms. In order to achieve this, NoScript will be re-designed and re-implemented to be compatible with the latest Firefox Mobile versions, which run both on Android and Maemo devices, trying to retain as much as possible of its core components and functionality.
- NoScript-Mob2 -- NoScript Mobile part 2
NoScript is a popular GPL add-on for Firefox and other Mozilla Gecko-based browsers which considerably increases the web client security in several innovative and ground-breaking ways. Numerous useful features make NoScript the most advanced browser security tool, used and respected by most web security experts and serving as an example and an inspiration for safety enhancements which are slowly finding their way in mainstream web browser technologies.
This project is the follow up of the first NoScript Mobile project, and will implement specific components: XSS Filter, ClearClick, Mobile-friendly Setup Interface, Remote Synchronization, ABE component (Application Boundaries Enforcer).
- NoScriptABE -- NoScript ABE-component
The Application Boundaries Enforcer (ABE) module will attempt to harden the web-application-oriented protections already provided by NoScript with a firewall-like component running inside the browser.
This project is specifically focused on developing a new web browser component called ABE, aimed to mitigate or defeat Cross Site Request Forgery (CSRF) attacks against sensitive web applications. This component will be built on the existing request interception, tracing and blocking framework of NoScript, and it will be integrated in NoScript's broader web security infrastructure, together with whitelist-based scripting, active content execution policies, anti-XSS filters, ClearClick anti-ClickJacking protection and HTTPS/Secure Cookies enhancements. After a working ABE implementation as a NoScript component gets completed, a refactoring and repackaging activity to deploy it as a separate “ABE Firefox Add-On” will be done.
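The firewall-like idea can be pictured with the hedged sketch below: state-changing requests towards a protected application are allowed only from origins that application trusts, which is the essence of blocking CSRF. The rule table and matching logic are an illustrative simplification, not NoScript's actual ABE rule syntax.

```python
from urllib.parse import urlparse

# Protected destination host -> origins allowed to send state-changing requests.
RULES = {
    "intranet.example.local": {"https://intranet.example.local"},
}

def allow_request(method, destination_url, origin=None):
    host = urlparse(destination_url).hostname
    allowed_origins = RULES.get(host)
    if allowed_origins is None:
        return True                   # destination not covered by any rule
    if method.upper() in ("GET", "HEAD"):
        return True                   # treat safe methods as harmless here
    return origin in allowed_origins  # cross-site POSTs etc. are rejected

print(allow_request("POST", "https://intranet.example.local/transfer",
                    origin="https://evil.example.com"))  # -> False
```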
- OSN-PPCP -- OSN Privacy
Today online social networks (OSNs) have become an indispensable platform for internet users to find friendship and share information. However, users are pretty much electronically naked in any OSN: (1) a user's data is in the clear to the OSN service provider, and can be accessed by many other parties without any consent; (2) a user's activities are under surveillance by the OSN service provider.
Numerous privacy breaches have been reported, often with disastrous consequences to the user concerned, such as getting fired by the employer, getting rejected from a job application, even leading to suicide. To mitigate the problem, most OSN service providers provide some privacy controls to users to protect their information. However, this is not the antidote and will never be, because the aforementioned problems (1) and (2) still remain.
This project will design and implement a privacy-preserving communication protocol to mitigate problems (1) and (2). In more detail, it will achieve the following features (a toy matching sketch follows the list):
- A user always keeps his private data in encrypted form.
- Two users can match each other based on their respective private data sets, without revealing anything.
- Two friends who share some common private data can communicate in private. The communication will remain private against the OSN service provider and other users.
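A toy illustration of matching on private data without revealing it, using commutative blinding: each side hashes its items and blinds them with a secret exponent, the doubly-blinded values of a shared item coincide, and neither the other user nor the relaying provider sees the raw data. The small Mersenne-prime modulus makes this insecure; it only shows the shape of such a protocol, not the one the project will actually use.

```python
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime used as a toy modulus; far too small for real security

def h(item: str) -> int:
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(values, secret):
    return {pow(v, secret, P) for v in values}

alice_secret = secrets.randbelow(P - 3) + 2
bob_secret = secrets.randbelow(P - 3) + 2
alice_items = {"alice@example.org", "+31-600000001"}
bob_items = {"bob@example.org", "+31-600000001"}

alice_once = blind({h(x) for x in alice_items}, alice_secret)  # sent to Bob
bob_once = blind({h(x) for x in bob_items}, bob_secret)        # sent to Alice

# Each side blinds the other's values a second time; because the blinding
# commutes, doubly-blinded values match exactly for shared items.
matches = blind(alice_once, bob_secret) & blind(bob_once, alice_secret)
print(len(matches))  # -> 1 (the shared phone number); nothing else is learned
```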
This project is about the OV-chipkaart, a single national chipcard for all public transport in the Netherlands, which is similar to London's Oyster card or Hong Kong's Octopus card. It is a proprietary solution being introduced by Trans Link Systems (TLS), a consortium of public transport companies. Currently the OV-chipkaart is being tested in practice in and around Rotterdam and Amsterdam. National introduction has been postponed a couple of times, but is now foreseen in 2009.
Early 2008 the OV-chipkaart has come under heavy attack because of both security and privacy concerns:
- Individual travel movements are collected centrally and will be used for direct marketing purposes. The Dutch Data Protection Authority (College Bescherming Persoonsgegevens, CBP) has therefore described the approach as: not in accordance with the law (CBP report).
- The cryptographic protection in the Mifare Classic chipcard, used in the personalised cards, is broken.
- The throw-away cards have been cloned, enabling free travel.
- Very little is known about how the system actually works, and about how (private) data are protected.
The aims for this project are twofold:
- On the one hand, to concentrate on documenting the current OV-chipkaart system and build a public repository of knowledge: factual information about the design, strengths and weaknesses of the current system, and an explanation of all the things that have been in the news since roughly January 2008.
- On the other hand, experiment with the card in order to transparently develop a new system from scratch in which RFID technology is used for ticketing in public transport. Using an open design process, the design criteria and the quality of the solutions can be evaluated by a broad audience, including scientists, hackers, but of course also stakeholders such as transport companies. This process may eventually result in an open standard.
- RFID Guardian -- RFID Guardian Quick Start Action
This Project intends to accelerate hardware prototyping of the RFID Guardian Project. All people getting in touch with the RFID technology, i.e. buyers and users of virtually any goods sold, shall have means to manage the information which is sampled and uncontrollably transmitted by the RFID chips.
The RFID Guardian is a battery-powered device that represents the first-ever unified platform for RFID security and privacy administration. The RFID Guardian acts as an "RFID Firewall", enabling individuals to monitor and control access to their RFID tags by combining a standard-issue RFID reader with unique RFID tag emulation capabilities. Additionally, the RFID Guardian is useful as an RFID security diagnostic and auditing tool.
This "RFID Guardian Quick Start Action" project is intended to bootstrap the larger RFID Guardian project. It is also intended to place the Quick Start Action in a larger context, and in this helping to transform the concept of the RFID Guardian into a commercial open-source hardware product.
- RFID Guardian(2) -- RFID Guardian Development
The RFID Guardian is a battery-powered device that represents the first-ever unified platform for RFID security and privacy administration. The RFID Guardian acts as an 'RFID Firewall', enabling individuals to monitor and control access to their RFID tags by combining a standard-issue RFID reader with unique RFID tag emulation capabilities. Additionally, the RFID Guardian is useful as an RFID security diagnostic and auditing tool.
The RFID Guardian Project is focused upon providing security and privacy in Radio Frequency Identification (RFID) systems. The goals of the project are to:
- Investigate the security and privacy threats faced by RFID systems
- Design and implement real solutions against these threats
- Investigate the associated technological and legal issues
Samizdat is intended, in part, as a tool for activists -- or, generally, for anyone who desires secure communication with others who lack the computer literacy (or merely patience) to configure public key cryptography or VPNs. Samizdat would also be useful to give an outsider access to a network without being easily detected; for example, it could facilitate document leaking.
Samizdat is a LiveCD intended primarily to make public key cryptography accessible: to distribute public keys securely, and to pre-configure various applications of cryptography, especially VPN-based applications.
Samizdat LiveCDs are self-replicating, with the replicated systems not being identical: each instead carries the others' public keys and various other information. The replicated systems automatically become nodes on a VPN. The LiveCD serves as a secure boot medium for a fully-functional, fully-encrypted persistent system.
This project integrates many existing projects: Tor, Onioncat, GPG, LUKS, Git and others.
- Seahorse SmartCard -- Seahorse Smart Card Support
Smart cards provide solid, tamper-resistant security. When used with modern web authentication technology, they can provide protection against phishing and can also solve other problems facing one's identity on the web today. But desktops ignore their existence.
In order to get things rolling with better smart card support on the Desktop, users and developers need simple access to smart card technology. Seahorse is a key manager that's used on the GNOME Desktop. Currently it can manage stored passwords, PGP, and SSH keys. This project will add smart card support to the Seahorse key manager.
This project will implement basic management of certificates and keys stored on smart cards in the Seahorse key manager. Users will be able to examine and use their smart card with the same management operations as available to certificates and keys stored in software key tokens.
- SelfDef -- Online self-defence
The Bits of Freedom foundation is developing an "Online Selfdefense in ten minutes" tool. Many people use the Internet carelessly and are not aware that such behavior entails risks for their privacy. And those who are familiar with these kinds of risks often think that it is too difficult to do something to defend their privacy.
This guide provides every Internet user with a simple set of measures to protect themselves on the Internet in ten minutes. For more advanced users the guide provides links to specific tools for self-protection of their Internet surfing, email, social media applications, IP telephony and file sharing.
- Tor hidden services -- Tor anonymity system Hidden Services
The Tor Anonymity System's key functionality 'Hidden Services' allows users to set up anonymous information services (like websites) that can only be accessed through the Tor network and are therefore protected against identification of the host that runs the services.
Using these Hidden Services, critical political and human rights information can be published in a way that both the publisher and users of the service are protected from identification. The current version of Tor Hidden Services has a number of drawbacks that hamper the active use of this important feature. The most serious limitation is the performance: the time it takes until a Hidden Service gets registered in the network and the latency of contact establishment when being accessed by a user. Due to design issues in the original Tor protocol, the connection to a new Hidden Service can take several minutes, leading most users to give up before the connection has been established. Using the Tor Hidden Services for direct interactive user-to-user communication (like for instant messaging) is nearly impossible due to this high latency in the Hidden Service circuit setup.
An evolution of the Tor protocol is proposed to speed up the Tor Hidden Services. The improved protocol will change the way circuits are set up. The end goal is to have the protocol change production ready and propagated to the Tor users within nine months. The resulting software will be published under the GPL license, like the rest of the Tor code. All deliverables will be fully public.
- Tor low-bandwidth -- Tor for low-bandwidth users
The Tor anonymity system is currently only usable by internet users with high-bandwidth connections. Upon start of a Tor client, a large file with all Tor server descriptions is being downloaded. This "Tor Directory" file enables the client to pick from the available mix-servers in the Tor network. This Directory file is too large for users on modem lines or on mobile data networks (like GPRS) as it gets downloaded each time a user logs in, taking 10 to 30 minutes over a slow connection. Therefore, Tor is not usable by modem and mobile users.
One of the major goals of the Tor project is to provide secure anonymous internet access to users in repressive states. These locations often have very slow internet connections to the outside world. By enabling these users to use the Tor network, significant progress can be made towards free communication and free information in these countries.
An evolution of the Tor protocol is proposed to reduce the initial download size. The new Tor protocol version should change the way a client receives the information for its Tor circuit setup in such a way that the initial download can be performed over a slow modem line in less than three minutes.
The work to be conducted under the proposal is split into two major deliverables, with the end goal of having the protocol change production ready and propagated to the Tor users within a timeframe of less than 8 months. The resulting software will be published under the GPL license, like the rest of the Tor code. All deliverables will be fully public.
Turtle aims at the creation of a peer-to-peer (P2P) infrastructure for safe sharing of sensitive data. The truly revolutionary aspect of Turtle rests in its novel way of dealing with trust issues. Where other P2P architectures attempt to build trust relationships on top of a trust-agnostic P2P overlay, Turtle builds its overlay on top of pre-existent trust relationships among its users. This allows both data sender and receiver anonymity. At the same time, it protects each intermediate relay in the data query path against liability. Furthermore, its trust model should allow Turtle to withstand most of the denial of service attacks that plague other peer-to-peer data sharing networks.
The web is not as open as it used to be: big monopoly platforms have formed new proprietary layers on top of it. This project breaks the "you get our app, we get your data" package deal by providing a cross-origin data storage protocol, thus separating data servers from application servers.
More and more applications are hosted online and force users to put their data onto servers where applications run. Apart from our data being locked inside a place we don't have control over, many websites sell the data to third parties. This is a huge emergency in terms of consumer rights. Unhosted improves the web infrastructure by separating web applications from your data:
- You can store your data remotely anywhere, preferably encrypted;
- Unhosted apps, which are web applications, will run locally in your browser.
The project will define a standard and submit it to W3C.
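A hedged sketch of what this separation looks like from an application's point of view: the app only speaks a small cross-origin storage API, here idealised as HTTP GET/PUT with a bearer token, so the same app works against whichever storage provider the user picks. The endpoint, token and paths are placeholders, not the protocol the project will standardise.

```python
import requests

STORAGE_ROOT = "https://storage.example.net/alice"  # chosen by the user, not by the app
TOKEN = "user-granted-access-token"                  # obtained out of band

def put_document(path: str, body: bytes, content_type: str = "application/json"):
    r = requests.put(f"{STORAGE_ROOT}/{path}", data=body, headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": content_type,
    })
    r.raise_for_status()

def get_document(path: str) -> bytes:
    r = requests.get(f"{STORAGE_ROOT}/{path}",
                     headers={"Authorization": f"Bearer {TOKEN}"})
    r.raise_for_status()
    return r.content

put_document("todos/1", b'{"title": "buy milk"}')
print(get_document("todos/1"))
```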
- Unhosted-2 -- Unhosted
Unhosted is an approach to the "cloud" opposite to the current web2.0 trend: it separates the user data from the application, rather than putting user data "into" the application. This leads to much better privacy management.
End-users of "cloud" capable applications use Unhosted directly, they don't have to do anything special for that - just need to log in to remoteStorage enabled applications using their remoteStorage-enabled email address.
As an example, all Dutch students and academic staff already have remoteStorage connected to their university email addresses. Now the target community is web developers. They need to enable their applications so that they accept login with remoteStorage.
In contrast to other projects (which usually create one product with one function and offer it as free software of which everyone can run their own server, like Diaspora, MediaGoblin, ownCloud, etc.), Unhosted aims for a generic storage server. Everyone just needs a bit of very simple and dumb cloud storage, with no application-specific features. Cloud storage becomes an interchangeable commodity, and the market of useful cloud applications becomes entirely separate from the market of reliable cloud storage.
Currently, XSS is one of the most widespread vulnerabilities in Web applications. Incorrect filtering and the appearance of new, increasingly sophisticated techniques make protection a complex and time-consuming task.
Cross Site "Scripter" aka XSSer, is an open source penetration testing tool that automates the process of detecting and exploiting XSS injections in different applications. It contains several options to bypass certain filters, and various special techniques of code injection. It makes possible to test an application on vulnerabilities to Cross Site Scripting (XSS) attacks.
The XSSer tool aims to automate these complex application security testing tasks.
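The most basic reflected-XSS probe that such a tool automates can be sketched as follows: inject a unique marker wrapped in a script tag into a parameter and check whether the response echoes it back unescaped. The target URL and parameter name are placeholders, and such probes should only be run against systems you are authorised to test.

```python
import uuid
import requests

def probe_reflected_xss(url: str, param: str) -> bool:
    marker = uuid.uuid4().hex
    payload = f"<script>alert('{marker}')</script>"
    response = requests.get(url, params={param: payload}, timeout=10)
    # If the payload comes back verbatim, the parameter is not being escaped.
    return payload in response.text

print(probe_reflected_xss("http://testsite.example/search", "q"))
```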
Run by R.C. Merida (psy)
A decision tree analysis is a specific technique in which a diagram (in this case referred to as a decision tree) is used to assist the project leader and the project team in making a difficult decision. The decision tree is a diagram that presents the decision under consideration and, along different branches, the implications that may arise from choosing one path or another. A decision tree analysis is often conducted when a number of future outcomes or scenarios remain uncertain, and it is a form of brainstorming which, during decision making, can help to assure that all factors are given proper consideration. The decision tree analysis takes into account a number of factors including the probabilities, costs, and rewards of each event and decision to be made in the future. The analysis also uses expected monetary value (EMV) analysis to assist in determining the relative value of each alternate action.
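A small worked example of the expected monetary value calculation, with made-up probabilities and payoffs: each branch of the tree sums its probability-weighted outcomes, and the branch with the highest EMV is the indicated choice.

```python
def expected_monetary_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one decision branch."""
    return sum(p * payoff for p, payoff in outcomes)

decision_branches = {
    "build new plant":   [(0.6, 200_000), (0.4, -120_000)],  # strong vs. weak demand
    "upgrade old plant": [(0.6, 90_000),  (0.4, -10_000)],
}

for name, outcomes in decision_branches.items():
    print(name, expected_monetary_value(outcomes))
# build new plant   -> 72000.0
# upgrade old plant -> 50000.0 (so building the new plant has the higher EMV here)
```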
This term is defined in the 3rd and the 4th edition of the PMBOK.
Yale Alumni Magazine
May/June 2006 (Cover story)
Charles Darwin never thought he could witness evolutionary change. He relied instead on indirect clues. He looked at its effects after millions of years -- in the fossil record and in the similarities and differences among living species. He got clues to the workings of evolution from the work of pigeon breeders, who consciously chose which birds could reproduce and thus created birds with extravagant plumage. But that was artificial selection -- not natural selection that had been operating long before humans came on the scene. Darwin was pretty sure that natural selection worked too slowly for him or anyone else to witness.
Darwin got a great many things right, but on this score, he was most definitely wrong. Just ask Paul Turner. In his lab at the department of ecology and evolutionary biology, Turner and his colleagues watch evolution play out in a matter of days. They observe organisms acquire new traits, adapt to new habitats, and become new species in the making.
Turner can hold one of these experiments in his hand. It's a sealed petri dish. "So here we have a lawn of bacteria," he says, gesturing to a cloudy smear in the dish. Then he points out a large clear spot in the middle of the lawn, where millions of the bacteria have died. "That's a beautiful example of a plaque," he says. There's a tinge of admiration in his voice. The plaque was made by an organism Turner is particularly fond of: a virus known as phi-6. The virus invades bacteria and uses their cellular machinery to make hundreds of copies of itself. The bacteria rupture, and the new viruses escape. As the bacteria die, they leave behind a clear spot.
The plaques are evidence of the virus's staggering powers of reproduction. In a day, a single virus can produce a billion offspring. From generation to generation their genes mutate, creating opportunities for the viruses to evolve. They evolve so quickly, in fact, that scientists can set up experiments to test ideas about how evolution works. And because viruses carry just a handful of genes, scientists can identify exactly which mutations provide an evolutionary edge. "They're really one of the few organisms we can study in the lab from nuts to bolts," says Turner. "We can see the molecular changes."
Turner has become a leader in a relatively young field: experimental evolution. He is using bacteria and viruses -- especially phi-6 -- to investigate some of the most profound questions about life on Earth. How do new species emerge? Why do so many species reproduce sexually, when they could just clone themselves? Why do organisms evolve into peaceful cooperators in some cases and ruthless competitors in others?
In many of these experiments, Turner and his colleagues are searching for rules that may govern the evolution of all living things. But the research also has a practical side. While phi-6 infects bacteria, viruses similar to phi-6 like to infect humans -- including HIV and influenza. New viruses such as SARS are also now emerging. Their emergence is a case of evolution in action: the viruses acquire mutations that allow them to shift from an animal host to humans. Turner's research may help scientists better understand how that transition happens. "Within our lifetime we're going to see more and more viruses shift onto humans," he says. "What are the next likely pathogens to emerge? That's something we'd like to predict."
Turner, now 39, started out with an interest in larger fauna and flora. He grew up in upstate New York, where he loved to wander through the forests. "I was the kid who always liked to go to the zoo," he says. As an undergraduate at the University of Rochester he thought he might like to be a biologist, and in 1989 he started graduate work at the University of California-Irvine. It was when he got to know Richard Lenski that he made the jump from macroscopic to microscopic.
Lenski had started his career studying beetles in North Carolina forests, but he had been frustrated by how long it took to run experiments to tease out the forces controlling their populations. Many of the same basic forces also govern the existence of microbes, which breed far faster. And microbes are an ideal lab organism -- so small and fast-breeding that scientists can run many trials of the same experiment simultaneously to make sure their results are valid.
So Lenski began running experiments on harmless strains of the gut bacteria Escherichia coli. In one series of experiments, he founded 12 colonies from the genetically identical offspring of a single microbe. Each colony was allotted only a meager supply of glucose. Lenski expected that, with food so scarce, natural selection would favor individuals that grew faster than others. He froze samples of bacteria from many generations; he would thaw them out later to compare them with their descendants. The experiment is still running today, some 40,000 generations later. The bacteria in all 12 colonies have evolved to the point where they can reproduce nearly twice as fast as the microbe Lenski started out with.
Lenski (who now teaches at Michigan State University) interviewed Turner at Irvine when he was a prospective student. "Paul had some ideas about investigating carrion -- rotting meat -- and some of the interesting ecological interactions that might take place there," he remembers. "Anyone who was attracted to that system, I figured, must be more interested in the questions themselves than in nature and all its beauty." Lenski suggested that hauling rotting meat into a laboratory might make for difficult experiments, and described his own work. Turner was immediately interested. "I knew then that Paul was my kind of scientist," Lenski says.
At first, admits Turner, "the faith you put into working with things you cannot see was a very foreign concept for me." But he started running experiments on E. coli to track how genes were gained and lost over the generations. It was only when he had almost finished graduate school that he shifted his research down to the even smaller scale of viruses. A scientist named Lin Chao, then at the University of Maryland, gave a talk at Irvine about his research on phi-6. Chao was using the virus to answer a particularly deep and difficult question in biology: why does sex exist?
Although sex is the only way humans naturally reproduce, some other species do well enough without it. Whiptail lizards in the southwestern United States, for example, are all female. Their eggs require no sperm to begin developing into healthy baby lizards; in essence, they just clone themselves. The late great biologist John Maynard Smith once pointed out that sex should put organisms at an evolutionary disadvantage. It takes two individuals to reproduce sexually, but just one to clone. Over a few generations, that difference should allow a population of cloners to become far bigger than one of sexual reproducers. "Our problem is to explain why sex arose, and why it is today so widespread," Maynard Smith wrote in 1999. "If it is not necessary, why do it?"
Viruses, Chao recognized, would allow scientists to explore this question like never before. Billions of them can reside on a dish, they reproduce quickly, and some, including phi-6, sometimes engage in their own sort of sexual reproduction.
When a single phi-6 invades a host cell, it makes clones of itself. Its genetic material is inserted into the host, and the host begins producing copies of the virus's genes and pieces of the virus's protective protein shell. These chunks of genes and shell float around inside the microbe before assembling themselves into new viruses. All the new viruses are clones of the original invader, differing only by whatever mutations emerged as their genes were produced.
If two or more phi-6 viruses invade the same cell at the same time, their host produces new copies of both sets of genes -- which can then mix together. The new viruses carry combinations of genes from the original invaders. In other words, the new viruses have two (or more) parents. They are the product of viral sex.
Turner was fascinated by the way Chao was using viruses to study a big evolutionary question. And he was struck by the fact that phi-6 could serve as a model for similar viruses that have sex and infect humans -- viruses such as influenza and HIV. "I'm living in southern California, and the AIDS crisis is starting to become a big deal," he recalls. "It was becoming clear that HIV was doing major damage. It all dovetailed."
Turner joined Chao's lab as a postdoctoral student. They proceeded to design a series of experiments to explore the interplay of sex and evolution in phi-6. They created lines of promiscuous viruses and celibate ones. To make celibate viruses, they kept the ratio of viruses to hosts low, so that each microbe was invaded by just one virus. For the promiscuous line, they made sure the viruses outnumbered their hosts, so that each microbe was infected on average by five different viruses. They allowed the viruses to invade new hosts and replicate for many generations. They then measured how quickly the evolved lines of viruses could replicate compared with their ancestors.
Turner and Chao discovered that the promiscuous viruses sped up their replication. Turner suspects that one important factor behind their success was their ability to trade genes. Imagine that two viruses invade a single microbe. One of them carries an inferior gene that slows down its replication, and the other virus is slowed down by a different inferior gene. When they invade their host, they can produce a superior virus by combining their good genes and leaving the bad ones behind.
Viruses may not be alone in benefiting from stripping out bad genes with sex. Other researchers have been examining sexual reproduction in other organisms, and they've found similar patterns. Susanne Paland and Michael Lynch at Indiana University recently published a study on water fleas. Some species of water fleas reproduce sexually, while some do not. Paland and Lynch discovered that the asexual water fleas accumulated harmful mutations four times faster than the sexual ones. This sort of genomic hygiene may have played a role in the evolution of our own distant ancestors as well, as they shifted permanently to sexual reproduction.
But the mystery of sex is far from solved. Turner and Chao's work is proof of that. Viruses that have lots of sex, their experiments revealed, evolve into cheaters. Natural selection favored viruses that could use the proteins made by other viruses in the same cell. By exploiting their neighbors, these cheaters could put more resources into reproducing quickly. "Why do something for yourself, if you can get someone else to do it for you?" says Turner.
Cheating is a classic puzzle of science. In 1968, the ecologist Garrett Hardin wrote an influential essay known as "The Tragedy of the Commons." Hardin asked his readers to picture a pasture open to all the herders in the region. The rational choice for each herder would be to add more animals to his herd. But since all the herders are increasing their herds, they're making a collective demand on the commons larger than it can support. The herders might try to stave off destruction of the commons by limiting their herds. But this solution can easily come undone, since individuals may still be tempted to cheat. "Ruin is the destination toward which all men rush, each pursuing his own best interest," Hardin wrote.
The evolutionary parallel might be a species of birds that live on a remote island, eating seeds from a single species of plant. For their long-term survival, it would make sense for the birds not to gorge themselves on the seeds and drive the plant extinct. But natural selection cannot shape instincts to reach some long-term goal. It can only shape the behavior of individuals based on their reproductive success.
Turner and Chao demonstrated this in their virus work when they showed that too much sex may be a bad thing (at least from an evolutionary point of view). Viruses that had evolved with lots of sex, they found, became too good at cheating. When viruses that had adapted to a promiscuous life were forced to reproduce on their own, they reproduced far more slowly.
"It's a beautiful study," comments Lenski. "It's like the tragedy of the commons on a microscopic scale."
Unlike other life forms, viruses lack the means to reproduce themselves. Viruses typically get into their hosts by latching onto proteins on the surface of cells and managing to gain passage inside. Different species have different proteins on the surface of their cells. If a virus's key doesn't fit a species' lock, it cannot make that species a host.
Turner is intrigued by how that key sometimes changes. A virus may mutate in such a way that allows it to slip into cells of another species. It often takes many of these mutations for a virus to complete such a transition. But it's clear that viruses do manage to make the transition fairly frequently. Influenza viruses reside in birds and other animals; when a strain evolves the ability to spread quickly from human to human, it can become a pandemic -- like the 1918 Spanish flu pandemic, which killed 20 million people before it was over. Scientists are now watching a new strain of bird flu spread across the world, acquiring mutations that allow it to infect humans. It still can't spread from human to human, but it may be just a few mutations away.
Meanwhile, we are also encountering entirely new diseases thanks to host-shifting viruses. HIV-1 began as a chimpanzee virus. Hunters likely contracted the virus through cuts, and while most of the viruses died off, a few survived. In the 1930s strains of the virus began establishing themselves in humans, and eventually became specialized on our own species. SARS Coronavirus appears to have emerged from civet cats sold in Chinese markets.
It's just going to get worse, Turner predicts. "As the human population continues to grow, we're a target. We're also creating agricultural landscapes where there were wild landscapes. We're driving native species out into the open, and those native species can be reservoirs for viruses."
As serious as the threat of emerging viruses is, scientists still know relatively little about how viruses shift hosts. They cannot, for example, confidently predict which viruses in animals are most likely to colonize human hosts in the future. They still need to understand some of the basic rules of this particularly dangerous sort of evolution.
Turner believes that phi-6 can shed light on some of those rules. Like HIV, it has its own host of choice. The strain that Turner studies lives on plant bacteria called Pseudomonas syringae. Turner is carrying out experiments to see which conditions favor its shift to other bacteria. "Right now we're at the early stages of looking at those questions," he says.
Turner's graduate student Siobain Duffy has been studying phi-6 to track the earliest stages of host-jumping. Previous research suggested that viruses face some serious challenges in jumping from one host to another. Instead of a clean leap, viruses apparently had to make a gradual shift. Early in a transition, the virus needed to live in both its new host and its old one. Yet being a jack-of-all-trades may not make much evolutionary sense for a virus. A mutation that made a virus able to invade a new species might interfere with its ability to invade its traditional host. Many studies on evolution hinted at such trade-offs.
Duffy began to search for signs of a trade-off. She prepared lawns of bacteria from 14 different types of Pseudomonas. She then added phi-6 to their dishes and allowed the viruses to infect their new hosts for one day. The vast majority of viruses failed miserably at the task. But Duffy identified 30 mutant viruses that succeeded in creating plaques.
Duffy then looked at the genes of the mutants. She focused on a gene that encodes a protein called P3. The virus uses its P3 protein to attach to its host. Since each type of host has different proteins on its surface, it seemed likely that P3 would undergo mutations in viruses that could attach to new hosts. She discovered that each of the host-shifters did indeed carry a mutant P3 gene. Remarkably, the mutant genes differed only by a single "letter" from the normal code. That's all it takes for phi-6 to invade a new host: one random mutation in a single gene could do it. And while the host species Duffy studied were all in the genus Pseudomonas, many are separated by millions of years of evolution.
All told, Duffy identified nine mutations that allowed host-shifting. In some cases only one virus carried a particular mutation; in others, nine shared the same one. To measure the cost of these mutations, Duffy then infected the original Pseudomonas host with viruses carrying each of the nine mutations. When she checked how quickly they reproduced, she found that seven out of the nine mutations caused the viruses to grow more slowly on their original host. The discovery, which she and Turner and their colleagues reported in the journal Genetics earlier this year, marks the first time that scientists have precisely measured the cost of being a jack-of-all-trades.
But the other two mutations go against conventional wisdom: phi-6 strains with these mutations can still grow quickly on their old host. In other words, sometimes a virus can be a jack-of-all-trades for free. Duffy also discovered another unexpected result: some mutations discovered in a virus infecting one new host could also let it infect another new host that it had not yet seen.
Does this mean that we're vulnerable to any virus with a host-shifting mutation? Not quite, says Turner. On their own, these sorts of mutations are not enough to allow a virus to spread into a new species: "It has to mutate and sustain itself long enough to take off." Turner and his students have been closely observing one host-shifting strain of phi-6, and they find that it grows ten times slower in the new host than the old one.
Slow growth can put a new strain of virus at risk. If the viruses aren't producing enough offspring, they might not find new hosts to infect, and the new strain could become extinct. But Turner and his students have now shown that we shouldn't take too much comfort in that fact.
Turner and postdoctoral researcher John Dennehy are exploring what it takes for a new strain of virus to survive this dangerous passage. Dennehy put a host-shifting strain to the test by forcing it to shift back and forth, between its old and new hosts, four times.
The virus proved remarkably resilient. Dennehy and Turner had expected that, in the trials with small founding populations, the virus strain would become extinct in its new host. Instead, it survived and managed to expand its numbers. Somehow, reproducing in the old host gives viruses an extra boost when they infect the new host. Based on what scientists know about viruses, that shouldn't make a difference. But it does. "And that's mysterious," says Turner. "There's no good reason for that. It's like an ecological hangover." It's possible, he thinks, that the virus grabbed one or more key proteins from the previous host.
These experiments are just the first steps in Turner's study of host-shifting. Dennehy is now trying to create experiments that mimic natural conditions more accurately. He hopes to create dishes in which different species of bacteria live side by side. He's curious to see how the viruses "decide" which species to infect. Duffy meanwhile has been allowing host-shifting mutants to evolve. She wants to see whether they can shift completely to a new host -- becoming, in other words, a new species.
The research going on in Turner's lab hints that viruses are even more adept at shifting hosts than previously thought. They may not have to sacrifice their ability to breed in their old host to begin breeding in their new one. And some viruses may not even need to be "trained" on human hosts. Mutations that allow them to infect rodents or other mammals may give them the ability to invade our cells as well. "It may be easy for viruses to enter a new host type even if they haven't seen that new host type before. If something is hanging out in a mouse and jumps into a human, maybe that shouldn't be so surprising," Turner says.
Turner pauses for a moment, noticing that he has caused a visitor some distress at the thought of viruses easily gliding into our species. Witnessing evolution is not always a happy thrill. "Scary," he says. "Stay healthy. Wash your hands."
Given the urgency of these risks, Turner finds the continuing debate over creationism versus evolution a dangerous distraction. "I could take somebody into the lab, and over the course of a week, I could prove to them that evolution actually happens in microbes," says Turner. "And it has been profoundly important in the rise of antibiotic resistance and our inability to make effective anti-HIV drugs. We'd better be aware of it."
Copyright 2006 Carl Zimmer
Whisky lovers have another excuse to enjoy a dram -- scientists in Scotland on Tuesday unveiled a biofuel to help power cars developed from the by-products of the distillation process.
Researchers at Edinburgh Napier University have developed the biofuel and filed a patent for the product, which they said could be used to fuel ordinary cars without any special adaptations.
The biofuel, which has been developed during a two-year research project, uses the two main by-products from the whisky production process.
These are "pot ale", the liquid from the copper stills, and the spent grains called "draff", as the base to produce butanol which can then be used as fuel.
"The new biofuel is made from biological material which has been already generated," said Martin Tangney, who is leading the research.
"Theoretically it could be used entirely on its own but you would have to find a company to distribute it."
He added the most likely way the biofuel would be used was by blending five or 10 percent of the product with petrol or diesel.
"Five or 10 percent means less oil which would make a big, big difference," he said.
The biofuel "potentially offers new revenue on the back of one Scotland's biggest industries," added Tangney.
Richard Dixon, the Scotland director of environmental campaign group WWF, praised the new product, saying unlike other biofuels it could be made without causing "massive environmental damage to forests and wildlife."
"Whisky-powered cars could help Scotland avoid having to use those forest-trashing biofuels."
[Photo caption: adult male in flight]
The Rusty Blackbird (Euphagus carolinus) is a species of blackbird. Its bill is pointed. Adult females are greyer than adult males. The population of the Rusty Blackbird has declined in recent decades, and nobody knows for sure why.
The birds usually breed in muskeg and forests across Canada and Alaska and migrate to the eastern and southeastern United States, into parts of the Grain Belt, sometimes straying into Mexico.
These birds feed on insects, small fish and some seeds, foraging in wet ground or in shallow water.
The SQL tutorial of w3resource is a comprehensive tutorial for learning SQL. We have followed the ANSI SQL:2003 standard. There are hundreds of examples in this tutorial, with output shown on Oracle 10g. Outputs are often followed by a pictorial presentation and an explanation for better understanding. You will hardly find a vendor-neutral SQL tutorial covering SQL in such great detail.
What is SQL?
SQL stands for Structured Query Language, an ANSI (American National Standards Institute) standard computer language for accessing and manipulating database systems. It is used for managing data in relational database management systems, which store data in the form of tables; relationships between data are also stored in the form of tables. SQL statements are used to retrieve and update data in a database. SQL works with database programs like DB2, Oracle, SQL Server, Sybase, MS Access, etc. There are many different versions of the SQL language, but to be in compliance with the ANSI standard, they support the major keywords such as SELECT, UPDATE, DELETE, INSERT, WHERE, and others.
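As a quick, hedged illustration of those keywords (a minimal sketch using Python's built-in sqlite3 module rather than Oracle, with a made-up employees table), the basic statements look like this:

```python
import sqlite3

# An in-memory SQLite database stands in for a full RDBMS such as Oracle or DB2.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition: create a table.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Data manipulation: INSERT, UPDATE, DELETE.
cur.execute("INSERT INTO employees (name, salary) VALUES (?, ?)", ("Ada", 52000))
cur.execute("INSERT INTO employees (name, salary) VALUES (?, ?)", ("Grace", 61000))
cur.execute("UPDATE employees SET salary = salary * 1.10 WHERE name = ?", ("Ada",))
cur.execute("DELETE FROM employees WHERE salary < ?", (10000,))

# Query: SELECT with a WHERE clause.
for name, salary in cur.execute(
        "SELECT name, salary FROM employees WHERE salary > ? ORDER BY name", (50000,)):
    print(name, salary)

conn.commit()
conn.close()
```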
History of SQL
In June 1970 Dr. E. F. Codd published the paper "A Relational Model of Data for Large Shared Data Banks" in the journal of the Association for Computing Machinery (ACM). Codd's model is now accepted as the definitive model for relational database management systems (RDBMS). Using Codd's model, the language Structured English Query Language (SEQUEL) was developed at IBM Corporation's San Jose Research Center. The language was first called SEQUEL; the official pronunciation of SQL is "ess cue ell". In 1979 Oracle introduced the first commercially available implementation of SQL. Today, SQL is accepted as the standard RDBMS language. Later, other players joined the race. Here is the development history by year:
– 1970 E.F. Codd publishes Definition of Relational Model
– 1975 Initial version of SQL Implemented (D. Chamberlin)
– IBM experimental version: System R (1977) w/revised SQL
– IBM commercial versions: SQL/DS and DB2 (early 1980s)
– Oracle introduces commercial version before IBM's SQL/DS
– INGRES 1981 & 85
– ShareBase 1982 & 86
– Data General (1984)
– Sybase (1986)
– by 1992 over 100 SQL products
SQL Standard Revisions
– SEQUEL/Original SQL - 1974
– SQL/86 - Ratification and acceptance of a formal SQL standard by ANSI (American National Standards Institute) and ISO (International Organization for Standardization).
– SQL/92 - Major revision (ISO 9075), Entry Level SQL-92 adopted as FIPS 127-2.
– SQL/99 - Added regular expression matching, recursive queries (e.g. transitive closure), triggers, support for procedural and control-of-flow statements, non-scalar types, and some object-oriented features (e.g. structured types).
– SQL/2003 - Introduced XML-related features (SQL/XML), Window functions, Auto generation.
– SQL/2006 - Lots of XML Support for XQuery, an XML-SQL interface standard.
– SQL/2008 - Adds INSTEAD OF triggers, TRUNCATE statement.
Constructs of SQL
Here is a list of the key elements of SQL along with a brief description:
Queries: Retrieve data against some criteria.
Statements: Control transactions, program flow, connections, sessions, or diagnostics.
Clauses: Components of queries and statements.
Expressions: Combinations of symbols and operators that form a key part of SQL statements.
Predicates: Specify conditions.
Some Key terms of SQL 2003
To know the key terms of SQL:2003, you should know the statement classes of both SQL-92 and SQL:2003, since both are used to refer to SQL features and statements.
In SQL-92, SQL statements are grouped into the following categories:
The Data Manipulation Language (DML) is the subset of SQL which is used to add, update and delete data.
The Data Definition Language (DDL) is used to manage table and index structure. CREATE, ALTER, RENAME, DROP and TRUNCATE are a few of the data definition statements.
The Data Control Language (DCL) is used to set permissions for users and groups of users, controlling whether they can access and manipulate data.
A transaction contains a number of SQL statements. After the transaction begins, all of the SQL statements are executed, and at the end of the transaction the changes are made permanent in the associated tables.
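As a rough sketch of that idea (again using Python's sqlite3 and an invented accounts table, not any particular production RDBMS), the statements in a transaction either all take effect via COMMIT or are all undone via ROLLBACK:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 20.0)")
conn.commit()

try:
    # Both updates belong to one transaction: move 50 from alice to bob.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    conn.commit()      # make the changes permanent
except sqlite3.Error:
    conn.rollback()    # undo everything done since the transaction began

print(list(conn.execute("SELECT name, balance FROM accounts ORDER BY name")))
conn.close()
```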
Using a stored procedure, a named routine is created in the database which contains source code for performing repetitive tasks.
In SQL:2003, statements are grouped into seven categories, which are called classes. See the following table:
SQL data statements: SELECT, INSERT, UPDATE, DELETE
SQL connection statements: CONNECT, DISCONNECT
SQL schema statements: ALTER, CREATE, DROP
SQL control statements: CALL, RETURN
SQL diagnostic statements: GET DIAGNOSTICS
SQL session statements: SET CONSTRAINT
SQL transaction statements: COMMIT, ROLLBACK
PL/SQL, T-SQL and PL/pgSQL
PL/SQL - Procedural Language/Structured Query Language (PL/SQL) is Oracle Corporation's procedural extension language for SQL and the Oracle relational database.
T-SQL - Transact-SQL (T-SQL) is Microsoft's and Sybase's proprietary extension to SQL.
PL/pgSQL - Procedural Language/PostgreSQL (PL/pgSQL) is a procedural programming language supported by PostgreSQL.
What you will learn
In the w3resource SQL tutorials, we have covered the SQL:2003 standard in detail. Following is a list of the features we have included in our tutorials:
1. A simple but thorough description.
2. SQL Syntax.
3. Description of the Parameters used in the SQL command.
4. Sample table with data.
5. SQL command.
6. Explanation of the SQL command.
7. Output of the SQL command.
|
<urn:uuid:f819cc3e-8511-4ca9-a0a1-b4d80becd3d9>
|
CC-MAIN-2013-20
|
http://www.w3resource.com/sql/tutorials.php
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00040-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.83884
| 1,325
| 3.71875
| 4
|
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction and can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by AB with an arrow drawn above it.
Vectors play an important role in physics: velocity and acceleration of a moving object and forces acting on it are all described by vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can be still represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
It is important to distinguish Euclidean vectors from the more general concept in linear algebra of vectors as elements of a vector space. General vectors in this sense are fixed-size, ordered collections of items as in the case of Euclidean vectors, but the individual items may not be real numbers, and the normal Euclidean concepts of length, distance and angle may not be applicable. (A vector space with a definition of these concepts is called an inner product space.) In turn, both of these definitions of vector should be distinguished from the statistical concept of a random vector. The individual items in a random vector are individual real-valued random variables, and are often manipulated using the same sort of mathematical vector and matrix operations that apply to the other types of vectors, but otherwise usually behave more like collections of individual values. Concepts of length, distance and angle do not normally apply to these vectors, either; rather, what links the values together is the potential correlations among them.
The word "vector" originates from the Latin vehere meaning "to carry". It was first used by 18th century astronomers investigating planet rotation around the Sun.
In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above mentioned geometric entities are a special kind of vectors, as they are elements of a special kind of vector space called Euclidean space.
This article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors.
Being an arrow, a Euclidean vector possesses a definite initial point and terminal point. A vector with fixed initial and terminal point is called a bound vector. When only the magnitude and direction of the vector matter, then the particular initial point is of no importance, and the vector is called a free vector. Thus two arrows AB and A′B′ in space represent the same free vector if they have the same magnitude and direction: that is, they are equivalent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin.
The term vector also has generalizations to higher dimensions and to more formal approaches with much wider applications.
Examples in one dimension
Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters to the right would be 4 m or −4 m, and its magnitude would be 4 m regardless.
In physics and engineering
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0,5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction but fail to follow the rules of vector addition: angular displacement and electric current. Consequently, these are not vectors.
In Cartesian space
In the Cartesian coordinate system, a vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1,0,0) and B = (0,1,0) in space determine the free vector pointing from the point x=1 on the x-axis to the point y=1 on the y-axis.
Typically in Cartesian coordinates, one considers primarily bound vectors. A bound vector is determined by the coordinates of the terminal point, its initial point always having the coordinates of the origin O = (0,0,0). Thus the bound vector represented by (1,0,0) is a vector of unit length pointing from the origin along the positive x-axis.
The coordinate representation of vectors allows the algebraic features of vectors to be expressed in a convenient numerical fashion. For example, the sum of the vectors (1,2,3) and (−2,0,4) is the vector
- (1, 2, 3) + (−2, 0, 4) = (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7).
Euclidean and affine vectors
In the geometrical and physical settings, sometimes it is possible to associate, in a natural way, a length or magnitude and a direction to vectors. In turn, the notion of direction is strictly associated with the notion of an angle between two vectors. When the length of vectors is defined, it is possible to also define a dot product — a scalar-valued product of two vectors — which gives a convenient algebraic characterization of both length (the square root of the dot product of a vector by itself) and angle (a function of the dot product between any two non-zero vectors). In three dimensions, it is further possible to define a cross product which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram).
However, it is not always possible or desirable to define the length of a vector in a natural way. This more general type of spatial vector is the subject of vector spaces (for bound vectors) and affine spaces (for free vectors). An important example is Minkowski space that is important to our understanding of special relativity, where there is a generalization of length that permits non-zero vectors to have zero length. Other physical examples come from thermodynamics, where many of the quantities of interest can be considered vectors in a space with no notion of length or angle.
In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on some auxiliary coordinate system or reference frame. When the coordinates are transformed, for example by rotation or stretching, then the components of the vector also transform. The vector itself has not changed, but the reference frame has, so the components of the vector (or measurements taken with respect to the reference frame) must change to compensate. The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance such as gradient. If you change units (a special case of a change of coordinates) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm – a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm – a covariant change in value. See covariance and contravariance of vectors. Tensors are another type of quantity that behave in this way; in fact a vector is a special type of tensor.
In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction".
The concept of vector, as we know it today, evolved gradually over a period of more than 200 years. About a dozen people made significant contributions. The immediate predecessor of vectors were quaternions, devised by William Rowan Hamilton in 1843 as a generalization of complex numbers. Initially, his search was for a formalism to enable the analysis of three-dimensional space in the same way that complex numbers had enabled analysis of two-dimensional space, but he arrived at a four-dimensional system. In 1846 Hamilton divided his quaternions into the sum of real and imaginary parts that he respectively called "scalar" and "vector":
- The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion.
Several other mathematicians developed vector-like systems around the same time as Hamilton including Giusto Bellavitis, Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O’Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis similar to today's system and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870's.
In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth.
Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus.
Vectors are usually denoted in lowercase boldface, as a, or lowercase italic boldface, as a. (Uppercase letters are typically used to represent matrices.) Other conventions include an arrow or bar drawn above the letter, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as AB with an arrow over it, or simply AB. Especially in German-language literature it was common to represent vectors with small fraktur letters.
Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here the point A is called the origin, tail, base, or initial point; point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction.
On a two-dimensional diagram, sometimes a vector perpendicular to the plane of the diagram is desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head on and viewing the vanes of an arrow from the back.
In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system.
As an example in two dimensions (see figure), the vector from the origin O = (0,0) to the point A = (2,3) is simply written as a = (2, 3).
The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation is usually not deemed necessary and very rarely used.
In three-dimensional Euclidean space (or R³), vectors are identified with triples of scalar components: a = (a1, a2, a3), also written a = (ax, ay, az).
Another way to represent a vector in n dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). These have the intuitive interpretation as vectors of unit length pointing up the x, y, and z axes of a Cartesian coordinate system, respectively. In terms of these, any vector a in R³ can be expressed in the form a = a1 e1 + a2 e2 + a3 e3,
where a1, a2, a3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z (see figure), while a1, a2, a3 are the respective scalar components (or scalar projections).
In introductory physics textbooks, the standard basis vectors are often instead denoted i, j, k (or x̂, ŷ, ẑ, in which the hat symbol ^ typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface). Thus, a = ax i + ay j + az k.
As explained above a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set.
However, the decomposition of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected.
Moreover, the use of Cartesian unit vectors such as i, j, k as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of the unit vectors of a cylindrical coordinate system or of a spherical coordinate system. The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively.
The choice of a coordinate system doesn't affect the properties of a vector or its behaviour under transformations.
A vector can be also decomposed with respect to "non-fixed" axes which change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal, and tangent to a surface (see figure). Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it.
In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame).
Basic properties
The following section uses the Cartesian coordinate system with basis vectors e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1), and assumes that all vectors have the origin as a common base point. A vector a will be written as a = a1 e1 + a2 e2 + a3 e3.
Two vectors are said to be equal if they have the same magnitude and direction. Equivalently, they are equal if their coordinates are equal. So two vectors a = a1 e1 + a2 e2 + a3 e3 and b = b1 e1 + b2 e2 + b3 e3 are equal if a1 = b1, a2 = b2, and a3 = b3.
Addition and subtraction
Assume now that a and b are not necessarily equal vectors, but that they may have different magnitudes and directions. The sum of a and b is a + b = (a1 + b1) e1 + (a2 + b2) e2 + (a3 + b3) e3.
The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below:
This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c).
The difference of a and b is a − b = (a1 − b1) e1 + (a2 − b2) e2 + (a3 − b3) e3.
Subtraction of two vectors can be geometrically defined as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector a − b, as illustrated below:
Subtraction of two vectors may also be performed by adding the opposite of the second vector to the first vector, that is, a − b = a + (−b).
Scalar multiplication
A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is r a = (r a1) e1 + (r a2) e2 + (r a3) e3.
Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector.
If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = −1 and r = 2) are given below:
Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b.
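As a small illustrative sketch of these component-wise rules (plain Python, with vectors held as 3-tuples; the function names are made up for illustration):

```python
# Vectors represented as 3-tuples of components (a1, a2, a3).

def add(a, b):
    """Component-wise sum a + b."""
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    """Component-wise difference a - b, i.e. a + (-1)*b."""
    return tuple(x - y for x, y in zip(a, b))

def scale(r, a):
    """Scalar multiplication r*a; a negative r flips the direction."""
    return tuple(r * x for x in a)

a = (1.0, 2.0, 3.0)
b = (-2.0, 0.0, 4.0)

print(add(a, b))     # (-1.0, 2.0, 7.0), matching the example given earlier
print(sub(a, b))     # (3.0, 2.0, -1.0)
print(scale(2, a))   # (2.0, 4.0, 6.0)
```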
The length of the vector a can be computed with the Euclidean norm ||a|| = √(a1² + a2² + a3²),
which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors.
This happens to be equal to the square root of the dot product, discussed below, of the vector with itself: ||a|| = √(a ∙ a).
- Unit vector
A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â.
To normalize a vector a = [a1, a2, a3], scale the vector by the reciprocal of its length ||a||. That is: â = a/||a|| = [a1/||a||, a2/||a||, a3/||a||].
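A minimal Python sketch of the norm and of normalization (the names norm and normalize are invented for illustration):

```python
import math

def norm(a):
    """Euclidean length ||a|| = sqrt(a1^2 + a2^2 + a3^2)."""
    return math.sqrt(sum(x * x for x in a))

def normalize(a):
    """Unit vector a / ||a||; undefined for the null vector."""
    n = norm(a)
    if n == 0:
        raise ValueError("the null vector cannot be normalized")
    return tuple(x / n for x in a)

a = (3.0, 4.0, 0.0)
print(norm(a))       # 5.0
print(normalize(a))  # (0.6, 0.8, 0.0)
```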
- Null vector
The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0,0,0), and it is commonly denoted 0 (in boldface) or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (that is, 0 + a = a).
Dot product
The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as: a ∙ b = ||a|| ||b|| cos θ,
where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point and then the length of a is multiplied with the length of that component of b that points in the same direction as a.
The dot product can also be defined as the sum of the products of the components of each vector, as a ∙ b = a1 b1 + a2 b2 + a3 b3.
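A hedged Python sketch of both forms of the dot product (component sum, and recovering the angle θ from a ∙ b = ||a|| ||b|| cos θ); the helper names are invented for illustration:

```python
import math

def dot(a, b):
    """Sum of the products of the components: a1*b1 + a2*b2 + a3*b3."""
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """Angle theta (radians) recovered from a.b = ||a|| ||b|| cos(theta)."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    cos_theta = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, cos_theta)))   # clamp against rounding error

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)
print(dot(a, b))                          # 1.0
print(math.degrees(angle_between(a, b)))  # ~45.0 degrees
```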
Cross product
The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as a × b = ||a|| ||b|| sin(θ) n,
where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (–n).
The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.
The cross product can be written in components as a × b = (a2 b3 − a3 b2) e1 + (a3 b1 − a1 b3) e2 + (a1 b2 − a2 b1) e3.
For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).
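A short Python sketch of the component formula for the cross product (the function name is made up; a right-handed orthonormal basis is assumed):

```python
def cross(a, b):
    """Component formula for a x b in a right-handed orthonormal basis."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
print(cross(a, b))   # (0.0, 0.0, 1.0): perpendicular to both a and b
```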
Scalar triple product
The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as: (a b c) = a ∙ (b × c).
It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed.
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows
The scalar triple product is linear in all three entries and anti-symmetric in the following sense: (a b c) = (c a b) = (b c a) = −(a c b) = −(b a c) = −(c b a).
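As an illustrative sketch (plain Python; the helper name is invented), the scalar triple product computed as a ∙ (b × c), which equals the determinant of the matrix with rows a, b, c:

```python
def triple(a, b, c):
    """Scalar triple product a . (b x c)."""
    bxc = (b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0])
    return sum(x * y for x, y in zip(a, bxc))

# Volume of the unit cube spanned by the standard basis vectors:
print(triple((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # 1
# Linearly dependent vectors give zero volume:
print(triple((1, 0, 0), (2, 0, 0), (0, 0, 1)))   # 0
```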
Multiple Cartesian bases
All examples thus far have dealt with vectors expressed in terms of the same basis, namely, e1, e2, e3. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. For example, using the vector a from above, a = u n1 + v n2 + w n3,
where n1, n2, n3 form another orthonormal basis not aligned with e1, e2, e3. The values of u, v, and w are such that the resulting vector sum is exactly a.
It is not uncommon to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In order to perform many of the operations defined above, it is necessary to know the vectors in terms of the same basis. One simple way to express a vector known in one basis in terms of another uses column matrices that represent the vector in each basis along with a third matrix containing the information that relates the two bases. For example, in order to find the values of u, v, and w that define a in the n1, n2, n3 basis, a matrix multiplication may be employed in the form (u, v, w) = C (a1, a2, a3), treating the component lists as column matrices,
where each matrix element cjk is the direction cosine relating nj to ek. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product.
By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines).
By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases.
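A hedged Python sketch of this change of basis (names invented; the new basis is assumed orthonormal, so each row of the direction cosine matrix is one of the n vectors and the matrix multiplication reduces to dot products nj ∙ a):

```python
def to_new_basis(a, n_basis):
    """Components (u, v, w) of a in an orthonormal basis n1, n2, n3.

    Each new component is the dot product of the corresponding basis vector
    (written in the e basis) with a, which is exactly one row of the
    direction cosine matrix applied to a.
    """
    return tuple(sum(nj_k * a_k for nj_k, a_k in zip(nj, a)) for nj in n_basis)

# Example: the e basis rotated 90 degrees about the z-axis.
n1 = (0.0, 1.0, 0.0)
n2 = (-1.0, 0.0, 0.0)
n3 = (0.0, 0.0, 1.0)

a = (2.0, 3.0, 0.0)                    # components in the e basis
print(to_new_basis(a, (n1, n2, n3)))   # (3.0, -2.0, 0.0): same vector, new components
```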
Other dimensions
With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as (a1, a2) + (b1, b2) = (a1 + b1, a2 + b2),
and in four dimensions as (a1, a2, a3, a4) + (b1, b2, b3, b4) = (a1 + b1, a2 + b2, a3 + b3, a4 + b4).
A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products.
Vectors have many uses in physics and other sciences.
Length and units
In abstract vector spaces, the length of the arrow depends on a dimensionless scale. If it represents, for example, a force, the "scale" is of physical dimension length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1:250 and 1 m:50 N respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance.
Vector-valued functions
Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t. For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions.
Position, velocity and acceleration
The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin
The position vector has dimensions of length.
Given two points x = (x1, x2, x3), y = (y1, y2, y3) their displacement is a vector
which specifies the position of y relative to x. The length of this vector gives the straight line distance from x to y. Displacement has the dimensions of length.
For constant velocity v, the position at time t is x(t) = x0 + t v, where x0 is the position at time t = 0. Velocity is the time derivative of position. Its dimensions are length/time.
Force, energy, work
Vectors as directional derivatives
where the index is summed over the appropriate number of dimensions (for example, from 1 to 3 in 3-dimensional Euclidean space, from 0 to 3 in 4-dimensional spacetime, etc.). Then consider a vector tangent to :
The directional derivative can be rewritten in differential form (without a given function ) as
Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative. A vector can therefore be defined precisely as
Vectors, pseudovectors, and transformations
An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform like the coordinates" under changes of coordinates such as rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector make a change that cancels the change in the spatial axes, in the same way that co-ordinates change. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, like the co-ordinates, would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = Mv. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if v consists of the x, y, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration.
In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule.
Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they flip and gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors.
One example of a pseudovector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the actual angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors.
See also
- Affine space, which distinguishes between vectors and points
- Array data structure or Vector (Computer Science)
- Banach space
- Clifford algebra
- Complex number
- Coordinate system
- Covariance and contravariance of vectors
- Four-vector, a non-Euclidean vector in Minkowski space (i.e. four-dimensional spacetime), important in relativity
- Function space
- Grassmann's Ausdehnungslehre
- Hilbert space
- Normal vector
- Null vector
- Tangential and normal components (of a vector)
- Unit vector
- Vector bundle
- Vector calculus
- Vector notation
- Vector-valued function
- Ivanov 2001
- Heinbockel 2001
- Ito 1993, p. 1678; Pedoe 1988
- The Oxford English Dictionary (2nd ed.). London: Clarendon Press. 2001. ISBN 9780195219425.
- Ito 1993, p. 1678
- Thermodynamics and Differential Forms
- Michael J. Crowe, A History of Vector Analysis; see also his lecture notes on the subject.
- W. R. Hamilton (1846) London, Edinburgh & Dublin Philosophical Magazine 3rd series 29 27
- U. Guelph Physics Dept., "Torque and Angular Acceleration"
- Kane & Levinson 1996, pp. 20–22
- Apostol, T. (1967). Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra. John Wiley and Sons. ISBN 978-0-471-00005-1.
- Apostol, T. (1969). Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications. John Wiley and Sons. ISBN 978-0-471-00007-5.
- Kane, Thomas R.; Levinson, David A. (1996), Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc.
- Heinbockel, J. H. (2001), Introduction to Tensor Calculus and Continuum Mechanics, Trafford Publishing, ISBN 1-55369-133-4
- Ito, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4
- Ivanov, A.B. (2001), "Vector, geometric", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Pedoe, D. (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0..
- Aris, R. (1990). Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover. ISBN 978-0-486-66110-0.
- Feynman, R., Leighton, R., and Sands, M. (2005). "Chapter 11". The Feynman Lectures on Physics, Volume I (2nd ed.). Addison Wesley. ISBN 978-0-8053-9046-9.
- Hazewinkel, Michiel, ed. (2001), "Vector", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Online vector identities (PDF)
- Introducing Vectors A conceptual introduction (applied mathematics)
- Addition of forces (vectors) Java Applet
- French tutorials on vectors and their application to video games
From the busy BEES at Drexel University comes worry that beach sand temperature 40 to 50 centimeters deep will be affected by the global warming air temperature rise of 0.8C over the last century, which is projected to increase. The models identified this as the leading projected cause of climate-related decline in leatherback turtles. They say "if actual climate patterns follow projections in the study, the eastern Pacific population of leatherback turtles will decline by 75 percent by the year 2100." Gosh.
But they write in their press release as if the projections are actually happening:
Leatherback turtles, Spotila says, are in critical need of human help to survive. “Warming climate is killing eggs and hatchlings,” Spotila said. “Action is needed, both to mitigate this effect and, ultimately, to reverse it to avoid extinction. We need to change fishing practices that kill turtles at sea, intervene to cool the beach to save the developing eggs and find a way to stop global warming. Otherwise, the leatherback and many other species will be lost.”
It makes you wonder how the turtles ever survived the Roman Warm Period or the Medieval Warm Period or the early part of the Holocene?
Rising heat at the beach threatens largest sea turtles, climate change models show
PHILADELPHIA (July 1, 2012)—For eastern Pacific populations of leatherback turtles, the 21st century could be the last. New research suggests that climate change could exacerbate existing threats and nearly wipe out the population. Deaths of turtle eggs and hatchlings in nests buried at hotter, drier beaches are the leading projected cause of the potential climate-related decline, according to a new study in the journal Nature Climate Change by a research team from Drexel University, Princeton University, other institutions and government agencies.
Leatherbacks, the largest sea turtle species, are among the most critically endangered due to a combination of historical and ongoing threats including egg poaching at nesting beaches and juvenile and adult turtles being caught in fishing operations. The new research on climate dynamics suggests that climate change could impede this population’s ability to recover. If actual climate patterns follow projections in the study, the eastern Pacific population of leatherback turtles will decline by 75 percent by the year 2100.
Modeling the Ebb and Flow of Turtle Hatching with Climate Variation
“We used three models of this leatherback population to construct a climate-forced population dynamics model. Two parts were based on the population’s observed sensitivity to the nesting beach climate and one part was based on its sensitivity to the ocean climate,” said the study’s lead author Dr. Vincent Saba, a research fishery biologist with the NOAA National Marine Fisheries Service Northeast Fisheries Science Center, visiting research collaborator at Princeton University, and a Drexel University alumnus.
Leatherback turtle births naturally ebb and flow from year to year in response to climate variations, with more hatchlings, and rare pulses of male hatchlings, entering the eastern Pacific Ocean in cooler, rainier years. Female turtles are more likely to return to nesting beaches in Costa Rica to lay eggs in years when they have more jellyfish to eat, and jellyfish in the eastern Pacific are likely more abundant during cooler seasons. Turtle eggs and hatchlings are also more likely to survive in these cooler, rainier seasons associated with the La Niña climate phase, as this research team recently reported in the journal PLoS ONE. In addition, temperature inside the nest affects turtles’ sex ratio, with most male hatchlings emerging during cooler, rainier seasons to join the predominantly-female turtle population.
The researchers applied Saba’s combined model of these population dynamics to seven climate model projections assessed by the Intergovernmental Panel on Climate Change (IPCC). The climate model projections were chosen based on their ability to model El Niño Southern Oscillation (ENSO) patterns on the temperature and precipitation in the region of Costa Rica where this team has conducted long-term leatherback studies.
Hot Beaches, More Warm Years Threaten Turtles’ Recovery
The resulting projections indicate that warmer, drier years will become increasingly frequent in Central America throughout this century. High egg and hatchling mortality associated with warmer, drier beach conditions was the most significant cause of the projected climate-related population decline: This nesting population of leatherbacks could decline by 7 percent per decade, or 75 percent overall by the year 2100.
The population is already critically low.
“In 1990, there were 1,500 turtles nesting on the Playa Grande beach,” said Dr. James Spotila, the Betz Chair Professor of Environmental Science in the College of Arts and Sciences at Drexel. “Now, there are 30 to 40 nesting females per season.”
Spotila, a co-author of the study, has been studying leatherback turtles at Playa Grande in Costa Rica, the largest leatherback nesting beach in the eastern Pacific, with colleagues and Drexel students, for 22 years.
Poaching of turtle eggs was a major cause of the initial decline, and was once such a widespread problem that virtually no turtle hatchlings would survive at Playa Grande. Spotila and colleagues worked with the local authorities in Costa Rica to protect the leatherbacks' nesting beaches so that turtle nests can hatch in safety. Bycatch of juvenile and adult turtles in fishing operations in the eastern Pacific remains a threat.
For the population to recover successfully, Spotila said, “the challenge is to produce as many good hatchlings as possible. That requires us to be hands-on and manipulate the beach to make sure that happens.”
Spotila’s research team is already investigating methods such as watering and shading turtle nests that could mitigate the impact of hot, dry beach conditions on hatching success.
Link to this Nature Climate Change study: http://dx.doi.org/10.1038/NCLIMATE1582
Link to recent news release about a related study by this research team in PLoS ONE: http://www.drexel.edu/now/news-media/releases/archive/2012/May/El-Nino-Climate-Change-Threaten-Leatherback-Sea-Turtles/
Maybe this is a bigger problem? From Wikipedia:
Asian exploitation of turtle nests has been cited as the most significant factor for the species’ global population decline. In Southeast Asia, egg harvesting in countries such as Thailand and Malaysia has led to a near-total collapse of local nesting populations. In Malaysia, where the turtle is practically locally extinct, the eggs are considered a delicacy. In the Caribbean, some cultures consider the eggs to be aphrodisiacs.
While still a youth Paracelsus became aware of many of the conflicting currents of his age. His father was a physician in Einsiedeln and he practiced in a number of mining towns. The boy surely learned some practical medicine at home through observing his father. It is likely that he learned some folk medicine as well. He also picked up some alchemy from his father who had an interest in the subject. And in mining towns he would have observed metallurgical practices as well as the diseases that afflicted the men who worked the mines. Traditionally it has been said that Paracelsus was taught by several bishops and the occultist abbot of Sponheim, Johannes Trithemius. At the age of fourteen the boy left home to begin a long period of wandering. He apparently visited a number of universities, but there is no proof that he ever took a medical degree. As an adult, however, he picked up practical medical knowledge by working as a surgeon in a number of the mercenary armies that ravaged Europe in the seemingly endless wars of the period. He wrote that he visited most of the countries of Central, Northern, and Eastern Europe.
It is only in the final fifteen years of his life that the records of his travels become clearer. In 1527 he was called to Basel to treat a leg ailment of the famed publisher of humanist classics, Johannes Frobenius. In Basel Paracelsus also gave medical advice to the Dutch scholar Erasmus and came in contact with some of the more prominent scholars of the religious Reformation. He was appointed city physician and professor of medicine. But although he was permitted to lecture at the University of Basel, he had no official appointment with the medical faculty there.
Almost immediately Paracelsus became a figure of contention. He heaped scorn on the conservative physicians of the University, and, at the St. John's Day bonfire, threw Avicenna's revered Canon of medicine to the blaze. Then, his patient, Frobenius, died. This was followed by a disastrous lawsuit and he left Basel in haste, even leaving behind his manuscripts.
The final years of his life find Paracelsus moving from town to town, and again, he often left his manuscripts behind as he had in Basel. He comes across as an angry man who antagonized many of those he met -- even those who tried to help him. In the end he was called to Salzburg to treat the suffragan bishop, Ernest of Wittelsbach. There he died at the early age of forty-eight.
Facts & Figures
This section of the website is your springboard to quantitative information regarding recycling and waste reduction. In California recycling levels are measured by city and are generally referred to as diversion rates. Diversion is the percentage of waste that is being diverted from the landfill as compared to a base year. The theory is that the more you divert from the landfill, the more is being recycled or not generated in the first place. Diversion rates of each city are reported annually to CalRecycle.
If you are interested in learning more about particular materials that make up the waste stream or would like to know what sectors different types of waste are coming from, visit the Waste Stream Profiles page.
If you are looking for statistics on various quality-of-life indicators in San Mateo County, you will want to view the Sustainability Indicators Report. This report is updated annually and provides fact-based information on local trends such as air quality, energy use, solid waste, water use, housing, transportation, public library use, wealth distribution, etc.
(By MEG KERR and SHEILA DORMODY)
Water. We take it for granted. We turn on the faucet and out it comes. Rhode Island is blessed with plentiful freshwater resources. But even here, freshwater is limited, and increasingly, we are bumping up against those limits. Throughout Rhode Island, residential overuse of treated drinking water, particularly in the summer months for lawn irrigation, creates excessive demands on water supplies.
North Kingstown has learned about the limits of water supply and is taking proactive measures to ensure adequate water for the town’s future development. North Kingstown’s municipal water supply is drawn from wells in the HAP (Hunt-Annaquatucket-Pettaquamscutt) aquifer. The town of North Kingstown shares the HAP aquifer with the Kent County Water Authority and the Quonset Development Corp. The average annual withdrawal by these three users is about 3.55 million gallons per day (mgd) and it can double during the summer months. The impact of these summertime withdrawals can be seen in the area’s dry streambeds and the declining populations of river fish that once flourished in Rhode Island’s freshwater streams.
In May 2010, the Water and Planning Directors for North Kingstown notified the Town Council that their projections indicated the town had a water supply shortfall on the highest-demand days of the summer. In other words, the town did not have adequate pumping capacity to consistently meet this increased use. The planners strongly advised the Council to put effective policies in place to reduce peak demand and other wasteful uses of the potable water supply so that North Kingstown could continue to grow and expand municipal water and fire services for new development.
During the summer, the Council thoroughly studied the issue and, in response to this warning light, put a new development proposal on hold because of water supply concerns. Council members learned that rather than developing new supplies, which would take a very long time and be very expensive, they could effectively create new supply by reducing summertime overuse of water. In late September, the Council passed a watering ordinance limiting residential watering to twice a week. Complementing this action, in November the Council added a fourth, highest-cost tier to its water rates, targeted at high-volume users, creating an economic incentive for water conservation. Both of these steps were laudable in recognizing the water supply problem and finding solutions to address it. With these measures in place, the Council has been able to move forward with new development proposals, knowing that the water required to make them viable would not endanger the town's fire-fighting capability or harm environmental resources that rely upon an adequate supply of water.
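For readers curious how an increasing-block rate creates that incentive, the sketch below bills each block of consumption at a progressively higher price, so heavy summertime irrigation becomes disproportionately expensive. The tier thresholds and prices are hypothetical and are not North Kingstown's actual rate schedule.

# Minimal sketch of a tiered (increasing-block) water rate of the kind the
# Council adopted. Thresholds and prices below are invented for illustration.

TIERS = [                    # (upper limit in gallons per billing period, $ per 1,000 gallons)
    (5_000, 3.00),           # tier 1: basic indoor use
    (15_000, 4.50),          # tier 2
    (30_000, 6.50),          # tier 3
    (float("inf"), 10.00),   # tier 4: highest-cost tier aimed at high-volume users
]

def water_bill(gallons: float) -> float:
    """Charge each block of consumption at its own tier's rate."""
    bill, lower = 0.0, 0.0
    for upper, rate in TIERS:
        block = max(0.0, min(gallons, upper) - lower)
        bill += block / 1_000 * rate
        lower = upper
        if gallons <= upper:
            break
    return bill

# A household using 8,000 gallons pays about $28.50; one irrigating heavily
# at 40,000 gallons pays about $257.50 -- a far higher average price per gallon.
print(water_bill(8_000), water_bill(40_000))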
Other water suppliers should follow North Kingstown’s lead.
In 2009, the General Assembly passed the Water Use and Government Efficiency Act to better manage the state's shared water resources and to invest in infrastructure repair and replacement. The Act recognizes the essential role played by the RI Water Resources Board in balancing water resource uses and charged the board with establishing targets for non-agricultural demand management and water use by July 1, 2010. The board is also required to work with the RI Department of Environmental Management to provide water availability estimates to municipalities for use in local comprehensive plans. The Water Resources Board did not meet the July 1 deadline and considerably scaled back the scope of the initial draft regulations, but it plans to take the regulations to public hearing in early 2011. The regulations address targets and methods for efficient water use by major public water suppliers.
This is an important first step, but there is significant work still to do. In particular, Rhode Island needs the Water Resources Board to determine water availability so municipalities can make the critical link between development decisions and available future water supply. As part of that effort, we need complete and accurate reporting on water use so we can monitor progress toward improved water management.
Rhode Island has laws in place enabling the Water Resources Board to better manage water. The board has now taken an important first step in carrying out that responsibility – and not a moment too soon for a Rhode Island community like North Kingstown.
Meg Kerr is Watershed Program Manager with the Narragansett Bay Estuary Program; Sheila Dormody is Rhode Island Director of Clean Water Action. Both are members of, and represent the views of, the Coalition for Water Security.
Treat & Prevent Eczema & Other Skin Disorders
Skin disorders like eczema, psoriasis, and chronic dryness are all heavily affected by the moisture content of the skin.
Showering and bathing in chlorinated tap water robs skin and hair of their natural protective oils, causing scaling and itching. Chlorine is a strong oxidant that damages skin and hair even at very low levels. Most tap water maintains a residual chlorine level greater than the 1.5 ppm recommended for swimming pools.
Chlorine also kills much of the beneficial bacteria on the surface of the skin that offer a natural defense against skin disorders.
By reducing chlorine in our shower water, where most of our exposure to chlorine occurs, and by keeping our bodies properly hydrated, we allow the body's natural healing mechanisms to do their work. In many cases, shower filtration can help prevent and cure skin disorders. As with most health problems, prevention is always more effective than treatment.
Diagrams courtesy Sky & Telescope
Published February 28, 2012
For the first time in almost a decade, sky-watchers this week will be able to see all five naked-eye planets over the course of one night for several nights in a row.
The classical naked-eye planets—Mercury, Venus, Mars, Jupiter, and Saturn—can be seen easily without optical aids and so have been known since ancient times.
But the quintet hasn't appeared together during a single night since 2004.
What's more, this week's parade of planets will be joined in the nighttime skies by the waxing crescent to waxing gibbous moon and the superbright stars Sirius and Canopus.
"Although being able to see these objects simultaneously doesn't have any scientific value as such, it is a really fun opportunity to get a sense of how we fit in the universe," said Geza Gyuk, an astronomer with the Adler Planetarium in Chicago.
"It is a bit like looking at an astronomy class in a nutshell."
Best Views for Cosmic Parade
Although the moon and the seven bright objects will all be visible in one night, they won't all appear at the same time or in the same region of the sky.
The best time to catch sight of the cosmic parade will be between February 28 and March 7. This is when the more elusive planets Mercury and Mars will be at their brightest in the evening sky for 2012, and when the moon will be above the horizon for many hours before setting.
Catching Mercury in particular is notoriously difficult, Gyuk said, because the tiny world is the closest to the sun and so never appears very far above the nighttime horizon.
"Follow the line connecting Jupiter to Venus below right, and continue on until you almost reach the horizon," Gyuk said.
"Mercury will be in this vicinity and should be fairly bright in binoculars, but will be getting dimmer and harder to locate as the days of early March progress."
Saturn—which looks like a bright, yellowish star—will rise near local midnight in the east.
According to Gyuk, the best place to view the sky show will be from either a large field or the top of a hill with eastern, western, and southern views.
Full Show a Limited-Time Offer
Sirius and Canopus, the two brightest stars in the night sky, can be seen at various times year-round, but this week they'll be at their highest points in the 2012 evening sky soon after local dusk sets in.
Located just under nine light-years away, Sirius is the brightest star we can see from Earth and the lead star in the constellation Canis Major, the mythical "big dog" that shines high in the Northern Hemisphere's winter sky.
Canopus, the second brightest star in the sky, is part of the southern constellation Carina, the keel of the mythological ship the Argo. (Related: "'Light Echoes' From Monster Star's Eruption Found—A First.")
While Sirius is visible from all mid-latitude regions, Canopus—to the lower right of Sirius—can be seen only by observers in more southerly latitudes, for instance, below Los Angeles and Atlanta.
The northern limit for viewing the other six bright objects this week is around the Arctic Circle, beyond which Sirius is invisible. The southern limit is around the Equator, beyond which it becomes very difficult to spot Mercury.
"The moon, of course, is our closest cosmic neighbor and the only one we can really study as a world with the naked eye or even simple binoculars," Gyuk added. (Related: "New Scars Found on Moon, Hint at 'Recent' Tectonic Activity.")
"However these other points of light are all really bright objects in the sky too, so to get the full experience, take your time and let your eyes adapt to the darkness and enjoy."
Chronotypes, Social Jet Lag, and Why You're So Tired
Aims and Scope
Early birds and night owls are born, not made. Sleep patterns may be the most obvious manifestation of the highly individualized biological clocks we inherit, but these clocks also regulate bodily functions from digestion to hormone levels to cognition. Living at odds with our internal timepieces, Till Roenneberg shows, can make us chronically sleep deprived and more likely to smoke, gain weight, feel depressed, fall ill, and fail geometry. By understanding and respecting our internal time, we can live better.
Internal Time combines storytelling with accessible science tutorials to explain how our internal clocks work—for example, why morning classes are so unpopular and why “lazy” adolescents are wise to avoid them. We learn why the constant twilight of our largely indoor lives makes us dependent on alarm clocks and tired, and why social demands and work schedules lead to a social jet lag that compromises our daily functioning.
Many of the factors that make us early or late “chronotypes” are beyond our control, but that doesn’t make us powerless. Roenneberg recommends that the best way to sync our internal time with our external environment and feel better is to get more sunlight. Such simple steps as cycling to work and eating breakfast outside may be the tickets to a good night’s sleep, better overall health, and less grouchiness in the morning.
- 288 pages
- 1 halftone, 40 line illustrations
- Harvard University Press
|
Higher education graduation rates have grown massively in OECD countries in recent decades. But what is the impact of this growth on labour markets? Has the increasing supply of well-educated labour been matched by the creation of an equivalent number of high-paying jobs? Or will everyone one day have a university degree and work for the minimum wage? The analysis of this year's edition of Education at a Glance, outlined below, suggests that the expansion has had a positive impact on individuals and economies and that there are, as yet, no signs of an "inflation" of the value of qualifications. The sustainability of continued expansion will, however, depend on re-thinking how it is financed and how to make it more efficient.
Indicator A1 To what level have adults studied?
This indicator profiles the educational attainment of the adult population, as captured through formal educational qualifications. As such it provides a proxy for the knowledge and skills available to national economies and societies. Data on attainment by fields of education and by age groups are also used in this indicator both to examine the distribution of skills in the population and to have a rough measure of what skills have recently entered the labour market and of what skills will be leaving the labour market in the coming years. It also looks at the effects of tertiary education expansion and asks whether this leads to the overqualified crowding out the lesser qualified.
Indicator A2 How many students finish secondary education?
This indicator shows the current upper secondary graduate output of education systems, i.e. the percentage of the typical population of upper secondary school age that follows and successfully completes upper secondary programmes.
Indicator A3 How many students finish tertiary education?
This indicator first shows the current tertiary graduate output of educational systems, i.e. the percentage of the population in the typical age cohort for tertiary education that follows and successfully completes tertiary programmes, as well as the distribution of tertiary graduates across fields of education. The indicator then examines the number of science graduates in relation to employed persons. It also considers whether gender differences concerning motivation in mathematics at the age of 15 may affect tertiary graduation rates. Finally, the indicator shows survival rates at the tertiary level, i.e. the proportion of new entrants into the specified level of education who successfully complete a first qualification.
Indicator A4 What are students' expectations for education?
Drawing on data from the Programme for International Student Assessment (PISA) 2003 survey, this indicator presents the highest level of education that 15-year-old students report they expect to complete. The indicator first provides an overall picture of students’ academic expectations in OECD countries and then examines relationships between expectations for tertiary education (ISCED 5 or 6) and variables such as individual performance levels, gender, socio-economic status and immigrant status, in order to shed light on equity issues.
Indicator A5 What are students' attitudes towards mathematics?
This indicator examines how 15-year-old students’ attitudes toward and approaches to learning and school vary across countries and across groups of countries, as well as the relationship between these characteristics and students’ performance in mathematics. The indicator draws on data from the OECD Programme for International Student Assessment’s (PISA) 2003 survey.
Indicator A6 What is the impact of immigrant background on student performance?
This indicator compares the performance in mathematics and reading of 15-year-old students with an immigrant background with that of their native counterparts, using data from the OECD Programme for International Student Assessment 2003 survey. It also looks at these students' motivation to learn.
Indicator A7 Does the socio-economic status of their parents affect students' participation in higher education?
This indicator examines the socio-economic status of students enrolled in higher education, an important gauge of access to higher education for all. Internationally comparable data on the socio-economic status of students in higher education are not widely available, and this indicator is a first attempt to illustrate the analytical potential that better data on this issue would offer. It takes a close look at data from ten OECD countries, examining the occupational status (white collar or blue collar) of students' fathers and the fathers' educational background, and also considers data from the OECD Programme for International Student Assessment (PISA) 2000 survey.
Indicator A8 How does participation in education affect participation in the labour market?
This indicator examines relationships between educational attainment and labour force status, for both males and females, and considers changes in these relationships over time.
Indicator A9 What are the economic benefits of education?
This indicator examines the relative earnings of workers with different levels of educational attainment in 25 OECD countries and the partner economy Israel. It also presents data describing the distribution of pre-tax earnings (see Annex 3 for notes) within five ISCED levels of educational attainment, to help show how returns to education vary within countries among individuals with comparable levels of educational attainment. The financial returns to educational attainment are calculated for investments undertaken as part of initial education, as well as for the case of a hypothetical 40-year-old who decides to return to education in mid-career. For the first time, this indicator presents new estimates of the rate of return for an individual investing in upper secondary education instead of working for the minimum wage with a lower secondary level of education.
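As a rough sketch of the kind of calculation that lies behind a private rate of return, the example below finds the discount rate at which the costs of extra schooling (foregone earnings plus direct costs) are balanced by the later earnings premium. The cash-flow figures are invented, and the OECD's actual model also accounts for factors such as taxes, study support, and unemployment risk.

# Hypothetical sketch: the private internal rate of return on two extra
# years of schooling, found by bisection on the net present value.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def internal_rate_of_return(cash_flows: list[float]) -> float:
    """Find the rate at which the NPV reaches zero, searching [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # still profitable at this discount rate, search higher
        else:
            hi = mid
    return (lo + hi) / 2

# Two years of schooling costing 15,000 a year (mostly foregone earnings),
# followed by a 4,000-a-year earnings premium over a 40-year career.
flows = [-15_000, -15_000] + [4_000] * 40
print(f"private rate of return: {internal_rate_of_return(flows):.1%}")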
Classification of educational expenditure
Indicator B1 How much is spent per student?
This indicator provides an assessment of the investment made in each student. Expenditure per student is largely influenced by teacher salaries (see Indicators B6 and D3), pension systems, instructional and teaching hours (see Indicators D1 and D4), teaching materials and facilities, the programme orientation provided to pupils/students (see Indicator C2), and the number of students enrolled in the education system (see Indicator C1). Policies put in place to attract new teachers, reduce average class size, or change staffing patterns (see Indicator D2) have also contributed to changes over time in expenditure per student.
Indicator B2 What proportion of national wealth is spent on education?
Education expenditure as a percentage of GDP shows how a country prioritises education in relation to its overall allocation of resources. Tuition fees and investment in education from private entities other than households (see Indicator B5) have a strong impact on differences in the overall amount of financial resources that OECD countries devote to their education systems, especially at the tertiary level.
Indicator B3 How much public and private investment is there in education?
This indicator examines the proportion of public and private funding allocated to educational institutions for each level of education. It also provides the breakdown of private funding between household expenditure and expenditure from private entities other than households. This indicator sheds some light on the widely debated issue of how the financing of educational institutions should be shared between public entities and private ones, particularly those at the tertiary level.
Indicator B4 What is the total public spending on education?
Public expenditure on education as a percentage of total public expenditure indicates the value placed on education relative to that of other public investments such as health care, social security, defence and security. It provides an important context for the other indicators on expenditure, particularly for Indicator B3 (the public and private shares of educational expenditure), as well as quantification of an important policy lever in its own right.
Indicator B5 How much do tertiary students pay and what public subsidies do they receive?
This indicator examines the relationships between annual tuition fees charged by institutions, direct and indirect public spending on educational institutions, and public subsidies to households for student living costs. It considers whether financial subsidies for households are provided in the form of grants or loans and poses related questions central to this discussion: Are scholarships/grants and loans more appropriate in countries with higher tuition fees charged by institutions? Are loans an effective means to help increase the efficiency of financial resources invested in education and to shift some of the cost of education to its beneficiaries? Or are student loans less appropriate than grants in encouraging low-income students to pursue their education? While these questions cannot be fully answered here, this indicator presents information about the policies for tuition fees and subsidies in different OECD countries.
Indicator B6 On what resources and services is education funding spent?
This indicator compares OECD countries with respect to the division of spending between current and capital expenditure, and the distribution of current expenditure by resource category. It is largely influenced by teacher salaries (see Indicator D3), pension systems, teacher age distribution, size of the non-teaching staff employed in education (see Indicator D2 in Education at a Glance 2005) and the degree to which expansion in enrolments requires the construction of new buildings. It also compares how OECD countries’ spending is distributed by different functions of educational institutions.
Indicator B7 How efficiently are resources used in education?
This indicator examines the relationship between resources invested and outcomes achieved in primary and lower secondary education across OECD countries and thus raises questions about the efficiency of their education systems.
Indicator C1 How prevalent are vocational programmes?
This indicator shows the participation of students in vocational education and training (VET) at the upper secondary level of education and compares the levels of education expenditure per student for general programmes and VET. This indicator also compares the educational outcomes of 15-year-old students enrolled in general education and in vocational education.
Indicator C2 Who participates in education?
This indicator examines access to education and its evolution by using information on enrolment rates and trends in enrolments from 1995 to 2005. It also shows patterns of participation at the secondary level of education and the percentage of the youth cohort that will enter different types of tertiary education during their lives. Entry and participation rates reflect both the accessibility of tertiary education and the perceived value of attending tertiary programmes. For information on vocational education and training in secondary education, see Indicator C1.
Indicator C3 Who studies abroad and where?
This indicator provides a picture of student mobility and the extent of the internationalisation of tertiary education in OECD countries and partner economies. It shows global trends and highlights the major destinations of international students and trends in market shares of the international student pool. Some of the factors underlying students' choice of a country of study are also examined. In addition, the indicator looks at the extent of student mobility in different destinations and presents the profile of the international student intake in terms of distribution by countries and regions of origin, types of programmes, and fields of education. The distribution of students enrolled outside of their country of citizenship by destination is also examined. Finally, the contribution of international students to graduate output is examined alongside the immigration implications for their host countries. The proportion of international students in tertiary enrolments provides a good indication of the magnitude of student mobility in different countries.
Indicator C4 How successful are students in moving from education to work?
This indicator shows the number of years that young people are expected to spend in education, employment and non-employment and examines the education and employment status of young people by gender. During the past decade, young people have spent more time in initial education, delaying their entry into the world of work. Part of this additional time is spent combining work and education, a practice that is widespread in some countries. Once young people have completed their initial education, access to the labour market is often impeded by periods of unemployment or non-employment, although this situation affects males and females differently. Based on the current situation of persons between the ages of 15 and 29, this indicator gives a picture of major trends in the transition from school to work.
Indicator C5 Do adults participate in training and education at work?
This indicator examines the participation of the adult population in non-formal job-related education and training by showing the expected number of hours in such education and training. A particular focus of this indicator is the time that a hypothetical individual (facing current conditions in terms of adult learning opportunities at different stages in life) is expected to spend in such education and training over a typical working life (a 40-year period).
Indicator D1 How much time do students spend in the classroom?
This indicator examines the amount of instruction time that students are expected to receive between the ages of 7 and 15. It also discusses the relationship between instruction time and student learning outcomes.
Indicator D2 What is the student-teacher ratio and how big are classes?
This indicator examines the number of students per class at the primary and lower secondary levels, and the ratio of students to teaching staff at all levels; it distinguishes between public and private institutions. Class size and student-teacher ratios are much discussed aspects of the education students receive and – along with the total instruction time of students (see Indicator D1), teachers’ average working time (see Indicator D4) and the division of teachers’ time between teaching and other duties – are among the determinants of the size of the teaching force within countries.
Indicator D3 How much are teachers paid?
This indicator shows the starting, mid-career and maximum statutory salaries of teachers in public primary and secondary education, and various additional payments and incentive schemes used in teacher reward systems. It also presents information on aspects of teachers’ contractual arrangements. Together with average class size (see Indicator D2) and teachers’ working time (see Indicator D4), this indicator presents some key measures of the working lives of teachers. Differences in teachers’ salaries, along with other factors such as student to staff ratios (see Indicator D2) provide some explanation for differences in expenditure per student (see Indicator B1).
Indicator D4 How much time do teachers spend teaching?
This indicator focuses on the statutory working time of teachers at different levels of education as well as their statutory teaching time. Although working time and teaching time only partly determine the actual workload of teachers, they do give some valuable insights into differences among countries in what is demanded of teachers. Together with teachers’ salaries (see Indicator D3) and average class size (see Indicator D2), this indicator presents some key measures of the work lives of teachers.
Indicator D5 How do education systems monitor school performance?
This indicator focuses on the evaluation and accountability arrangements for lower secondary public schools that exist across countries. The focus is upon the collection, use and availability of student and school performance information. This indicator complements the quantitative information relating to teacher salaries and working and teaching time (Indicators D3 and D4), instruction time of students (Indicator D1), and the relationship between number of students and numbers of teachers (Indicator D2) by providing qualitative information on the type and use of particular school accountability and evaluation arrangements.
The typical graduation age is the age at the end of the last school/academic year of the corresponding level and programme in which the degree is obtained, i.e. the age at which students normally graduate. (Note that at some levels of education the term "graduation age" may not translate literally and is used here purely as a convention.)
It turns out facing your fears really does work—researchers at Northwestern University have found that just one positive exposure to spiders had lasting effects in people with arachnophobia six months later.
The parts of the brain responsible for producing fear remained relatively inactive six months after patients underwent a single two-hour "exposure therapy" session in which they were able to touch a live tarantula. The brain changes were seen immediately after therapy and remained essentially the same six months later, according to Katherina Hauner, the study's lead author and the therapist who conducted the sessions. The study appears in Monday's Proceedings of the National Academy of Sciences.
"These people had been clinically afraid of spiders since childhood … they'd have to leave the house if they thought there was a spider inside," she says. According to the NIH, about 8 percent of people have a "specific phobia," considered to be a "marked and persistent fear and avoidance of a specific object or situation."
Over the course of two hours, participants touched a live tarantula with a paintbrush, a gloved hand, and eventually their bare hand. "It's this idea that you slowly approach the thing you're afraid of. They learned that the spider was predictable and controllable, and by that time, they feel like it's not a spider anymore."
The study sheds light on the brain responses to fear and the changes that happen when a fear is overcome. Immediately after therapy, activity in the participants' amygdalas, the part of the brain believed to be responsible for fear responses, remained relatively dormant and stayed that way six months later when participants were exposed to spiders.
Hauner says the study proves that exposure therapy works and can potentially be used to develop new treatment methods for people with extreme phobias. She says a similar method can be used on people with fears of confined spaces, heights, flying, blood, and more.
"It has to be an innocuous object or situation—it's not a phobia if you're scared of sharks and don't want to go in shark-infested water," she says. "That's called being safe."
In the near future, therapists might be able to inhibit the part of the brain responsible for fear or stimulate the region of the brain responsible for blocking fear in order to begin new therapies.
"There's already techniques we use to stimulate regions of the brain to treat depression and [obsessive-compulsive disorder]," she says. "It's not too far off in the future that we can use these techniques to treat other types of disorders."
Jason Koebler is a science and technology reporter for U.S. News & World Report. You can follow him on Twitter or reach him at firstname.lastname@example.org